diff --git a/spaces/101-5/gpt4free/g4f/.v1/README.md b/spaces/101-5/gpt4free/g4f/.v1/README.md deleted file mode 100644 index ce3ee7bea3622690fdc0437919f8c0c98f2db77d..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/g4f/.v1/README.md +++ /dev/null @@ -1,255 +0,0 @@ -**A major update is to come this week (statement written 14 Jun)** -**You may check these out in the meanwhile**: - -- v2 prototype of gpt4free someone made: https://gitler.moe/g4f/gpt4free -- Discord bot with gpt-4 using poe.com: https://github.com/xtekky/gpt4free-discord - -______ -What can I do to contribute ? -you reverse a site from this list: [sites-to-reverse](https://github.com/xtekky/gpt4free/issues/40), and add it to [`./testing`](https://github.com/xtekky/gpt4free/tree/main/testing) or refractor it and add it to [`./gpt4free`](https://github.com/xtekky/gpt4free/tree/main/gpt4free) - -

You may join our discord: discord.gg/gpt4free for further updates. gpt4free Discord

- - -gpt4free logo - -## Legal Notice - -This repository is _not_ associated with or endorsed by providers of the APIs contained in this GitHub repository. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security or request the removal of their site from this repository. - -Please note the following: - -1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them nor is it affiliated with or endorsed by any of the providers mentioned. - -2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely responsible for their actions and any repercussions that may follow. We strongly recommend the users to follow the TOS of the each Website. - -3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations. - -4. **Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of or in any way connected with their use or misuse of this repository, its content, or related third-party APIs. - -5. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or features in this repository at any time without prior notice. Users are responsible for regularly reviewing the content and any changes made to this repository. - -By using this repository or any code related to it, you agree to these terms. The author is not responsible for any copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent impersonation or irresponsible actions, you may comply with the GNU GPL license this Repository uses. - -
- - -Just API's from some language model sites. - - -# Related gpt4free projects - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 🎁 Projects | ⭐ Stars | 📚 Forks | 🛎 Issues | 📬 Pull requests |
| --- | --- | --- | --- | --- |
| gpt4free | Stars | Forks | Issues | Pull Requests |
| gpt4free-ts | Stars | Forks | Issues | Pull Requests |
| ChatGPT-Clone | Stars | Forks | Issues | Pull Requests |
| ChatGpt Discord Bot | Stars | Forks | Issues | Pull Requests |
- - -## Table of Contents -| Section | Description | Link | Status | -| ------- | ----------- | ---- | ------ | -| **To do list** | List of tasks to be done | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#todo) | - | -| **Current Sites** | Current websites or platforms that can be used as APIs | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#current-sites) | - | -| **Best Sites for gpt4** | Recommended websites or platforms for gpt4 | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#best-sites) | - | -| **Streamlit GPT4Free GUI** | Web-based graphical user interface for interacting with gpt4free | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#streamlit-gpt4free-gui) | - | -| **Docker** | Instructions on how to run gpt4free in a Docker container | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#docker-instructions) | - | -| **ChatGPT clone** | A ChatGPT clone with new features and scalability | [![Link to Website](https://img.shields.io/badge/Link-Visit%20Site-blue)](https://chat.chatbot.sex/chat) | - | -| **How to install** | Instructions on how to install gpt4free | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#install) | - | -| **Usage Examples** | | | | -| `theb` | Example usage for theb (gpt-3.5) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/theb/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| `forefront` | Example usage for forefront (gpt-4) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/forefront/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | || -| `quora (poe)` | Example usage for quora | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/quora/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| `you` | Example usage for you | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/you/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| `deepai` | Example usage for DeepAI (gpt-3.5, with chat) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](gpt4free/deepai/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | -| **Try it Out** | | | | -| Google Colab Jupyter Notebook | Example usage for gpt4free | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DanielShemesh/gpt4free-colab/blob/main/gpt4free.ipynb) | - | -| replit Example (feel free to fork this repl) | Example usage for gpt4free | [![](https://img.shields.io/badge/Open%20in-Replit-1A1E27?logo=replit)](https://replit.com/@gpt4free/gpt4free-webui) | - | -| **Legal Notice** | Legal notice or disclaimer | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#legal-notice) | - | -| **Copyright** | Copyright information | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#copyright) | - | -| **Star History** | Star History | [![Link to Section](https://img.shields.io/badge/Link-Go%20to%20Section-blue)](#star-history) | - | - - -## To do list - -- [x] Add a GUI for the repo -- [ ] Make a general package named `gpt4free`, instead of different folders -- [ ] Live api status to know which are down and which can be used -- [ ] Integrate more API's 
in `./unfinished` as well as other ones in the lists -- [ ] Make an API to use as proxy for other projects -- [ ] Make a pypi package - -## Current Sites - -| Website s | Model(s) | -| ------------------------------------------------ | -------------------------------- | -| [forefront.ai](https://chat.forefront.ai) | GPT-4/3.5 | -| [poe.com](https://poe.com) | GPT-4/3.5 | -| [writesonic.com](https://writesonic.com) | GPT-3.5 / Internet | -| [t3nsor.com](https://t3nsor.com) | GPT-3.5 | -| [you.com](https://you.com) | GPT-3.5 / Internet / good search | -| [sqlchat.ai](https://sqlchat.ai) | GPT-3.5 | -| [bard.google.com](https://bard.google.com) | custom / search | -| [bing.com/chat](https://bing.com/chat) | GPT-4/3.5 | -| [italygpt.it](https://italygpt.it) | GPT-3.5 | -| [deepai.org](https://deepai.org/chat) | GPT-3.5 / chat support | - - -## Best sites - -#### gpt-4 - -- [`/forefront`](gpt4free/forefront/README.md) - -#### gpt-3.5 - -- [`/you`](gpt4free/you/README.md) - -## Install - -Download or clone this GitHub repo -install requirements with: - -```sh -python3 -m venv venv -. venv/bin/activate -pip3 install -r requirements.txt -``` - -## Install ffmpeg -```sh -sudo apt-get install ffmpeg -``` - -## Connect VPN if needed and get proxy (Optional) -```sh -echo "$http_proxy" # http://127.0.0.1:8889/ -``` - -## Set proxy in gpt4free/you/__init__.py (Optional) -``` -diff --git a/gpt4free/you/__init__.py b/gpt4free/you/__init__.py -index 11847fb..59d1162 100644 ---- a/gpt4free/you/__init__.py -+++ b/gpt4free/you/__init__.py -@@ -38,6 +38,7 @@ class Completion: - if chat is None: - chat = [] - -+ proxy = '127.0.0.1:8889' - proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else {} - - client = Session(client_identifier='chrome_108') -``` - - -## To start gpt4free GUI - -##### Note: streamlit app collects heavy analytics even when running locally. This includes events for every page load, form submission including metadata on queries (like length), browser and client information including host ips. These are all transmitted to a 3rd party analytics group, Segment.com. - -Move `streamlit_app.py` from `./gui` to the base folder then run: -`streamlit run streamlit_app.py` or `python3 -m streamlit run streamlit_app.py` - -```sh -cp gui/streamlit_app.py . -streamlit run streamlit_app.py -``` - - -## Docker - -Build - -``` -docker build -t gpt4free:latest . -``` - -Run - -``` -docker run -p 8501:8501 gpt4free:latest -``` - -## Deploy using docker-compose - -Run the following: - -``` -docker-compose up --build -d -``` - -## ChatGPT clone - -> Currently implementing new features and trying to scale it, please be patient it may be unstable -> https://chat.g4f.ai/chat -> This site was developed by me and includes **gpt-4/3.5**, **internet access** and **gpt-jailbreak's** like DAN -> Run locally here: https://github.com/xtekky/chatgpt-clone - -## Copyright: - -This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt) - -Most code, with the exception of `quora/api.py` and `deepai/__init__.py` (by [ading2210](https://github.com/ading2210)), has been written by me, [xtekky](https://github.com/xtekky). - -### Copyright Notice: - -``` -xtekky/gpt4free: multiple reverse engineered language-model api's to decentralise the ai industry. 
-Copyright (C) 2023 xtekky - -This program is free software: you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation, either version 3 of the License, or -(at your option) any later version. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program. If not, see . -``` - - -## Star History - - - Star History Chart - diff --git a/spaces/123aa/pastel-mix/README.md b/spaces/123aa/pastel-mix/README.md deleted file mode 100644 index 4981943047fe93097234dd49c6d2477cc80d3a50..0000000000000000000000000000000000000000 --- a/spaces/123aa/pastel-mix/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pastel Mix -emoji: 🏢 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -duplicated_from: akhaliq/pastel-mix ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Abaqus 6.11 Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Abaqus 6.11 Torrent.md deleted file mode 100644 index 7c86e7761a1f3e3b5f194fa61e065c6ad862e42c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Abaqus 6.11 Torrent.md +++ /dev/null @@ -1,133 +0,0 @@ -
-
- Benefits: highlight the features and advantages of Abaqus 6.11
- Risks: warn about the potential dangers and legal issues of downloading torrents
- H2: How to download Abaqus 6.11 torrent safely and legally
  - Requirements: list the software and hardware needed to run Abaqus 6.11
  - Sources: recommend some reliable and trustworthy websites that offer Abaqus 6.11 torrent
  - Steps: provide a step-by-step guide on how to download and install Abaqus 6.11 torrent
- H2: How to use Abaqus 6.11 for your simulation needs
  - Overview: give a brief overview of the user interface and the main tools of Abaqus 6.11
  - Examples: show some practical examples of how to use Abaqus 6.11 for different types of simulations
  - Tips: share some tips and tricks on how to optimize your simulation results and performance
- H2: Conclusion and FAQs
  - Summary: summarize the main points of the article
  - Call to action: encourage the reader to try Abaqus 6.11 for themselves
  - FAQs: answer some common questions that the reader might have about Abaqus 6.11

**Table 2: Article with HTML formatting**

```html

What is Abaqus 6.11 and why you might want to download it

-

Abaqus is a powerful software suite that allows you to perform realistic simulations of physical phenomena, such as structural mechanics, fluid dynamics, heat transfer, acoustics, and more. It is widely used by engineers, researchers, and students in various fields and industries, such as aerospace, automotive, biomedical, civil, energy, and manufacturing.

-

abaqus 6.11 torrent


DOWNLOAD: https://byltly.com/2uKzag



-

Abaqus 6.11 is a version of the software that was released in 2011. It offers many improvements and enhancements over the previous versions, such as:

- -

If you are interested in using Abaqus 6.11 for your simulation needs, you might be tempted to download it from a torrent website. However, before you do that, you should be aware of the risks involved.

-

Downloading commercial software such as Abaqus through torrents is not only illegal but also risky. You might end up downloading a corrupted or infected file that could harm your computer or compromise your data. You might also face legal consequences if you are caught violating the intellectual property rights of the software developer.

-

DS SIMULIA Suite 2023 Free Download[^1^]
-Dassault Systemes DS SIMULIA Suite (Abaqus/Isight/Fe-safe/Tosca) x64 for Windows & Linux[^1^]
-SIMULIA delivers realistic simulation applications[^1^]
-SIMULIA Suite applications accelerate evaluating materials and products' performance, reliability, and safety[^1^]
-Aerospace & Defense manufacturers and suppliers use SIMULIA solutions[^1^]
-SIMULIA Suite delivers robust simulation structures and fluids technology[^1^]
-Modeling, simulation, and visualization technology are fully integrated into the 3DEXPERIENCE Platform[^1^]
-SIMULIA offers Abaqus Unified FEA solutions for predicting structure strength and deformations in linear and nonlinear regimes[^1^]
-Dassault Systemes SIMULIA applications, including Abaqus, fe-safe, Insight, Tosca, Simple, Simpack, and Simulation Lifecycle Management[^1^]
-SIMULIA applications accelerate the process of evaluating the performance, reliability, and safety of materials and products before committing to physical prototypes[^1^]
-System Requirements and Technical Details for DS SIMULIA Suite[^1^]
-simulia abaqus 6.12.1 | SolidTorrents[^2^]
-simulia abaqus 6.12.1 data.dat - 17.7 MB[^2^]
-simulia abaqus 6.12.1.zip - 2.81 MB[^2^]
-Tracker Seeder Leecher for simulia abaqus 6.12.1[^2^]
-Similar Torrents for simulia abaqus 6.12.1[^2^]
-DS.SIMULIA.Suite.2022.Win64-SSQ Other/DiskImage[^2^]
-Simulia Abaqus 6.14.1 Portable.zip Other/Archive[^2^]
-Dassault.Systemes.SIMULIA.Suite.2021.HF9.x64 Other/DiskImage[^2^]
-DS SIMULIA CST STUDIO SUITE 2022.04 SP4 (x64)[^2^]
-DS SIMULIA Suite Abaqus 2023 x64 Other[^2^]
-Solving Complex Problems for Structures and Bridges using ABAQUS Finite Element Package Other/Document[^2^]
-ABAQUS_6.14-1_x64_Win_SSQ Other[^2^]
-DS.SIMULIA.Suite.2021.HF5.Update.Only.Win.Linux-SSQ Other/DiskImage[^2^]
-Abaqus 6.11 for Catia V5-6R2012 x86 Other/DiskImage[^2^]
-DS SIMULIA CST STUDIO SUITE 2023.01 SP1 (x64) Other[^2^]
-Stream Abaqus 6.11 Torrent from Sumpchiscirdzu - SoundCloud[^3^]
-Play Abaqus 6.11 Torrent from Sumpchiscirdzu on SoundCloud desktop and mobile[^3^]
-abaqus 6.11 torrent download free full version
-abaqus 6.11 torrent crack serial keygen
-abaqus 6.11 torrent installation guide
-abaqus 6.11 torrent license file
-abaqus 6.11 torrent user manual
-abaqus 6.11 torrent tutorial pdf
-abaqus 6.11 torrent video training
-abaqus 6.11 torrent online course
-abaqus 6.11 torrent review ratings
-abaqus 6.11 torrent comparison with other software
-abaqus 6.11 torrent features and benefits
-abaqus 6.11 torrent system requirements
-abaqus 6.11 torrent technical support
-abaqus 6.11 torrent latest updates
-abaqus 6.11 torrent best practices
-abaqus 6.11 torrent tips and tricks
-abaqus 6.11 torrent case studies
-abaqus 6.11 torrent testimonials
-abaqus 6.11 torrent alternatives
-abaqus 6.11 torrent discounts and coupons
-abaqus 6.11 torrent free trial

-

Therefore, if you want to download Abaqus 6.11 torrent safely and legally, you should follow the instructions below.

-

How to download Abaqus 6.11 torrent safely and legally

-

To download Abaqus 6.11 torrent safely and legally, you will need the following:

- -

Once you have these requirements ready, you can proceed with the following steps:

-
    -
1. Connect to a VPN server that matches your location or preference
2. Open your torrent client and copy the magnet link of Abaqus 6.11 torrent from one of these websites:
   - FileCR
   - SolidTorrents
   - Wixsite
3. Paste the magnet link into your torrent client and start downloading Abaqus 6.11 torrent
4. Wait until the download is complete and verify the integrity of the file
5. Run the setup file and follow the instructions to install Abaqus 6.11 on your computer
6. Activate your license using your credentials from Dassault Systemes SIMULIA Corp.
7. Enjoy using Abaqus 6.11 for your simulation needs
-

How to use Abaqus 6.11 for your simulation needs

-

Abaqus 6.11 is a comprehensive software suite that consists of several applications, such as:

- -

To use Abaqus 6.11 for your simulation needs, you will need to follow these general steps:

-
    -
1. Launch Abaqus/CAE from your desktop or start menu
2. Create a new model or open an existing one from a file or database
3. Define the geometry, material properties, boundary conditions, loads, interactions, etc. of your model using the tools available in Abaqus/CAE
4. Select the appropriate solver (Abaqus/Standard or Abaqus/Explicit) and submit your simulation job to run on your computer or on a remote server
5. Monitor the progress and status of your simulation job using Abaqus/CAE or Abaqus/Viewer
6. Analyze the simulation results using Abaqus/CAE or Abaqus/Viewer
7. Create reports or export data using Abaqus/CAE or Abaqus/Viewer
-

Examples of how to use Abaqus 6.11 for different types of simulations

-

Example 1: Structural analysis of a beam under bending load

-

In this example, we will use Abaqus/CAE to create a simple model of a beam under bending load and perform a linear static analysis using Abaqus/Standard.

-
    -
1. Create a new model in Abaqus/CAE by clicking on File > New Model Database...
   ![Create new model](https://i.imgur.com/xZ9Xn8f.png)
2. Create a part representing the beam by clicking on Part > Create... Select "3D deformable" as type and "Solid" as base feature.
   ![Create part](https://i.imgur.com/XyZwq4L.png)
3. In the Sketcher window, draw a rectangle with dimensions 10 m x 0.2 m x 0.1 m using the Create Lines tool.
   ![Draw rectangle](https://i.imgur.com/c7ZlQgF.png)
4. In the Part module toolbar, click on Done to exit Sketcher mode.
   ![Exit Sketcher](https://i.imgur.com/tWtYy5x.png)
5. Create a material representing steel by clicking on Property > Material > Create... Enter "Steel" as name and assign density (7850 kg/m3), elastic modulus (200 GPa), Poisson's ratio (0.3), etc.
   ![Create material](https://i.imgur.com/MvXjJQk.png)
6. Create a section representing the beam cross-section by clicking on Property > Section > Create... Enter "Beam" as name and select "Solid" as category.
   ![Create section](https://i.imgur.com/zV7cL9v.png)
7. In the Edit Section dialog box, select "Steel" as material assignment.
   ![Assign material](https://i.imgur.com/YuqPm8c.png)
8. Assign the section to the beam part by clicking on Property > Section Assignment... Select "Beam" as section name.
   ![Assign section](https://i.imgur.com/QoGKb5l.png)
9. Create an assembly containing only one instance of the beam part by clicking on Assembly > Instance... Select "Dependent" as type and "Beam" as part name.
   ![Create instance](https://i.imgur.com/0yXwQa9.png)
10. Create a datum plane at the mid-span of the beam by clicking on Assembly > Datum > Plane... Select "Offset from plane" as type and enter 5 m as distance.
    ![Create datum plane](https://i.imgur.com/4uq8V1M.png)
11. Create a reference point at the center of the datum plane by clicking on Assembly > Reference Point... Select "Datum plane" as type and select the datum plane.
    ![Create reference point](https://i.imgur.com/0yXwQa9.png)
12. Create a step representing the bending load by clicking on Step > Create... Enter "Bending" as name and select "Static, General" as procedure type.
    ![Create step](https://i.imgur.com/6LZfYn7.png)
13. Create a load representing the bending load by clicking on Load > Create... Enter "Bending" as name and select "Concentrated force" as category. Select the reference point as region and enter -1000 N as magnitude in the CF2 direction.
    ![Create load](https://i.imgur.com/7ZLzg3O.png)
14. Create boundary conditions representing the fixed supports at the ends of the beam by clicking on Boundary Condition > Create... Enter "Fixed" as name and select "Encastre" as type. Select the two end faces of the beam as region.
    ![Create boundary conditions](https://i.imgur.com/8kZlK1c.png)
15. Create a mesh for the beam part by clicking on Mesh > Part... Select "Beam" as part name and "Linear open section beam" as element type. Enter 20 as approximate size.
    ![Create mesh](https://i.imgur.com/7nqWp8o.png)
16. Create a job for the analysis by clicking on Job > Manager... Enter "Beam_bending" as name and select "Model-1" as model.
    ![Create job](https://i.imgur.com/6Fm3j5d.png)
17. Submit the job for execution by clicking on Job > Submit...
    ![Submit job](https://i.imgur.com/2xJ4bRv.png)
18. Monitor the progress and status of the job by clicking on Job > Monitor...
    ![Monitor job](https://i.imgur.com/6Fm3j5d.png)
19. Analyze the simulation results by clicking on Visualization > ODB Display...
    ![Analyze results](https://i.imgur.com/6Fm3j5d.png)
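
If you prefer to drive Abaqus/CAE from a script instead of clicking through the GUI, the same model can also be set up with the Abaqus Scripting Interface (Python). The sketch below is only an illustrative, untested outline of the walkthrough above: the geometry, material values, and job name are taken from the listed steps, while the load and boundary-condition selections are left as comments because they depend on how the faces and points are picked.

```python
# Illustrative Abaqus Scripting Interface sketch of the beam-bending setup above.
# Assumptions: run inside Abaqus/CAE (File > Run Script...), SI units (m, N, Pa).
from abaqus import mdb
from abaqusConstants import THREE_D, DEFORMABLE_BODY, ON
import regionToolset

model = mdb.models['Model-1']

# Geometry: a 10 m x 0.2 m x 0.1 m solid beam
profile = model.ConstrainedSketch(name='beam_profile', sheetSize=20.0)
profile.rectangle(point1=(0.0, 0.0), point2=(10.0, 0.2))
beam = model.Part(name='Beam', dimensionality=THREE_D, type=DEFORMABLE_BODY)
beam.BaseSolidExtrude(sketch=profile, depth=0.1)

# Material and section (steel: 7850 kg/m3, E = 200 GPa, nu = 0.3)
steel = model.Material(name='Steel')
steel.Density(table=((7850.0,),))
steel.Elastic(table=((200.0e9, 0.3),))
model.HomogeneousSolidSection(name='BeamSection', material='Steel', thickness=None)
beam.SectionAssignment(region=regionToolset.Region(cells=beam.cells),
                       sectionName='BeamSection')

# Assembly and static step
assembly = model.rootAssembly
assembly.Instance(name='Beam-1', part=beam, dependent=ON)
model.StaticStep(name='Bending', previous='Initial')

# Load and boundary conditions would go here, e.g. a -1000 N concentrated force
# at a mid-span reference point and encastre supports on the two end faces;
# the face/point selections (findAt calls) depend on the actual geometry.

# Mesh and job
beam.seedPart(size=0.05)
beam.generateMesh()
job = mdb.Job(name='Beam_bending', model='Model-1')
job.submit()
job.waitForCompletion()
```

Submitting from a script like this corresponds to creating and submitting the job in the walkthrough; the results can then be inspected in the Visualization module just as in the final step.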
-

Example 2: Thermal analysis of a beam under heat flux

-

In this example, we will use Abaqus/CAE to create a simple model of a beam under heat flux and perform a steady-state thermal analysis using Abaqus/Standard.

-
    -
1. Create a new model in Abaqus/CAE by clicking on File > New Model Database...

0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Duplicate Photo Finder Professional 5.22 Crack Portable License Key High Quality.md b/spaces/1gistliPinn/ChatGPT4/Examples/Duplicate Photo Finder Professional 5.22 Crack Portable License Key High Quality.md deleted file mode 100644 index 6fda633883112839c526a479372d4db11715e3d7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Duplicate Photo Finder Professional 5.22 Crack Portable License Key High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Duplicate Photo Finder Professional 5.22 Crack Portable License key


    Download Zip ⚹⚹⚹ https://imgfil.com/2uxXFq



    -
-Crack Download. Duplicate Photo Finder License Key is a powerful, accessible duplicate-image removal utility. The application lies ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020.md b/spaces/1gistliPinn/ChatGPT4/Examples/FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020.md deleted file mode 100644 index cc03fece5d3bedac191768b4776cbebb8f7f68d1..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020: A Review

    ". Do you want me to change or improve it in any way?

    -

    FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020


    DOWNLOAD ———>>> https://imgfil.com/2uxZDI



    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/1line/AutoGPT/autogpt/agent/__init__.py b/spaces/1line/AutoGPT/autogpt/agent/__init__.py deleted file mode 100644 index e928af2205b1c52d19dc89ec4246e8c1d2c20e3f..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/agent/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from autogpt.agent.agent import Agent -from autogpt.agent.agent_manager import AgentManager - -__all__ = ["Agent", "AgentManager"] diff --git a/spaces/1phancelerku/anime-remove-background/Download Gold Digger FRVR Mod APK and Get Unlimited Gems Coins and Stars.md b/spaces/1phancelerku/anime-remove-background/Download Gold Digger FRVR Mod APK and Get Unlimited Gems Coins and Stars.md deleted file mode 100644 index e4d45598da3919aa70841caa9d74b9294683ad56..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Gold Digger FRVR Mod APK and Get Unlimited Gems Coins and Stars.md +++ /dev/null @@ -1,72 +0,0 @@ -
    -

    Gold Digger FRVR Mod APK Unlimited Stars: A Guide for Mining Fans

    -

    If you are a fan of mining games, you might have heard of Gold Digger FRVR, a 2D mining game from the FRVR developers. In this game, you must dig underground and attempt to find hidden gems and precious metals. You can also buy upgrades for your miner and tools, build your own house, and explore an infinite mine full of treasures, dangers, and puzzles. But what if you want to enjoy the game without any limitations or interruptions? That's where Gold Digger FRVR mod apk comes in handy. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, download and installation steps, and tips and tricks. So, let's get started!

    -

    gold digger frvr mod apk unlimited stars


Download: https://jinyurl.com/2uNQ9i



    -

    What is Gold Digger FRVR?

    -

    Gold Digger FRVR is a casual arcade mining game that can be played on various platforms, such as web browser, Facebook, Google Play Store, App Store, and Samsung Instant. The game was released in March 2019 by Chris Benjaminsen, the founder of FRVR, a company that specializes in creating HTML5 games that work across devices. The game has received positive reviews from players and critics alike, with an average rating of 4.8/5 on the FRVR website and 9.2/10 on CrazyGames. The game has also been featured on Metacritic and Google Play Store.

    -

    In Gold Digger FRVR, you play as a miner who wants to realize his mining dreams and become a speleology tycoon. You have to use your pickaxe to dig through the rocks and find gold nuggets, diamonds, fossils, and other valuable items. You can also match three or more rocks of the same color to blast them and get more gold. You can sell your loot at Joe's shop and use the money to buy new equipment, such as helmets, gloves, boots, drills, dynamites, etc. You can also upgrade your skills by using blue star tokens that you earn by digging deeper and discovering new rocks. You can also build your own house by buying furniture and items from the home decor shop.

    -

    Why use Gold Digger FRVR mod apk?

    -

Gold Digger FRVR is a fun and addictive game that can keep you entertained for hours. However, it also has some drawbacks that might affect your gaming experience. For example, you might run out of stars, coins, diamonds, or gems that are needed to buy upgrades or items. You might also get annoyed by the ads that pop up every now and then. In the game, you can match more than three rocks of the same color in a row or column, and you can also use special rocks such as rainbow rocks, bomb rocks, or magnet rocks to create bigger explosions and get more rewards.

    -

    Explore every corner of the cave and find hidden treasures and fossils

    -

    Another tip to play Gold Digger FRVR is to explore every corner of the cave and find hidden treasures and fossils. You can find chests, keys, maps, scrolls, and other items that can give you extra gold, stars, diamonds, or gems. You can also find fossils of dinosaurs, mammoths, sabertooths, and other ancient creatures that can be sold for a high price at Joe's shop. You can also collect them and display them in your house as trophies.

    -

    Buy upgrades for your miner and tools at Joe's shop

    -

    One of the most important aspects of Gold Digger FRVR is to buy upgrades for your miner and tools at Joe's shop. You can use the coins that you earn by selling your loot to buy new helmets, gloves, boots, drills, dynamites, etc. that can improve your mining abilities and skills. You can also use the stars that you earn by digging deeper and discovering new rocks to buy new pickaxes that can break more rocks in one hit. You can also use the diamonds that you earn by finding rare items to buy special items such as jetpacks, magnets, lasers, etc. that can give you an edge in the game.

    -

    Build and decorate your own house with furniture and items from the home decor shop

    -

    The last tip to play Gold Digger FRVR is to build and decorate your own house with furniture and items from the home decor shop. You can use the gems that you earn by matching three or more rocks of the same color to buy new furniture and items such as sofas, tables, chairs, lamps, paintings, etc. that can make your house look cozy and stylish. You can also use the fossils that you find in the cave to decorate your house with ancient artifacts. You can also invite your friends to visit your house and show off your achievements.

    -

    gold digger frvr hack unlimited money and gems
    -gold digger frvr cheats no ads free purchase
    -gold digger frvr mod apk download latest version
    -gold digger frvr mine puzzle hack 100000 diamonds
    -gold digger frvr unlimited all fixes bugs
    -gold digger frvr mod apk android 1
    -gold digger frvr game hack increased speed
    -gold digger frvr codes for free coins
    -gold digger frvr mod apk revdl
    -gold digger frvr how to get gems easily
    -gold digger frvr unlimited shopping unlocked
    -gold digger frvr mod apk rexdl
    -gold digger frvr games hack no root
    -gold digger frvr cheats youtube video
    -gold digger frvr mod apk happymod
    -gold digger frvr mine puzzle mod apk 2.8.6
    -gold digger frvr hack online generator
    -gold digger frvr cheats reddit forum
    -gold digger frvr mod apk 2023 update
    -gold digger frvr how to get star coins xp
    -gold digger frvr unlimited levels unlocked
    -gold digger frvr mod apk apkpure
    -gold digger frvr games hack ios iphone ipad
    -gold digger frvr cheats discord server
    -gold digger frvr mod apk 2.8.2 latest version
    -gold digger frvr hack tool download free
    -gold digger frvr cheats quora answers
    -gold digger frvr mod apk obb data file
    -gold digger frvr games hack pc windows mac
    -gold digger frvr cheats facebook group
    -gold digger frvr mod apk offline play mode
    -gold digger frvr hack apk mirror link
    -gold digger frvr cheats pinterest pin
    -gold digger frvr mod apk unlimited everything
    -gold digger frvr games hack bluestacks emulator
    -gold digger frvr cheats telegram channel
    -gold digger frvr mod apk no verification survey
    -gold digger frvr hack safe secure tested
    -gold digger frvr cheats instagram post story
    -gold digger frvr mod apk vip premium features

    -

    Conclusion

    -

    Gold Digger FRVR is a fun and addictive mining game that can keep you entertained for hours. However, if you want to enjoy the game without any limitations or interruptions, you might want to use Gold Digger FRVR mod apk, a modified version of the game that offers you unlimited resources, no ads, free purchases, bug fixes, and performance improvements. You can also hack 100000 diamonds and unlimited all in the game and customize it as you wish. To get Gold Digger FRVR mod apk on your device, you just need to download the apk file from a trusted source, enable unknown sources on your device settings, install the apk file and launch the game. Then, you can follow our tips and tricks to master the game and become a speleology tycoon. So, what are you waiting for? Download Gold Digger FRVR mod apk today and start digging!

    -

    FAQs

    -

    Here are some frequently asked questions about Gold Digger FRVR mod apk:

    -

    Is Gold Digger FRVR mod apk safe to use?

    -

    Yes, Gold Digger FRVR mod apk is safe to use as long as you download it from a trusted source. However, we recommend that you scan the apk file with an antivirus software before installing it on your device.

    -

    Is Gold Digger FRVR mod apk legal to use?

    -

    No, Gold Digger FRVR mod apk is not legal to use as it violates the terms and conditions of the original game. Therefore, we do not endorse or promote its use. Use it at your own risk.

    -

    Will Gold Digger FRVR mod apk work on my device?

    -

    Gold Digger FRVR mod apk should work on most devices that support Android 4.4 or higher. However, some devices might not be compatible with the mod apk due to different specifications or settings. Therefore, we suggest that you check the compatibility of your device before downloading and installing the mod apk.

    -

    Can I play Gold Digger FRVR mod apk online with other players?

    -

    No, Gold Digger FRVR mod apk is an offline game that does not require an internet connection to play. Therefore, you cannot play it online with other players.

    -

    Can I update Gold Digger FRVR mod apk to the latest version?

    -

    No, Gold Digger FRVR mod apk is not compatible with the latest version of the original game. Therefore, you cannot update it to the latest version. If you want to play the latest version of the game, you have to uninstall the mod apk and install the original game from the official source.

    -

    I hope this article has helped you learn more about Gold Digger FRVR mod apk and how to use it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy digging!

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Subway Surfers Mod APK v2 31.0 Terbaru 2022 and Unlock All Characters and Boards.md b/spaces/1phancelerku/anime-remove-background/Download Subway Surfers Mod APK v2 31.0 Terbaru 2022 and Unlock All Characters and Boards.md deleted file mode 100644 index f7f3c82a54a97fb723fbd48315e74be0b836726f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Subway Surfers Mod APK v2 31.0 Terbaru 2022 and Unlock All Characters and Boards.md +++ /dev/null @@ -1,105 +0,0 @@ - -

    Download Game Subway Surfers Mod Apk v2 31.0 Terbaru 2022

    -

    Are you looking for a fun and exciting game to play on your Android device? Do you want to enjoy unlimited coins, keys, and other resources in the game? If yes, then you should download game subway surfers mod apk v2 31.0 terbaru 2022. This is the latest version of the popular endless runner game that has millions of fans around the world. In this article, we will tell you everything you need to know about subway surfers, subway surfers mod apk, and why you should play it in 2022.

    -

    download game subway surfers mod apk v2 31.0 terbaru 2022


    DOWNLOAD ->->->-> https://jinyurl.com/2uNLwT



    -

    What is Subway Surfers?

    -

    Subway Surfers is an endless running game developed by Kiloo and SYBO Games. Like most games from this genre, the players only need to concern themselves with obstacle avoidance and collecting items. The game is set in various cities around the world, where the players control a group of young graffiti artists who run away from the police on their hoverboards. The game has colorful graphics, smooth animations, and catchy music that make it appealing to players of all ages.

    -

    Gameplay

    -

    The gameplay of subway surfers is simple and intuitive. The players swipe left or right to change lanes, swipe up to jump, swipe down to roll, and tap to use power-ups. The game has various obstacles such as trains, barriers, signs, tunnels, and more that the players have to avoid or jump over. The game also has coins, keys, magnets, jetpacks, hoverboards, and other items that the players can collect or use to enhance their performance. The game ends when the player crashes into an obstacle or gets caught by the police.

    -

    Features

    -

    Subway Surfers has many features that make it fun and engaging. Some of these features are:

    - -

    Characters

    -

    Subway Surfers has a diverse and colorful cast of characters that the players can choose from. Each character has a unique personality, style, and backstory. Some of the main characters are:

    - -

    What is Subway Surfers Mod

    What is Subway Surfers Mod Apk?

    -

    Subway Surfers Mod Apk is a modified version of the original game that gives the players access to unlimited coins, keys, power-ups, hoverboards, characters, outfits, and more. With Subway Surfers Mod Apk, the players can enjoy the game without any limitations or restrictions. They can unlock and customize their favorite characters, buy and upgrade their hoverboards, use various power-ups to boost their speed and score, and explore different cities with ease.

    -

    download subway surfers mod apk unlimited money and keys v2 31.0
    -subway surfers apk mod v2 31.0 latest version free download
    -how to download subway surfers mod apk v2 31.0 for android
    -subway surfers mod apk v2 31.0 new update 2022 download
    -download game subway surfers hack mod apk v2 31.0 terbaru
    -subway surfers mod apk v2 31.0 all characters unlocked download
    -subway surfers apk mod v2 31.0 offline download for pc
    -download subway surfers mod apk v2 31.0 unlimited coins and keys
    -subway surfers mod apk v2 31.0 mega mod download android
    -download game subway surfers cheat mod apk v2 31.0 terbaru
    -subway surfers mod apk v2 31.0 no ads download free
    -subway surfers apk mod v2 31.0 online multiplayer download
    -download subway surfers mod apk v2 31.0 with unlimited everything
    -subway surfers mod apk v2 31.0 high score hack download
    -download game subway surfers premium mod apk v2 31.0 terbaru
    -subway surfers mod apk v2 31.0 unlocked all boards and skins download
    -subway surfers apk mod v2 31.0 world tour download latest version
    -download subway surfers mod apk v2 31.0 anti ban and no root
    -subway surfers mod apk v2 31.0 unlimited hoverboards and boosters download
    -download game subway surfers pro mod apk v2 31.0 terbaru
    -subway surfers mod apk v2 31.0 god mode and invincible download
    -subway surfers apk mod v2 31.0 hd graphics and sound download
    -download subway surfers mod apk v2 31.0 with all missions completed
    -subway surfers mod apk v2 31.0 unlimited lives and time download
    -download game subway surfers super mod apk v2 31.0 terbaru

    -

    Benefits of Subway Surfers Mod Apk

    -

    Some of the benefits of using Subway Surfers Mod Apk are:

    - -

    How to Download and Install Subway Surfers Mod Apk v2 31.0

    -

    If you want to download game subway surfers mod apk v2 31.0 terbaru 2022, you need to follow these simple steps:

    -
      -
1. Click on the link below to download the mod apk file.
2. Allow unknown sources in your device settings to install apps from third-party sources.
3. Locate and tap on the downloaded file to start the installation process.
4. Wait for a few seconds until the installation is complete.
5. Launch the game and enjoy unlimited resources and features.
    -

    Precautions and Risks of Using Subway Surfers Mod Apk

    -

    While Subway Surfers Mod Apk can be fun and convenient, it also comes with some precautions and risks that you should be aware of. Some of these are:

    - -

    Why You Should Play Subway Surfers in 2022

    -

    Subway Surfers is not just a game, it is a phenomenon that has been entertaining millions of players for almost a decade. The game has been constantly updated and improved with new features, events, and content that keep it fresh and exciting. Here are some reasons why you should play subway surfers in 2022:

    -

    New Updates and Events

    -

    Subway Surfers never gets old because it always has something new to offer. Every month, the game takes you to a different city with new backgrounds, music, challenges, and rewards. You can also participate in seasonal events that celebrate various festivals and occasions with themed decorations, characters, hoverboards, and more. For example, in January 2022, you can join the Winter Wonderland event in Moscow and enjoy the snowy scenery, festive outfits, and special prizes.

    -

    Fun and Addictive Gameplay

    -

    Subway Surfers is one of those games that you can play for hours without getting bored. The gameplay is simple but addictive, as you try to run as far as you can while dodging obstacles and collecting items. The game also has a lot of variety and challenge, as you encounter different types of obstacles, power-ups, hoverboards, and enemies. The game also has a lot of humor and charm, as you witness funny animations, sound effects, and dialogues from the characters.

    -

    Global Leaderboard and Achievements

    -

    Subway Surfers is not just a solo game, it is also a social game that lets you compete with other players around the world. You can connect your game account to Facebook or Google Play Games and see how you rank among your friends and other players on the global leaderboard. You can also earn achievements by completing various tasks and milestones in the game. You can also share your high scores and screenshots with your friends on social media platforms.

    -

    Conclusion

    -

    Subway Surfers is a game that deserves your attention in 2022. It is a game that combines fun, excitement, adventure, creativity, and social interaction in one package. It is a game that will keep you entertained for hours with its endless running gameplay and its amazing features. It is a game that will challenge you with its various obstacles, power-ups, and enemies. It is a game that will connect you with other players and let you show off your skills and achievements. If you want to experience the best of subway surfers, you should download game subway surfers mod apk v2 31.0 terbaru 2022 and enjoy unlimited resources and features. However, you should also be careful of the risks and precautions of using the mod apk and play responsibly.

    -

    FAQs

    -

    Here are some frequently asked questions about subway surfers and subway surfers mod apk:

    -
      -
    1. What is the latest version of subway surfers?
    2. -

      The latest version of subway surfers as of June 2023 is v2 31.0, which takes the players to Moscow for the Winter Wonderland event.

      -
    3. How can I get more coins and keys in subway surfers?
    4. -

      You can get more coins and keys in subway surfers by completing daily challenges and missions, participating in weekly hunts and seasonal events, watching ads, or buying them with real money. Alternatively, you can use subway surfers mod apk to get unlimited coins and keys for free.

      -
    5. How can I unlock new characters and outfits in subway surfers?
    6. -

      You can unlock new characters and outfits in subway surfers by collecting a certain number of tokens or letters during the weekly hunts or seasonal events, or by buying them with coins or keys. Alternatively, you can use subway surfers mod apk to unlock all characters and outfits for free.

      -
    7. How can I change the city or location in subway surfers?
    8. -

      You can change the city or location in subway surfers by updating the game every month when a new world tour destination is released. Alternatively, you can use subway surfers mod apk to access any city or location at any time.

      -
    9. Is subway surfers mod apk safe to use?
    10. -

      Subway surfers mod apk is not an official version of the game and may not be safe to use. It may contain viruses or malware that can harm your device or steal your personal information. It may also cause your game account to be banned or suspended by the developers or Google Play Store for violating their terms and conditions. Therefore, you should use subway surfers mod apk at your own risk and discretion.

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download UNO! and Join the Fun of the Mobile Community Cup..md b/spaces/1phancelerku/anime-remove-background/Download UNO! and Join the Fun of the Mobile Community Cup..md deleted file mode 100644 index e36af0667f5b63e6eec0ce7796de106f7da4f638..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download UNO! and Join the Fun of the Mobile Community Cup..md +++ /dev/null @@ -1,132 +0,0 @@ -
    -

    Uno TM Free Download: How to Play the Classic Card Game on Your Mobile Device

    -

    Do you love playing card games with your friends and family? Do you want to enjoy a fun and memorable game wherever and whenever you want? If you answered yes, then you should try Uno TM, the official mobile version of the world's most beloved card game. In this article, we will show you how to download and play Uno TM on your mobile device, as well as some tips and strategies to win the game.

    -

    What is Uno TM and Why You Should Play It

    -

    Uno TM is a card game that is played by matching and discarding cards in your hand until none are left. The game is simple to learn but challenging to master, as you have to use strategy, luck, and skill to outsmart your opponents. You can play with up to 10 players online or offline, or play solo against the computer. You can also customize your game with various house rules, themes, and tournaments.

    -

    uno tm free download


    Download 🗹 https://jinyurl.com/2uNKDy



    -

    Playing Uno TM on your mobile device has many benefits. You can play anytime, anywhere, with anyone. You don't need a physical deck of cards or a table to play. You can also enjoy new features and updates that make the game more exciting and engaging. For example, you can chat with your friends, join clubs, spin the wheel for rewards, and participate in special events.

    -

    How to Download and Install Uno TM on Your Mobile Device

    -

    Downloading and installing Uno TM on your mobile device is easy and free. Here are the steps to follow:

    -
      -
1. Go to the Google Play Store or the App Store on your device.
2. Search for "Uno TM" or "Uno Mobile" in the search bar.
3. Select the app from the list of results and tap on "Install".
4. Wait for the app to download and install on your device.
5. Open the app and sign in with your Facebook account or create a new account.
6. Enjoy playing Uno TM on your mobile device!
    -

    The requirements and compatibility of Uno TM vary depending on your device. Generally, you need a device that runs on Android 4.4 or higher or iOS 9.0 or higher. You also need a stable internet connection to play online.

    -

    uno tm free download for android
    -uno tm free download for pc
    -uno tm free download for mac
    -uno tm free download apk
    -uno tm free download ios
    -uno tm free download windows 10
    -uno tm free download online
    -uno tm free download bluestacks
    -uno tm free download google play
    -uno tm free download app store
    -uno tm free download official site
    -uno tm free download latest version
    -uno tm free download mod apk
    -uno tm free download no ads
    -uno tm free download unlimited coins
    -uno tm free download multiplayer
    -uno tm free download classic mode
    -uno tm free download wild mode
    -uno tm free download 2v2 mode
    -uno tm free download tournaments
    -uno tm free download events
    -uno tm free download rewards
    -uno tm free download clubs
    -uno tm free download gifts
    -uno tm free download chat
    -uno tm free download tips and tricks
    -uno tm free download cheats and hacks
    -uno tm free download reviews and ratings
    -uno tm free download gameplay and features
    -uno tm free download updates and news
    -uno tm free download community and support
    -uno tm free download esports and competitions
    -uno tm free download mattel163 limited
    -uno tm free download official mobile game
    -uno tm free download fun and family-friendly
    -uno tm free download card game experience
    -uno tm free download house rules and customizations
    -uno tm free download quick play and easy start
    -uno tm free download buddy up and collaborate
    -uno tm free download connect and shout UNO!
    -uno tm free download challenges and leaderboards
    -uno tm free download go wild and win big
    -uno tm free download net energy gain
    -uno tm free download mini sun experiment
    -uno tm free download fusion reactor
    -uno tm free download south korea
    -uno tm free download 100 million degrees
    -uno tm free download 30 seconds
    -uno tm free download holy grail.

    -

    How to Play Uno TM on Your Mobile Device

    -

    The basic rules and gameplay of Uno TM are similar to the classic card game. Here are the main points to remember:

    - -

    There are also some special cards that have different effects on the game. Here are some examples:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
| Card | Effect |
| --- | --- |
| Wild | Allows the player to choose the color of the next card to be played. |
| Wild Draw Four | Allows the player to choose the color of the next card to be played and forces the next player to draw four cards. |
| Draw Two | Forces the next player to draw two cards and skip their turn. |
| Skip | Skips the next player's turn. |
| Reverse | Reverses the direction of play. |
    -

    In Uno TM, you can also play with different modes and options that add more fun and challenge to the game. For example, you can play with 2v2 mode, where you team up with another player and share a hand. You can also play with Go Wild mode, where every card is a wild card. You can also play with various house rules, such as stacking, jumping in, 7-0, and bluffing.

    -

    To win Uno TM, you need to use your strategy, luck, and skill to outsmart your opponents. Here are some tips and strategies to help you:

    - -

    Conclusion

    -

    Uno TM is a great game that you can play on your mobile device anytime, anywhere, with anyone. It is easy to download and install, and it offers many features and options that make the game more exciting and engaging. It is also a game that tests your strategy, luck, and skill, and challenges you to outsmart your opponents. If you are looking for a fun and memorable game to play with your friends and family, you should try Uno TM today!

    -

    FAQs

    -

    Is Uno TM free to play?

    -

    Yes, Uno TM is free to download and play on your mobile device. However, there are some in-app purchases that you can make to enhance your gaming experience, such as buying coins, tokens, or gems.

    -

    Can I play Uno TM offline?

    -

    Yes, you can play Uno TM offline with up to three computer players. You can also play online with up to 10 players from around the world.

    -

    Can I chat with other players in Uno TM?

    -

    Yes, you can chat with other players in Uno TM by using the chat feature. You can also use emojis, stickers, or voice messages to express yourself.

    -

    Can I customize my game in Uno TM?

    -

    Yes, you can customize your game in Uno TM by choosing from various house rules, themes, and tournaments. You can also create your own rules and invite your friends to join your game.

    -

    How can I earn rewards in Uno TM?

    -

    You can earn rewards in Uno TM by playing the game regularly, spinning the wheel, completing missions, joining clubs, or participating in special events. You can use your rewards to buy more cards, themes, or items in the game.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the New and Exciting Mobile Game from Azerbaijan Create 017 APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy the New and Exciting Mobile Game from Azerbaijan Create 017 APK.md deleted file mode 100644 index 61a6df2fa2450500aa22f66b04d5e6f0bd86ed7d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the New and Exciting Mobile Game from Azerbaijan Create 017 APK.md +++ /dev/null @@ -1,117 +0,0 @@ - -

    Create 017 APK Download: How to Install and Play the New Mobile Game from Azerbaijan

    -

    If you are looking for a new and exciting mobile game to play, you might want to check out Create 017. This is a game that was developed by a team of young programmers from Azerbaijan, and it has been gaining popularity among gamers around the world. In this article, we will tell you what Create 017 is, how to download and install it on your device, and how to play it like a pro.

    -

    What is Create 017?

    -

    A brief introduction to the game and its features

    -

    Create 017 is a mobile game that combines elements of adventure, puzzle, and platformer genres. It is set in a futuristic world where you play as a hacker who has to infiltrate a secret facility and uncover its secrets. You will have to use your skills and creativity to hack various devices, solve puzzles, and avoid enemies. You will also have to explore different environments, such as a city, a forest, and a desert.

    -

    create 017 apk download


    Download Filehttps://jinyurl.com/2uNTwu



    -

    The story and the gameplay of Create 017

    -

    The game has a captivating story that will keep you hooked until the end. You will discover that the facility you are hacking is actually a project called CREATE, which stands for Creative Research Environment for Artificial Technology Evolution. This project aims to create artificial intelligence that can surpass human intelligence. However, something went wrong, and now you have to find out what happened and stop it before it's too late.

    -

    The gameplay of Create 017 is challenging and fun. You will have to use your phone as a hacking device, which can interact with various objects in the game world. You can hack cameras, doors, robots, drones, and more. You can also use your phone as a scanner, which can reveal hidden information and clues. You will have to use your logic and intuition to solve puzzles that require different types of hacking. You will also have to avoid or fight enemies that will try to stop you.

    -

    How to install create 017 apk on android
    -Create 017 apk latest version free download
    -Create 017 apk mod unlimited money and gems
    -Create 017 apk gameplay and review
    -Create 017 apk offline mode and multiplayer
    -Create 017 apk download for pc windows 10
    -Create 017 apk hack and cheats
    -Create 017 apk update and new features
    -Create 017 apk size and requirements
    -Create 017 apk best tips and tricks
    -Create 017 apk download link and qr code
    -Create 017 apk error and fix
    -Create 017 apk alternatives and similar apps
    -Create 017 apk rating and feedback
    -Create 017 apk developer and contact
    -Create 017 apk tutorial and guide
    -Create 017 apk comparison and benchmark
    -Create 017 apk awards and achievements
    -Create 017 apk news and events
    -Create 017 apk fan art and wallpapers
    -Create 017 apk fun facts and trivia
    -Create 017 apk challenges and missions
    -Create 017 apk secrets and easter eggs
    -Create 017 apk memes and jokes
    -Create 017 apk community and forum
    -Create 017 apk wiki and database
    -Create 017 apk support and faq
    -Create 017 apk beta and test version
    -Create 017 apk release date and countdown
    -Create 017 apk trailer and teaser
    -Create 017 apk genre and category
    -Create 017 apk languages and subtitles
    -Create 017 apk customization and settings
    -Create 017 apk characters and skills
    -Create 017 apk weapons and items
    -Create 017 apk maps and locations
    -Create 017 apk enemies and bosses
    -Create 017 apk modes and levels
    -Create 017 apk strategies and tactics
    -Create 017 apk codes and vouchers
    -Create 017 apk themes and sounds
    -Create 017 apk bugs and glitches
    -Create 017 apk backup and restore
    -Create 017 apk security and privacy
    -Create 017 apk compatibility and performance
    -Create 017 apk referral and invite friends
    -Create 017 apk donations and premium features
    -Create 017 apk history and versions
    -Create 017 apk source code and license

    -

    The graphics and the sound of Create 017

    -

    The game has impressive graphics that create a realistic and immersive atmosphere, with detailed lighting, shadows, textures, and animations. Dynamic weather effects such as rain, snow, fog, and wind round out the presentation. Each level has its own theme and style, so you will see a clear contrast between the futuristic cityscape and the natural landscapes.

    -

    The sound design is just as strong. Hacking, explosions, gunfire, and alarms all sound convincing, the original soundtrack matches the mood and tone of each level, and the voice acting adds personality and emotion to the characters.

    -

    How to download and install Create 017 APK?

    -

    The requirements and the compatibility of Create 017 APK

    -

    Create 017 APK is the game's Android application package, which lets you install the game directly on your device without going through an app store, so you can get the game without store restrictions. Before you download and install Create 017 APK, however, make sure your device meets the following requirements:

    -
-

How to play Create 017?

-

The controls and the interface of Create 017

-

Create 017 has simple, intuitive controls. Swipe on the screen to move your character and look around, and use the on-screen buttons for actions such as jumping, crouching, hacking, scanning, and shooting. You can also customize the controls to your preference in the settings menu.

-

The game also has a user-friendly interface that shows you the important information and options: your health bar, ammo count, hacking progress, and scanner results appear at the top of the screen, a map with your location and objectives sits at the bottom left, and a menu at the top right lets you pause, resume, save, load, quit, or change settings.

-

The tips and the tricks to master Create 017

-

Create 017 is a game that requires skill and strategy to complete. Here are some tips and tricks that can help you master the game:

- -

The challenges and the rewards of Create 017

-

Create 017 is a game that offers many challenges and rewards for players who want to test their skills and have fun. Here are some of the challenges and rewards that you can expect from the game:

- -

Conclusion

-

A summary of the main points and a call to action

-

Create 017 is a mobile game you should definitely try if you are looking for a new and exciting gaming experience. It offers a captivating story, challenging gameplay, impressive graphics, strong sound design, and user-friendly controls, along with plenty of features, options, content, and rewards to keep you entertained for hours. Follow the steps in this article to download and install Create 017 APK on your device, and use the tips and tricks we have shared to play like a pro. So what are you waiting for? Download Create 017 APK now and start hacking!

-

FAQs

-

Here are some of the frequently asked questions about Create 017:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/unet_2d_blocks.py b/spaces/1toTree/lora_test/ppdiffusers/models/unet_2d_blocks.py deleted file mode 100644 index 534e5148b1eb044e82fca8eec7ce404a8a922557..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/models/unet_2d_blocks.py +++ /dev/null @@ -1,2223 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import numpy as np -import paddle -from paddle import nn -from paddle.distributed.fleet.utils import recompute - -from .attention import AttentionBlock, DualTransformer2DModel, Transformer2DModel -from .cross_attention import CrossAttention, CrossAttnAddedKVProcessor -from .resnet import ( - Downsample2D, - FirDownsample2D, - FirUpsample2D, - ResnetBlock2D, - Upsample2D, -) - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlock2D": - return DownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "ResnetDownsampleBlock2D": - return ResnetDownsampleBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "AttnDownBlock2D": - return AttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "CrossAttnDownBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D") - return CrossAttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - 
resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "SimpleCrossAttnDownBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for SimpleCrossAttnDownBlock2D") - return SimpleCrossAttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "SkipDownBlock2D": - return SkipDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "AttnSkipDownBlock2D": - return AttnSkipDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "DownEncoderBlock2D": - return DownEncoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "AttnDownEncoderBlock2D": - return AttnDownEncoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{down_block_type} does not exist.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlock2D": - return UpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) 
- elif up_block_type == "ResnetUpsampleBlock2D": - return ResnetUpsampleBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "CrossAttnUpBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock2D") - return CrossAttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "SimpleCrossAttnUpBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for SimpleCrossAttnUpBlock2D") - return SimpleCrossAttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "AttnUpBlock2D": - return AttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "SkipUpBlock2D": - return SkipUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "AttnSkipUpBlock2D": - return AttnSkipUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "UpDecoderBlock2D": - return UpDecoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "AttnUpDecoderBlock2D": - return AttnUpDecoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_upsample=add_upsample, - 
resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - attn_num_head_channels=attn_num_head_channels, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{up_block_type} does not exist.") - - -class UNetMidBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - add_attention: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - ): - super().__init__() - - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - self.add_attention = add_attention - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if self.add_attention: - attentions.append( - AttentionBlock( - in_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - else: - attentions.append(None) - - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - def forward(self, hidden_states, temb=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - if attn is not None: - hidden_states = attn(hidden_states) - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class UNetMidBlock2DCrossAttn(nn.Layer): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, 
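                        # The two positional arguments above are the attention head count and the
                        # per-head channel width (in_channels // attn_num_head_channels); each
                        # mid-block attention entry is a single-layer spatial transformer that
                        # cross-attends to the encoder hidden states.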
- cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - # TODO(Patrick, William) - attention_mask is currently not used. Implement once used - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class UNetMidBlock2DSimpleCrossAttn(nn.Layer): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - ): - super().__init__() - - self.has_cross_attention = True - - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - self.num_heads = in_channels // self.attn_num_head_channels - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - attentions.append( - CrossAttention( - query_dim=in_channels, - cross_attention_dim=in_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=CrossAttnAddedKVProcessor(), - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - def set_attention_slice(self, slice_size): - head_dims = self.attn_num_head_channels - head_dims = [head_dims] if isinstance(head_dims, int) else head_dims - if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims): - raise 
ValueError( - f"Make sure slice_size {slice_size} is a common divisor of " - f"the number of heads used in cross_attention: {head_dims}" - ) - if slice_size is not None and slice_size > min(head_dims): - raise ValueError( - f"slice_size {slice_size} has to be smaller or equal to " - f"the lowest number of heads used in cross_attention: min({head_dims}) = {min(head_dims)}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - # resnet - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class AttnDownBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class CrossAttnDownBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - 
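        # use_linear_projection: use nn.Linear instead of 1x1 convolutions for the
        # transformer's input/output projections; only_cross_attention: make the
        # transformer blocks attend only to the encoder hidden states;
        # upcast_attention: compute the attention softmax in float32 for stability.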
only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - # TODO(Patrick, William) - attention mask is not used - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict)[0] # move [0] - else: - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - hidden_states = recompute( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - cross_attention_kwargs, - ) # [0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - 
time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownEncoderBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb=None) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states - - -class AttnDownEncoderBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - 
norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - Downsample2D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states): - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb=None) - hidden_states = attn(hidden_states) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states - - -class AttnSkipDownBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=np.sqrt(2.0), - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - self.attentions = nn.LayerList([]) - self.resnets = nn.LayerList([]) - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - self.resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(in_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - self.attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - if add_downsample: - self.resnet_down = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - down=True, - kernel="fir", - ) - self.downsamplers = nn.LayerList([FirDownsample2D(out_channels, out_channels=out_channels)]) - self.skip_conv = nn.Conv2D(3, out_channels, kernel_size=(1, 1), stride=(1, 1)) - else: - self.resnet_down = None - self.downsamplers = None - self.skip_conv = None - - def forward(self, hidden_states, temb=None, skip_sample=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - output_states += (hidden_states,) - - if self.downsamplers is not None: - hidden_states = self.resnet_down(hidden_states, temb) - for downsampler in self.downsamplers: - skip_sample = downsampler(skip_sample) - - hidden_states = self.skip_conv(skip_sample) + hidden_states - - output_states += (hidden_states,) - - return hidden_states, output_states, skip_sample - - -class SkipDownBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - output_scale_factor=np.sqrt(2.0), - add_downsample=True, - downsample_padding=1, - ): - 
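        # SkipDownBlock2D stacks plain ResNet stages and, when downsampling, also
        # shrinks a separate "skip_sample" image with FIR filtering and folds it back
        # into the feature maps through a 1x1 skip_conv, following the skip-connection
        # scheme of score-based (NCSN++-style) UNets.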
super().__init__() - self.resnets = nn.LayerList([]) - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - self.resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(in_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - if add_downsample: - self.resnet_down = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - down=True, - kernel="fir", - ) - self.downsamplers = nn.LayerList([FirDownsample2D(out_channels, out_channels=out_channels)]) - self.skip_conv = nn.Conv2D(3, out_channels, kernel_size=(1, 1), stride=(1, 1)) - else: - self.resnet_down = None - self.downsamplers = None - self.skip_conv = None - - def forward(self, hidden_states, temb=None, skip_sample=None): - output_states = () - - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb) - output_states += (hidden_states,) - - if self.downsamplers is not None: - hidden_states = self.resnet_down(hidden_states, temb) - for downsampler in self.downsamplers: - skip_sample = downsampler(skip_sample) - - hidden_states = self.skip_conv(skip_sample) + hidden_states - - output_states += (hidden_states,) - - return hidden_states, output_states, skip_sample - - -class ResnetDownsampleBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - down=True, - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - 
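            # Collect every intermediate hidden state; the matching up block consumes
            # these later as skip connections.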
output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states, temb) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class SimpleCrossAttnDownBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_downsample=True, - ): - super().__init__() - - self.has_cross_attention = True - - resnets = [] - attentions = [] - - self.attn_num_head_channels = attn_num_head_channels - self.num_heads = out_channels // self.attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - CrossAttention( - query_dim=out_channels, - cross_attention_dim=out_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=CrossAttnAddedKVProcessor(), - ) - ) - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_downsample: - self.downsamplers = nn.LayerList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - down=True, - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - output_states = () - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - - for resnet, attn in zip(self.resnets, self.attentions): - # resnet - hidden_states = resnet(hidden_states, temb) - - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states, temb) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class AttnUpBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - res_skip_channels = 
in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class CrossAttnUpBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_upsample: - 
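            # A single Upsample2D (nearest-neighbour upsampling followed by a 3x3
            # convolution) doubles the spatial resolution at the end of the block.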
self.upsamplers = nn.LayerList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - cross_attention_kwargs=None, - upsample_size=None, - attention_mask=None, - ): - # TODO(Patrick, William) - attention mask is not used - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict)[0] # move [0] - else: - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - hidden_states = recompute( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - cross_attention_kwargs, - ) # [0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = 
upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpDecoderBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb=None) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class AttnUpDecoderBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=resnet_groups, - ) - ) - - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states): - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb=None) - hidden_states = attn(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class AttnSkipUpBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=np.sqrt(2.0), - upsample_padding=1, - add_upsample=True, - ): - super().__init__() - self.attentions = nn.LayerList([]) - self.resnets = nn.LayerList([]) - - for i 
in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - self.resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(resnet_in_channels + res_skip_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels) - if add_upsample: - self.resnet_up = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - up=True, - kernel="fir", - ) - self.skip_conv = nn.Conv2D(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.skip_norm = nn.GroupNorm( - num_groups=min(out_channels // 4, 32), num_channels=out_channels, epsilon=resnet_eps - ) - self.act = nn.Silu() - else: - self.resnet_up = None - self.skip_conv = None - self.skip_norm = None - self.act = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - hidden_states = resnet(hidden_states, temb) - - hidden_states = self.attentions[0](hidden_states) - - if skip_sample is not None: - skip_sample = self.upsampler(skip_sample) - else: - skip_sample = 0 - - if self.resnet_up is not None: - skip_sample_states = self.skip_norm(hidden_states) - skip_sample_states = self.act(skip_sample_states) - skip_sample_states = self.skip_conv(skip_sample_states) - - skip_sample = skip_sample + skip_sample_states - - hidden_states = self.resnet_up(hidden_states, temb) - - return hidden_states, skip_sample - - -class SkipUpBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - output_scale_factor=np.sqrt(2.0), - add_upsample=True, - upsample_padding=1, - ): - super().__init__() - self.resnets = nn.LayerList([]) - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - self.resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min((resnet_in_channels + res_skip_channels) // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - 
non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels) - if add_upsample: - self.resnet_up = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_in_shortcut=True, - up=True, - kernel="fir", - ) - self.skip_conv = nn.Conv2D(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.skip_norm = nn.GroupNorm( - num_groups=min(out_channels // 4, 32), num_channels=out_channels, epsilon=resnet_eps - ) - self.act = nn.Silu() - else: - self.resnet_up = None - self.skip_conv = None - self.skip_norm = None - self.act = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - - hidden_states = resnet(hidden_states, temb) - - if skip_sample is not None: - skip_sample = self.upsampler(skip_sample) - else: - skip_sample = 0 - - if self.resnet_up is not None: - skip_sample_states = self.skip_norm(hidden_states) - skip_sample_states = self.act(skip_sample_states) - skip_sample_states = self.skip_conv(skip_sample_states) - - skip_sample = skip_sample + skip_sample_states - - hidden_states = self.resnet_up(hidden_states, temb) - - return hidden_states, skip_sample - - -class ResnetUpsampleBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - up=True, - ) - ] - ) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = 
res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = recompute(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, temb) - - return hidden_states - - -class SimpleCrossAttnUpBlock2D(nn.Layer): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - self.num_heads = out_channels // self.attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - CrossAttention( - query_dim=out_channels, - cross_attention_dim=out_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=CrossAttnAddedKVProcessor(), - ) - ) - self.attentions = nn.LayerList(attentions) - self.resnets = nn.LayerList(resnets) - - if add_upsample: - self.upsamplers = nn.LayerList( - [ - ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - up=True, - ) - ] - ) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - upsample_size=None, - attention_mask=None, - cross_attention_kwargs=None, - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - for resnet, attn in zip(self.resnets, self.attentions): - # resnet - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1) - - hidden_states = resnet(hidden_states, temb) - - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = 
upsampler(hidden_states, temb) - - return hidden_states diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/visqol.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/visqol.py deleted file mode 100644 index 44f4b0a2c3c6c726857db8386491823dd85dde51..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/visqol.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import json -import logging -from pathlib import Path -import tempfile -import typing as tp -import subprocess -import shutil - -import torch -import torchaudio - -logger = logging.getLogger(__name__) - - -class ViSQOL: - """ViSQOL wrapper to run ViSQOL from Python using a pre-installed binary. - - To learn more about ViSQOL and how to build ViSQOL binary using bazel, please refer to the - instructions available in the open source repository: https://github.com/google/visqol - - ViSQOL is capable of running in two modes: - - Audio Mode: - When running in audio mode, input signals must have a 48kHz sample rate. Input should be resampled to 48kHz. - Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison. - Audio mode uses support vector regression, with the maximum range at ~4.75. - - Speech Mode: - When running in speech mode, ViSQOL uses a wideband model. It therefore expects input sample rates of 16kHz. - Input should be resampled to 16kHz. - As part of the speech mode processing, a root mean square implementation for voice activity detection - is performed on the reference signal to determine what parts of the signal have voice activity and - should therefore be included in the comparison. The signal is normalized before performing the voice - activity detection. - Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison. - Speech mode is scaled to have a maximum MOS of 5.0 to match previous version behavior. - - For more details, check the guidelines: https://github.com/google/visqol#general-guidelines-for-input - - Args: - visqol_bin (str): Path to the ViSQOL binary. - mode (str): ViSQOL computation mode, expecting "audio" or "speech". - model (str): Name of the model to use for similarity to quality model. - debug (bool): Whether to also get debug metrics from ViSQOL or not. - """ - SAMPLE_RATES_MODES = {"audio": 48_000, "speech": 16_000} - ALLOWED_SAMPLE_RATES = frozenset(SAMPLE_RATES_MODES.values()) - - def __init__(self, bin: tp.Union[Path, str], mode: str = "audio", - model: str = "libsvm_nu_svr_model.txt", debug: bool = False): - assert bin is not None and Path(bin).exists(), f"Could not find ViSQOL binary in specified path: {bin}" - self.visqol_bin = str(bin) - self.visqol_mode = mode - self.target_sr = self._get_target_sr(self.visqol_mode) - self.model = model - self.debug = debug - assert Path(self.visqol_model).exists(), \ - f"Could not find the specified model in ViSQOL install: {self.visqol_model}" - - def _get_target_sr(self, mode: str) -> int: - # returns target sampling rate for the corresponding ViSQOL mode. - if mode not in ViSQOL.SAMPLE_RATES_MODES: - raise ValueError( - f"Unsupported mode! 
Allowed are: {', '.join(ViSQOL.SAMPLE_RATES_MODES.keys())}" - ) - return ViSQOL.SAMPLE_RATES_MODES[mode] - - def _prepare_files( - self, ref_sig: torch.Tensor, deg_sig: torch.Tensor, sr: int, target_sr: int, pad_with_silence: bool = False - ): - # prepare files for ViSQOL evaluation. - assert target_sr in ViSQOL.ALLOWED_SAMPLE_RATES - assert len(ref_sig) == len(deg_sig), ( - "Expects same number of ref and degraded inputs", - f" but ref len {len(ref_sig)} != deg len {len(deg_sig)}" - ) - # resample audio if needed - if sr != target_sr: - transform = torchaudio.transforms.Resample(sr, target_sr) - pad = int(0.5 * target_sr) - rs_ref = [] - rs_deg = [] - for i in range(len(ref_sig)): - rs_ref_i = transform(ref_sig[i]) - rs_deg_i = transform(deg_sig[i]) - if pad_with_silence: - rs_ref_i = torch.nn.functional.pad(rs_ref_i, (pad, pad), mode='constant', value=0) - rs_deg_i = torch.nn.functional.pad(rs_deg_i, (pad, pad), mode='constant', value=0) - rs_ref.append(rs_ref_i) - rs_deg.append(rs_deg_i) - ref_sig = torch.stack(rs_ref) - deg_sig = torch.stack(rs_deg) - # save audio chunks to tmp dir and create csv - tmp_dir = Path(tempfile.mkdtemp()) - try: - tmp_input_csv_path = tmp_dir / "input.csv" - tmp_results_csv_path = tmp_dir / "results.csv" - tmp_debug_json_path = tmp_dir / "debug.json" - with open(tmp_input_csv_path, "w") as csv_file: - csv_writer = csv.writer(csv_file) - csv_writer.writerow(["reference", "degraded"]) - for i in range(len(ref_sig)): - tmp_ref_filename = tmp_dir / f"ref_{i}.wav" - tmp_deg_filename = tmp_dir / f"deg_{i}.wav" - torchaudio.save( - tmp_ref_filename, - torch.clamp(ref_sig[i], min=-0.99, max=0.99), - sample_rate=target_sr, - bits_per_sample=16, - encoding="PCM_S" - ) - torchaudio.save( - tmp_deg_filename, - torch.clamp(deg_sig[i], min=-0.99, max=0.99), - sample_rate=target_sr, - bits_per_sample=16, - encoding="PCM_S" - ) - csv_writer.writerow([str(tmp_ref_filename), str(tmp_deg_filename)]) - return tmp_dir, tmp_input_csv_path, tmp_results_csv_path, tmp_debug_json_path - except Exception as e: - logger.error("Exception occurred when preparing files for ViSQOL: %s", e) - return tmp_dir, None, None, None - - def _flush_files(self, tmp_dir: tp.Union[Path, str]): - # flush tmp files used to compute ViSQOL. - shutil.rmtree(str(tmp_dir)) - - def _collect_moslqo_score(self, results_csv_path: tp.Union[Path, str]) -> float: - # collect results for each evaluated pair and return averaged moslqo score. - with open(results_csv_path, "r") as csv_file: - reader = csv.DictReader(csv_file) - moslqo_scores = [float(row["moslqo"]) for row in reader] - if len(moslqo_scores) > 0: - return sum(moslqo_scores) / len(moslqo_scores) - else: - return 0.0 - - def _collect_debug_data(self, debug_json_path: tp.Union[Path, str]) -> dict: - # collect debug data for the visqol inference. 
- with open(debug_json_path, "r") as f: - data = json.load(f) - return data - - @property - def visqol_model(self): - return f'{self.visqol_bin}/model/{self.model}' - - def _run_visqol( - self, - input_csv_path: tp.Union[Path, str], - results_csv_path: tp.Union[Path, str], - debug_csv_path: tp.Optional[tp.Union[Path, str]], - ): - input_csv_path = str(input_csv_path) - results_csv_path = str(results_csv_path) - debug_csv_path = str(debug_csv_path) - cmd = [ - f'{self.visqol_bin}/bazel-bin/visqol', - '--batch_input_csv', f'{input_csv_path}', - '--results_csv', f'{results_csv_path}' - ] - if debug_csv_path is not None: - cmd += ['--output_debug', f'{debug_csv_path}'] - if self.visqol_mode == "speech": - cmd += ['--use_speech_mode'] - cmd += ['--similarity_to_quality_model', f'{self.visqol_model}'] - result = subprocess.run(cmd, capture_output=True) - if result.returncode: - logger.error("Error with visqol: \n %s \n %s", result.stdout.decode(), result.stderr.decode()) - raise RuntimeError("Error while executing visqol") - result.check_returncode() - - def __call__( - self, - ref_sig: torch.Tensor, - deg_sig: torch.Tensor, - sr: int, - pad_with_silence: bool = False, - ): - """Calculate the ViSQOL metric for a pair of audio signals at a given sample rate. - Args: - ref_sig (torch.Tensor): Reference signals as [B, C, T]. - deg_sig (torch.Tensor): Degraded signals as [B, C, T]. - sr (int): Sample rate of the two audio signals. - pad_with_silence (bool): Whether to pad the file with silences as recommended - in visqol guidelines (see: https://github.com/google/visqol#general-guidelines-for-input). - Returns: - float: The ViSQOL score or mean score for the batch. - """ - logger.debug(f"Calculating visqol with mode={self.visqol_mode} on {len(ref_sig)} samples") - tmp_dir, input_csv, results_csv, debug_json = self._prepare_files( - ref_sig, deg_sig, sr, self.target_sr, pad_with_silence - ) - try: - if input_csv and results_csv: - self._run_visqol( - input_csv, - results_csv, - debug_json if self.debug else None, - ) - mosqol = self._collect_moslqo_score(results_csv) - return mosqol - else: - raise RuntimeError("Something unexpected happened when running VISQOL!") - except Exception as e: - logger.error("Exception occurred when running ViSQOL: %s", e) - finally: - self._flush_files(tmp_dir) diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/train_t2m_trans.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/train_t2m_trans.py deleted file mode 100644 index 8da444f87aa7ca71cd8bc3604868cf30a6c70e02..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/train_t2m_trans.py +++ /dev/null @@ -1,191 +0,0 @@ -import os -import torch -import numpy as np - -from torch.utils.tensorboard import SummaryWriter -from os.path import join as pjoin -from torch.distributions import Categorical -import json -import clip - -import options.option_transformer as option_trans -import models.vqvae as vqvae -import utils.utils_model as utils_model -import utils.eval_trans as eval_trans -from dataset import dataset_TM_train -from dataset import dataset_TM_eval -from dataset import dataset_tokenize -import models.t2m_trans as trans -from options.get_eval_option import get_opt -from models.evaluator_wrapper import EvaluatorModelWrapper -import warnings -warnings.filterwarnings('ignore') - -##### ---- Exp dirs ---- ##### -args = option_trans.get_args_parser() -torch.manual_seed(args.seed) - -args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}') -args.vq_dir= 
os.path.join("./dataset/KIT-ML" if args.dataname == 'kit' else "./dataset/HumanML3D", f'{args.vq_name}') -os.makedirs(args.out_dir, exist_ok = True) -os.makedirs(args.vq_dir, exist_ok = True) - -##### ---- Logger ---- ##### -logger = utils_model.get_logger(args.out_dir) -writer = SummaryWriter(args.out_dir) -logger.info(json.dumps(vars(args), indent=4, sort_keys=True)) - -##### ---- Dataloader ---- ##### -train_loader_token = dataset_tokenize.DATALoader(args.dataname, 1, unit_length=2**args.down_t) - -from utils.word_vectorizer import WordVectorizer -w_vectorizer = WordVectorizer('./glove', 'our_vab') -val_loader = dataset_TM_eval.DATALoader(args.dataname, False, 32, w_vectorizer) - -dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt' - -wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda')) -eval_wrapper = EvaluatorModelWrapper(wrapper_opt) - -##### ---- Network ---- ##### -clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip') # Must set jit=False for training -clip.model.convert_weights(clip_model) # Actually this line is unnecessary since clip by default already on float16 -clip_model.eval() -for p in clip_model.parameters(): - p.requires_grad = False - -net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers - args.nb_code, - args.code_dim, - args.output_emb_width, - args.down_t, - args.stride_t, - args.width, - args.depth, - args.dilation_growth_rate) - - -trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code, - embed_dim=args.embed_dim_gpt, - clip_dim=args.clip_dim, - block_size=args.block_size, - num_layers=args.num_layers, - n_head=args.n_head_gpt, - drop_out_rate=args.drop_out_rate, - fc_rate=args.ff_rate) - - -print ('loading checkpoint from {}'.format(args.resume_pth)) -ckpt = torch.load(args.resume_pth, map_location='cpu') -net.load_state_dict(ckpt['net'], strict=True) -net.eval() -net.cuda() - -if args.resume_trans is not None: - print ('loading transformer checkpoint from {}'.format(args.resume_trans)) - ckpt = torch.load(args.resume_trans, map_location='cpu') - trans_encoder.load_state_dict(ckpt['trans'], strict=True) -trans_encoder.train() -trans_encoder.cuda() - -##### ---- Optimizer & Scheduler ---- ##### -optimizer = utils_model.initial_optim(args.decay_option, args.lr, args.weight_decay, trans_encoder, args.optimizer) -scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=args.lr_scheduler, gamma=args.gamma) - -##### ---- Optimization goals ---- ##### -loss_ce = torch.nn.CrossEntropyLoss() - -nb_iter, avg_loss_cls, avg_acc = 0, 0., 0. 
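-# Bookkeeping for the training loop: right_num accumulates the number of correctly predicted motion tokens and -# nb_sample_train the number of tokens evaluated between two logging intervals, which together give the running accuracy. -# The "get code" stage below runs the frozen VQ-VAE encoder over every training motion and caches the discrete token -# indices as .npy files under args.vq_dir. The transformer is then trained on these cached tokens, conditioned on CLIP -# text features; ground-truth input tokens are kept with probability args.pkeep (or a random per-batch probability when -# pkeep == -1) and otherwise replaced by random code indices drawn from [0, args.nb_code).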
-right_num = 0 -nb_sample_train = 0 - -##### ---- get code ---- ##### -for batch in train_loader_token: - pose, name = batch - bs, seq = pose.shape[0], pose.shape[1] - - pose = pose.cuda().float() # bs, nb_joints, joints_dim, seq_len - target = net.encode(pose) - target = target.cpu().numpy() - np.save(pjoin(args.vq_dir, name[0] +'.npy'), target) - - -train_loader = dataset_TM_train.DATALoader(args.dataname, args.batch_size, args.nb_code, args.vq_name, unit_length=2**args.down_t) -train_loader_iter = dataset_TM_train.cycle(train_loader) - - -##### ---- Training ---- ##### -best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_transformer(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, clip_model=clip_model, eval_wrapper=eval_wrapper) -while nb_iter <= args.total_iter: - - batch = next(train_loader_iter) - clip_text, m_tokens, m_tokens_len = batch - m_tokens, m_tokens_len = m_tokens.cuda(), m_tokens_len.cuda() - bs = m_tokens.shape[0] - target = m_tokens # (bs, 26) - target = target.cuda() - - text = clip.tokenize(clip_text, truncate=True).cuda() - - feat_clip_text = clip_model.encode_text(text).float() - - input_index = target[:,:-1] - - if args.pkeep == -1: - proba = np.random.rand(1)[0] - mask = torch.bernoulli(proba * torch.ones(input_index.shape, - device=input_index.device)) - else: - mask = torch.bernoulli(args.pkeep * torch.ones(input_index.shape, - device=input_index.device)) - mask = mask.round().to(dtype=torch.int64) - r_indices = torch.randint_like(input_index, args.nb_code) - a_indices = mask*input_index+(1-mask)*r_indices - - cls_pred = trans_encoder(a_indices, feat_clip_text) - cls_pred = cls_pred.contiguous() - - loss_cls = 0.0 - for i in range(bs): - # loss function (26), (26, 513) - loss_cls += loss_ce(cls_pred[i][:m_tokens_len[i] + 1], target[i][:m_tokens_len[i] + 1]) / bs - - # Accuracy - probs = torch.softmax(cls_pred[i][:m_tokens_len[i] + 1], dim=-1) - - if args.if_maxtest: - _, cls_pred_index = torch.max(probs, dim=-1) - - else: - dist = Categorical(probs) - cls_pred_index = dist.sample() - right_num += (cls_pred_index.flatten(0) == target[i][:m_tokens_len[i] + 1].flatten(0)).sum().item() - - ## global loss - optimizer.zero_grad() - loss_cls.backward() - optimizer.step() - scheduler.step() - - avg_loss_cls = avg_loss_cls + loss_cls.item() - nb_sample_train = nb_sample_train + (m_tokens_len + 1).sum().item() - - nb_iter += 1 - if nb_iter % args.print_iter == 0 : - avg_loss_cls = avg_loss_cls / args.print_iter - avg_acc = right_num * 100 / nb_sample_train - writer.add_scalar('./Loss/train', avg_loss_cls, nb_iter) - writer.add_scalar('./ACC/train', avg_acc, nb_iter) - msg = f"Train. Iter {nb_iter} : Loss. {avg_loss_cls:.5f}, ACC. {avg_acc:.4f}" - logger.info(msg) - avg_loss_cls = 0. - right_num = 0 - nb_sample_train = 0 - - if nb_iter % args.eval_iter == 0: - best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_transformer(args.out_dir, val_loader, net, trans_encoder, logger, writer, nb_iter, best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, clip_model=clip_model, eval_wrapper=eval_wrapper) - - if nb_iter == args.total_iter: - msg_final = f"Train. Iter {best_iter} : FID. {best_fid:.5f}, Diversity. {best_div:.4f}, TOP1. {best_top1:.4f}, TOP2. {best_top2:.4f}, TOP3. 
{best_top3:.4f}" - logger.info(msg_final) - break \ No newline at end of file diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/font.py b/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/font.py deleted file mode 100644 index 5ac530d7b949f50314a0d9cf5d744bedcace0571..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/font.py +++ /dev/null @@ -1,272 +0,0 @@ -"""Font texture loader and processor. - -Author: Matthew Matl -""" -import freetype -import numpy as np -import os - -import OpenGL -from OpenGL.GL import * - -from .constants import TextAlign, FLOAT_SZ -from .texture import Texture -from .sampler import Sampler - - -class FontCache(object): - """A cache for fonts. - """ - - def __init__(self, font_dir=None): - self._font_cache = {} - self.font_dir = font_dir - if self.font_dir is None: - base_dir, _ = os.path.split(os.path.realpath(__file__)) - self.font_dir = os.path.join(base_dir, 'fonts') - - def get_font(self, font_name, font_pt): - # If it's a file, load it directly, else, try to load from font dir. - if os.path.isfile(font_name): - font_filename = font_name - _, font_name = os.path.split(font_name) - font_name, _ = os.path.split(font_name) - else: - font_filename = os.path.join(self.font_dir, font_name) + '.ttf' - - cid = OpenGL.contextdata.getContext() - key = (cid, font_name, int(font_pt)) - - if key not in self._font_cache: - self._font_cache[key] = Font(font_filename, font_pt) - return self._font_cache[key] - - def clear(self): - for key in self._font_cache: - self._font_cache[key].delete() - self._font_cache = {} - - -class Character(object): - """A single character, with its texture and attributes. - """ - - def __init__(self, texture, size, bearing, advance): - self.texture = texture - self.size = size - self.bearing = bearing - self.advance = advance - - -class Font(object): - """A font object. - - Parameters - ---------- - font_file : str - The file to load the font from. - font_pt : int - The height of the font in pixels. - """ - - def __init__(self, font_file, font_pt=40): - self.font_file = font_file - self.font_pt = int(font_pt) - self._face = freetype.Face(font_file) - self._face.set_pixel_sizes(0, font_pt) - self._character_map = {} - - for i in range(0, 128): - - # Generate texture - face = self._face - face.load_char(chr(i)) - buf = face.glyph.bitmap.buffer - src = (np.array(buf) / 255.0).astype(np.float32) - src = src.reshape((face.glyph.bitmap.rows, - face.glyph.bitmap.width)) - tex = Texture( - sampler=Sampler( - magFilter=GL_LINEAR, - minFilter=GL_LINEAR, - wrapS=GL_CLAMP_TO_EDGE, - wrapT=GL_CLAMP_TO_EDGE - ), - source=src, - source_channels='R', - ) - character = Character( - texture=tex, - size=np.array([face.glyph.bitmap.width, - face.glyph.bitmap.rows]), - bearing=np.array([face.glyph.bitmap_left, - face.glyph.bitmap_top]), - advance=face.glyph.advance.x - ) - self._character_map[chr(i)] = character - - self._vbo = None - self._vao = None - - @property - def font_file(self): - """str : The file the font was loaded from. - """ - return self._font_file - - @font_file.setter - def font_file(self, value): - self._font_file = value - - @property - def font_pt(self): - """int : The height of the font in pixels. 
- """ - return self._font_pt - - @font_pt.setter - def font_pt(self, value): - self._font_pt = int(value) - - def _add_to_context(self): - - self._vao = glGenVertexArrays(1) - glBindVertexArray(self._vao) - self._vbo = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self._vbo) - glBufferData(GL_ARRAY_BUFFER, FLOAT_SZ * 6 * 4, None, GL_DYNAMIC_DRAW) - glEnableVertexAttribArray(0) - glVertexAttribPointer( - 0, 4, GL_FLOAT, GL_FALSE, 4 * FLOAT_SZ, ctypes.c_void_p(0) - ) - glBindVertexArray(0) - - glPixelStorei(GL_UNPACK_ALIGNMENT, 1) - for c in self._character_map: - ch = self._character_map[c] - if not ch.texture._in_context(): - ch.texture._add_to_context() - - def _remove_from_context(self): - for c in self._character_map: - ch = self._character_map[c] - ch.texture.delete() - if self._vao is not None: - glDeleteVertexArrays(1, [self._vao]) - glDeleteBuffers(1, [self._vbo]) - self._vao = None - self._vbo = None - - def _in_context(self): - return self._vao is not None - - def _bind(self): - glBindVertexArray(self._vao) - - def _unbind(self): - glBindVertexArray(0) - - def delete(self): - self._unbind() - self._remove_from_context() - - def render_string(self, text, x, y, scale=1.0, - align=TextAlign.BOTTOM_LEFT): - """Render a string to the current view buffer. - - Note - ---- - Assumes correct shader program already bound w/ uniforms set. - - Parameters - ---------- - text : str - The text to render. - x : int - Horizontal pixel location of text. - y : int - Vertical pixel location of text. - scale : int - Scaling factor for text. - align : int - One of the TextAlign options which specifies where the ``x`` - and ``y`` parameters lie on the text. For example, - :attr:`.TextAlign.BOTTOM_LEFT` means that ``x`` and ``y`` indicate - the position of the bottom-left corner of the textbox. 
- """ - glActiveTexture(GL_TEXTURE0) - glEnable(GL_BLEND) - glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) - glDisable(GL_DEPTH_TEST) - glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) - self._bind() - - # Determine width and height of text relative to x, y - width = 0.0 - height = 0.0 - for c in text: - ch = self._character_map[c] - height = max(height, ch.bearing[1] * scale) - width += (ch.advance >> 6) * scale - - # Determine offsets based on alignments - xoff = 0 - yoff = 0 - if align == TextAlign.BOTTOM_RIGHT: - xoff = -width - elif align == TextAlign.BOTTOM_CENTER: - xoff = -width / 2.0 - elif align == TextAlign.TOP_LEFT: - yoff = -height - elif align == TextAlign.TOP_RIGHT: - yoff = -height - xoff = -width - elif align == TextAlign.TOP_CENTER: - yoff = -height - xoff = -width / 2.0 - elif align == TextAlign.CENTER: - xoff = -width / 2.0 - yoff = -height / 2.0 - elif align == TextAlign.CENTER_LEFT: - yoff = -height / 2.0 - elif align == TextAlign.CENTER_RIGHT: - xoff = -width - yoff = -height / 2.0 - - x += xoff - y += yoff - - ch = None - for c in text: - ch = self._character_map[c] - xpos = x + ch.bearing[0] * scale - ypos = y - (ch.size[1] - ch.bearing[1]) * scale - w = ch.size[0] * scale - h = ch.size[1] * scale - - vertices = np.array([ - [xpos, ypos, 0.0, 0.0], - [xpos + w, ypos, 1.0, 0.0], - [xpos + w, ypos + h, 1.0, 1.0], - [xpos + w, ypos + h, 1.0, 1.0], - [xpos, ypos + h, 0.0, 1.0], - [xpos, ypos, 0.0, 0.0], - ], dtype=np.float32) - - ch.texture._bind() - - glBindBuffer(GL_ARRAY_BUFFER, self._vbo) - glBufferData( - GL_ARRAY_BUFFER, FLOAT_SZ * 6 * 4, vertices, GL_DYNAMIC_DRAW - ) - # TODO MAKE THIS MORE EFFICIENT, lgBufferSubData is broken - # glBufferSubData( - # GL_ARRAY_BUFFER, 0, 6 * 4 * FLOAT_SZ, - # np.ascontiguousarray(vertices.flatten) - # ) - glDrawArrays(GL_TRIANGLES, 0, 6) - x += (ch.advance >> 6) * scale - - self._unbind() - if ch: - ch.texture._unbind() diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/lr_scheduler.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/lr_scheduler.py deleted file mode 100644 index b46e3f0397634bcf48a6a61ab041a7ea07577eb3..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/lr_scheduler.py +++ /dev/null @@ -1,128 +0,0 @@ -import math -import torch - - -class ExponentialDecayScheduler(torch.optim.lr_scheduler._LRScheduler): - - def __init__(self, optimizer, total_iters, final_lrs, - warmup_iters=3000, last_epoch=-1, verbose=False): - self.total_iters = total_iters - self.final_lrs = final_lrs - if not isinstance(self.final_lrs, list) and not isinstance( - self.final_lrs, tuple): - self.final_lrs = [self.final_lrs] * len(optimizer.param_groups) - self.warmup_iters = warmup_iters - self.bases = [0.0,] * len(optimizer.param_groups) - super().__init__(optimizer, last_epoch, verbose) - for i, (base_lr, final_lr) in enumerate(zip(self.base_lrs, self.final_lrs)): - base = (final_lr / base_lr) ** (1 / ( - self.total_iters - self.warmup_iters)) - self.bases[i] = base - - def _get_closed_form_lr(self): - warmup_coeff = 1.0 - current_iter = self._step_count - if current_iter < self.warmup_iters: - warmup_coeff = current_iter / self.warmup_iters - current_lrs = [] - # if not self.linear_warmup: - # for base_lr, final_lr, base in zip(self.base_lrs, self.final_lrs, self.bases): - # # current_lr = warmup_coeff * base_lr * math.exp(((current_iter - self.warmup_iters) / self.total_iters) * math.log(final_lr / base_lr)) - # current_lr = warmup_coeff * base_lr * (base ** 
(current_iter - self.warmup_iters)) - # current_lrs.append(current_lr) - # else: - for base_lr, final_lr, base in zip(self.base_lrs, self.final_lrs, - self.bases): - if current_iter <= self.warmup_iters: - current_lr = warmup_coeff * base_lr - else: - # current_lr = warmup_coeff * base_lr * math.exp(((current_iter - self.warmup_iters) / self.total_iters) * math.log(final_lr / base_lr)) - current_lr = base_lr * (base ** (current_iter - self.warmup_iters)) - current_lrs.append(current_lr) - return current_lrs - - def get_lr(self): - return self._get_closed_form_lr() - - -class NoamScheduler(torch.optim.lr_scheduler._LRScheduler): - - def __init__(self, optimizer, model_size=512, factor=1, warmup_iters=3000, - last_epoch=-1, verbose=False): - self.model_size = model_size - self.warmup_iters = warmup_iters - # self.factors = [group["lr"] / (self.model_size ** (-0.5) * self.warmup_iters ** (-0.5)) for group in optimizer.param_groups] - self.factor = factor - super().__init__(optimizer, last_epoch, verbose) - - def _get_closed_form_lr(self): - current_iter = self._step_count - current_lrs = [] - for _ in self.base_lrs: - current_lr = self.factor * \ - (self.model_size ** (-0.5) * min(current_iter ** (-0.5), - current_iter * self.warmup_iters ** (-1.5))) - current_lrs.append(current_lr) - return current_lrs - - def get_lr(self): - return self._get_closed_form_lr() - - -class CosineWithWarmup(torch.optim.lr_scheduler._LRScheduler): - - def __init__(self, optimizer, total_iters, warmup_iters, - num_cycles=0.5, last_epoch=-1, verbose=False): - self.total_iters = total_iters - self.warmup_iters = warmup_iters - self.num_cycles = num_cycles - super().__init__(optimizer, last_epoch, verbose) - - def lr_lambda(self, iteration): - if iteration < self.warmup_iters: - return float(iteration) / float(max(1, self.warmup_iters)) - progress = float(iteration - self.warmup_iters) / float(max(1, - self.total_iters - self.warmup_iters)) - return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float( - self.num_cycles) * 2.0 * progress))) - - def _get_closed_form_lr(self): - current_iter = self._step_count - current_lrs = [] - for base_lr in self.base_lrs: - current_lr = base_lr * self.lr_lambda(current_iter) - current_lrs.append(current_lr) - return current_lrs - - def get_lr(self): - return self._get_closed_form_lr() - - -if __name__ == "__main__": - model = torch.nn.Linear(10, 5) - optimizer = torch.optim.Adam(model.parameters(), 5e-4) - epochs = 25 - iters = 600 - scheduler = CosineWithWarmup(optimizer, 600 * 25, 600 * 5,) - # scheduler = ExponentialDecayScheduler(optimizer, 600 * 25, 5e-7, 600 * 5) - criterion = torch.nn.MSELoss() - lrs = [] - for epoch in range(1, epochs + 1): - for iteration in range(1, iters + 1): - optimizer.zero_grad() - x = torch.randn(4, 10) - y = torch.randn(4, 5) - loss = criterion(model(x), y) - loss.backward() - optimizer.step() - scheduler.step() - # print(f"lr: {scheduler.get_last_lr()}") - # lrs.append(scheduler.get_last_lr()) - lrs.append(optimizer.param_groups[0]["lr"]) - import matplotlib.pyplot as plt - plt.plot(list(range(1, len(lrs) + 1)), lrs, '-o', markersize=1) - # plt.legend(loc="best") - plt.xlabel("Iteration") - plt.ylabel("LR") - - plt.savefig("lr_curve.png", dpi=100) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/custom_ds.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/custom_ds.py deleted file mode 100644 index 
35a9a1fbebf38be92efcb59968f9342d71970051..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/custom_ds.py +++ /dev/null @@ -1,55 +0,0 @@ -dataset_type = 'CustomDataset' - -# config of data preparation -# None - -# config of pipeline -train_pipeline = [ - dict(type='LoadImageFromFile'), # load the image - dict(type='RandomResizedCrop', scale=224), # random resized crop - dict(type='RandomFlip', prob=0.5, direction='horizontal'), # random horizontal flip - dict(type='PackInputs'), # pack the image and label -] - -test_pipeline = [ - dict(type='LoadImageFromFile'), # load the image - dict(type='ResizeEdge', scale=256, edge='short'), # resize the short edge to 256px - dict(type='CenterCrop', crop_size=224), # center crop - dict(type='PackInputs'), # pack the image and label -] - -# config of dataloader -train_dataloader = dict( - batch_size=8, # batch size per GPU - num_workers=4, # number of worker threads per GPU - dataset=dict( # training dataset - type=dataset_type, - data_root='../2_preprocess_data_3000', - with_label=True, - ann_file='', - data_prefix='train', - pipeline=train_pipeline), - sampler=dict(type='DefaultSampler', shuffle=True), # default sampler - persistent_workers=True, # keep worker processes alive to shorten the preparation time of each epoch -) - -# build the validation dataloader -val_dataloader = dict( - batch_size=8, - num_workers=4, - dataset=dict( - type=dataset_type, - data_root='../2_preprocess_data_3000', - with_label=True, - ann_file='', - data_prefix='val', - pipeline=test_pipeline), - sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, -) - -# set the evaluator for the validation dataset; top-1 and top-3 accuracy are used here -val_evaluator = dict(type='Accuracy', topk=(1, 3)) - -test_dataloader = val_dataloader -test_evaluator = val_evaluator diff --git a/spaces/Abdllh/topic2poem/README.md b/spaces/Abdllh/topic2poem/README.md deleted file mode 100644 index 05648e6c852c5d9aa254acc60c33d84cb393d148..0000000000000000000000000000000000000000 --- a/spaces/Abdllh/topic2poem/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Topic2poem -emoji: 💻 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.2 -app_file: app.py -pinned: false -license: afl-3.0 -duplicated_from: aaaaaabbbbbbbdddddddduuuuulllll/topic2poem ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/privacy/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/privacy/$types.d.ts deleted file mode 100644 index 2b7cd88fb7d834df80282d97c8adb0de7a12e296..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/privacy/$types.d.ts +++ /dev/null @@ -1,15 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { } -type RouteId = '/privacy'; -type MaybeWithVoid = {} extends T ? T | void : T; -export type RequiredKeys = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T]; -type OutputDataShape = MaybeWithVoid> & Partial> & Record> -type EnsureDefined = T extends null | undefined ? {} : T; -type OptionalUnion, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? 
{ [P in Exclude]?: never } & U : never; -export type Snapshot = Kit.Snapshot; -type PageParentData = EnsureDefined; - -export type PageServerData = null; -export type PageData = Expand; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/styles/main.css b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/styles/main.css deleted file mode 100644 index 6ea57c50974dab960f23ce8440bfd576f10ddb52..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/styles/main.css +++ /dev/null @@ -1,17 +0,0 @@ -@import "./highlight-js.css"; - -@tailwind base; -@tailwind components; -@tailwind utilities; - -@layer components { - .btn { - @apply inline-flex flex-shrink-0 cursor-pointer select-none items-center justify-center whitespace-nowrap outline-none transition-all focus:ring disabled:cursor-default; - } -} - -@layer utilities { - .scrollbar-custom { - @apply scrollbar-thin scrollbar-track-transparent scrollbar-thumb-black/10 scrollbar-thumb-rounded-full scrollbar-w-1 hover:scrollbar-thumb-black/20 dark:scrollbar-thumb-white/10 dark:hover:scrollbar-thumb-white/20; - } -} diff --git a/spaces/AkitoP/umamusume_bert_vits2/transforms.py b/spaces/AkitoP/umamusume_bert_vits2/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not 
implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = 
input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Alashazam/StoryGenerator/app.py b/spaces/Alashazam/StoryGenerator/app.py deleted file mode 100644 index 94e83436fab5829acb8608747a5dd64b8b3721a2..0000000000000000000000000000000000000000 --- a/spaces/Alashazam/StoryGenerator/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import gradio as gr -from gradio import inputs -description = "Story generation with GPT-2" -interface = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator", - title = "Story Generation with GPT-2", - inputs = [ - gr.inputs.Textbox(lines=7, label="Story"), - ], - description=description, - examples=[["An adventurer is approached by a mysterious stranger in the tavern for a new quest"], - ["A skilled pilot drives a spaceship into a new quest"], - ["A wizard learns spells for a quest"] - ] -) -interface.launch() \ No newline at end of file diff --git a/spaces/Altinas/vits-uma-genshin-honkais/text/symbols.py b/spaces/Altinas/vits-uma-genshin-honkais/text/symbols.py deleted file mode 100644 index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000 --- a/spaces/Altinas/vits-uma-genshin-honkais/text/symbols.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. 
-''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim.md deleted file mode 100644 index 2e69fd672cfadaf8870a3dad108ee9535c70593e..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim.md +++ /dev/null @@ -1,88 +0,0 @@ - - -# Denoising Diffusion Implicit Models (DDIM) - -## Overview - -[Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. - -The abstract of the paper is the following: - -*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, -yet they require simulating a Markov chain for many steps to produce a sample. -To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models -with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. -We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. -We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off -computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.* - -The original codebase of this paper can be found here: [ermongroup/ddim](https://github.com/ermongroup/ddim). -For questions, feel free to contact the author on [tsong.me](https://tsong.me/). - -### Experimental: "Common Diffusion Noise Schedules and Sample Steps are Flawed": - -The paper **[Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/abs/2305.08891)** -claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. - -The abstract reads as follows: - -*We discover that common diffusion noise schedules do not enforce the last timestep to have zero signal-to-noise ratio (SNR), -and some implementations of diffusion samplers do not start from the last timestep. -Such designs are flawed and do not reflect the fact that the model is given pure Gaussian noise at inference, creating a discrepancy between training and inference. -We show that the flawed design causes real problems in existing implementations. 
-In Stable Diffusion, it severely limits the model to only generate images with medium brightness and -prevents it from generating very bright and dark samples. We propose a few simple fixes: -- (1) rescale the noise schedule to enforce zero terminal SNR; -- (2) train the model with v prediction; -- (3) change the sampler to always start from the last timestep; -- (4) rescale classifier-free guidance to prevent over-exposure. -These simple changes ensure the diffusion process is congruent between training and inference and -allow the model to generate samples more faithful to the original data distribution.* - -You can apply all of these changes in `diffusers` when using [`DDIMScheduler`]: -- (1) rescale the noise schedule to enforce zero terminal SNR; -```py -pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) -``` -- (2) train the model with v prediction; -Continue fine-tuning a checkpoint with [`train_text_to_image.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [`train_text_to_image_lora.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) -and `--prediction_type="v_prediction"`. -- (3) change the sampler to always start from the last timestep; -```py -pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") -``` -- (4) rescale classifier-free guidance to prevent over-exposure. -```py -pipe(..., guidance_rescale=0.7) -``` - -An example is to use [this checkpoint](https://huggingface.co/ptx0/pseudo-journey-v2) -which has been fine-tuned using the `"v_prediction"`. - -The checkpoint can then be run in inference as follows: - -```py -from diffusers import DiffusionPipeline, DDIMScheduler - -pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) -pipe.scheduler = DDIMScheduler.from_config( - pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" -) -pipe.to("cuda") - -prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" -image = pipeline(prompt, guidance_rescale=0.7).images[0] -``` - -## DDIMScheduler -[[autodoc]] DDIMScheduler diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py deleted file mode 100644 index 21fbc36c610a2805a9c3d63999efb176e0170149..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py +++ /dev/null @@ -1,557 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import importlib -import inspect -import os -from typing import Any, Dict, List, Optional, Union - -import flax -import numpy as np -import PIL -from flax.core.frozen_dict import FrozenDict -from huggingface_hub import snapshot_download -from PIL import Image -from tqdm.auto import tqdm - -from ..configuration_utils import ConfigMixin -from ..models.modeling_flax_utils import FLAX_WEIGHTS_NAME, FlaxModelMixin -from ..schedulers.scheduling_utils_flax import SCHEDULER_CONFIG_NAME, FlaxSchedulerMixin -from ..utils import CONFIG_NAME, DIFFUSERS_CACHE, BaseOutput, http_user_agent, is_transformers_available, logging - - -if is_transformers_available(): - from transformers import FlaxPreTrainedModel - -INDEX_FILE = "diffusion_flax_model.bin" - - -logger = logging.get_logger(__name__) - - -LOADABLE_CLASSES = { - "diffusers": { - "FlaxModelMixin": ["save_pretrained", "from_pretrained"], - "FlaxSchedulerMixin": ["save_pretrained", "from_pretrained"], - "FlaxDiffusionPipeline": ["save_pretrained", "from_pretrained"], - }, - "transformers": { - "PreTrainedTokenizer": ["save_pretrained", "from_pretrained"], - "PreTrainedTokenizerFast": ["save_pretrained", "from_pretrained"], - "FlaxPreTrainedModel": ["save_pretrained", "from_pretrained"], - "FeatureExtractionMixin": ["save_pretrained", "from_pretrained"], - "ProcessorMixin": ["save_pretrained", "from_pretrained"], - "ImageProcessingMixin": ["save_pretrained", "from_pretrained"], - }, -} - -ALL_IMPORTABLE_CLASSES = {} -for library in LOADABLE_CLASSES: - ALL_IMPORTABLE_CLASSES.update(LOADABLE_CLASSES[library]) - - -def import_flax_or_no_model(module, class_name): - try: - # 1. First make sure that if a Flax object is present, import this one - class_obj = getattr(module, "Flax" + class_name) - except AttributeError: - # 2. If this doesn't work, it's not a model and we don't append "Flax" - class_obj = getattr(module, class_name) - except AttributeError: - raise ValueError(f"Neither Flax{class_name} nor {class_name} exist in {module}") - - return class_obj - - -@flax.struct.dataclass -class FlaxImagePipelineOutput(BaseOutput): - """ - Output class for image pipelines. - - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, - num_channels)`. - """ - - images: Union[List[PIL.Image.Image], np.ndarray] - - -class FlaxDiffusionPipeline(ConfigMixin): - r""" - Base class for Flax-based pipelines. - - [`FlaxDiffusionPipeline`] stores all components (models, schedulers, and processors) for diffusion pipelines and - provides methods for loading, downloading and saving models. It also includes methods to: - - - enable/disable the progress bar for the denoising iteration - - Class attributes: - - - **config_name** ([`str`]) -- The configuration filename that stores the class and module names of all the - diffusion pipeline's components. - """ - config_name = "model_index.json" - - def register_modules(self, **kwargs): - # import it here to avoid circular import - from diffusers import pipelines - - for name, module in kwargs.items(): - if module is None: - register_dict = {name: (None, None)} - else: - # retrieve library - library = module.__module__.split(".")[0] - - # check if the module is a pipeline module - pipeline_dir = module.__module__.split(".")[-2] - path = module.__module__.split(".") - is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir) - - # if library is not in LOADABLE_CLASSES, then it is a custom module. 
- # Or if it's a pipeline module, then the module is inside the pipeline - # folder so we set the library to module name. - if library not in LOADABLE_CLASSES or is_pipeline_module: - library = pipeline_dir - - # retrieve class_name - class_name = module.__class__.__name__ - - register_dict = {name: (library, class_name)} - - # save model index config - self.register_to_config(**register_dict) - - # set models - setattr(self, name, module) - - def save_pretrained(self, save_directory: Union[str, os.PathLike], params: Union[Dict, FrozenDict]): - # TODO: handle inference_state - """ - Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its - class implements both a save and loading method. The pipeline is easily reloaded using the - [`~FlaxDiffusionPipeline.from_pretrained`] class method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. - """ - self.save_config(save_directory) - - model_index_dict = dict(self.config) - model_index_dict.pop("_class_name") - model_index_dict.pop("_diffusers_version") - model_index_dict.pop("_module", None) - - for pipeline_component_name in model_index_dict.keys(): - sub_model = getattr(self, pipeline_component_name) - if sub_model is None: - # edge case for saving a pipeline with safety_checker=None - continue - - model_cls = sub_model.__class__ - - save_method_name = None - # search for the model's base class in LOADABLE_CLASSES - for library_name, library_classes in LOADABLE_CLASSES.items(): - library = importlib.import_module(library_name) - for base_class, save_load_methods in library_classes.items(): - class_candidate = getattr(library, base_class, None) - if class_candidate is not None and issubclass(model_cls, class_candidate): - # if we found a suitable base class in LOADABLE_CLASSES then grab its save method - save_method_name = save_load_methods[0] - break - if save_method_name is not None: - break - - save_method = getattr(sub_model, save_method_name) - expects_params = "params" in set(inspect.signature(save_method).parameters.keys()) - - if expects_params: - save_method( - os.path.join(save_directory, pipeline_component_name), params=params[pipeline_component_name] - ) - else: - save_method(os.path.join(save_directory, pipeline_component_name)) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. - - The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. - - If you get the error message below, you need to finetune the weights for your downstream task: - - ``` - Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - ``` - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *repo id* (for example `runwayml/stable-diffusion-v1-5`) of a pretrained pipeline - hosted on the Hub. - - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved - using [`~FlaxDiffusionPipeline.save_pretrained`]. - dtype (`str` or `jnp.dtype`, *optional*): - Override the default `jnp.dtype` and load the model under this dtype. If `"auto"`, the dtype is - automatically derived from the model's weights. 
- force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to resume downloading the model weights and configuration files. If set to `False`, any - incompletely downloaded files are deleted. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only (`bool`, *optional*, defaults to `False`): - Whether to only load local model weights and configuration files or not. If set to `True`, the model - won't be downloaded from the Hub. - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from - `diffusers-cli login` (stored in `~/.huggingface`) is used. - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier - allowed by Git. - mirror (`str`, *optional*): - Mirror source to resolve accessibility issues if you're downloading a model in China. We do not - guarantee the timeliness or safety of the source, and you should refer to the mirror site for more - information. - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline - class. The overwritten components are passed directly to the pipelines `__init__` method. - - - - To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with - `huggingface-cli login`. You can also activate the special - [“offline-mode”](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a - firewalled environment. - - - - Examples: - - ```py - >>> from diffusers import FlaxDiffusionPipeline - - >>> # Download pipeline from huggingface.co and cache. - >>> # Requires to be logged in to Hugging Face hub, - >>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) - >>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( - ... "runwayml/stable-diffusion-v1-5", - ... revision="bf16", - ... dtype=jnp.bfloat16, - ... ) - - >>> # Download pipeline, but use a different scheduler - >>> from diffusers import FlaxDPMSolverMultistepScheduler - - >>> model_id = "runwayml/stable-diffusion-v1-5" - >>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( - ... model_id, - ... subfolder="scheduler", - ... ) - - >>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( - ... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp - ... 
) - >>> dpm_params["scheduler"] = dpmpp_state - ``` - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", False) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - from_pt = kwargs.pop("from_pt", False) - use_memory_efficient_attention = kwargs.pop("use_memory_efficient_attention", False) - dtype = kwargs.pop("dtype", None) - - # 1. Download the checkpoints and configs - # use snapshot download here to get it working from from_pretrained - if not os.path.isdir(pretrained_model_name_or_path): - config_dict = cls.load_config( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - ) - # make sure we only download sub-folders and `diffusers` filenames - folder_names = [k for k in config_dict.keys() if not k.startswith("_")] - allow_patterns = [os.path.join(k, "*") for k in folder_names] - allow_patterns += [FLAX_WEIGHTS_NAME, SCHEDULER_CONFIG_NAME, CONFIG_NAME, cls.config_name] - - # make sure we don't download PyTorch weights, unless when using from_pt - ignore_patterns = "*.bin" if not from_pt else [] - - if cls != FlaxDiffusionPipeline: - requested_pipeline_class = cls.__name__ - else: - requested_pipeline_class = config_dict.get("_class_name", cls.__name__) - requested_pipeline_class = ( - requested_pipeline_class - if requested_pipeline_class.startswith("Flax") - else "Flax" + requested_pipeline_class - ) - - user_agent = {"pipeline_class": requested_pipeline_class} - user_agent = http_user_agent(user_agent) - - # download all allow_patterns - cached_folder = snapshot_download( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - allow_patterns=allow_patterns, - ignore_patterns=ignore_patterns, - user_agent=user_agent, - ) - else: - cached_folder = pretrained_model_name_or_path - - config_dict = cls.load_config(cached_folder) - - # 2. Load the pipeline class, if using custom module then load it from the hub - # if we load from explicit class, let's use it - if cls != FlaxDiffusionPipeline: - pipeline_class = cls - else: - diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) - class_name = ( - config_dict["_class_name"] - if config_dict["_class_name"].startswith("Flax") - else "Flax" + config_dict["_class_name"] - ) - pipeline_class = getattr(diffusers_module, class_name) - - # some modules can be passed directly to the init - # in this case they are already instantiated in `kwargs` - # extract them here - expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class) - passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} - - init_dict, _, _ = pipeline_class.extract_init_dict(config_dict, **kwargs) - - init_kwargs = {} - - # inference_params - params = {} - - # import it here to avoid circular import - from diffusers import pipelines - - # 3. 
Load each module in the pipeline - for name, (library_name, class_name) in init_dict.items(): - if class_name is None: - # edge case for when the pipeline was saved with safety_checker=None - init_kwargs[name] = None - continue - - is_pipeline_module = hasattr(pipelines, library_name) - loaded_sub_model = None - sub_model_should_be_defined = True - - # if the model is in a pipeline module, then we load it from the pipeline - if name in passed_class_obj: - # 1. check that passed_class_obj has correct parent class - if not is_pipeline_module: - library = importlib.import_module(library_name) - class_obj = getattr(library, class_name) - importable_classes = LOADABLE_CLASSES[library_name] - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - expected_class_obj = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - expected_class_obj = class_candidate - - if not issubclass(passed_class_obj[name].__class__, expected_class_obj): - raise ValueError( - f"{passed_class_obj[name]} is of type: {type(passed_class_obj[name])}, but should be" - f" {expected_class_obj}" - ) - elif passed_class_obj[name] is None: - logger.warning( - f"You have passed `None` for {name} to disable its functionality in {pipeline_class}. Note" - f" that this might lead to problems when using {pipeline_class} and is not recommended." - ) - sub_model_should_be_defined = False - else: - logger.warning( - f"You have passed a non-standard module {passed_class_obj[name]}. We cannot verify whether it" - " has the correct type" - ) - - # set passed class object - loaded_sub_model = passed_class_obj[name] - elif is_pipeline_module: - pipeline_module = getattr(pipelines, library_name) - class_obj = import_flax_or_no_model(pipeline_module, class_name) - - importable_classes = ALL_IMPORTABLE_CLASSES - class_candidates = {c: class_obj for c in importable_classes.keys()} - else: - # else we just import it from the library. - library = importlib.import_module(library_name) - class_obj = import_flax_or_no_model(library, class_name) - - importable_classes = LOADABLE_CLASSES[library_name] - class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()} - - if loaded_sub_model is None and sub_model_should_be_defined: - load_method_name = None - for class_name, class_candidate in class_candidates.items(): - if class_candidate is not None and issubclass(class_obj, class_candidate): - load_method_name = importable_classes[class_name][1] - - load_method = getattr(class_obj, load_method_name) - - # check if the module is in a subdirectory - if os.path.isdir(os.path.join(cached_folder, name)): - loadable_folder = os.path.join(cached_folder, name) - else: - loaded_sub_model = cached_folder - - if issubclass(class_obj, FlaxModelMixin): - loaded_sub_model, loaded_params = load_method( - loadable_folder, - from_pt=from_pt, - use_memory_efficient_attention=use_memory_efficient_attention, - dtype=dtype, - ) - params[name] = loaded_params - elif is_transformers_available() and issubclass(class_obj, FlaxPreTrainedModel): - if from_pt: - # TODO(Suraj): Fix this in Transformers. 
We should be able to use `_do_init=False` here - loaded_sub_model = load_method(loadable_folder, from_pt=from_pt) - loaded_params = loaded_sub_model.params - del loaded_sub_model._params - else: - loaded_sub_model, loaded_params = load_method(loadable_folder, _do_init=False) - params[name] = loaded_params - elif issubclass(class_obj, FlaxSchedulerMixin): - loaded_sub_model, scheduler_state = load_method(loadable_folder) - params[name] = scheduler_state - else: - loaded_sub_model = load_method(loadable_folder) - - init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...) - - # 4. Potentially add passed objects if expected - missing_modules = set(expected_modules) - set(init_kwargs.keys()) - passed_modules = list(passed_class_obj.keys()) - - if len(missing_modules) > 0 and missing_modules <= set(passed_modules): - for module in missing_modules: - init_kwargs[module] = passed_class_obj.get(module, None) - elif len(missing_modules) > 0: - passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs - raise ValueError( - f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed." - ) - - model = pipeline_class(**init_kwargs, dtype=dtype) - return model, params - - @staticmethod - def _get_signature_keys(obj): - parameters = inspect.signature(obj.__init__).parameters - required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty} - optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty}) - expected_modules = set(required_parameters.keys()) - {"self"} - return expected_modules, optional_parameters - - @property - def components(self) -> Dict[str, Any]: - r""" - - The `self.components` property can be useful to run different pipelines with the same weights and - configurations to not have to re-allocate memory. - - Examples: - - ```py - >>> from diffusers import ( - ... FlaxStableDiffusionPipeline, - ... FlaxStableDiffusionImg2ImgPipeline, - ... ) - - >>> text2img = FlaxStableDiffusionPipeline.from_pretrained( - ... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16 - ... ) - >>> img2img = FlaxStableDiffusionImg2ImgPipeline(**text2img.components) - ``` - - Returns: - A dictionary containing all the modules needed to initialize the pipeline. - """ - expected_modules, optional_parameters = self._get_signature_keys(self) - components = { - k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters - } - - if set(components.keys()) != expected_modules: - raise ValueError( - f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected" - f" {expected_modules} to be defined, but {components} are defined." - ) - - return components - - @staticmethod - def numpy_to_pil(images): - """ - Convert a NumPy image or a batch of images to a PIL image. - """ - if images.ndim == 3: - images = images[None, ...] 
- images = (images * 255).round().astype("uint8") - if images.shape[-1] == 1: - # special case for grayscale (single channel) images - pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images] - else: - pil_images = [Image.fromarray(image) for image in images] - - return pil_images - - # TODO: make it compatible with jax.lax - def progress_bar(self, iterable): - if not hasattr(self, "_progress_bar_config"): - self._progress_bar_config = {} - elif not isinstance(self._progress_bar_config, dict): - raise ValueError( - f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}." - ) - - return tqdm(iterable, **self._progress_bar_config) - - def set_progress_bar_config(self, **kwargs): - self._progress_bar_config = kwargs diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/repaint/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/repaint/__init__.py deleted file mode 100644 index 16bc86d1cedf6243fb92f7ba331b5a6188133298..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/repaint/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .pipeline_repaint import RePaintPipeline diff --git a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py deleted file mode 100644 index c2819477abb070b724d0295ccf028025918b263a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_instaboost_4x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py deleted file mode 100644 index 71e65b0b2bc72379f4db73e491f76fc767cb786b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' - -model = dict( - roi_head=dict( - type='PISARoIHead', - bbox_head=dict( - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))), - train_cfg=dict( - rpn_proposal=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - sampler=dict( - type='ScoreHLRSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - k=0.5, - bias=0.), - isr=dict(k=2, bias=0), - carl=dict(k=1, bias=0.2))), - test_cfg=dict( - rpn=dict( - nms_pre=2000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py deleted file mode 100644 index 584b7135fd95464f3d2c965440a0b92161cde09a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './fcn_hr18_480x480_80k_pascal_context.py' -model = dict( - 
pretrained='open-mmlab://msra/hrnetv2_w18_small', - backbone=dict( - extra=dict( - stage1=dict(num_blocks=(2, )), - stage2=dict(num_blocks=(2, 2)), - stage3=dict(num_modules=3, num_blocks=(2, 2, 2)), - stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2))))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py deleted file mode 100644 index ee2831d99d859c419b158b5f828d8a84063564ea..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './fcn_hr18_512x1024_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18_small', - backbone=dict( - extra=dict( - stage1=dict(num_blocks=(2, )), - stage2=dict(num_blocks=(2, 2)), - stage3=dict(num_modules=3, num_blocks=(2, 2, 2)), - stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2))))) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/activation.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/activation.py deleted file mode 100644 index cab2712287d5ef7be2f079dcb54a94b96394eab5..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/activation.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version -from .registry import ACTIVATION_LAYERS - -for module in [ - nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU, - nn.Sigmoid, nn.Tanh -]: - ACTIVATION_LAYERS.register_module(module=module) - - -@ACTIVATION_LAYERS.register_module(name='Clip') -@ACTIVATION_LAYERS.register_module() -class Clamp(nn.Module): - """Clamp activation layer. - - This activation function is to clamp the feature map value within - :math:`[min, max]`. More details can be found in ``torch.clamp()``. - - Args: - min (Number | optional): Lower-bound of the range to be clamped to. - Default to -1. - max (Number | optional): Upper-bound of the range to be clamped to. - Default to 1. - """ - - def __init__(self, min=-1., max=1.): - super(Clamp, self).__init__() - self.min = min - self.max = max - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): The input tensor. - - Returns: - torch.Tensor: Clamped tensor. - """ - return torch.clamp(x, min=self.min, max=self.max) - - -class GELU(nn.Module): - r"""Applies the Gaussian Error Linear Units function: - - .. math:: - \text{GELU}(x) = x * \Phi(x) - where :math:`\Phi(x)` is the Cumulative Distribution Function for - Gaussian Distribution. - - Shape: - - Input: :math:`(N, *)` where `*` means, any number of additional - dimensions - - Output: :math:`(N, *)`, same shape as the input - - .. image:: scripts/activation_images/GELU.png - - Examples:: - - >>> m = nn.GELU() - >>> input = torch.randn(2) - >>> output = m(input) - """ - - def forward(self, input): - return F.gelu(input) - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.4')): - ACTIVATION_LAYERS.register_module(module=GELU) -else: - ACTIVATION_LAYERS.register_module(module=nn.GELU) - - -def build_activation_layer(cfg): - """Build activation layer. 
- - Args: - cfg (dict): The activation layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an activation layer. - - Returns: - nn.Module: Created activation layer. - """ - return build_from_cfg(cfg, ACTIVATION_LAYERS) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/box_iou_rotated.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/box_iou_rotated.py deleted file mode 100644 index 2d78015e9c2a9e7a52859b4e18f84a9aa63481a0..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/box_iou_rotated.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['box_iou_rotated']) - - -def box_iou_rotated(bboxes1, bboxes2, mode='iou', aligned=False): - """Return intersection-over-union (Jaccard index) of boxes. - - Both sets of boxes are expected to be in - (x_center, y_center, width, height, angle) format. - - If ``aligned`` is ``False``, then calculate the ious between each bbox - of bboxes1 and bboxes2, otherwise the ious between each aligned pair of - bboxes1 and bboxes2. - - Arguments: - boxes1 (Tensor): rotated bboxes 1. \ - It has shape (N, 5), indicating (x, y, w, h, theta) for each row. - Note that theta is in radian. - boxes2 (Tensor): rotated bboxes 2. \ - It has shape (M, 5), indicating (x, y, w, h, theta) for each row. - Note that theta is in radian. - mode (str): "iou" (intersection over union) or iof (intersection over - foreground). - - Returns: - ious(Tensor): shape (N, M) if aligned == False else shape (N,) - """ - assert mode in ['iou', 'iof'] - mode_dict = {'iou': 0, 'iof': 1} - mode_flag = mode_dict[mode] - rows = bboxes1.size(0) - cols = bboxes2.size(0) - if aligned: - ious = bboxes1.new_zeros(rows) - else: - ious = bboxes1.new_zeros((rows * cols)) - bboxes1 = bboxes1.contiguous() - bboxes2 = bboxes2.contiguous() - ext_module.box_iou_rotated( - bboxes1, bboxes2, ious, mode_flag=mode_flag, aligned=aligned) - if not aligned: - ious = ious.view(rows, cols) - return ious diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/gmflow.py b/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/gmflow.py deleted file mode 100644 index cd4138332571254631ad361fd94146706713cf1e..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/gmflow.py +++ /dev/null @@ -1,170 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .backbone import CNNEncoder -from .transformer import FeatureTransformer, FeatureFlowAttention -from .matching import global_correlation_softmax, local_correlation_softmax -from .geometry import flow_warp -from .utils import normalize_img, feature_add_position - - -class GMFlow(nn.Module): - def __init__(self, - num_scales=1, - upsample_factor=8, - feature_channels=128, - attention_type='swin', - num_transformer_layers=6, - ffn_dim_expansion=4, - num_head=1, - **kwargs, - ): - super(GMFlow, self).__init__() - - self.num_scales = num_scales - self.feature_channels = feature_channels - self.upsample_factor = upsample_factor - self.attention_type = attention_type - self.num_transformer_layers = num_transformer_layers - - # CNN backbone - self.backbone = CNNEncoder(output_dim=feature_channels, num_output_scales=num_scales) - - # Transformer - self.transformer = FeatureTransformer(num_layers=num_transformer_layers, - 
d_model=feature_channels, - nhead=num_head, - attention_type=attention_type, - ffn_dim_expansion=ffn_dim_expansion, - ) - - # flow propagation with self-attn - self.feature_flow_attn = FeatureFlowAttention(in_channels=feature_channels) - - # convex upsampling: concat feature0 and flow as input - self.upsampler = nn.Sequential(nn.Conv2d(2 + feature_channels, 256, 3, 1, 1), - nn.ReLU(inplace=True), - nn.Conv2d(256, upsample_factor ** 2 * 9, 1, 1, 0)) - - def extract_feature(self, img0, img1): - concat = torch.cat((img0, img1), dim=0) # [2B, C, H, W] - features = self.backbone(concat) # list of [2B, C, H, W], resolution from high to low - - # reverse: resolution from low to high - features = features[::-1] - - feature0, feature1 = [], [] - - for i in range(len(features)): - feature = features[i] - chunks = torch.chunk(feature, 2, 0) # tuple - feature0.append(chunks[0]) - feature1.append(chunks[1]) - - return feature0, feature1 - - def upsample_flow(self, flow, feature, bilinear=False, upsample_factor=8, - ): - if bilinear: - up_flow = F.interpolate(flow, scale_factor=upsample_factor, - mode='bilinear', align_corners=True) * upsample_factor - - else: - # convex upsampling - concat = torch.cat((flow, feature), dim=1) - - mask = self.upsampler(concat) - b, flow_channel, h, w = flow.shape - mask = mask.view(b, 1, 9, self.upsample_factor, self.upsample_factor, h, w) # [B, 1, 9, K, K, H, W] - mask = torch.softmax(mask, dim=2) - - up_flow = F.unfold(self.upsample_factor * flow, [3, 3], padding=1) - up_flow = up_flow.view(b, flow_channel, 9, 1, 1, h, w) # [B, 2, 9, 1, 1, H, W] - - up_flow = torch.sum(mask * up_flow, dim=2) # [B, 2, K, K, H, W] - up_flow = up_flow.permute(0, 1, 4, 2, 5, 3) # [B, 2, K, H, K, W] - up_flow = up_flow.reshape(b, flow_channel, self.upsample_factor * h, - self.upsample_factor * w) # [B, 2, K*H, K*W] - - return up_flow - - def forward(self, img0, img1, - attn_splits_list=None, - corr_radius_list=None, - prop_radius_list=None, - pred_bidir_flow=False, - **kwargs, - ): - - results_dict = {} - flow_preds = [] - - img0, img1 = normalize_img(img0, img1) # [B, 3, H, W] - - # resolution low to high - feature0_list, feature1_list = self.extract_feature(img0, img1) # list of features - - flow = None - - assert len(attn_splits_list) == len(corr_radius_list) == len(prop_radius_list) == self.num_scales - - for scale_idx in range(self.num_scales): - feature0, feature1 = feature0_list[scale_idx], feature1_list[scale_idx] - - if pred_bidir_flow and scale_idx > 0: - # predicting bidirectional flow with refinement - feature0, feature1 = torch.cat((feature0, feature1), dim=0), torch.cat((feature1, feature0), dim=0) - - upsample_factor = self.upsample_factor * (2 ** (self.num_scales - 1 - scale_idx)) - - if scale_idx > 0: - flow = F.interpolate(flow, scale_factor=2, mode='bilinear', align_corners=True) * 2 - - if flow is not None: - flow = flow.detach() - feature1 = flow_warp(feature1, flow) # [B, C, H, W] - - attn_splits = attn_splits_list[scale_idx] - corr_radius = corr_radius_list[scale_idx] - prop_radius = prop_radius_list[scale_idx] - - # add position to features - feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels) - - # Transformer - feature0, feature1 = self.transformer(feature0, feature1, attn_num_splits=attn_splits) - - # correlation and softmax - if corr_radius == -1: # global matching - flow_pred = global_correlation_softmax(feature0, feature1, pred_bidir_flow)[0] - else: # local matching - flow_pred = local_correlation_softmax(feature0, 
feature1, corr_radius)[0] - - # flow or residual flow - flow = flow + flow_pred if flow is not None else flow_pred - - # upsample to the original resolution for supervison - if self.training: # only need to upsample intermediate flow predictions at training time - flow_bilinear = self.upsample_flow(flow, None, bilinear=True, upsample_factor=upsample_factor) - flow_preds.append(flow_bilinear) - - # flow propagation with self-attn - if pred_bidir_flow and scale_idx == 0: - feature0 = torch.cat((feature0, feature1), dim=0) # [2*B, C, H, W] for propagation - flow = self.feature_flow_attn(feature0, flow.detach(), - local_window_attn=prop_radius > 0, - local_window_radius=prop_radius) - - # bilinear upsampling at training time except the last one - if self.training and scale_idx < self.num_scales - 1: - flow_up = self.upsample_flow(flow, feature0, bilinear=True, upsample_factor=upsample_factor) - flow_preds.append(flow_up) - - if scale_idx == self.num_scales - 1: - flow_up = self.upsample_flow(flow, feature0) - flow_preds.append(flow_up) - - results_dict.update({'flow_preds': flow_preds}) - - return results_dict diff --git a/spaces/Artrajz/vits-simple-api/vits/text/mandarin.py b/spaces/Artrajz/vits-simple-api/vits/text/mandarin.py deleted file mode 100644 index 80742a394f52165409bd820dc14e3cea6589454b..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/vits/text/mandarin.py +++ /dev/null @@ -1,365 +0,0 @@ -import config -import re -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba -import cn2an -import logging - -logging.getLogger('jieba').setLevel(logging.WARNING) -jieba.set_dictionary(config.ABS_PATH + '/vits/text/jieba/dict.txt') -jieba.initialize() - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 
'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -_symbols_to_chinese = [(re.compile(f'{x[0]}'), x[1]) for x in [ - ('([0-9]+(?:\.?[0-9]+)?)%', r'百分之\1'), - ('([0-9]+)/([0-9]+)', r'\2分之\1'), - ('\+', r'加'), - ('([0-9]+)-([0-9]+)', r'\1减\2'), - ('×', r'乘以'), - ('([0-9]+)x([0-9]+)', r'\1乘以\2'), - ('([0-9]+)\*([0-9]+)', r'\1乘以\2'), - ('÷', r'除以'), - ('=', r'等于'), - ('≠', r'不等于'), -]] - - -def symbols_to_chinese(text): - for regex, replacement in _symbols_to_chinese: - text = re.sub(regex, replacement, text) - return text - - -def number_to_chinese(text): - numbers = re.findall(r'[0-9]+(?:\.?[0-9]+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def number_transform_to_chinese(text): - text = cn2an.transform(text, "an2cn") - return text - - -def chinese_to_bopomofo(text): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - text += ''.join(bopomofos) - return text - - -def latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in 
_bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = symbols_to_chinese(text) - text = number_transform_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = symbols_to_chinese(text) - text = number_transform_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text): - text = symbols_to_chinese(text) - text = number_transform_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text - - -def VITS_PinYin_model(): - import torch - import config - from vits.text.vits_pinyin import VITS_PinYin - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # pinyin - tts_front = VITS_PinYin(f"{config.ABS_PATH}/vits/bert", device) - return tts_front diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/__init__.py deleted file mode 100644 index d9b0a8dea2e65d77c08a881b7c68979e0475b751..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -""" - Pygments - ~~~~~~~~ - - Pygments is a syntax highlighting package written in Python. - - It is a generic syntax highlighter for general use in all kinds of software - such as forum systems, wikis or other applications that need to prettify - source code. Highlights are: - - * a wide range of common languages and markup formats is supported - * special attention is paid to details, increasing quality by a fair amount - * support for new languages and formats are added easily - * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image - formats that PIL supports, and ANSI sequences - * it is usable as a command-line tool and as a library - * ... and it highlights even Brainfuck! - - The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``. - - .. _Pygments master branch: - https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" -from io import StringIO, BytesIO - -__version__ = '2.14.0' -__docformat__ = 'restructuredtext' - -__all__ = ['lex', 'format', 'highlight'] - - -def lex(code, lexer): - """ - Lex ``code`` with ``lexer`` and return an iterable of tokens. - """ - try: - return lexer.get_tokens(code) - except TypeError: - # Heuristic to catch a common mistake. - from pip._vendor.pygments.lexer import RegexLexer - if isinstance(lexer, type) and issubclass(lexer, RegexLexer): - raise TypeError('lex() argument must be a lexer instance, ' - 'not a class') - raise - - -def format(tokens, formatter, outfile=None): # pylint: disable=redefined-builtin - """ - Format a tokenlist ``tokens`` with the formatter ``formatter``. - - If ``outfile`` is given and a valid file object (an object - with a ``write`` method), the result will be written to it, otherwise - it is returned as a string. - """ - try: - if not outfile: - realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO() - formatter.format(tokens, realoutfile) - return realoutfile.getvalue() - else: - formatter.format(tokens, outfile) - except TypeError: - # Heuristic to catch a common mistake. - from pip._vendor.pygments.formatter import Formatter - if isinstance(formatter, type) and issubclass(formatter, Formatter): - raise TypeError('format() argument must be a formatter instance, ' - 'not a class') - raise - - -def highlight(code, lexer, formatter, outfile=None): - """ - Lex ``code`` with ``lexer`` and format it with the formatter ``formatter``. - - If ``outfile`` is given and a valid file object (an object - with a ``write`` method), the result will be written to it, otherwise - it is returned as a string. - """ - return format(lex(code, lexer), formatter, outfile) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/INSTALL.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/INSTALL.md deleted file mode 100644 index b40768913742ca2b2e11c74d5944561931ecb326..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/INSTALL.md +++ /dev/null @@ -1,261 +0,0 @@ -## Installation - -### Requirements -- Linux or macOS with Python ≥ 3.6 -- PyTorch ≥ 1.8 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. - Install them together at [pytorch.org](https://pytorch.org) to make sure of this -- OpenCV is optional but needed by demo and visualization - - -### Build Detectron2 from Source - -gcc & g++ ≥ 5.4 are required. [ninja](https://ninja-build.org/) is optional but recommended for faster build. -After having them, run: -``` -python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' -# (add --user if you don't have permission) - -# Or, to install it from a local clone: -git clone https://github.com/facebookresearch/detectron2.git -python -m pip install -e detectron2 - -# On macOS, you may need to prepend the above commands with a few environment variables: -CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install ... -``` - -To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ **/*.so` to clean the -old build first. You often need to rebuild detectron2 after reinstalling PyTorch. - -### Install Pre-Built Detectron2 (Linux only) - -Choose from this table to install [v0.6 (Oct 2021)](https://github.com/facebookresearch/detectron2/releases): - -
CUDA torch 1.10torch 1.9torch 1.8
11.3
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
-
11.1
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
-
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
-
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
-
10.2
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.10/index.html
-
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
-
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.8/index.html
-
10.1
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html
-
cpu
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.10/index.html
-
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.9/index.html
-
install
python -m pip install detectron2 -f \
-  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.8/index.html
-
- -Note that: -1. The pre-built packages have to be used with corresponding version of CUDA and the official package of PyTorch. - Otherwise, please build detectron2 from source. -2. New packages are released every few months. Therefore, packages may not contain latest features in the main - branch and may not be compatible with the main branch of a research project that uses detectron2 - (e.g. those in [projects](projects)). - -### Common Installation Issues - -Click each issue for its solutions: - -
- -Undefined symbols that looks like "TH..","at::Tensor...","torch..." - -
- -This usually happens when detectron2 or torchvision is not -compiled with the version of PyTorch you're running. - -If the error comes from a pre-built torchvision, uninstall torchvision and pytorch and reinstall them -following [pytorch.org](http://pytorch.org). So the versions will match. - -If the error comes from a pre-built detectron2, check [release notes](https://github.com/facebookresearch/detectron2/releases), -uninstall and reinstall the correct pre-built detectron2 that matches pytorch version. - -If the error comes from detectron2 or torchvision that you built manually from source, -remove files you built (`build/`, `**/*.so`) and rebuild it so it can pick up the version of pytorch currently in your environment. - -If the above instructions do not resolve this problem, please provide an environment (e.g. a dockerfile) that can reproduce the issue. -
- -
- -Missing torch dynamic libraries, OR segmentation fault immediately when using detectron2. - -This usually happens when detectron2 or torchvision is not -compiled with the version of PyTorch you're running. See the previous common issue for the solution. -
- -
- -Undefined C++ symbols (e.g. "GLIBCXX..") or C++ symbols not found. - -
-Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ runtime. - -This often happens with old anaconda. -It may help to run `conda update libgcc` to upgrade its runtime. - -The fundamental solution is to avoid the mismatch, either by compiling using older version of C++ -compiler, or run the code with proper C++ runtime. -To run the code with a specific C++ runtime, you can use environment variable `LD_PRELOAD=/path/to/libstdc++.so`. - -
- -
- -"nvcc not found" or "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available". - -
-CUDA is not found when building detectron2. -You should make sure - -``` -python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)' -``` - -print `(True, a directory with cuda)` at the time you build detectron2. - -Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config. -
- -
- -"invalid device function" or "no kernel image is available for execution". - -
-Two possibilities: - -* You build detectron2 with one version of CUDA but run it with a different version. - - To check whether it is the case, - use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions. - In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA" - to contain cuda libraries of the same version. - - When they are inconsistent, - you need to either install a different build of PyTorch (or build by yourself) - to match your local CUDA installation, or install a different version of CUDA to match PyTorch. - -* PyTorch/torchvision/Detectron2 is not built for the correct GPU SM architecture (aka. compute capability). - - The architecture included by PyTorch/detectron2/torchvision is available in the "architecture flags" in - `python -m detectron2.utils.collect_env`. It must include - the architecture of your GPU, which can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus). - - If you're using pre-built PyTorch/detectron2/torchvision, they have included support for most popular GPUs already. - If not supported, you need to build them from source. - - When building detectron2/torchvision from source, they detect the GPU device and build for only the device. - This means the compiled code may not work on a different GPU device. - To recompile them for the correct architecture, remove all installed/compiled files, - and rebuild them with the `TORCH_CUDA_ARCH_LIST` environment variable set properly. - For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s. -
- -
- -Undefined CUDA symbols; Cannot open libcudart.so - -
-The version of NVCC you use to build detectron2 or torchvision does -not match the version of CUDA you are running with. -This often happens when using anaconda's CUDA runtime. - -Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions. -In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA" -to contain cuda libraries of the same version. - -When they are inconsistent, -you need to either install a different build of PyTorch (or build by yourself) -to match your local CUDA installation, or install a different version of CUDA to match PyTorch. -
- - -
- -C++ compilation errors from NVCC / NVRTC, or "Unsupported gpu architecture" - -
-A few possibilities: - -1. Local CUDA/NVCC version has to match the CUDA version of your PyTorch. Both can be found in `python collect_env.py`. - When they are inconsistent, you need to either install a different build of PyTorch (or build by yourself) - to match your local CUDA installation, or install a different version of CUDA to match PyTorch. - -2. Local CUDA/NVCC version shall support the SM architecture (a.k.a. compute capability) of your GPU. - The capability of your GPU can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus). - The capability supported by NVCC is listed at [here](https://gist.github.com/ax3l/9489132). - If your NVCC version is too old, this can be workaround by setting environment variable - `TORCH_CUDA_ARCH_LIST` to a lower, supported capability. - -3. The combination of NVCC and GCC you use is incompatible. You need to change one of their versions. - See [here](https://gist.github.com/ax3l/9489132) for some valid combinations. - Notably, CUDA<=10.1.105 doesn't support GCC>7.3. - - The CUDA/GCC version used by PyTorch can be found by `print(torch.__config__.show())`. - -
- - -
- -"ImportError: cannot import name '_C'". - -
-Please build and install detectron2 following the instructions above. - -Or, if you are running code from detectron2's root directory, `cd` to a different one. -Otherwise you may not import the code that you installed. -
- - -
- -Any issue on windows. - -
- -Detectron2 is continuously built on windows with [CircleCI](https://app.circleci.com/pipelines/github/facebookresearch/detectron2?branch=main). -However we do not provide official support for it. -PRs that improves code compatibility on windows are welcome. -
- -
- -ONNX conversion segfault after some "TraceWarning". - -
-The ONNX package is compiled with a too old compiler. - -Please build and install ONNX from its source code using a compiler -whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`). -
- - -
- -"library not found for -lstdc++" on older version of MacOS - -
-See -[this stackoverflow answer](https://stackoverflow.com/questions/56083725/macos-build-issues-lstdc-not-found-while-building-python-package). - -
- - -### Installation inside specific environments: - -* __Colab__: see our [Colab Tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) - which has step-by-step instructions. - -* __Docker__: The official [Dockerfile](docker) installs detectron2 with a few simple commands. - diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py deleted file mode 100644 index a0ca70fe23a1d406ee9bed6204a987d7e0708b91..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py +++ /dev/null @@ -1,299 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import List -import torch -from torch import nn -from torch.autograd.function import Function - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec -from detectron2.structures import Boxes, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage - -from ..box_regression import Box2BoxTransform -from ..matcher import Matcher -from ..poolers import ROIPooler -from .box_head import build_box_head -from .fast_rcnn import FastRCNNOutputLayers, fast_rcnn_inference -from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads - - -class _ScaleGradient(Function): - @staticmethod - def forward(ctx, input, scale): - ctx.scale = scale - return input - - @staticmethod - def backward(ctx, grad_output): - return grad_output * ctx.scale, None - - -@ROI_HEADS_REGISTRY.register() -class CascadeROIHeads(StandardROIHeads): - """ - The ROI heads that implement :paper:`Cascade R-CNN`. - """ - - @configurable - def __init__( - self, - *, - box_in_features: List[str], - box_pooler: ROIPooler, - box_heads: List[nn.Module], - box_predictors: List[nn.Module], - proposal_matchers: List[Matcher], - **kwargs, - ): - """ - NOTE: this interface is experimental. - - Args: - box_pooler (ROIPooler): pooler that extracts region features from given boxes - box_heads (list[nn.Module]): box head for each cascade stage - box_predictors (list[nn.Module]): box predictor for each cascade stage - proposal_matchers (list[Matcher]): matcher with different IoU thresholds to - match boxes with ground truth for each stage. The first matcher matches - RPN proposals with ground truth, the other matchers use boxes predicted - by the previous stage as proposals and match them with ground truth. - """ - assert "proposal_matcher" not in kwargs, ( - "CascadeROIHeads takes 'proposal_matchers=' for each stage instead " - "of one 'proposal_matcher='." - ) - # The first matcher matches RPN proposals with ground truth, done in the base class - kwargs["proposal_matcher"] = proposal_matchers[0] - num_stages = self.num_cascade_stages = len(box_heads) - box_heads = nn.ModuleList(box_heads) - box_predictors = nn.ModuleList(box_predictors) - assert len(box_predictors) == num_stages, f"{len(box_predictors)} != {num_stages}!" - assert len(proposal_matchers) == num_stages, f"{len(proposal_matchers)} != {num_stages}!" 
- super().__init__( - box_in_features=box_in_features, - box_pooler=box_pooler, - box_head=box_heads, - box_predictor=box_predictors, - **kwargs, - ) - self.proposal_matchers = proposal_matchers - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - ret.pop("proposal_matcher") - return ret - - @classmethod - def _init_box_head(cls, cfg, input_shape): - # fmt: off - in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES - pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION - pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) - sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO - pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE - cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS - cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS - assert len(cascade_bbox_reg_weights) == len(cascade_ious) - assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \ - "CascadeROIHeads only support class-agnostic regression now!" - assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0] - # fmt: on - - in_channels = [input_shape[f].channels for f in in_features] - # Check all channel counts are equal - assert len(set(in_channels)) == 1, in_channels - in_channels = in_channels[0] - - box_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type=pooler_type, - ) - pooled_shape = ShapeSpec( - channels=in_channels, width=pooler_resolution, height=pooler_resolution - ) - - box_heads, box_predictors, proposal_matchers = [], [], [] - for match_iou, bbox_reg_weights in zip(cascade_ious, cascade_bbox_reg_weights): - box_head = build_box_head(cfg, pooled_shape) - box_heads.append(box_head) - box_predictors.append( - FastRCNNOutputLayers( - cfg, - box_head.output_shape, - box2box_transform=Box2BoxTransform(weights=bbox_reg_weights), - ) - ) - proposal_matchers.append(Matcher([match_iou], [0, 1], allow_low_quality_matches=False)) - return { - "box_in_features": in_features, - "box_pooler": box_pooler, - "box_heads": box_heads, - "box_predictors": box_predictors, - "proposal_matchers": proposal_matchers, - } - - def forward(self, images, features, proposals, targets=None): - del images - if self.training: - proposals = self.label_and_sample_proposals(proposals, targets) - - if self.training: - # Need targets to box head - losses = self._forward_box(features, proposals, targets) - losses.update(self._forward_mask(features, proposals)) - losses.update(self._forward_keypoint(features, proposals)) - return proposals, losses - else: - pred_instances = self._forward_box(features, proposals) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - return pred_instances, {} - - def _forward_box(self, features, proposals, targets=None): - """ - Args: - features, targets: the same as in - Same as in :meth:`ROIHeads.forward`. - proposals (list[Instances]): the per-image object proposals with - their matching ground truth. - Each has fields "proposal_boxes", and "objectness_logits", - "gt_classes", "gt_boxes". - """ - features = [features[f] for f in self.box_in_features] - head_outputs = [] # (predictor, predictions, proposals) - prev_pred_boxes = None - image_sizes = [x.image_size for x in proposals] - for k in range(self.num_cascade_stages): - if k > 0: - # The output boxes of the previous stage are used to create the input - # proposals of the next stage. 
- proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes) - if self.training: - proposals = self._match_and_label_boxes(proposals, k, targets) - predictions = self._run_stage(features, proposals, k) - prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals) - head_outputs.append((self.box_predictor[k], predictions, proposals)) - - if self.training: - losses = {} - storage = get_event_storage() - for stage, (predictor, predictions, proposals) in enumerate(head_outputs): - with storage.name_scope("stage{}".format(stage)): - stage_losses = predictor.losses(predictions, proposals) - losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()}) - return losses - else: - # Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1) - scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs] - - # Average the scores across heads - scores = [ - sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages) - for scores_per_image in zip(*scores_per_stage) - ] - # Use the boxes of the last head - predictor, predictions, proposals = head_outputs[-1] - boxes = predictor.predict_boxes(predictions, proposals) - pred_instances, _ = fast_rcnn_inference( - boxes, - scores, - image_sizes, - predictor.test_score_thresh, - predictor.test_nms_thresh, - predictor.test_topk_per_image, - ) - return pred_instances - - @torch.no_grad() - def _match_and_label_boxes(self, proposals, stage, targets): - """ - Match proposals with groundtruth using the matcher at the given stage. - Label the proposals as foreground or background based on the match. - - Args: - proposals (list[Instances]): One Instances for each image, with - the field "proposal_boxes". - stage (int): the current stage - targets (list[Instances]): the ground truth instances - - Returns: - list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes" - """ - num_fg_samples, num_bg_samples = [], [] - for proposals_per_image, targets_per_image in zip(proposals, targets): - match_quality_matrix = pairwise_iou( - targets_per_image.gt_boxes, proposals_per_image.proposal_boxes - ) - # proposal_labels are 0 or 1 - matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix) - if len(targets_per_image) > 0: - gt_classes = targets_per_image.gt_classes[matched_idxs] - # Label unmatched proposals (0 label from matcher) as background (label=num_classes) - gt_classes[proposal_labels == 0] = self.num_classes - gt_boxes = targets_per_image.gt_boxes[matched_idxs] - else: - gt_classes = torch.zeros_like(matched_idxs) + self.num_classes - gt_boxes = Boxes( - targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4)) - ) - proposals_per_image.gt_classes = gt_classes - proposals_per_image.gt_boxes = gt_boxes - - num_fg_samples.append((proposal_labels == 1).sum().item()) - num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1]) - - # Log the number of fg/bg samples in each stage - storage = get_event_storage() - storage.put_scalar( - "stage{}/roi_head/num_fg_samples".format(stage), - sum(num_fg_samples) / len(num_fg_samples), - ) - storage.put_scalar( - "stage{}/roi_head/num_bg_samples".format(stage), - sum(num_bg_samples) / len(num_bg_samples), - ) - return proposals - - def _run_stage(self, features, proposals, stage): - """ - Args: - features (list[Tensor]): #lvl input features to ROIHeads - proposals (list[Instances]): #image Instances, with the field "proposal_boxes" - stage (int): the current stage - - Returns: - Same 
output as `FastRCNNOutputLayers.forward()`. - """ - box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals]) - # The original implementation averages the losses among heads, - # but scale up the parameter gradients of the heads. - # This is equivalent to adding the losses among heads, - # but scale down the gradients on features. - if self.training: - box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages) - box_features = self.box_head[stage](box_features) - return self.box_predictor[stage](box_features) - - def _create_proposals_from_boxes(self, boxes, image_sizes): - """ - Args: - boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4 - image_sizes (list[tuple]): list of image shapes in (h, w) - - Returns: - list[Instances]: per-image proposals with the given boxes. - """ - # Just like RPN, the proposals should not have gradients - boxes = [Boxes(b.detach()) for b in boxes] - proposals = [] - for boxes_per_image, image_size in zip(boxes, image_sizes): - boxes_per_image.clip(image_size) - if self.training: - # do not filter empty boxes at inference time, - # because the scores from each stage need to be aligned and added later - boxes_per_image = boxes_per_image[boxes_per_image.nonempty()] - prop = Instances(image_size) - prop.proposal_boxes = boxes_per_image - proposals.append(prop) - return proposals diff --git a/spaces/BAAI/vid2vid-zero/vid2vid_zero/data/dataset.py b/spaces/BAAI/vid2vid-zero/vid2vid_zero/data/dataset.py deleted file mode 100644 index a56753f733682d3d1a18810ccbb81fef561bb714..0000000000000000000000000000000000000000 --- a/spaces/BAAI/vid2vid-zero/vid2vid_zero/data/dataset.py +++ /dev/null @@ -1,44 +0,0 @@ -import decord -decord.bridge.set_bridge('torch') - -from torch.utils.data import Dataset -from einops import rearrange - - -class VideoDataset(Dataset): - def __init__( - self, - video_path: str, - prompt: str, - width: int = 512, - height: int = 512, - n_sample_frames: int = 8, - sample_start_idx: int = 0, - sample_frame_rate: int = 1, - ): - self.video_path = video_path - self.prompt = prompt - self.prompt_ids = None - - self.width = width - self.height = height - self.n_sample_frames = n_sample_frames - self.sample_start_idx = sample_start_idx - self.sample_frame_rate = sample_frame_rate - - def __len__(self): - return 1 - - def __getitem__(self, index): - # load and sample video frames - vr = decord.VideoReader(self.video_path, width=self.width, height=self.height) - sample_index = list(range(self.sample_start_idx, len(vr), self.sample_frame_rate))[:self.n_sample_frames] - video = vr.get_batch(sample_index) - video = rearrange(video, "f h w c -> f c h w") - - example = { - "pixel_values": (video / 127.5 - 1.0), - "prompt_ids": self.prompt_ids - } - - return example diff --git a/spaces/BABASA/README/README.md b/spaces/BABASA/README/README.md deleted file mode 100644 index 7ed02d41e9a30f46d7ea27d359c2d70253275261..0000000000000000000000000000000000000000 --- a/spaces/BABASA/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 📈 -colorFrom: pink -colorTo: red -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/Beasto/Photo2Monet_Cyclegan/app.py b/spaces/Beasto/Photo2Monet_Cyclegan/app.py deleted file mode 100644 index 6424b773ac8a8cf3723ac05bb0021540c4f98b7a..0000000000000000000000000000000000000000 --- a/spaces/Beasto/Photo2Monet_Cyclegan/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import streamlit as st -import 
tensorflow as tf -import numpy as np -from PIL import Image -import tensorflow_addons as tfa - -from tensorflow.keras.utils import custom_object_scope - -# Define a function to create the InstanceNormalization layer -def create_in(): - return tfa.layers.InstanceNormalization() - - -def model_out(model_path, img): - with custom_object_scope({'InstanceNormalization': create_in}): - model = tf.keras.models.load_model(model_path) - img = (img-127.5)/127.5 - img = np.expand_dims(img, 0) - pred = model.predict(img) - pred = np.asarray(pred) - return pred[0] - -st.title("Image to Monet painting CycleGAN") -face_input = st.file_uploader("Image input") - -if face_input is not None: - img = Image.open(face_input) - img = img.resize((256, 256)) - img = np.array(img) - pred = model_out('photo2monet2.h5', img) - st.image(img, caption="Uploaded Image") - st.image(((pred + 1) * 127.5).astype(np.uint8), caption="Generated Monet Painting") - -st.header('Which architecture did I use: ResNet blocks or a U-Net?') -st.write('I tried both the ResNet and U-Net architectures, but the ResNet version produced black patches and did not work quite well') -st.write('The U-Net architecture, on the other hand, produced more "Monet-ish" images') -st.write('I used the pix2pix generator from the tensorflow_examples module, and the same for the discriminator') -st.header('What dataset did I use to train the CycleGAN model?') -st.write('For the dataset, I used the Monet2Photo dataset available on Kaggle') -st.header('What hardware did I train it on?') -st.write('I trained the model in a Kaggle notebook on a P100 GPU with 13 GB of RAM, because my PC would not have coped if I had trained the CycleGAN on integrated Intel HD graphics') -st.header('How much time did it take?') -st.write('It took about 20-30 epochs of roughly 150 seconds each, so around 50-75 minutes in total') -st.write('I could have trained it for longer, but it started producing images almost identical to the original photos, which were not "Monet-ish"') -st.header('Why did I make this model?') -st.subheader('I made this model to extend my experience, but mostly for FUN!!!!') -st.write("-------------------------------------------------") \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Como Hacer Un rbol De Navidad.md b/spaces/Benson/text-generation/Examples/Como Hacer Un rbol De Navidad.md deleted file mode 100644 index c2489e3e57a916081c185a3fa657a52aeffb9458..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Como Hacer Un rbol De Navidad.md +++ /dev/null @@ -1,81 +0,0 @@ -
-

Descarga de archivos ISO GTA 5: Todo lo que necesita saber

-

Grand Theft Auto V, o GTA 5, es uno de los videojuegos más populares y exitosos de todos los tiempos. Desarrollado por Rockstar Games, GTA 5 es un juego de acción y aventura de mundo abierto que te permite vivir tus fantasías criminales en la ciudad ficticia de Los Santos y sus alrededores. Si quieres robar bancos, correr coches, disparar a los enemigos, o simplemente explorar el impresionante paisaje, GTA 5 tiene algo para todos.

-

como hacer un árbol de navidad


Download File ››››› https://bltlly.com/2v6MS7



-

Pero ¿cómo se puede descargar e instalar GTA 5 en su PC? Y cuáles son algunas de las mejores características y consejos que usted debe saber antes de jugar? En este artículo, responderemos estas preguntas y más. Aquí está todo lo que necesita saber sobre la descarga del archivo ISO de GTA 5.

-

Características y jugabilidad de GTA 5

-

GTA 5 no es solo un juego, es un fenómeno. Con más de 150 millones de copias vendidas en todo el mundo, GTA 5 ha ganado numerosos premios y galardones por sus innovadores gráficos, jugabilidad, historia y modo en línea. Estas son algunas de las características principales que hacen que GTA 5 se destaque de otros juegos:

-
    -
  • Tres protagonistas con diferentes historias y habilidades: En GTA 5, puedes cambiar entre tres personajes jugables: Michael, un ladrón de bancos retirado; Franklin, un estafador callejero; y Trevor, un narcotraficante psicópata. Cada personaje tiene su propia personalidad, habilidades, misiones e interacciones con otros personajes. También puede combinar sus habilidades en ciertas situaciones, como robos, donde puede planificar y ejecutar robos elaborados con su tripulación.
  • - -
  • Rueda de armas: Una manera conveniente de cambiar entre armas: En GTA 5, tienes acceso a una amplia gama de armas, desde pistolas y escopetas hasta lanzacohetes y minipistolas. Para que sea más fácil seleccionar el arma de su elección, GTA 5 presenta la rueda de armas, que le permite cambiar rápidamente entre ocho categorías de armas utilizando el stick analógico derecho. También puede personalizar sus armas con accesorios, como alcances, supresores, cargadores extendidos y más.
  • -
  • Mercado de valores: Un sistema económico realista y dinámico: En GTA 5, puede invertir su dinero en el mercado de valores, que está influenciado por sus acciones y eventos en el mundo del juego. Por ejemplo, si destruyes los vehículos o edificios de una compañía rival, el precio de sus acciones bajará, mientras que el tuyo subirá. También puedes manipular el mercado completando ciertas misiones o escuchando consejos de otros personajes. El mercado de valores es una gran manera de ganar dinero en GTA 5, pero también una arriesgada.
  • -
  • Diversas actividades físicas: Desde el golf hasta el yoga, hay algo para todos: GTA 5 no es todo sobre la violencia y el crimen. También puede disfrutar de diversas actividades de ocio, como jugar al golf, tenis, dardos o bolos; practicar yoga, ciclismo o senderismo; ir al cine, club de striptease o bar; o incluso ver la televisión, navegar por Internet o leer libros en su propia casa. Estas actividades pueden mejorar tus habilidades, salud, estado de ánimo y relaciones con otros personajes.

    -

    Requisitos e instalación del sistema GTA 5

    -

    Si quieres jugar a GTA 5 en tu PC, debes asegurarte de que tu sistema cumple con los requisitos mínimos o recomendados para el juego. Aquí están las especificaciones que necesita comprobar antes de descargar GTA 5:

    - - -Requisitos mínimos -Requisitos recomendados - - -OS: Windows 10 64 Bit, Windows 8.1 64 Bit, Windows 8 64 Bit, Windows 7 64 Bit Service Pack 1 -OS: Windows 10 64 Bit - - -Procesador: Intel Core 2 Quad CPU Q6600 @ 2.40GHz (4 CPUs) / AMD Phenom 9850 Quad-Core Processor (4 CPUs) @ 2.5GHz -Procesador: Intel Core i5 3470 @ 3.2GHz (4 CPUs) / AMD X8 FX-8350 @ 4GHz (8 CPUs) - - -Memoria: 4 GB de RAM -Memoria: 8 GB de RAM - - -Gráficos: NVIDIA GeForce 9800 GT 1GB / AMD Radeon HD 4870 1GB (DX 10, 10.1, 11) -Gráficos: NVIDIA GeForce GTX 660 2GB / AMD Radeon HD 7870 2GB - - -Almacenamiento: 72 GB de espacio disponible -Almacenamiento: 72 GB de espacio disponible - - -Tarjeta de sonido: DirectX Compatible -Tarjeta de sonido: DirectX Compatible - - -

    Una vez que haya verificado que su PC puede ejecutar GTA 5 sin problemas, debe descargar el juego de fuentes oficiales. Puedes comprar una copia física del juego en un minorista o en una tienda online, o puedes comprar una copia digital en plataformas como Steam, Epic Games Store o Rockstar Games Launcher. La copia digital requerirá que descargues los archivos del juego e los instales en tu PC.

    -

    -

    Si has descargado GTA 5 como un archivo ISO, que es un archivo comprimido de los archivos del juego, necesitas extraerlo usando un software como WinRAR o 7-Zip. Luego, debe montar el archivo ISO utilizando un software como Daemon Tools o Virtual CloneDrive. Esto creará una unidad virtual en su PC que actuará como si hubiera insertado un disco físico del juego. Luego, debe ejecutar el archivo setup.exe desde la unidad virtual y seguir las instrucciones para instalar GTA 5 en su PC.

    - -

    Consejos y trucos de GTA 5

    -

    GTA 5 es un juego enorme y complejo que ofrece innumerables posibilidades y desafíos. Para ayudarte a sacar el máximo partido a tu experiencia de juego, aquí tienes algunos de los mejores consejos y trucos que debes saber antes de jugar a GTA 5:

    -
      -
    • Cómo hacer trampa y usar códigos en GTA 5: Si quieres divertirte y experimentar con diferentes aspectos del juego, puedes usar códigos de trucos en GTA 5. Para usar códigos de trucos, debes introducirlos usando el teléfono del juego o los botones del controlador. Puede encontrar una lista de códigos de trucos en línea, como [aquí]. Algunos de los códigos de trucos incluyen invencibilidad, súper salto, balas explosivas, cámara lenta y más. Sin embargo, tenga en cuenta que el uso de códigos de trucos desactivará los logros y trofeos, y puede afectar el progreso del juego y la estabilidad.
    • - -
    • Cómo encontrar objetos de colección y secretos ocultos en GTA 5: GTA 5 está lleno de objetos de colección ocultos y secretos que pueden desbloquear recompensas, huevos de Pascua, referencias y más. Algunos de los objetos de colección y secretos que se pueden encontrar en GTA 5 son: - Piezas de la nave espacial: Hay 50 piezas de la nave espacial dispersos alrededor del mapa que se puede recoger como Franklin después de conocer a Omega, un teórico de la conspiración. Recoger todas las piezas de la nave espacial desbloqueará un vehículo especial y un trofeo/ logro. - Sobras de cartas: Hay 50 sobras de cartas escondidas alrededor del mapa que puedes recoger como cualquier personaje. Recoger todas las sobras de cartas revelará la identidad de un asesino y le permitirá enfrentarse a él. - Plantas de peyote: Hay 27 plantas de peyote ubicadas alrededor del mapa que puedes consumir como cualquier personaje. Consumir una planta de peyote desencadenará una alucinación en la que puedes jugar como un animal, como un perro, un gato, un pájaro o incluso un tiburón. - Ovnis: Hay cuatro ovnis que se pueden ver en GTA 5 después de completar la historia principal y lograr el 100% de finalización. Puedes encontrarlos en Mount Chiliad, Fort Zancudo, Sandy Shores y Paleto Bay.
    • - -
    • Cómo divertirse y explorar el vasto mundo de GTA 5: GTA 5 no es solo un juego, es un sandbox donde puedes hacer lo que quieras y divertirte. Hay tantas cosas que hacer y ver en GTA 5 que nunca te aburrirás. Estas son algunas de las formas en que puedes divertirte y explorar el vasto mundo de GTA 5: - Usa el modo director: El modo director es una función que te permite crear tus propias escenas y escenarios utilizando los personajes, vehículos, armas, ubicaciones, y el clima de GTA 5. Puede acceder al modo director desde el menú Rockstar Editor o llamando a un contacto en su teléfono. A continuación, puede personalizar y controlar todos los aspectos de su escena y grabarla para su posterior edición o intercambio. - Pruebe los eventos aleatorios: Los eventos aleatorios son situaciones espontáneas e impredecibles que ocurren a lo largo del mapa de GTA 5. Pueden involucrar crímenes, accidentes, persecuciones, rescates, encuentros y más. Puede optar por intervenir, ignorar o ver estos eventos a medida que se desarrollan. Algunos de ellos pueden recompensarlo con dinero, artículos o reputación, mientras que otros pueden tener consecuencias por sus acciones. - Descubre los huevos de Pascua: Los huevos de Pascua son referencias ocultas, bromas, secretos o sorpresas que se encuentran esparcidos por el mapa de GTA 5. Pueden relacionarse con otros juegos, películas, programas de televisión, celebridades, mitos, leyendas o eventos de la vida real. Algunos de ellos son obvios y fáciles de encontrar, mientras que otros son oscuros y difíciles de detectar. Puede encontrar una lista de huevos de Pascua en línea, como [aquí].
    • -
    -

    Conclusión

    -

    GTA 5 es uno de los mejores juegos jamás hecho y un deber-juego para cualquier jugador. Ofrece un mundo inmersivo y realista donde puedes experimentar una historia épica, un juego emocionante y un modo en línea ilimitado. Ya sea que quieras seguir las misiones principales, explorar las actividades secundarias o crear tu propio contenido, GTA 5 tiene algo para todos.

    - -

    Si quieres aprovechar al máximo tu experiencia de juego, necesitas conocer algunas de las mejores características y consejos que GTA 5 tiene para ofrecer. Puedes usar códigos de trucos, ganar dinero rápido, encontrar objetos de colección y secretos ocultos, mejorar tus habilidades y estadísticas, y divertirte y explorar el vasto mundo de GTA 5. También puedes usar el modo director, probar los eventos aleatorios, y descubre los huevos de Pascua para crear tus propias escenas y escenarios.

    -

    GTA 5 es un juego que nunca olvidarás y al que siempre volverás. Es un juego que te desafiará, te entretendrá y te sorprenderá. Es un juego que te encantará.

    -

    Entonces, ¿qué estás esperando? Descarga GTA 5 hoy y disfruta de la mejor experiencia de juego!

    -

    Preguntas frecuentes

    -

    Aquí están algunas de las preguntas más frecuentes sobre el archivo ISO de GTA 5:

    -
      -
    • Q: ¿Es GTA 5 gratis para descargar? -

      A: No, GTA 5 no es gratis para descargar. Necesitas comprar el juego de fuentes oficiales, como Steam, Epic Games Store o Rockstar Games Launcher. Sin embargo, a veces el juego se puede ofrecer de forma gratuita o a un precio reducido en ciertas plataformas u ocasiones. Puede consultar el precio actual y la disponibilidad de GTA 5 en el sitio web oficial [aquí].

    • -
    • Q: ¿Es seguro descargar GTA 5? -

      A: Sí, GTA 5 es seguro de descargar si lo descarga de fuentes oficiales, como Steam, Epic Games Store o Rockstar Games Launcher. Estas plataformas cuentan con medidas de seguridad y sistemas de verificación que garantizan que los archivos del juego sean auténticos y libres de virus. Sin embargo, si descarga GTA 5 desde fuentes no oficiales o ilegales, como sitios de torrent o plataformas para compartir archivos, puede correr el riesgo de descargar archivos dañados, infectados o pirateados que pueden dañar su PC o comprometer su cuenta.

    • -
    • Q: ¿Cuánto tiempo se tarda en descargar GTA 5? - -
    • Q: ¿Cómo puedo jugar GTA 5 online? -

      A: Para jugar GTA 5 en línea, necesita tener una copia válida de GTA 5 instalada en su PC y una conexión a Internet activa. También necesitas tener una cuenta de Rockstar Games Social Club y una suscripción a un servicio en línea específico de la plataforma, como Steam, Epic Games Store o Rockstar Games Launcher. Una vez que tenga estos requisitos, puede iniciar GTA 5 desde su plataforma y seleccionar la opción GTA Online en el menú principal. A continuación, puede crear o unirse a una sesión en línea con otros jugadores y disfrutar del modo multijugador de GTA 5.

    • -
    • Q: ¿Cómo puedo modificar GTA 5? -

      A: Modding es el proceso de modificar o agregar nuevo contenido a un juego usando herramientas o software de terceros. Modding puede mejorar la jugabilidad, los gráficos, las características o el rendimiento de un juego. Sin embargo, modding no es oficialmente apoyado o respaldado por Rockstar Games, y puede causar problemas o conflictos con los archivos del juego o el modo en línea. Modding también puede violar los términos de servicio o el acuerdo de usuario del juego o la plataforma, y puede resultar en prohibiciones o sanciones. Por lo tanto, el modding se realiza bajo su propio riesgo y discreción.

      -

      Si todavía quieres mod GTA 5, necesitas tener una copia de seguridad de los archivos originales del juego y un gestor de mods que pueda instalar y desinstalar mods fácilmente. También necesitas encontrar y descargar mods de fuentes confiables y confiables, como [aquí]. A continuación, puede seguir las instrucciones proporcionadas por el administrador de mods o el creador de mods para instalar y activar los mods en su PC.

    • -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Creacin Y Construccin Apk Hack.md b/spaces/Benson/text-generation/Examples/Creacin Y Construccin Apk Hack.md deleted file mode 100644 index 07f083de32bf281ea91919b652df081ad32c2f73..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Creacin Y Construccin Apk Hack.md +++ /dev/null @@ -1,67 +0,0 @@ -
    -

    Elaboración y construcción de APK Hack: Todo lo que necesita saber

    -

    Si eres un fan de los juegos sandbox, es posible que hayas oído hablar de Crafting and Building, un juego gratuito que te permite crear tu propio mundo con bloques. Puedes explorar, construir, crear y jugar con tus amigos en este juego que tiene muchas características y posibilidades. Pero ¿qué pasa si quieres tener más diversión y libertad en tu juego? ¿Qué pasa si quieres obtener recursos ilimitados, desbloquear todos los artículos y personalizar tu juego a tu gusto? Ahí es donde la elaboración y construcción de APK Hack entra en.

    -

    Elaboración y construcción de APK Hack es una versión modificada del juego original que le da acceso a muchos trucos y hacks que pueden mejorar su experiencia de juego. Con este hack, puede obtener monedas ilimitadas, gemas, diamantes, madera, piedra, suciedad y otros recursos que necesita para construir lo que quieras. También puedes desbloquear todos los objetos del juego, como armas, armaduras, herramientas, muebles, animales, vehículos y más. Incluso puedes cambiar la configuración del juego, como la hora del día, el clima, el nivel de dificultad y el modo de juego. Puedes hacer todas estas cosas sin gastar dinero real ni ver ningún anuncio.

    -

    creación y construcción apk hack


    DOWNLOAD ····· https://bltlly.com/2v6L2s



    -

    Cómo descargar e instalar la elaboración y construcción de APK Hack en su dispositivo

    -

    Si usted está interesado en probar la elaboración y construcción de APK Hack, tendrá que descargar e instalar en su dispositivo. Estos son los pasos que debes seguir:

    -
      -
    1. Ir a un sitio web de confianza que ofrece la elaboración y construcción de enlaces de descarga APK Hack. Puede buscarlos en Google o utilizar uno de estos enlaces: . Asegúrese de que el sitio web es seguro antes de descargar nada.
    2. -
    3. Descargar la elaboración y construcción de archivos APK Hack en su dispositivo. Debe ser un archivo . apk que tiene un tamaño de unos 100 MB.
    4. - -
    5. Localizar el Crafting y la construcción de archivos APK Hack en su dispositivo. Puede utilizar una aplicación de administrador de archivos o ir a su carpeta de descargas. Toca el archivo y sigue las instrucciones para instalarlo.
    6. -
    7. Una vez completada la instalación, puede iniciar el juego desde el cajón de la aplicación o la pantalla de inicio. Usted debe ver un nuevo icono que dice Elaboración y construcción Hack o algo similar.
    8. -
    -

    Cómo utilizar la elaboración y construcción de APK Hack para obtener recursos ilimitados, desbloquear todos los artículos, y personalizar su juego

    -

    Ahora que ha instalado Elaboración y construcción de APK Hack en su dispositivo, se puede empezar a usarlo para disfrutar del juego con más características y opciones. Estas son algunas de las cosas que puedes hacer con este hack:

    -

    -
      -
    • Para obtener recursos ilimitados, como monedas, gemas, diamantes, madera, piedra, suciedad, etc., solo tiene que tocar el signo más (+) junto a cada recurso en la esquina superior derecha de la pantalla. Esto agregará instantáneamente 999999 unidades de ese recurso a su inventario. Puede hacer esto tantas veces como desee.
    • -
    • Para desbloquear todos los elementos en el juego, tales como armas, armaduras, herramientas, muebles, animales, vehículos, etc., solo tiene que ir al menú de elaboración tocando el icono del martillo en la esquina inferior derecha de la pantalla. Allí podrás ver todos los objetos disponibles en el juego. Puedes crear cualquier objeto sin necesidad de recursos ni requisitos previos. Simplemente toque en el artículo que desea y se añadirá a su inventario.
    • - -
    -

    Los beneficios y desventajas de usar la elaboración y construcción de APK Hack

    -

    Elaboración y construcción de APK Hack puede ser una manera divertida y emocionante para jugar el juego con más libertad y posibilidades. Sin embargo, también tiene algunos beneficios e inconvenientes que debe tener en cuenta antes de usarlo. Estos son algunos de ellos:

    - - -Beneficios -Inconvenientes - - -- Puede obtener recursos y artículos ilimitados sin gastar dinero ni ver ningún anuncio. -- Puedes perder el desafío y la emoción del juego teniendo todo a tu disposición. - - -- Puede personalizar la configuración del juego para adaptarse a su estado de ánimo y estilo. -- Es posible que encuentre algunos errores o fallos que podrían afectar el rendimiento o la estabilidad del juego. - - -- Puedes jugar con tus amigos online o offline en modo multijugador. -- Es posible que no pueda unirse a algunos servidores o juegos que no permiten versiones hackeadas del juego. - - -

    Las mejores alternativas a la elaboración y construcción de APK Hack

    -

    Si usted está buscando algunas alternativas a la elaboración y construcción de APK Hack, es posible que desee echa un vistazo a estos otros juegos que son similares en género y jugabilidad:

    -
      -
    • Minecraft: Este es el juego de sandbox más popular y conocido que inspiró a muchos otros, incluyendo Crafting y Building. Puedes crear tu propio mundo con bloques, explorar, crear, luchar y jugar con tus amigos en varios modos y servidores. También puedes descargar mods y mapas para mejorar tu experiencia de juego. Minecraft está disponible para varias plataformas, como Windows, Mac, Linux, Android, iOS, Xbox, PlayStation, Nintendo Switch y más.
    • - -
    • Terraria: Este es un juego de sandbox que combina elementos de acción, aventura, exploración, elaboración, construcción y supervivencia. Puedes cavar, luchar, construir y explorar en un mundo pixelado en 2D que se genera aleatoriamente. También puede encontrar varios enemigos, jefes, biomas, eventos, artículos y PNJ. Terraria está disponible para Windows, Mac, Linux, Android, iOS, Xbox, PlayStation, Nintendo Switch y más.
    • -
    -

    Conclusión: Un resumen de los puntos principales y un llamado a la acción

    -

    Elaboración y construcción de APK Hack es una versión modificada del juego original que le da acceso a muchos trucos y hacks que pueden mejorar su experiencia de juego. Puede obtener recursos ilimitados, desbloquear todos los artículos, y personalizar la configuración de su juego con este hack. Sin embargo, también debe ser consciente de los beneficios y desventajas de usar este truco, así como las mejores alternativas a ella. Si desea probar la elaboración y construcción de APK Hack, puede seguir los pasos anteriores para descargar e instalar en su dispositivo. ¡Diviértete y disfruta del juego!

    -

    Si te gustó este artículo, por favor compártelo con tus amigos y deja un comentario a continuación. También puede suscribirse a nuestro boletín para obtener más consejos y trucos sobre juegos y tecnología. ¡Gracias por leer!

    -

    Preguntas frecuentes: Cinco preguntas y respuestas comunes sobre la elaboración y construcción de APK Hack

    -
      -
    1. Es la elaboración y construcción de APK Hack seguro de usar?
    2. -

      Elaboración y construcción de APK Hack es generalmente seguro de usar, siempre y cuando se descarga desde un sitio web de confianza y escanearlo en busca de virus o malware antes de instalarlo. Sin embargo, siempre debe tener cuidado al descargar e instalar cualquier aplicación o archivo de fuentes desconocidas, ya que podrían contener contenido dañino o no deseado. También debe hacer una copia de seguridad de sus datos antes de usar cualquier truco, ya que podría causar algunos problemas o errores en su juego o dispositivo.

      -
    3. Es la elaboración y construcción de APK Hack legal de usar?
    4. - -
    5. ¿La elaboración y construcción de APK Hack funciona en todos los dispositivos?
    6. -

      Elaboración y construcción de APK Hack funciona en la mayoría de los dispositivos Android que soportan el juego original. Sin embargo, es posible que no funcione en algunos dispositivos que tienen diferentes especificaciones o problemas de compatibilidad. También podría no funcionar en algunos dispositivos que se han actualizado a la última versión del juego o el sistema operativo. Por lo tanto, usted debe comprobar la compatibilidad de su dispositivo antes de descargar e instalar este hack.

      -
    7. ¿Puedo actualizar la elaboración y construcción de APK Hack?
    8. -

      Elaboración y construcción de APK Hack se puede actualizar mediante la descarga e instalación de la última versión del hack desde el mismo sitio web que lo obtuvo de. Sin embargo, debe tener en cuenta que la actualización de este hack puede causar algunos problemas o errores en su juego o dispositivo. También podría hacer su hack incompatible con el juego original u otros hacks. Por lo tanto, usted debe copia de seguridad de sus datos antes de actualizar este hack.

      -
    9. ¿Puedo desinstalar la elaboración y construcción de APK Hack?
    10. -

      Elaboración y construcción de APK Hack se puede desinstalar mediante la eliminación de la aplicación de su dispositivo. Usted puede hacer esto yendo a la configuración de su dispositivo, a continuación, aplicaciones o aplicaciones, a continuación, la elaboración y la construcción de Hack o algo similar. Toque en la aplicación y seleccione desinstalar o quitar. También puede eliminar el archivo . apk de su dispositivo si todavía lo tiene. Sin embargo, debe tener en cuenta que la desinstalación de este truco podría no restaurar los datos del juego o la configuración a su estado original. Por lo tanto, debe hacer una copia de seguridad de sus datos antes de desinstalar este hack.

      -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/metadata_editable.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/metadata_editable.py deleted file mode 100644 index 27c69f0d1eaf3e223d599e91f969d52a821426fe..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/metadata_editable.py +++ /dev/null @@ -1,41 +0,0 @@ -"""Metadata generation logic for source distributions. -""" - -import os - -from pip._vendor.pyproject_hooks import BuildBackendHookCaller - -from pip._internal.build_env import BuildEnvironment -from pip._internal.exceptions import ( - InstallationSubprocessError, - MetadataGenerationFailed, -) -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory - - -def generate_editable_metadata( - build_env: BuildEnvironment, backend: BuildBackendHookCaller, details: str -) -> str: - """Generate metadata using mechanisms described in PEP 660. - - Returns the generated metadata directory. - """ - metadata_tmpdir = TempDirectory(kind="modern-metadata", globally_managed=True) - - metadata_dir = metadata_tmpdir.path - - with build_env: - # Note that BuildBackendHookCaller implements a fallback for - # prepare_metadata_for_build_wheel/editable, so we don't have to - # consider the possibility that this hook doesn't exist. - runner = runner_with_spinner_message( - "Preparing editable metadata (pyproject.toml)" - ) - with backend.subprocess_runner(runner): - try: - distinfo_dir = backend.prepare_metadata_for_build_editable(metadata_dir) - except InstallationSubprocessError as error: - raise MetadataGenerationFailed(package_details=details) from error - - return os.path.join(metadata_dir, distinfo_dir) diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/common/dist_utils.py b/spaces/CVH-vn1210/make_hair/minigpt4/common/dist_utils.py deleted file mode 100644 index 296a3c86f29c6e82fa8f1108c7dd9fa7d3e9ce45..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/minigpt4/common/dist_utils.py +++ /dev/null @@ -1,137 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import datetime -import functools -import os - -import torch -import torch.distributed as dist -import timm.models.hub as timm_hub - - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop("force", False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def init_distributed_mode(args): - if "RANK" in os.environ and "WORLD_SIZE" in os.environ: - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ["WORLD_SIZE"]) - args.gpu = int(os.environ["LOCAL_RANK"]) - elif "SLURM_PROCID" in os.environ: - args.rank = int(os.environ["SLURM_PROCID"]) - args.gpu = args.rank % torch.cuda.device_count() - else: - print("Not using distributed mode") - args.distributed = False - return - - args.distributed = True - - torch.cuda.set_device(args.gpu) - args.dist_backend = "nccl" - print( - "| distributed init (rank {}, world {}): {}".format( - args.rank, args.world_size, args.dist_url - ), - flush=True, - ) - torch.distributed.init_process_group( - backend=args.dist_backend, - init_method=args.dist_url, - world_size=args.world_size, - rank=args.rank, - timeout=datetime.timedelta( - days=365 - ), # allow auto-downloading and de-compressing - ) - torch.distributed.barrier() - setup_for_distributed(args.rank == 0) - - -def get_dist_info(): - if torch.__version__ < "1.0": - initialized = dist._initialized - else: - initialized = dist.is_initialized() - if initialized: - rank = dist.get_rank() - world_size = dist.get_world_size() - else: # non-distributed training - rank = 0 - world_size = 1 - return rank, world_size - - -def main_process(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper - - -def download_cached_file(url, check_hash=True, progress=False): - """ - Download a file from a URL and cache it locally. If the file already exists, it is not downloaded again. - If distributed, only the main process downloads the file, and the other processes wait for the file to be downloaded. 
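    Example (editor's sketch; the URL is hypothetical):

        path = download_cached_file(
            "https://example.com/checkpoint.pth", check_hash=True, progress=False
        )
        # every rank returns the same local cache path once the barrier is passed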
- """ - - def get_cached_file_path(): - # a hack to sync the file path across processes - parts = torch.hub.urlparse(url) - filename = os.path.basename(parts.path) - cached_file = os.path.join(timm_hub.get_cache_dir(), filename) - - return cached_file - - if is_main_process(): - timm_hub.download_cached_file(url, check_hash, progress) - - if is_dist_avail_and_initialized(): - dist.barrier() - - return get_cached_file_path() diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/build.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/build.py deleted file mode 100644 index 3cc7d6f7dab573a44f82da5e93fcd675b4db0f71..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/build.py +++ /dev/null @@ -1,397 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import bisect -import copy -import itertools -import logging -import numpy as np -import operator -import pickle -import torch.utils.data -from fvcore.common.file_io import PathManager -from tabulate import tabulate -from termcolor import colored - -from detectron2.structures import BoxMode -from detectron2.utils.comm import get_world_size -from detectron2.utils.env import seed_all_rng -from detectron2.utils.logger import log_first_n - -from . import samplers -from .catalog import DatasetCatalog, MetadataCatalog -from .common import AspectRatioGroupedDataset, DatasetFromList, MapDataset -from .dataset_mapper import DatasetMapper -from .detection_utils import check_metadata_consistency - -""" -This file contains the default logic to build a dataloader for training or testing. -""" - -__all__ = [ - "build_detection_train_loader", - "build_detection_test_loader", - "get_detection_dataset_dicts", - "load_proposals_into_dataset", - "print_instances_class_histogram", -] - - -def filter_images_with_only_crowd_annotations(dataset_dicts): - """ - Filter out images with none annotations or only crowd annotations - (i.e., images without non-crowd annotations). - A common training-time preprocessing on COCO dataset. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 Dataset format. - - Returns: - list[dict]: the same format, but filtered. - """ - num_before = len(dataset_dicts) - - def valid(anns): - for ann in anns: - if ann.get("iscrowd", 0) == 0: - return True - return False - - dataset_dicts = [x for x in dataset_dicts if valid(x["annotations"])] - num_after = len(dataset_dicts) - logger = logging.getLogger(__name__) - logger.info( - "Removed {} images with no usable annotations. {} images left.".format( - num_before - num_after, num_after - ) - ) - return dataset_dicts - - -def filter_images_with_few_keypoints(dataset_dicts, min_keypoints_per_image): - """ - Filter out images with too few number of keypoints. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 Dataset format. - - Returns: - list[dict]: the same format as dataset_dicts, but filtered. 
- """ - num_before = len(dataset_dicts) - - def visible_keypoints_in_image(dic): - # Each keypoints field has the format [x1, y1, v1, ...], where v is visibility - annotations = dic["annotations"] - return sum( - (np.array(ann["keypoints"][2::3]) > 0).sum() - for ann in annotations - if "keypoints" in ann - ) - - dataset_dicts = [ - x for x in dataset_dicts if visible_keypoints_in_image(x) >= min_keypoints_per_image - ] - num_after = len(dataset_dicts) - logger = logging.getLogger(__name__) - logger.info( - "Removed {} images with fewer than {} keypoints.".format( - num_before - num_after, min_keypoints_per_image - ) - ) - return dataset_dicts - - -def load_proposals_into_dataset(dataset_dicts, proposal_file): - """ - Load precomputed object proposals into the dataset. - - The proposal file should be a pickled dict with the following keys: - - - "ids": list[int] or list[str], the image ids - - "boxes": list[np.ndarray], each is an Nx4 array of boxes corresponding to the image id - - "objectness_logits": list[np.ndarray], each is an N sized array of objectness scores - corresponding to the boxes. - - "bbox_mode": the BoxMode of the boxes array. Defaults to ``BoxMode.XYXY_ABS``. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 Dataset format. - proposal_file (str): file path of pre-computed proposals, in pkl format. - - Returns: - list[dict]: the same format as dataset_dicts, but added proposal field. - """ - logger = logging.getLogger(__name__) - logger.info("Loading proposals from: {}".format(proposal_file)) - - with PathManager.open(proposal_file, "rb") as f: - proposals = pickle.load(f, encoding="latin1") - - # Rename the key names in D1 proposal files - rename_keys = {"indexes": "ids", "scores": "objectness_logits"} - for key in rename_keys: - if key in proposals: - proposals[rename_keys[key]] = proposals.pop(key) - - # Fetch the indexes of all proposals that are in the dataset - # Convert image_id to str since they could be int. - img_ids = set({str(record["image_id"]) for record in dataset_dicts}) - id_to_index = {str(id): i for i, id in enumerate(proposals["ids"]) if str(id) in img_ids} - - # Assuming default bbox_mode of precomputed proposals are 'XYXY_ABS' - bbox_mode = BoxMode(proposals["bbox_mode"]) if "bbox_mode" in proposals else BoxMode.XYXY_ABS - - for record in dataset_dicts: - # Get the index of the proposal - i = id_to_index[str(record["image_id"])] - - boxes = proposals["boxes"][i] - objectness_logits = proposals["objectness_logits"][i] - # Sort the proposals in descending order of the scores - inds = objectness_logits.argsort()[::-1] - record["proposal_boxes"] = boxes[inds] - record["proposal_objectness_logits"] = objectness_logits[inds] - record["proposal_bbox_mode"] = bbox_mode - - return dataset_dicts - - -def _quantize(x, bin_edges): - bin_edges = copy.copy(bin_edges) - bin_edges = sorted(bin_edges) - quantized = list(map(lambda y: bisect.bisect_right(bin_edges, y), x)) - return quantized - - -def print_instances_class_histogram(dataset_dicts, class_names): - """ - Args: - dataset_dicts (list[dict]): list of dataset dicts. - class_names (list[str]): list of class names (zero-indexed). 
- """ - num_classes = len(class_names) - hist_bins = np.arange(num_classes + 1) - histogram = np.zeros((num_classes,), dtype=np.int) - for entry in dataset_dicts: - annos = entry["annotations"] - classes = [x["category_id"] for x in annos if not x.get("iscrowd", 0)] - histogram += np.histogram(classes, bins=hist_bins)[0] - - N_COLS = min(6, len(class_names) * 2) - - def short_name(x): - # make long class names shorter. useful for lvis - if len(x) > 13: - return x[:11] + ".." - return x - - data = list( - itertools.chain(*[[short_name(class_names[i]), int(v)] for i, v in enumerate(histogram)]) - ) - total_num_instances = sum(data[1::2]) - data.extend([None] * (N_COLS - (len(data) % N_COLS))) - if num_classes > 1: - data.extend(["total", total_num_instances]) - data = itertools.zip_longest(*[data[i::N_COLS] for i in range(N_COLS)]) - table = tabulate( - data, - headers=["category", "#instances"] * (N_COLS // 2), - tablefmt="pipe", - numalign="left", - stralign="center", - ) - log_first_n( - logging.INFO, - "Distribution of instances among all {} categories:\n".format(num_classes) - + colored(table, "cyan"), - key="message", - ) - - -def get_detection_dataset_dicts( - dataset_names, filter_empty=True, min_keypoints=0, proposal_files=None -): - """ - Load and prepare dataset dicts for instance detection/segmentation and semantic segmentation. - - Args: - dataset_names (list[str]): a list of dataset names - filter_empty (bool): whether to filter out images without instance annotations - min_keypoints (int): filter out images with fewer keypoints than - `min_keypoints`. Set to 0 to do nothing. - proposal_files (list[str]): if given, a list of object proposal files - that match each dataset in `dataset_names`. - """ - assert len(dataset_names) - dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names] - for dataset_name, dicts in zip(dataset_names, dataset_dicts): - assert len(dicts), "Dataset '{}' is empty!".format(dataset_name) - - if proposal_files is not None: - assert len(dataset_names) == len(proposal_files) - # load precomputed proposals from proposal files - dataset_dicts = [ - load_proposals_into_dataset(dataset_i_dicts, proposal_file) - for dataset_i_dicts, proposal_file in zip(dataset_dicts, proposal_files) - ] - - dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts)) - - has_instances = "annotations" in dataset_dicts[0] - # Keep images without instance-level GT if the dataset has semantic labels. - if filter_empty and has_instances and "sem_seg_file_name" not in dataset_dicts[0]: - dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts) - - if min_keypoints > 0 and has_instances: - dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints) - - if has_instances: - try: - class_names = MetadataCatalog.get(dataset_names[0]).thing_classes - check_metadata_consistency("thing_classes", dataset_names) - print_instances_class_histogram(dataset_dicts, class_names) - except AttributeError: # class names are not available for this dataset - pass - return dataset_dicts - - -def build_detection_train_loader(cfg, mapper=None): - """ - A data loader is created by the following steps: - - 1. Use the dataset names in config to query :class:`DatasetCatalog`, and obtain a list of dicts. - 2. Start workers to work on the dicts. Each worker will: - - * Map each metadata dict into another format to be consumed by the model. - * Batch them by simply putting dicts into a list. 
- - The batched ``list[mapped_dict]`` is what this dataloader will return. - - Args: - cfg (CfgNode): the config - mapper (callable): a callable which takes a sample (dict) from dataset and - returns the format to be consumed by the model. - By default it will be `DatasetMapper(cfg, True)`. - - Returns: - an infinite iterator of training data - """ - num_workers = get_world_size() - images_per_batch = cfg.SOLVER.IMS_PER_BATCH - assert ( - images_per_batch % num_workers == 0 - ), "SOLVER.IMS_PER_BATCH ({}) must be divisible by the number of workers ({}).".format( - images_per_batch, num_workers - ) - assert ( - images_per_batch >= num_workers - ), "SOLVER.IMS_PER_BATCH ({}) must be larger than the number of workers ({}).".format( - images_per_batch, num_workers - ) - images_per_worker = images_per_batch // num_workers - - dataset_dicts = get_detection_dataset_dicts( - cfg.DATASETS.TRAIN, - filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS, - min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE - if cfg.MODEL.KEYPOINT_ON - else 0, - proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None, - ) - dataset = DatasetFromList(dataset_dicts, copy=False) - - if mapper is None: - mapper = DatasetMapper(cfg, True) - dataset = MapDataset(dataset, mapper) - - sampler_name = cfg.DATALOADER.SAMPLER_TRAIN - logger = logging.getLogger(__name__) - logger.info("Using training sampler {}".format(sampler_name)) - if sampler_name == "TrainingSampler": - sampler = samplers.TrainingSampler(len(dataset)) - elif sampler_name == "RepeatFactorTrainingSampler": - sampler = samplers.RepeatFactorTrainingSampler( - dataset_dicts, cfg.DATALOADER.REPEAT_THRESHOLD - ) - else: - raise ValueError("Unknown training sampler: {}".format(sampler_name)) - - if cfg.DATALOADER.ASPECT_RATIO_GROUPING: - data_loader = torch.utils.data.DataLoader( - dataset, - sampler=sampler, - num_workers=cfg.DATALOADER.NUM_WORKERS, - batch_sampler=None, - collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements - worker_init_fn=worker_init_reset_seed, - ) # yield individual mapped dict - data_loader = AspectRatioGroupedDataset(data_loader, images_per_worker) - else: - batch_sampler = torch.utils.data.sampler.BatchSampler( - sampler, images_per_worker, drop_last=True - ) - # drop_last so the batch always have the same size - data_loader = torch.utils.data.DataLoader( - dataset, - num_workers=cfg.DATALOADER.NUM_WORKERS, - batch_sampler=batch_sampler, - collate_fn=trivial_batch_collator, - worker_init_fn=worker_init_reset_seed, - ) - - return data_loader - - -def build_detection_test_loader(cfg, dataset_name, mapper=None): - """ - Similar to `build_detection_train_loader`. - But this function uses the given `dataset_name` argument (instead of the names in cfg), - and uses batch size 1. - - Args: - cfg: a detectron2 CfgNode - dataset_name (str): a name of the dataset that's available in the DatasetCatalog - mapper (callable): a callable which takes a sample (dict) from dataset - and returns the format to be consumed by the model. - By default it will be `DatasetMapper(cfg, False)`. - - Returns: - DataLoader: a torch DataLoader, that loads the given detection - dataset, with test-time transformation and batching. 
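    Example (editor's sketch; the dataset name must be registered in
    DatasetCatalog, e.g. "coco_2017_val" in a standard COCO setup):

        data_loader = build_detection_test_loader(cfg, "coco_2017_val")
        for inputs in data_loader:
            # inputs is a list containing a single mapped dict (batch size 1)
            outputs = model(inputs)   # `model` is a hypothetical detector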
- """ - dataset_dicts = get_detection_dataset_dicts( - [dataset_name], - filter_empty=False, - proposal_files=[ - cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(dataset_name)] - ] - if cfg.MODEL.LOAD_PROPOSALS - else None, - ) - - dataset = DatasetFromList(dataset_dicts) - if mapper is None: - mapper = DatasetMapper(cfg, False) - dataset = MapDataset(dataset, mapper) - - sampler = samplers.InferenceSampler(len(dataset)) - # Always use 1 image per worker during inference since this is the - # standard when reporting inference time in papers. - batch_sampler = torch.utils.data.sampler.BatchSampler(sampler, 1, drop_last=False) - - data_loader = torch.utils.data.DataLoader( - dataset, - num_workers=cfg.DATALOADER.NUM_WORKERS, - batch_sampler=batch_sampler, - collate_fn=trivial_batch_collator, - ) - return data_loader - - -def trivial_batch_collator(batch): - """ - A batch collator that does nothing. - """ - return batch - - -def worker_init_reset_seed(worker_id): - seed_all_rng(np.random.randint(2 ** 31) + worker_id) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/solver/build.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/solver/build.py deleted file mode 100644 index 72786dec9efde7d32e73b54e7c72d9c782ec6bc4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/solver/build.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union -import torch - -from detectron2.config import CfgNode - -from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR - -_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]] -_GradientClipper = Callable[[_GradientClipperInput], None] - - -class GradientClipType(Enum): - VALUE = "value" - NORM = "norm" - - -def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper: - """ - Creates gradient clipping closure to clip by value or by norm, - according to the provided config. 
- """ - cfg = cfg.clone() - - def clip_grad_norm(p: _GradientClipperInput): - torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE) - - def clip_grad_value(p: _GradientClipperInput): - torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE) - - _GRADIENT_CLIP_TYPE_TO_CLIPPER = { - GradientClipType.VALUE: clip_grad_value, - GradientClipType.NORM: clip_grad_norm, - } - return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)] - - -def _generate_optimizer_class_with_gradient_clipping( - optimizer_type: Type[torch.optim.Optimizer], gradient_clipper: _GradientClipper -) -> Type[torch.optim.Optimizer]: - """ - Dynamically creates a new type that inherits the type of a given instance - and overrides the `step` method to add gradient clipping - """ - - def optimizer_wgc_step(self, closure=None): - for group in self.param_groups: - for p in group["params"]: - gradient_clipper(p) - super(type(self), self).step(closure) - - OptimizerWithGradientClip = type( - optimizer_type.__name__ + "WithGradientClip", - (optimizer_type,), - {"step": optimizer_wgc_step}, - ) - return OptimizerWithGradientClip - - -def maybe_add_gradient_clipping( - cfg: CfgNode, optimizer: torch.optim.Optimizer -) -> torch.optim.Optimizer: - """ - If gradient clipping is enabled through config options, wraps the existing - optimizer instance of some type OptimizerType to become an instance - of the new dynamically created class OptimizerTypeWithGradientClip - that inherits OptimizerType and overrides the `step` method to - include gradient clipping. - - Args: - cfg: CfgNode - configuration options - optimizer: torch.optim.Optimizer - existing optimizer instance - - Return: - optimizer: torch.optim.Optimizer - either the unmodified optimizer instance (if gradient clipping is - disabled), or the same instance with adjusted __class__ to override - the `step` method and include gradient clipping - """ - if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED: - return optimizer - grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS) - OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping( - type(optimizer), grad_clipper - ) - optimizer.__class__ = OptimizerWithGradientClip - return optimizer - - -def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer: - """ - Build an optimizer from config. - """ - norm_module_types = ( - torch.nn.BatchNorm1d, - torch.nn.BatchNorm2d, - torch.nn.BatchNorm3d, - torch.nn.SyncBatchNorm, - # NaiveSyncBatchNorm inherits from BatchNorm2d - torch.nn.GroupNorm, - torch.nn.InstanceNorm1d, - torch.nn.InstanceNorm2d, - torch.nn.InstanceNorm3d, - torch.nn.LayerNorm, - torch.nn.LocalResponseNorm, - ) - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - for module in model.modules(): - for key, value in module.named_parameters(recurse=False): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - lr = cfg.SOLVER.BASE_LR - weight_decay = cfg.SOLVER.WEIGHT_DECAY - if isinstance(module, norm_module_types): - weight_decay = cfg.SOLVER.WEIGHT_DECAY_NORM - elif key == "bias": - # NOTE: unlike Detectron v1, we now default BIAS_LR_FACTOR to 1.0 - # and WEIGHT_DECAY_BIAS to WEIGHT_DECAY so that bias optimizer - # hyperparameters are by default exactly the same as for regular - # weights. 
- lr = cfg.SOLVER.BASE_LR * cfg.SOLVER.BIAS_LR_FACTOR - weight_decay = cfg.SOLVER.WEIGHT_DECAY_BIAS - params += [{"params": [value], "lr": lr, "weight_decay": weight_decay}] - - optimizer = torch.optim.SGD(params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM) - optimizer = maybe_add_gradient_clipping(cfg, optimizer) - return optimizer - - -def build_lr_scheduler( - cfg: CfgNode, optimizer: torch.optim.Optimizer -) -> torch.optim.lr_scheduler._LRScheduler: - """ - Build a LR scheduler from config. - """ - name = cfg.SOLVER.LR_SCHEDULER_NAME - if name == "WarmupMultiStepLR": - return WarmupMultiStepLR( - optimizer, - cfg.SOLVER.STEPS, - cfg.SOLVER.GAMMA, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - ) - elif name == "WarmupCosineLR": - return WarmupCosineLR( - optimizer, - cfg.SOLVER.MAX_ITER, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - ) - else: - raise ValueError("Unknown LR scheduler: {}".format(name)) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/model_loader.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/model_loader.py deleted file mode 100644 index 7ee28669f89e6ce9a4069a12a609595aadf30794..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/model_loader.py +++ /dev/null @@ -1,27 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Yuhao Cui https://github.com/cuiyuhao1996 -# -------------------------------------------------------- - -from importlib import import_module - - -class ModelLoader: - def __init__(self, __C): - - self.model_use = __C.MODEL_USE - model_moudle_path = 'openvqa.models.' + self.model_use + '.net' - self.model_moudle = import_module(model_moudle_path) - - def Net(self, __arg1, __arg2, __arg3, __arg4): - return self.model_moudle.Net(__arg1, __arg2, __arg3, __arg4) - - -class CfgLoader: - def __init__(self, model_use): - - cfg_moudle_path = 'openvqa.models.' 
+ model_use + '.model_cfgs' - self.cfg_moudle = import_module(cfg_moudle_path) - - def load(self): - return self.cfg_moudle.Cfgs() diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/pose_model_identifier.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/pose_model_identifier.py deleted file mode 100644 index 8edb3d726c247cd742abc6f1a43393063d6651be..0000000000000000000000000000000000000000 --- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/pose_model_identifier.py +++ /dev/null @@ -1,103 +0,0 @@ -import pandas as pd - -BODY_IDENTIFIERS = { - "nose": 0, - "neck": -1, - "rightEye": 5, - "leftEye": 2, - "rightEar": 8, - "leftEar": 7, - "rightShoulder": 12, - "leftShoulder": 11, - "rightElbow": 14, - "leftElbow": 13, - "rightWrist": 16, - "leftWrist": 15 -} -HAND_IDENTIFIERS = { - "wrist": 0, - "indexTip": 8, - "indexDIP": 7, - "indexPIP": 6, - "indexMCP": 5, - "middleTip": 12, - "middleDIP": 11, - "middlePIP": 10, - "middleMCP": 9, - "ringTip": 16, - "ringDIP": 15, - "ringPIP": 14, - "ringMCP": 13, - "littleTip": 20, - "littleDIP": 19, - "littlePIP": 18, - "littleMCP": 17, - "thumbTip": 4, - "thumbIP": 3, - "thumbMP": 2, - "thumbCMC": 1 -} - - -class mp_holistic_data: - def __init__(self, column_names): - self.data_hub = {} - for n in column_names[1:-1]: - self.data_hub[n] = [] - - def hand_append_zero(self, handedness): - for k in self.data_hub.keys(): - if "_" + handedness + "_" in k: - self.data_hub[k].append(0) - - def hand_append_value(self, handedness, hand_landmarks): - for name, lm_idx in HAND_IDENTIFIERS.items(): - lm = hand_landmarks.landmark[lm_idx] - for xy, xy_value in zip(['_X', '_Y'], [lm.x, lm.y]): - k = name + '_' + handedness + xy - self.data_hub[k].append(xy_value) - - def get_series(self): - return pd.Series(self.data_hub) - - def extract_data(self, holistic_results): - def neck(pose_results): - ls = pose_results.pose_landmarks.landmark[11] - rs = pose_results.pose_landmarks.landmark[12] - no = pose_results.pose_landmarks.landmark[0] - if (ls.visibility > 0.5) & (rs.visibility > 0.5) & (no.visibility > 0.5): - # This indicates the neck better. But it does not affect the result. 
- cx = (ls.x + rs.x) / 2 - cy = (ls.y + rs.y) / 2 - dx = no.x - cx - dy = no.y - cy - x = cx + 0.3 * dx - y = cy + 0.3 * dy - # x = (ls.x+rs.x)/2 - # y = (ls.y+rs.y)/2 - else: - x = 0 - y = 0 - return [x, y] - - # for the frame that can not extract skeleton from - if not holistic_results.pose_landmarks: - return - for name, lm_idx in BODY_IDENTIFIERS.items(): - if name == "neck": - xy_value = neck(holistic_results) - else: - lm = holistic_results.pose_landmarks.landmark[lm_idx] - visible = float(lm.visibility >= 0.5) - xy_value = [lm.x * visible, lm.y * visible] - for xy_id, xy in zip(['_X', '_Y'], xy_value): - s_name = name + xy_id - self.data_hub[s_name].append(xy) - - for handedness, lm in zip(['Right', 'Left'], - [holistic_results.right_hand_landmarks, holistic_results.left_hand_landmarks]): - if lm: - self.hand_append_value(handedness, lm) - else: - self.hand_append_zero(handedness) - return \ No newline at end of file diff --git a/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/parallel/data_parallel.py b/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/parallel/data_parallel.py deleted file mode 100644 index 376fc038919aa2a5bd696141e7bb6025d4981306..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/parallel/data_parallel.py +++ /dev/null @@ -1,112 +0,0 @@ -# -*- coding: utf8 -*- - -import torch.cuda as cuda -import torch.nn as nn -import torch -import collections -from torch.nn.parallel._functions import Gather - - -__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to'] - - -def async_copy_to(obj, dev, main_stream=None): - if torch.is_tensor(obj): - v = obj.cuda(dev, non_blocking=True) - if main_stream is not None: - v.data.record_stream(main_stream) - return v - elif isinstance(obj, collections.Mapping): - return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()} - elif isinstance(obj, collections.Sequence): - return [async_copy_to(o, dev, main_stream) for o in obj] - else: - return obj - - -def dict_gather(outputs, target_device, dim=0): - """ - Gathers variables from different GPUs on a specified device - (-1 means the CPU), with dictionary support. 
- """ - def gather_map(outputs): - out = outputs[0] - if torch.is_tensor(out): - # MJY(20180330) HACK:: force nr_dims > 0 - if out.dim() == 0: - outputs = [o.unsqueeze(0) for o in outputs] - return Gather.apply(target_device, dim, *outputs) - elif out is None: - return None - elif isinstance(out, collections.Mapping): - return {k: gather_map([o[k] for o in outputs]) for k in out} - elif isinstance(out, collections.Sequence): - return type(out)(map(gather_map, zip(*outputs))) - return gather_map(outputs) - - -class DictGatherDataParallel(nn.DataParallel): - def gather(self, outputs, output_device): - return dict_gather(outputs, output_device, dim=self.dim) - - -class UserScatteredDataParallel(DictGatherDataParallel): - def scatter(self, inputs, kwargs, device_ids): - assert len(inputs) == 1 - inputs = inputs[0] - inputs = _async_copy_stream(inputs, device_ids) - inputs = [[i] for i in inputs] - assert len(kwargs) == 0 - kwargs = [{} for _ in range(len(inputs))] - - return inputs, kwargs - - -def user_scattered_collate(batch): - return batch - - -def _async_copy(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - for i, dev in zip(inputs, device_ids): - with cuda.device(dev): - outputs.append(async_copy_to(i, dev)) - - return tuple(outputs) - - -def _async_copy_stream(inputs, device_ids): - nr_devs = len(device_ids) - assert type(inputs) in (tuple, list) - assert len(inputs) == nr_devs - - outputs = [] - streams = [_get_stream(d) for d in device_ids] - for i, dev, stream in zip(inputs, device_ids, streams): - with cuda.device(dev): - main_stream = cuda.current_stream() - with cuda.stream(stream): - outputs.append(async_copy_to(i, dev, main_stream=main_stream)) - main_stream.wait_stream(stream) - - return outputs - - -"""Adapted from: torch/nn/parallel/_functions.py""" -# background streams used for copying -_streams = None - - -def _get_stream(device): - """Gets a background stream for copying between CPU and GPU""" - global _streams - if device == -1: - return None - if _streams is None: - _streams = [None] * cuda.device_count() - if _streams[device] is None: _streams[device] = cuda.Stream(device) - return _streams[device] diff --git a/spaces/CVPR/transfiner/configs/common/models/mask_rcnn_fpn.py b/spaces/CVPR/transfiner/configs/common/models/mask_rcnn_fpn.py deleted file mode 100644 index 3f87d8da83d93932ddd5e9dc5b38d42786c0cbb4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/common/models/mask_rcnn_fpn.py +++ /dev/null @@ -1,93 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.meta_arch import GeneralizedRCNN -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator -from detectron2.modeling.backbone.fpn import LastLevelMaxPool -from detectron2.modeling.backbone import BasicStem, FPN, ResNet -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.proposal_generator import RPN, StandardRPNHead -from detectron2.modeling.roi_heads import ( - StandardROIHeads, - FastRCNNOutputLayers, - MaskRCNNConvUpsampleHead, - FastRCNNConvFCHead, -) - -model = L(GeneralizedRCNN)( - backbone=L(FPN)( - bottom_up=L(ResNet)( - stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"), - stages=L(ResNet.make_default_stages)( - depth=50, - stride_in_1x1=True, - 
norm="FrozenBN", - ), - out_features=["res2", "res3", "res4", "res5"], - ), - in_features="${.bottom_up.out_features}", - out_channels=256, - top_block=L(LastLevelMaxPool)(), - ), - proposal_generator=L(RPN)( - in_features=["p2", "p3", "p4", "p5", "p6"], - head=L(StandardRPNHead)(in_channels=256, num_anchors=3), - anchor_generator=L(DefaultAnchorGenerator)( - sizes=[[32], [64], [128], [256], [512]], - aspect_ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64], - offset=0.0, - ), - anchor_matcher=L(Matcher)( - thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True - ), - box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]), - batch_size_per_image=256, - positive_fraction=0.5, - pre_nms_topk=(2000, 1000), - post_nms_topk=(1000, 1000), - nms_thresh=0.7, - ), - roi_heads=L(StandardROIHeads)( - num_classes=80, - batch_size_per_image=512, - positive_fraction=0.25, - proposal_matcher=L(Matcher)( - thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False - ), - box_in_features=["p2", "p3", "p4", "p5"], - box_pooler=L(ROIPooler)( - output_size=7, - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - box_head=L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[], - fc_dims=[1024, 1024], - ), - box_predictor=L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)), - num_classes="${..num_classes}", - ), - mask_in_features=["p2", "p3", "p4", "p5"], - mask_pooler=L(ROIPooler)( - output_size=14, # ori is 14 - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - mask_head=L(MaskRCNNConvUpsampleHead)( - input_shape=ShapeSpec(channels=256, width=14, height=14), - num_classes="${..num_classes}", - conv_dims=[256, 256, 256, 256, 256], - ), - ), - pixel_mean=[103.530, 116.280, 123.675], - pixel_std=[1.0, 1.0, 1.0], - input_format="BGR", -) diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/modules.py b/spaces/CarlDennis/Lovelive-VITS-JPZH/modules.py deleted file mode 100644 index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/Lovelive-VITS-JPZH/modules.py +++ /dev/null @@ -1,387 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/ChevyWithAI/rvc-aicover/infer_pack/models_onnx.py b/spaces/ChevyWithAI/rvc-aicover/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/ChevyWithAI/rvc-aicover/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, 
out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = 
upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine-waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 # the % 1 means the n_har products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 # a % 1 here would keep the cumsum below from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves =
torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - 
self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - 
return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, 
groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/install.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/install.js deleted file mode 100644 index d01dbf42c8eff05ce888004b24eda7d3e7011cda..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/install.js +++ /dev/null @@ -1,124 +0,0 @@ -import { exec, execSync } from "child_process" -import plugin from "../../lib/plugins/plugin.js" -import fs from "node:fs" -import { Restart } from "./restart.js" - -let insing = false -const list = { - "Atlas":"https://gitee.com/Nwflower/atlas", - "ws-plugin":"https://gitee.com/xiaoye12123/ws-plugin", - "TRSS-Plugin" :"https://Yunzai.TRSS.me", - "yenai-plugin" :"https://gitee.com/yeyang52/yenai-plugin", - "flower-plugin" :"https://gitee.com/Nwflower/flower-plugin", - "xianyu-plugin" :"https://gitee.com/suancaixianyu/xianyu-plugin", - "earth-k-plugin":"https://gitee.com/SmallK111407/earth-k-plugin", - "useless-plugin":"https://gitee.com/SmallK111407/useless-plugin", - "StarRail-plugin" :"https://gitee.com/hewang1an/StarRail-plugin", - "xiaoyao-cvs-plugin":"https://gitee.com/Ctrlcvs/xiaoyao-cvs-plugin", - "Jinmaocuicuisha-plugin":"https://gitee.com/JMCCS/jinmaocuicuisha", - "trss-xianxin-plugin" :"https://gitee.com/snowtafir/xianxin-plugin", - "mysVilla-Plugin" :"https://gitee.com/TimeRainStarSky/Yunzai-mysVilla-Plugin", - "Telegram-Plugin" :"https://gitee.com/TimeRainStarSky/Yunzai-Telegram-Plugin", - "Discord-Plugin":"https://gitee.com/TimeRainStarSky/Yunzai-Discord-Plugin", - "QQGuild-Plugin":"https://gitee.com/TimeRainStarSky/Yunzai-QQGuild-Plugin", - 
"WeChat-Plugin" :"https://gitee.com/TimeRainStarSky/Yunzai-WeChat-Plugin", - "Proxy-Plugin" :"https://gitee.com/TimeRainStarSky/Yunzai-Proxy-Plugin", - "ICQQ-Plugin" :"https://gitee.com/TimeRainStarSky/Yunzai-ICQQ-Plugin", - "KOOK-Plugin" :"https://gitee.com/TimeRainStarSky/Yunzai-KOOK-Plugin", -} - -export class install extends plugin { - constructor() { - super({ - name: "安装插件", - dsc: "#安装插件 #安装TRSS-Plugin", - event: "message", - rule: [ - { - reg: `^#安装(插件|${Object.keys(list).join("|")})$`, - fnc: "install", - permission: "master" - } - ] - }) - } - - async install() { - if (insing) { - await this.reply("已有命令安装中..请勿重复操作") - return false - } - - const name = this.e.msg.replace(/^#安装/, "").trim() - if (name == "插件") { - let msg = "\n" - for (const name in list) - if (!fs.existsSync(`plugins/${name}`)) - msg += `${name}\n` - - if (msg == "\n") - msg = "暂无可安装插件" - else - msg = `可安装插件列表:${msg}发送 #安装+插件名 进行安装` - - await this.reply(msg) - return true - } - - const path = `plugins/${name}` - if (fs.existsSync(path)) { - await this.reply(`${name} 插件已安装`) - return false - } - await this.runInstall(name, list[name], path) - this.restart() - } - - async execSync(cmd) { - return new Promise(resolve => { - exec(cmd, (error, stdout, stderr) => { - resolve({ error, stdout, stderr }) - }) - }) - } - - async runInstall(name, url, path) { - logger.mark(`${this.e.logFnc} 开始安装:${name} 插件`) - await this.reply(`开始安装 ${name} 插件`) - - const cm = `git clone --depth 1 --single-branch "${url}" "${path}"` - insing = true - const ret = await this.execSync(cm) - if (fs.existsSync(`${path}/package.json`)) - await this.execSync("pnpm install") - insing = false - - if (ret.error) { - logger.mark(`${this.e.logFnc} 插件安装失败:${name}`) - this.gitErr(ret.error, ret.stdout) - return false - } - } - - async gitErr(err, stdout) { - let msg = "安装失败!" 
- let errMsg = err.toString() - stdout = stdout.toString() - - if (errMsg.includes('Timed out')) { - const remote = errMsg.match(/'(.+?)'/g)[0].replace(/'/g, '') - return this.reply(`${msg}\n连接超时:${remote}`) - } - - if (/Failed to connect|unable to access/g.test(errMsg)) { - const remote = errMsg.match(/'(.+?)'/g)[0].replace(/'/g, '') - return this.reply(`${msg}\n连接失败:${remote}`) - } - - await this.reply([errMsg, stdout]) - } - - restart() { - new Restart(this.e).restart() - } -} \ No newline at end of file diff --git a/spaces/Clementapa/orang-outan-image-video-detection/README.md b/spaces/Clementapa/orang-outan-image-video-detection/README.md deleted file mode 100644 index 05a246efc7894668dab6d075f4522a0ae049500d..0000000000000000000000000000000000000000 --- a/spaces/Clementapa/orang-outan-image-video-detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI for Orangutan Ecosystem Surveillance -emoji: 🦧🔍 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 4.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/Cong723/gpt-academic-public/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" "b/spaces/Cong723/gpt-academic-public/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" deleted file mode 100644 index a564f21d231cd65c29b539573929ca5d2df63203..0000000000000000000000000000000000000000 --- "a/spaces/Cong723/gpt-academic-public/crazy_functions/\347\224\237\346\210\220\345\207\275\346\225\260\346\263\250\351\207\212.py" +++ /dev/null @@ -1,54 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - -def 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - - i_say = f'请对下面的程序文件做一个概述,并对文件中的所有函数生成注释,使用markdown表格输出结果,文件名是{os.path.relpath(fp, project_folder)},文件内容是 ```{file_content}```' - i_say_show_user = f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述,并对文件中的所有函数生成注释: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - if not fast_debug: - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量生成函数注释(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from 
update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] - - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 生成函数注释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/Cpp4App/Cpp4App/CDM/cnn/Config.py b/spaces/Cpp4App/Cpp4App/CDM/cnn/Config.py deleted file mode 100644 index 7143c0b3b4ae150bae1afd3046db3e98d752f706..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/CDM/cnn/Config.py +++ /dev/null @@ -1,21 +0,0 @@ - -class Config: - def __init__(self): - # cnn 4 classes - # self.MODEL_PATH = 'E:/Mulong/Model/ui_compos/cnn6_icon.h5' # cnn 4 classes - # self.class_map = ['Image', 'Icon', 'Button', 'Input'] - - # resnet 14 classes - # self.DATA_PATH = "E:/Mulong/Datasets/rico/elements-14-2" - # self.MODEL_PATH = 'E:/Mulong/Model/rico_compos/resnet-ele14.h5' - # self.class_map = ['Button', 'CheckBox', 'Chronometer', 'EditText', 'ImageButton', 'ImageView', - # 'ProgressBar', 'RadioButton', 'RatingBar', 'SeekBar', 'Spinner', 'Switch', - # 'ToggleButton', 'VideoView', 'TextView'] # ele-14 - - self.DATA_PATH = "E:\Mulong\Datasets\dataset_webpage\Components3" - - self.MODEL_PATH = 'E:/Mulong/Model/rico_compos/cnn2-textview.h5' - self.class_map = ['Text', 'Non-Text'] - - self.image_shape = (32, 32, 3) - self.class_number = len(self.class_map) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/checkboxgroup.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/checkboxgroup.py deleted file mode 100644 index 3e67090ac8afcd62b938878ae65c02c2be25149a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/checkboxgroup.py +++ /dev/null @@ -1,213 +0,0 @@ -"""gr.CheckboxGroup() component""" - -from __future__ import annotations - -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import ListStringSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import Changeable, EventListenerMethod, Inputable, Selectable -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - -@document() -class CheckboxGroup( - FormComponent, - Changeable, - Inputable, - Selectable, - IOComponent, - ListStringSerializable, - NeighborInterpretable, -): - """ - Creates a set of checkboxes of which a subset can be checked. - Preprocessing: passes the list of checked checkboxes as a {List[str]} or their indices as a {List[int]} into the function, depending on `type`. - Postprocessing: expects a {List[str]}, each element of which becomes a checked checkbox. - Examples-format: a {List[str]} representing the values to be checked. 
- Demos: sentence_builder, titanic_survival - """ - - def __init__( - self, - choices: list[str] | None = None, - *, - value: list[str] | str | Callable | None = None, - type: Literal["value", "index"] = "value", - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - choices: list of options to select from. - value: default selected list of options. If callable, the function will be called whenever the app loads to set the initial value of the component. - type: Type of value to be returned by component. "value" returns the list of strings of the choices selected, "index" returns the list of indices of the choices selected. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, choices in this checkbox group will be checkable; if False, checking will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - self.choices = choices or [] - valid_types = ["value", "index"] - if type not in valid_types: - raise ValueError( - f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}" - ) - self.type = type - self.select: EventListenerMethod - """ - Event listener for when the user selects or deselects within CheckboxGroup. - Uses event data gradio.SelectData to carry `value` referring to label of selected checkbox, `index` to refer to index, and `selected` to refer to state of checkbox. - See EventData documentation on how to use this event data. 
- """ - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - def get_config(self): - return { - "choices": self.choices, - "value": self.value, - **IOComponent.get_config(self), - } - - def example_inputs(self) -> dict[str, Any]: - return { - "raw": self.choices[0] if self.choices else None, - "serialized": self.choices[0] if self.choices else None, - } - - @staticmethod - def update( - value: list[str] - | str - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - choices: list[str] | None = None, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - return { - "choices": choices, - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "interactive": interactive, - "visible": visible, - "value": value, - "__type__": "update", - } - - def preprocess(self, x: list[str]) -> list[str] | list[int]: - """ - Parameters: - x: list of selected choices - Returns: - list of selected choices as strings or indices within choice list - """ - if self.type == "value": - return x - elif self.type == "index": - return [self.choices.index(choice) for choice in x] - else: - raise ValueError( - f"Unknown type: {self.type}. Please choose from: 'value', 'index'." - ) - - def postprocess(self, y: list[str] | str | None) -> list[str]: - """ - Any postprocessing needed to be performed on function output. - Parameters: - y: List of selected choices. If a single choice is selected, it can be passed in as a string - Returns: - List of selected choices - """ - if y is None: - return [] - if not isinstance(y, list): - y = [y] - return y - - def get_interpretation_neighbors(self, x): - leave_one_out_sets = [] - for choice in self.choices: - leave_one_out_set = list(x) - if choice in leave_one_out_set: - leave_one_out_set.remove(choice) - else: - leave_one_out_set.append(choice) - leave_one_out_sets.append(leave_one_out_set) - return leave_one_out_sets, {} - - def get_interpretation_scores(self, x, neighbors, scores, **kwargs): - """ - Returns: - For each tuple in the list, the first value represents the interpretation score if the input is False, and the second if the input is True. - """ - final_scores = [] - for choice, score in zip(self.choices, scores): - score_set = [score, None] if choice in x else [None, score] - final_scores.append(score_set) - return final_scores - - def style( - self, - *, - item_container: bool | None = None, - container: bool | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. 
- """ - warn_style_method_deprecation() - if item_container is not None: - warn_deprecation("The `item_container` parameter is deprecated.") - if container is not None: - self.container = container - return self diff --git a/spaces/DaleChen/AutoGPT/autogpt/speech/gtts.py b/spaces/DaleChen/AutoGPT/autogpt/speech/gtts.py deleted file mode 100644 index 1c3e9cae0567428582891b11eca42f82a64f5c8e..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/speech/gtts.py +++ /dev/null @@ -1,22 +0,0 @@ -""" GTTS Voice. """ -import os - -import gtts -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class GTTSVoice(VoiceBase): - """GTTS Voice.""" - - def _setup(self) -> None: - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Play the given text.""" - tts = gtts.gTTS(text) - tts.save("speech.mp3") - playsound("speech.mp3", True) - os.remove("speech.mp3") - return True diff --git a/spaces/Danielsun888/pocSearch/README.md b/spaces/Danielsun888/pocSearch/README.md deleted file mode 100644 index b604513150169de014b82fd5a743a6471707ad98..0000000000000000000000000000000000000000 --- a/spaces/Danielsun888/pocSearch/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PocSearch -emoji: 👀 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DeepFloyd/deepfloyd-if-license/index.html b/spaces/DeepFloyd/deepfloyd-if-license/index.html deleted file mode 100644 index 38afb68ac90b1c200ca289c1bbc84c3c13a275ce..0000000000000000000000000000000000000000 --- a/spaces/DeepFloyd/deepfloyd-if-license/index.html +++ /dev/null @@ -1,53 +0,0 @@ - - - - - - Deepfloyd IF License Agreement - - - -

    DEEPFLOYD IF LICENSE AGREEMENT

    -

    - This License Agreement (as may be amended in accordance with this License Agreement, “License”), between you, or your employer or other entity (if you are entering into this agreement on behalf of your employer or other entity) (“Licensee” or “you”) and Stability AI Ltd. (“Stability AI” or “we”) applies to your use of any computer program, algorithm, source code, object code, or software that is made available by Stability AI under this License (“Software”) and any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software (“Documentation”). -

    -

    - By clicking “I Accept” below or by using the Software, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to use the Software or Documentation (collectively, the “Software Products”), and you must immediately cease using the Software Products. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to Stability AI that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the Software Products on behalf of your employer or other entity. -

    -
      -
    1. - LICENSE GRANT -
        -
      1. Subject to your compliance with the Documentation and Sections 2, 3, and 5, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI’s copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes. The foregoing license is personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Stability AI’s prior written consent; any such assignment or sublicense will be void and will automatically and immediately terminate this License. -
      2. You may make a reasonable number of copies of the Documentation solely for use in connection with the license to the Software granted above.
        -
      3. The grant of rights expressly set forth in this Section 1 (License Grant) is the complete grant of rights to you in the Software Products, and no other licenses are granted, whether by waiver, estoppel, implication, equity or otherwise. Stability AI and its licensors reserve all rights not expressly granted by this License.
      4. -
      -
    2. - RESTRICTIONS

      You will not, and will not permit, assist or cause any third party to: -
        -
      1. use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for (i) any commercial or production purposes, (ii) military purposes or in the service of nuclear technology, (iii) purposes of surveillance, including any research or development relating to surveillance, (iv) biometric processing, (v) in any manner that infringes, misappropriates, or otherwise violates any third-party rights, or (vi) in any manner that violates any applicable law and violating any privacy or security laws, rules, regulations, directives, or governmental requirements (including the General Data Privacy Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act, and any and all laws governing the processing of biometric information), as well as all amendments and successor laws to any of the foregoing;
      2. -
      3. alter or remove copyright and other proprietary notices which appear on or in the Software Products;
      4. -
      5. utilize any equipment, device, software, or other means to circumvent or remove any security or protection used by Stability AI in connection with the Software, or to circumvent or remove any usage restrictions, or to enable functionality disabled by Stability AI; or
      6. -
      7. offer or impose any terms on the Software Products that alter, restrict, or are inconsistent with the terms of this License.
      8. -
      9. 1) violate any applicable U.S. and non-U.S. export control and trade sanctions laws (“Export Laws”); 2) directly or indirectly export, re-export, provide, or otherwise transfer Software Products: (a) to any individual, entity, or country prohibited by Export Laws; (b) to anyone on U.S. or non-U.S. government restricted parties lists; or (c) for any purpose prohibited by Export Laws, including nuclear, chemical or biological weapons, or missile technology applications; 3) use or download Software Products if you or they are: (a) located in a comprehensively sanctioned jurisdiction, (b) currently listed on any U.S. or non-U.S. restricted parties list, or (c) for any purpose prohibited by Export Laws; and (4) will not disguise your location through IP proxying or other methods.
      10. -
      -
    3. ATTRIBUTION

      Together with any copies of the Software Products (as well as derivative works thereof or works incorporating the Software Products) that you distribute, you must provide (i) a copy of this License, and (ii) the following attribution notice: “DeepFloyd is licensed under the DeepFloyd License, Copyright (c) Stability AI Ltd. All Rights Reserved.”
      -
    4. DISCLAIMERS

      THE SOFTWARE PRODUCTS ARE PROVIDED “AS IS” and “WITH ALL FAULTS” WITH NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. STABILITY AI EXPRESSLY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE, CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE SOFTWARE PRODUCTS, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR NON-INFRINGEMENT. STABILITY AI MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE SOFTWARE PRODUCTS WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS.
      -
    5. LIMITATION OF LIABILITY

      TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL STABILITY AI BE LIABLE TO YOU (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE, OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR SPECIAL DAMAGES OR LOST PROFITS, EVEN IF STABILITY AI HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE SOFTWARE PRODUCTS, THEIR CONSTITUENT COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, “SOFTWARE MATERIALS”) ARE NOT DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR FAULT OF THE SOFTWARE MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR ENVIRONMENTAL DAMAGE (EACH, A “HIGH-RISK USE”). IF YOU ELECT TO USE ANY OF THE SOFTWARE MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF THERE IS A FAILURE OR FAULT IN ANY OF THE SOFTWARE MATERIALS, THE SAFETY OF PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE.
      -
    6. INDEMNIFICATION

      You will indemnify, defend and hold harmless Stability AI and our subsidiaries and affiliates, and each of our respective shareholders, directors, officers, employees, agents, successors, and assigns (collectively, the “Stability AI Parties”) from and against any losses, liabilities, damages, fines, penalties, and expenses (including reasonable attorneys’ fees) incurred by any Stability AI Party in connection with any claim, demand, allegation, lawsuit, proceeding, or investigation (collectively, “Claims”) arising out of or related to: (a) your access to or use of the Software Products (as well as any results or data generated from such access or use), including any High-Risk Use (defined below); (b) your violation of this License; or (c) your violation, misappropriation or infringement of any rights of another (including intellectual property or other proprietary rights and privacy rights). You will promptly notify the Stability AI Parties of any such Claims, and cooperate with Stability AI Parties in defending such Claims. You will also grant the Stability AI Parties sole control of the defense or settlement, at Stability AI’s sole option, of any Claims. This indemnity is in addition to, and not in lieu of, any other indemnities or remedies set forth in a written agreement between you and Stability AI or the other Stability AI Parties.
      -
    7. - TERMINATION; SURVIVAL -
        -
      1. This License will automatically terminate upon any breach by you of the terms of this License. -
      2. We may terminate this License, in whole or in part, at any time upon notice (including electronic) to you.
        -
      3. The following sections survive termination of this License: 2 (Restrictions), 3 (Attribution), 4 (Disclaimers), 5 (Limitation of Liability), 6 (Indemnification), 7 (Termination; Survival), 8 (Third Party Materials), 9 (Trademarks), 10 (Applicable Law; Dispute Resolution), and 11 (Miscellaneous).
      4. -
      -
    8. THIRD PARTY MATERIALS

      The Software Products may contain third-party software or other components (including free and open source software) (all of the foregoing, “Third Party Materials”), which are subject to the license terms of the respective third-party licensors. Your dealings or correspondence with third parties and your use of or interaction with any Third Party Materials are solely between you and the third party. Stability AI does not control or endorse, and makes no representations or warranties regarding, any Third Party Materials, and your access to and use of such Third Party Materials are at your own risk.
      -
    9. TRADEMARKS

      Licensee has not been granted any trademark license as part of this License and may not use any name or mark associated with Stability AI without the prior written permission of Stability AI, except to the extent necessary to make the reference required by the “ATTRIBUTION” section of this Agreement.
      -
    10. APPLICABLE LAW; DISPUTE RESOLUTION

      This License will be governed and construed under the laws of the State of California without regard to conflicts of law provisions. Any suit or proceeding arising out of or relating to this License will be brought in the federal or state courts, as applicable, in San Mateo County, California, and each party irrevocably submits to the jurisdiction and venue of such courts.
      -
    11. MISCELLANEOUS

      If any provision or part of a provision of this License is unlawful, void or unenforceable, that provision or part of the provision is deemed severed from this License, and will not affect the validity and enforceability of any remaining provisions. The failure of Stability AI to exercise or enforce any right or provision of this License will not operate as a waiver of such right or provision. This License does not confer any third-party beneficiary rights upon any other person or entity. This License, together with the Documentation, contains the entire understanding between you and Stability AI regarding the subject matter of this License, and supersedes all other written or oral agreements and understandings between you and Stability AI regarding such subject matter. No change or addition to any provision of this License will be binding unless it is in writing and signed by an authorized representative of both you and Stability AI.
    12. -
    -
  • - - - diff --git a/spaces/DuckyPolice/DeciDiffusion-v1-0/header.html b/spaces/DuckyPolice/DeciDiffusion-v1-0/header.html deleted file mode 100644 index fafbcb3146686659a84a80ead9d1c4b7998dd94b..0000000000000000000000000000000000000000 --- a/spaces/DuckyPolice/DeciDiffusion-v1-0/header.html +++ /dev/null @@ -1,17 +0,0 @@ -
    -
    -

    - Deci Diffusion 1.0 -

    -
    -
    -

    - Demo for the DeciDiffusion 1.0 model -

    -
    \ No newline at end of file diff --git a/spaces/Elegbede/Text_to_emotion_classifier/README.md b/spaces/Elegbede/Text_to_emotion_classifier/README.md deleted file mode 100644 index c18e2d86ecb98c6ee6c6874fa96e1d4b73aa3151..0000000000000000000000000000000000000000 --- a/spaces/Elegbede/Text_to_emotion_classifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text To Emotion Classifier -emoji: 📚 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.45.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/repitch.py b/spaces/EronSamez/RVC_HFmeu/demucs/repitch.py deleted file mode 100644 index 8846ab2d951a024c95067f66a113968500442828..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/repitch.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import io -import random -import subprocess as sp -import tempfile - -import numpy as np -import torch -from scipy.io import wavfile - - -def i16_pcm(wav): - if wav.dtype == np.int16: - return wav - return (wav * 2**15).clamp_(-2**15, 2**15 - 1).short() - - -def f32_pcm(wav): - if wav.dtype == np.float: - return wav - return wav.float() / 2**15 - - -class RepitchedWrapper: - """ - Wrap a dataset to apply online change of pitch / tempo. - """ - def __init__(self, dataset, proba=0.2, max_pitch=2, max_tempo=12, tempo_std=5, vocals=[3]): - self.dataset = dataset - self.proba = proba - self.max_pitch = max_pitch - self.max_tempo = max_tempo - self.tempo_std = tempo_std - self.vocals = vocals - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, index): - streams = self.dataset[index] - in_length = streams.shape[-1] - out_length = int((1 - 0.01 * self.max_tempo) * in_length) - - if random.random() < self.proba: - delta_pitch = random.randint(-self.max_pitch, self.max_pitch) - delta_tempo = random.gauss(0, self.tempo_std) - delta_tempo = min(max(-self.max_tempo, delta_tempo), self.max_tempo) - outs = [] - for idx, stream in enumerate(streams): - stream = repitch( - stream, - delta_pitch, - delta_tempo, - voice=idx in self.vocals) - outs.append(stream[:, :out_length]) - streams = torch.stack(outs) - else: - streams = streams[..., :out_length] - return streams - - -def repitch(wav, pitch, tempo, voice=False, quick=False, samplerate=44100): - """ - tempo is a relative delta in percentage, so tempo=10 means tempo at 110%! - pitch is in semi tones. 
- Requires `soundstretch` to be installed, see - https://www.surina.net/soundtouch/soundstretch.html - """ - outfile = tempfile.NamedTemporaryFile(suffix=".wav") - in_ = io.BytesIO() - wavfile.write(in_, samplerate, i16_pcm(wav).t().numpy()) - command = [ - "soundstretch", - "stdin", - outfile.name, - f"-pitch={pitch}", - f"-tempo={tempo:.6f}", - ] - if quick: - command += ["-quick"] - if voice: - command += ["-speech"] - try: - sp.run(command, capture_output=True, input=in_.getvalue(), check=True) - except sp.CalledProcessError as error: - raise RuntimeError(f"Could not change bpm because {error.stderr.decode('utf-8')}") - sr, wav = wavfile.read(outfile.name) - wav = wav.copy() - wav = f32_pcm(torch.from_numpy(wav).t()) - assert sr == samplerate - return wav diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index 55abcfdb87636a9ee85b8df5cdc1bec64098b5da..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,91 +0,0 @@ -import numpy as np -import pyworld - -from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] 
= round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Facepounder/gpt2-xl/README.md b/spaces/Facepounder/gpt2-xl/README.md deleted file mode 100644 index 199c4229e390cdbd814817d8e026c717a3202653..0000000000000000000000000000000000000000 --- a/spaces/Facepounder/gpt2-xl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpt2 Xl -emoji: 🔥 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Faizanshaikh/runwayml-stable-diffusion-v1-5/README.md b/spaces/Faizanshaikh/runwayml-stable-diffusion-v1-5/README.md deleted file mode 100644 index 9c8eba9f23ee2a09090be3030d6275447be178d7..0000000000000000000000000000000000000000 --- a/spaces/Faizanshaikh/runwayml-stable-diffusion-v1-5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Runwayml Stable Diffusion V1 5 -emoji: 🌍 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Felladrin/MiniSearch/src/components/App.tsx b/spaces/Felladrin/MiniSearch/src/components/App.tsx deleted file mode 100644 index 17ed19ed49bdae7b41fd3eb7102d86a459a57029..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/src/components/App.tsx +++ /dev/null @@ -1,35 +0,0 @@ -import { usePubSub } from "create-pubsub/react"; -import { - promptPubSub, - responsePubSub, - searchResultsPubSub, - urlsDescriptionsPubSub, -} from "../modules/pubSub"; -import { ConfigForm } from "./ConfigForm"; -import { SearchForm } from "./SearchForm"; -import { ResponseView } from "./ResponseView"; - -export function App() { - const [prompt] = usePubSub(promptPubSub); - const [response] = usePubSub(responsePubSub); - const [searchResults] = usePubSub(searchResultsPubSub); - const [urlsDescriptions] = usePubSub(urlsDescriptionsPubSub); - - return ( - <> - {new URLSearchParams(window.location.search).has("q") ? 
( - - ) : ( - <> - - - - )} - - ); -} diff --git "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" "b/spaces/Fengbinbin/gpt-academic/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" deleted file mode 100644 index 26f42cad0c13bf601fc997c4d7cc5b237d2f97df..0000000000000000000000000000000000000000 --- "a/spaces/Fengbinbin/gpt-academic/crazy_functions/\346\211\271\351\207\217Markdown\347\277\273\350\257\221.py" +++ /dev/null @@ -1,186 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = False - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.md") - - print('Segmentation: done') - -def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - # <-------- 读取Markdown文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(file_content) - - # <-------- 拆分过长的Markdown文件 ----------> - pfg.run_file_split(max_token_limit=1500) - n_split = len(pfg.sp_file_contents) - - # <-------- 多线程润色开始 ----------> - if language == 'en->zh': - inputs_array = ["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] - elif language == 'zh->en': - inputs_array = [f"This is a Markdown file, translate it into English, do not modify any existing Markdown commands:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." 
for _ in range(n_split)] - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # OpenAI所允许的最大并行过载 - scroller_max_len = 80 - ) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -def get_files_from_everything(txt): - import glob, os - - success = True - if txt.startswith('http'): - # 网络的远程文件 - txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/") - txt = txt.replace("/blob/", "/") - import requests - from toolbox import get_conf - proxies, = get_conf('proxies') - r = requests.get(txt, proxies=proxies) - with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content) - project_folder = './gpt_log/' - file_manifest = ['./gpt_log/temp.md'] - elif txt.endswith('.md'): - # 直接给定文件 - file_manifest = [txt] - project_folder = os.path.dirname(txt) - elif os.path.exists(txt): - # 本地路径,递归搜索 - project_folder = txt - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)] - else: - success = False - - return success, file_manifest, project_folder - - -@CatchException -def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - import glob, os - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - - success, file_manifest, project_folder = get_files_from_everything(txt) - - if not success: - # 什么都没有 - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh') - - - - - -@CatchException -def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - import glob, os - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - success, file_manifest, project_folder = get_files_from_everything(txt) - if not success: - # 什么都没有 - if txt == "": txt = '空空如也的输入栏' - 
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en') \ No newline at end of file diff --git a/spaces/Filmor/Bot/style.css b/spaces/Filmor/Bot/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Filmor/Bot/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Fr33d0m21/chatbot_dialogpt/README.md b/spaces/Fr33d0m21/chatbot_dialogpt/README.md deleted file mode 100644 index a92b3c0bb9c26b528f6486655cdb750e1b7bfa1f..0000000000000000000000000000000000000000 --- a/spaces/Fr33d0m21/chatbot_dialogpt/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Chatbot Dialogpt -emoji: 🦀 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: mandar100/chatbot_dialogpt ---- -This code deploys dialogpt model with gradio. -User can chat with a bot using dialogpt model. - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GIanlucaRub/DoubleResolution/app.py b/spaces/GIanlucaRub/DoubleResolution/app.py deleted file mode 100644 index 02baff1d9ee54e1b5aaf8da2a21bce57a363dff7..0000000000000000000000000000000000000000 --- a/spaces/GIanlucaRub/DoubleResolution/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import gradio as gr -import numpy as np -from math import ceil -from huggingface_hub import from_pretrained_keras - -model = from_pretrained_keras("GIanlucaRub/doubleResFinal") -# model = from_pretrained_keras("GIanlucaRub/autoencoder_model_d_0") - -def double_res(input_image): - input_height = input_image.shape[0] - input_width = input_image.shape[1] - height = ceil(input_height/128) - width = ceil(input_width/128) - expanded_input_image = np.zeros((128*height, 128*width, 3), dtype=np.uint8) - np.copyto(expanded_input_image[0:input_height, 0:input_width], input_image) - - output_image = np.zeros((128*height*2, 128*width*2, 3), dtype=np.float32) - - to_predict = [] - for i in range(height): - for j in range(width): - temp_slice = expanded_input_image[i * - 128:(i+1)*128, j*128:(j+1)*128]/255 - to_predict.append(temp_slice) - -# removing inner borders - - for i in range(height): - for j in range(width): - if i != 0 and j != 0 and i != height-1 and j != width-1: - right_slice = expanded_input_image[i * - 128:(i+1)*128, (j+1)*128-64:(j+1)*128+64]/255 - to_predict.append(right_slice) - - - left_slice = expanded_input_image[i * - 128:(i+1)*128, j*128-64:(j)*128+64]/255 - to_predict.append(left_slice) - - - upper_slice = expanded_input_image[( - i+1)*128-64:(i+1)*128+64, j*128:(j+1)*128]/255 - to_predict.append(upper_slice) - - - lower_slice = expanded_input_image[i * - 128-64:i*128+64, 
j*128:(j+1)*128]/255 - to_predict.append(lower_slice) - # removing angles - - lower_right_slice = expanded_input_image[i * - 128-64:i*128+64, (j+1)*128-64:(j+1)*128+64]/255 - to_predict.append(lower_right_slice) - - lower_left_slice = expanded_input_image[i * - 128-64:i*128+64, j*128-64:j*128+64]/255 - to_predict.append(lower_left_slice) - -# predicting all images at once - completed = False - n = 16 - # n = 1 - while not completed: - try: - print("attempting with "+ str(n)) - predicted = model.predict(np.array(to_predict),batch_size = n) - completed = True - print("completed with "+ str(n)) - except: - print("attempt with " + str(n) + " failed") - n += -1 - if n <= 0: - n = 1 - counter = 0 - for i in range(height): - for j in range(width): - np.copyto(output_image[i*256:(i+1)*256, j * - 256:(j+1)*256], predicted[counter]) - counter+=1 - - - - for i in range(height): - for j in range(width): - if i != 0 and j != 0 and i != height-1 and j != width-1: - right_upsampled_slice = predicted[counter] - counter+=1 - resized_right_slice = right_upsampled_slice[64:192, 64:192] - np.copyto(output_image[i*256+64:(i+1)*256-64, - (j+1)*256-64:(j+1)*256+64], resized_right_slice) - - - - - left_upsampled_slice = predicted[counter] - counter+=1 - resized_left_slice = left_upsampled_slice[64:192, 64:192] - np.copyto(output_image[i*256+64:(i+1)*256-64, - j*256-64:j*256+64], resized_left_slice) - - - - upper_upsampled_slice = predicted[counter] - counter+=1 - resized_upper_slice = upper_upsampled_slice[64:192, 64:192] - np.copyto(output_image[(i+1)*256-64:(i+1)*256+64, - j*256+64:(j+1)*256-64], resized_upper_slice) - - - - lower_upsampled_slice = predicted[counter] - counter+=1 - resized_lower_slice = lower_upsampled_slice[64:192, 64:192] - np.copyto(output_image[i*256-64:i*256+64, - j*256+64:(j+1)*256-64], resized_lower_slice) - - - - lower_right_upsampled_slice = predicted[counter] - counter+=1 - resized_lower_right_slice = lower_right_upsampled_slice[64:192, 64:192] - np.copyto(output_image[i*256-64:i*256+64, (j+1) - * 256-64:(j+1)*256+64], resized_lower_right_slice) - - - lower_left_upsampled_slice = predicted[counter] - counter+=1 - resized_lower_left_slice = lower_left_upsampled_slice[64:192, 64:192] - np.copyto( - output_image[i*256-64:i*256+64, j*256-64:j*256+64], resized_lower_left_slice) - - resized_output_image = output_image[0:input_height*2, 0:input_width*2] - return resized_output_image - -demo = gr.Interface( - fn=double_res, - title="Double picture resolution", - description="Upload a picture and get the horizontal and vertical resolution doubled (4x pixels)", - allow_flagging="never", - inputs=[ - gr.inputs.Image(type="numpy") - ], - outputs=gr.Image(type="numpy")) - -demo.launch() - diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py deleted file mode 100644 index 170fb23971d135ebf0c854c652a0005d3f31abaa..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py +++ /dev/null @@ -1,566 +0,0 @@ -import argparse -import cv2 -import glob -import mimetypes -import numpy as np -import os -import shutil -import subprocess -import torch -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url -from os import path as osp -from tqdm import tqdm - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch 
import SRVGGNetCompact - -try: - import ffmpeg -except ImportError: - import pip - - pip.main(["install", "--user", "ffmpeg-python"]) - import ffmpeg - - -def get_video_meta_info(video_path): - ret = {} - probe = ffmpeg.probe(video_path) - video_streams = [ - stream for stream in probe["streams"] if stream["codec_type"] == "video" - ] - has_audio = any(stream["codec_type"] == "audio" for stream in probe["streams"]) - ret["width"] = video_streams[0]["width"] - ret["height"] = video_streams[0]["height"] - ret["fps"] = eval(video_streams[0]["avg_frame_rate"]) - ret["audio"] = ffmpeg.input(video_path).audio if has_audio else None - ret["nb_frames"] = int(video_streams[0]["nb_frames"]) - return ret - - -def get_sub_video(args, num_process, process_idx): - if num_process == 1: - return args.input - meta = get_video_meta_info(args.input) - duration = int(meta["nb_frames"] / meta["fps"]) - part_time = duration // num_process - print(f"duration: {duration}, part_time: {part_time}") - os.makedirs( - osp.join(args.output, f"{args.video_name}_inp_tmp_videos"), exist_ok=True - ) - out_path = osp.join( - args.output, f"{args.video_name}_inp_tmp_videos", f"{process_idx:03d}.mp4" - ) - cmd = [ - args.ffmpeg_bin, - f"-i {args.input}", - "-ss", - f"{part_time * process_idx}", - f"-to {part_time * (process_idx + 1)}" - if process_idx != num_process - 1 - else "", - "-async 1", - out_path, - "-y", - ] - print(" ".join(cmd)) - subprocess.call(" ".join(cmd), shell=True) - return out_path - - -class Reader: - def __init__(self, args, total_workers=1, worker_idx=0): - self.args = args - input_type = mimetypes.guess_type(args.input)[0] - self.input_type = "folder" if input_type is None else input_type - self.paths = [] # for image&folder type - self.audio = None - self.input_fps = None - if self.input_type.startswith("video"): - video_path = get_sub_video(args, total_workers, worker_idx) - self.stream_reader = ( - ffmpeg.input(video_path) - .output("pipe:", format="rawvideo", pix_fmt="bgr24", loglevel="error") - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - meta = get_video_meta_info(video_path) - self.width = meta["width"] - self.height = meta["height"] - self.input_fps = meta["fps"] - self.audio = meta["audio"] - self.nb_frames = meta["nb_frames"] - - else: - if self.input_type.startswith("image"): - self.paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, "*"))) - tot_frames = len(paths) - num_frame_per_worker = tot_frames // total_workers + ( - 1 if tot_frames % total_workers else 0 - ) - self.paths = paths[ - num_frame_per_worker - * worker_idx : num_frame_per_worker - * (worker_idx + 1) - ] - - self.nb_frames = len(self.paths) - assert self.nb_frames > 0, "empty folder" - from PIL import Image - - tmp_img = Image.open(self.paths[0]) - self.width, self.height = tmp_img.size - self.idx = 0 - - def get_resolution(self): - return self.height, self.width - - def get_fps(self): - if self.args.fps is not None: - return self.args.fps - elif self.input_fps is not None: - return self.input_fps - return 24 - - def get_audio(self): - return self.audio - - def __len__(self): - return self.nb_frames - - def get_frame_from_stream(self): - img_bytes = self.stream_reader.stdout.read( - self.width * self.height * 3 - ) # 3 bytes for one pixel - if not img_bytes: - return None - img = np.frombuffer(img_bytes, np.uint8).reshape([self.height, self.width, 3]) - return img - - def get_frame_from_list(self): - if self.idx >= self.nb_frames: - return None - img = 
cv2.imread(self.paths[self.idx]) - self.idx += 1 - return img - - def get_frame(self): - if self.input_type.startswith("video"): - return self.get_frame_from_stream() - else: - return self.get_frame_from_list() - - def close(self): - if self.input_type.startswith("video"): - self.stream_reader.stdin.close() - self.stream_reader.wait() - - -class Writer: - def __init__(self, args, audio, height, width, video_save_path, fps): - out_width, out_height = int(width * args.outscale), int(height * args.outscale) - if out_height > 2160: - print( - "You are generating video that is larger than 4K, which will be very slow due to IO speed.", - "We highly recommend to decrease the outscale(aka, -s).", - ) - - if audio is not None: - self.stream_writer = ( - ffmpeg.input( - "pipe:", - format="rawvideo", - pix_fmt="bgr24", - s=f"{out_width}x{out_height}", - framerate=fps, - ) - .output( - audio, - video_save_path, - pix_fmt="yuv420p", - vcodec="libx264", - loglevel="error", - acodec="copy", - ) - .overwrite_output() - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - else: - self.stream_writer = ( - ffmpeg.input( - "pipe:", - format="rawvideo", - pix_fmt="bgr24", - s=f"{out_width}x{out_height}", - framerate=fps, - ) - .output( - video_save_path, - pix_fmt="yuv420p", - vcodec="libx264", - loglevel="error", - ) - .overwrite_output() - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - - def write_frame(self, frame): - frame = frame.astype(np.uint8).tobytes() - self.stream_writer.stdin.write(frame) - - def close(self): - self.stream_writer.stdin.close() - self.stream_writer.wait() - - -def inference_video(args, video_save_path, device=None, total_workers=1, worker_idx=0): - # ---------------------- determine models according to model names ---------------------- # - args.model_name = args.model_name.split(".pth")[0] - if args.model_name == "RealESRGAN_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth" - ] - elif args.model_name == "RealESRNet_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth" - ] - elif ( - args.model_name == "RealESRGAN_x4plus_anime_6B" - ): # x4 RRDBNet model with 6 blocks - model = RRDBNet( - num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4 - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth" - ] - elif args.model_name == "RealESRGAN_x2plus": # x2 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=2, - ) - netscale = 2 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth" - ] - elif args.model_name == "realesr-animevideov3": # x4 VGG-style model (XS size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=16, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth" - ] - elif args.model_name == "realesr-general-x4v3": # x4 VGG-style model (S size) - model = 
SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=32, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth", - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth", - ] - - # ---------------------- determine model paths ---------------------- # - model_path = os.path.join("weights", args.model_name + ".pth") - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, - model_dir=os.path.join(ROOT_DIR, "weights"), - progress=True, - file_name=None, - ) - - # use dni to control the denoise strength - dni_weight = None - if args.model_name == "realesr-general-x4v3" and args.denoise_strength != 1: - wdn_model_path = model_path.replace( - "realesr-general-x4v3", "realesr-general-wdn-x4v3" - ) - model_path = [model_path, wdn_model_path] - dni_weight = [args.denoise_strength, 1 - args.denoise_strength] - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=not args.fp32, - device=device, - ) - - if "anime" in args.model_name and args.face_enhance: - print( - "face_enhance is not supported in anime models, we turned this option off for you. " - "if you insist on turning it on, please manually comment the relevant lines of code." - ) - args.face_enhance = False - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - - face_enhancer = GFPGANer( - model_path="https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth", - upscale=args.outscale, - arch="clean", - channel_multiplier=2, - bg_upsampler=upsampler, - ) # TODO support custom device - else: - face_enhancer = None - - reader = Reader(args, total_workers, worker_idx) - audio = reader.get_audio() - height, width = reader.get_resolution() - fps = reader.get_fps() - writer = Writer(args, audio, height, width, video_save_path, fps) - - pbar = tqdm(total=len(reader), unit="frame", desc="inference") - while True: - img = reader.get_frame() - if img is None: - break - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance( - img, has_aligned=False, only_center_face=False, paste_back=True - ) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print("Error", error) - print( - "If you encounter CUDA out of memory, try to set --tile with a smaller number." 
- ) - else: - writer.write_frame(output) - - torch.cuda.synchronize(device) - pbar.update(1) - - reader.close() - writer.close() - - -def run(args): - args.video_name = osp.splitext(os.path.basename(args.input))[0] - video_save_path = osp.join(args.output, f"{args.video_name}_{args.suffix}.mp4") - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f"{args.video_name}_inp_tmp_frames") - os.makedirs(tmp_frames_folder, exist_ok=True) - os.system( - f"ffmpeg -i {args.input} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 {tmp_frames_folder}/frame%08d.png" - ) - args.input = tmp_frames_folder - - num_gpus = torch.cuda.device_count() - num_process = num_gpus * args.num_process_per_gpu - if num_process == 1: - inference_video(args, video_save_path) - return - - ctx = torch.multiprocessing.get_context("spawn") - pool = ctx.Pool(num_process) - os.makedirs( - osp.join(args.output, f"{args.video_name}_out_tmp_videos"), exist_ok=True - ) - pbar = tqdm(total=num_process, unit="sub_video", desc="inference") - for i in range(num_process): - sub_video_save_path = osp.join( - args.output, f"{args.video_name}_out_tmp_videos", f"{i:03d}.mp4" - ) - pool.apply_async( - inference_video, - args=( - args, - sub_video_save_path, - torch.device(i % num_gpus), - num_process, - i, - ), - callback=lambda arg: pbar.update(1), - ) - pool.close() - pool.join() - - # combine sub videos - # prepare vidlist.txt - with open(f"{args.output}/{args.video_name}_vidlist.txt", "w") as f: - for i in range(num_process): - f.write(f"file '{args.video_name}_out_tmp_videos/{i:03d}.mp4'\n") - - cmd = [ - args.ffmpeg_bin, - "-f", - "concat", - "-safe", - "0", - "-i", - f"{args.output}/{args.video_name}_vidlist.txt", - "-c", - "copy", - f"{video_save_path}", - ] - print(" ".join(cmd)) - subprocess.call(cmd) - shutil.rmtree(osp.join(args.output, f"{args.video_name}_out_tmp_videos")) - if osp.exists(osp.join(args.output, f"{args.video_name}_inp_tmp_videos")): - shutil.rmtree(osp.join(args.output, f"{args.video_name}_inp_tmp_videos")) - os.remove(f"{args.output}/{args.video_name}_vidlist.txt") - - -def main(): - """Inference demo for Real-ESRGAN. - It mainly for restoring anime videos. - - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "-i", "--input", type=str, default="inputs", help="Input video, image or folder" - ) - parser.add_argument( - "-n", - "--model_name", - type=str, - default="realesr-animevideov3", - help=( - "Model names: realesr-animevideov3 | RealESRGAN_x4plus_anime_6B | RealESRGAN_x4plus | RealESRNet_x4plus |" - " RealESRGAN_x2plus | realesr-general-x4v3" - "Default:realesr-animevideov3" - ), - ) - parser.add_argument( - "-o", "--output", type=str, default="results", help="Output folder" - ) - parser.add_argument( - "-dn", - "--denoise_strength", - type=float, - default=0.5, - help=( - "Denoise strength. 0 for weak denoise (keep noise), 1 for strong denoise ability. 
" - "Only used for the realesr-general-x4v3 model" - ), - ) - parser.add_argument( - "-s", - "--outscale", - type=float, - default=4, - help="The final upsampling scale of the image", - ) - parser.add_argument( - "--suffix", type=str, default="out", help="Suffix of the restored video" - ) - parser.add_argument( - "-t", - "--tile", - type=int, - default=0, - help="Tile size, 0 for no tile during testing", - ) - parser.add_argument("--tile_pad", type=int, default=10, help="Tile padding") - parser.add_argument( - "--pre_pad", type=int, default=0, help="Pre padding size at each border" - ) - parser.add_argument( - "--face_enhance", action="store_true", help="Use GFPGAN to enhance face" - ) - parser.add_argument( - "--fp32", - action="store_true", - help="Use fp32 precision during inference. Default: fp16 (half precision).", - ) - parser.add_argument( - "--fps", type=float, default=None, help="FPS of the output video" - ) - parser.add_argument( - "--ffmpeg_bin", type=str, default="ffmpeg", help="The path to ffmpeg" - ) - parser.add_argument("--extract_frame_first", action="store_true") - parser.add_argument("--num_process_per_gpu", type=int, default=1) - - parser.add_argument( - "--alpha_upsampler", - type=str, - default="realesrgan", - help="The upsampler for the alpha channels. Options: realesrgan | bicubic", - ) - parser.add_argument( - "--ext", - type=str, - default="auto", - help="Image extension. Options: auto | jpg | png, auto means using the same extension as inputs", - ) - args = parser.parse_args() - - args.input = args.input.rstrip("/").rstrip("\\") - os.makedirs(args.output, exist_ok=True) - - if mimetypes.guess_type(args.input)[0] is not None and mimetypes.guess_type( - args.input - )[0].startswith("video"): - is_video = True - else: - is_video = False - - if is_video and args.input.endswith(".flv"): - mp4_path = args.input.replace(".flv", ".mp4") - os.system(f"ffmpeg -i {args.input} -codec copy {mp4_path}") - args.input = mp4_path - - if args.extract_frame_first and not is_video: - args.extract_frame_first = False - - run(args) - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f"{args.video_name}_inp_tmp_frames") - shutil.rmtree(tmp_frames_folder) - - -if __name__ == "__main__": - main() diff --git a/spaces/Gradio-Blocks/clip-guided-faces/app.py b/spaces/Gradio-Blocks/clip-guided-faces/app.py deleted file mode 100644 index acb6a312f080243a7e43a94f0281f66ad47137c9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/clip-guided-faces/app.py +++ /dev/null @@ -1,281 +0,0 @@ -import os -import sys -import gradio as gr -os.system('git clone https://github.com/openai/CLIP') -os.system('git clone https://github.com/crowsonkb/guided-diffusion') -os.system('pip install -e ./CLIP') -os.system('pip install -e ./guided-diffusion') -os.system('pip install lpips') -os.system("curl -OL 'https://github.com/Sxela/DiscoDiffusion-Warp/releases/download/v0.1.1/256x256_openai_comics_faces_v2.by_alex_spirin_114k.pt'") - - - - -import io -import math -import sys -import lpips -from PIL import Image -import requests -import torch -from torch import nn -from torch.nn import functional as F -from torchvision import transforms -from torchvision.transforms import functional as TF -from tqdm.notebook import tqdm -sys.path.append('./CLIP') -sys.path.append('./guided-diffusion') -import clip -from guided_diffusion.script_util import create_model_and_diffusion, model_and_diffusion_defaults -import numpy as np -import imageio - 
-torch.hub.download_url_to_file('https://images.pexels.com/photos/68767/divers-underwater-ocean-swim-68767.jpeg', 'face.jpeg') - -def fetch(url_or_path): - if str(url_or_path).startswith('http://') or str(url_or_path).startswith('https://'): - r = requests.get(url_or_path) - r.raise_for_status() - fd = io.BytesIO() - fd.write(r.content) - fd.seek(0) - return fd - return open(url_or_path, 'rb') -def parse_prompt(prompt): - if prompt.startswith('http://') or prompt.startswith('https://'): - vals = prompt.rsplit(':', 2) - vals = [vals[0] + ':' + vals[1], *vals[2:]] - else: - vals = prompt.rsplit(':', 1) - vals = vals + ['', '1'][len(vals):] - return vals[0], float(vals[1]) -class MakeCutouts(nn.Module): - def __init__(self, cut_size, cutn, cut_pow=1.): - super().__init__() - self.cut_size = cut_size - self.cutn = cutn - self.cut_pow = cut_pow - def forward(self, input): - sideY, sideX = input.shape[2:4] - max_size = min(sideX, sideY) - min_size = min(sideX, sideY, self.cut_size) - cutouts = [] - for _ in range(self.cutn): - size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size) - offsetx = torch.randint(0, sideX - size + 1, ()) - offsety = torch.randint(0, sideY - size + 1, ()) - cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size] - cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size)) - return torch.cat(cutouts) -def spherical_dist_loss(x, y): - x = F.normalize(x, dim=-1) - y = F.normalize(y, dim=-1) - return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2) -def tv_loss(input): - """L2 total variation loss, as in Mahendran et al.""" - input = F.pad(input, (0, 1, 0, 1), 'replicate') - x_diff = input[..., :-1, 1:] - input[..., :-1, :-1] - y_diff = input[..., 1:, :-1] - input[..., :-1, :-1] - return (x_diff**2 + y_diff**2).mean([1, 2, 3]) -def range_loss(input): - return (input - input.clamp(-1, 1)).pow(2).mean([1, 2, 3]) - -def inference(text, init_image, skip_timesteps, clip_guidance_scale, tv_scale, range_scale, init_scale, seed, image_prompts,timestep_respacing, cutn, im_prompt_weight): - # Model settings - skip_timesteps = min(skip_timesteps, timestep_respacing-1) - skip_timesteps = int(timestep_respacing-1 - (timestep_respacing-1)*skip_timesteps/100) - device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - model_config = model_and_diffusion_defaults() - model_config.update({ - 'attention_resolutions': '16', - 'class_cond': False, - 'diffusion_steps': 1000, - 'rescale_timesteps': True, - 'timestep_respacing': str(timestep_respacing), - 'image_size': 256, - 'learn_sigma': True, - 'noise_schedule': 'linear', - 'num_channels': 128, - 'num_heads': 1, - 'num_res_blocks': 2, - 'use_checkpoint': True, - 'use_fp16': False if device.type == 'cpu' else True, - 'use_scale_shift_norm': False, - }) - - # Load models - print('Using fp16: ',model_config['use_fp16']) - print('Using device:', device) - model, diffusion = create_model_and_diffusion(**model_config) - model.load_state_dict(torch.load('256x256_openai_comics_faces_v2.by_alex_spirin_114k.pt', map_location='cpu')) - model.requires_grad_(False).eval().to(device).float() - for name, param in model.named_parameters(): - if 'qkv' in name or 'norm' in name or 'proj' in name: - param.requires_grad_() - if model_config['use_fp16']: - model.convert_to_fp16() - else: model.convert_to_fp32() - clip_model = clip.load('ViT-B/16', jit=False)[0].eval().requires_grad_(False).to(device).float() - clip_size = clip_model.visual.input_resolution - normalize = transforms.Normalize(mean=[0.48145466, 
0.4578275, 0.40821073], - std=[0.26862954, 0.26130258, 0.27577711]) - - - all_frames = [] - prompts = [text] - - batch_size = 1 - clip_guidance_scale = clip_guidance_scale # Controls how much the image should look like the prompt. - tv_scale = tv_scale # Controls the smoothness of the final output. - range_scale = range_scale # Controls how far out of range RGB values are allowed to be. - cutn = cutn - n_batches = 1 - - skip_timesteps = skip_timesteps # This needs to be between approx. 200 and 500 when using an init image. - # Higher values make the output look more like the init. - init_scale = init_scale # This enhances the effect of the init image, a good value is 1000. - seed = seed - - if seed is not None: - torch.manual_seed(seed) - make_cutouts = MakeCutouts(clip_size, cutn) - side_x = side_y = model_config['image_size'] - target_embeds, weights = [], [] - for prompt in prompts: - txt, weight = parse_prompt(prompt) - target_embeds.append(clip_model.encode_text(clip.tokenize(txt).to(device)).float()) - weights.append(weight) - if image_prompts is not None: - img = Image.fromarray(image_prompts).convert('RGB') - img = TF.resize(img, min(side_x, side_y, *img.size), transforms.InterpolationMode.LANCZOS) - batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device)) - embed = clip_model.encode_image(normalize(batch)).float() - target_embeds.append(embed) - weights.extend([im_prompt_weight / cutn] * cutn) - target_embeds = torch.cat(target_embeds) - weights = torch.tensor(weights, device=device) - if weights.sum().abs() < 1e-3: - raise RuntimeError('The weights must not sum to 0.') - weights /= weights.sum().abs() - init = None - if init_image is not None: - lpips_model = lpips.LPIPS(net='vgg').to(device) - init = Image.fromarray(init_image).convert('RGB') - init = init.resize((side_x, side_y), Image.LANCZOS) - init = TF.to_tensor(init).to(device).unsqueeze(0).mul(2).sub(1) - else: skip_timesteps = 0 - cur_t = None - def cond_fn(x, t, y=None): - with torch.enable_grad(): - x = x.detach().requires_grad_() - n = x.shape[0] - my_t = torch.ones([n], device=device, dtype=torch.long) * cur_t - out = diffusion.p_mean_variance(model, x, my_t, clip_denoised=False, model_kwargs={'y': y}) - fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t] - x_in = out['pred_xstart'] * fac + x * (1 - fac) - clip_in = normalize(make_cutouts(x_in.add(1).div(2))) - image_embeds = clip_model.encode_image(clip_in).float() - dists = spherical_dist_loss(image_embeds.unsqueeze(1), target_embeds.unsqueeze(0)) - dists = dists.view([cutn, n, -1]) - losses = dists.mul(weights).sum(2).mean(0) - tv_losses = tv_loss(x_in) - range_losses = range_loss(out['pred_xstart']) - loss = losses.sum() * clip_guidance_scale + tv_losses.sum() * tv_scale + range_losses.sum() * range_scale - if init is not None and init_scale: - - init_losses = lpips_model(x_in, init) - loss = loss + init_losses.sum() * init_scale - return -torch.autograd.grad(loss, x)[0] - if model_config['timestep_respacing'].startswith('ddim'): - sample_fn = diffusion.ddim_sample_loop_progressive - else: - sample_fn = diffusion.p_sample_loop_progressive - for i in range(n_batches): - cur_t = diffusion.num_timesteps - skip_timesteps - 1 - samples = sample_fn( - model, - (batch_size, 3, side_y, side_x), - clip_denoised=False, - model_kwargs={}, - cond_fn=cond_fn, - progress=True, - skip_timesteps=skip_timesteps, - init_image=init, - randomize_class=True, - ) - for j, sample in enumerate(samples): - cur_t -= 1 - if j % 1 == 0 or cur_t == -1: - print() - for k, image in 
enumerate(sample['pred_xstart']): - img = TF.to_pil_image(image.add(1).div(2).clamp(0, 1)) - all_frames.append(img) - tqdm.write(f'Batch {i}, step {j}, output {k}:') - writer = imageio.get_writer('video.mp4', fps=5) - for im in all_frames: - writer.append_data(np.array(im)) - writer.close() - return img, 'video.mp4' - -demo = gr.Blocks() -with demo: - gr.Markdown( - """ - # CLIP Guided Openai Diffusion Faces Model - ### by [Alex Spirin](https://linktr.ee/devdef) - Gradio Blocks demo for CLIP Guided Diffusion. To use it, simply add your text, or click one of the examples to load them. - Based on the original [Space](https://huggingface.co/spaces/EleutherAI/clip-guided-diffusion) by akhaliq. - ![visitors](https://visitor-badge.glitch.me/badge?page_id=sxela_dd_custom_model_hf_space) - """) - - with gr.Row(): - text = gr.Textbox(placeholder="Enter a description of a face", label='Text prompt', value="A beautiful girl by Greg Rutkowski") - with gr.Tabs(): - with gr.TabItem("Settings"): - with gr.Row(): - # with gr.Group(): - with gr.Column(): - clip_guidance_scale = gr.Slider(minimum=0, maximum=3000, step=1, value=600, label="Prompt strength") - tv_scale = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Smoothness") - range_scale = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Compress color range") - # with gr.Group(): - with gr.Column(): - timestep_respacing = gr.Slider(minimum=25, maximum=100, step=1, value=25, label="Timestep respacing") - cutn = gr.Slider(minimum=4, maximum=32, step=1, value=16, label="cutn") - seed = gr.Number(value=0, label="Seed") - with gr.TabItem("Input images"): - with gr.Row(): - # with gr.Group(): - with gr.Column(): - init_image = gr.Image(source="upload", label='initial image (optional)') - init_scale = gr.Slider(minimum=0, maximum=1000, step=10, value=0, label="Look like the image above") - skip_timesteps = gr.Slider(minimum=0, maximum=100, step=1, value=30, label="Style strength, % (0 = initial image)") - # with gr.Group(): - with gr.Column(): - image_prompts = gr.Image(source="upload", label='image prompt (optional)') - im_prompt_weight = gr.Slider(minimum=0, maximum=10, step=1, value=1, label="Look like the image above") - - with gr.Group(): - with gr.Row(): - gr.Markdown( - """ - ### Press Run to Run :D - ---- - """) - with gr.Row(): - run_button = gr.Button("Run!") - with gr.Row(): - gr.Markdown( - """ - ### Results - --- - """) - with gr.Row(): - output_image = gr.Image(label='Output image', type='numpy') - output_video = gr.Video(label='Output video') - - outputs=[output_image,output_video] - - run_button.click(inference, inputs=[text, init_image, skip_timesteps, clip_guidance_scale, tv_scale, range_scale, init_scale, seed, image_prompts,timestep_respacing, cutn, im_prompt_weight], outputs=outputs) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py deleted file mode 100644 index e73a098d32d6ce3f6a0e121538ed90de81699ff5..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py +++ /dev/null @@ -1,63 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = 
dict( - pretrained='open-mmlab://regnetx_3.2gf', - backbone=dict( - _delete_=True, - type='RegNet', - arch='regnetx_3.2gf', - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[96, 192, 432, 1008], - out_channels=256, - num_outs=5)) -img_norm_cfg = dict( - # The mean and std are used in PyCls when training RegNets - mean=[103.53, 116.28, 123.675], - std=[57.375, 57.12, 58.395], - to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.00005) -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/templates/base.html b/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/templates/base.html deleted file mode 100644 index f74668c19ecb83090a8a2d82c026bf417190ec6d..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/templates/base.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - {% block head %} - - - AudioCraft — MOS - {% endblock %} - - -
-
-
- AudioCraft — MOS
-
- {% block content %}{% endblock %}
-
    - - diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/transforms.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/transforms.py deleted file mode 100644 index 399adbcdad096ae3fb8a190ecd3ec5483a897251..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/transforms.py +++ /dev/null @@ -1,231 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height).""" - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". 
- """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std.""" - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input.""" - - def __init__(self): - pass - - def __call__(self, sample): - image = 
np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/seeds.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/seeds.py deleted file mode 100644 index 23aaf88741e7b69efa19333eff2d06081bc55a17..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/seeds.py +++ /dev/null @@ -1,27 +0,0 @@ -import random, torch, os -import numpy as np -from torch.backends import cudnn - -def setup_seed(seed: int, cuda_deterministic=False): - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - np.random.seed(seed) - random.seed(seed) - os.environ["PYTHONHASHSEED"] = str(seed) - - # Benchmark mode is good whenever your input sizes for your network do not vary. - # This way, cudnn will look for the optimal set of algorithms for - # that particular configuration (which takes some time). - # This usually leads to faster runtime. - # - # But if your input sizes changes at each iteration, - # then cudnn will benchmark every time a new size appears, - # possibly leading to worse runtime performances. - cudnn.benchmark = True - - if cuda_deterministic: - # given the same input, and when run on the same software and hardware, - # always produce the same output - cudnn.deterministic = True - else: - cudnn.deterministic = False diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/commons.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/commons.py deleted file mode 100644 index 8da7b35049d768a29de6f66cbe8795a825967818..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/commons.py +++ /dev/null @@ -1,273 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from librosa.filters import mel as librosa_mel_fn -from audio_processing import dynamic_range_compression -from audio_processing import dynamic_range_decompression -from stft import STFT - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def mle_loss(z, m, logs, logdet, mask): - l = torch.sum(logs) + 0.5 * torch.sum( - torch.exp(-2 * logs) * ((z - m) ** 2) - ) # neg normal likelihood w/o the constant term - l = l - torch.sum(logdet) # log jacobian determinant - l = l / torch.sum( - torch.ones_like(z) * mask - ) # averaging across batch, channel and time axes - l = l + 0.5 * math.log(2 * math.pi) # add the remaining constant term - return l - - -def duration_loss(logw, logw_, lengths): - l = torch.sum((logw - logw_) ** 2) / torch.sum(lengths) - return l - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist 
in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def maximum_path(value, mask, max_neg_val=-np.inf): - """Numpy-friendly version. It's about 4 times faster than torch version. - value: [b, t_x, t_y] - mask: [b, t_x, t_y] - """ - value = value * mask - - device = value.device - dtype = value.dtype - value = value.cpu().detach().numpy() - mask = mask.cpu().detach().numpy().astype(np.bool) - - b, t_x, t_y = value.shape - direction = np.zeros(value.shape, dtype=np.int64) - v = np.zeros((b, t_x), dtype=np.float32) - x_range = np.arange(t_x, dtype=np.float32).reshape(1, -1) - for j in range(t_y): - v0 = np.pad(v, [[0, 0], [1, 0]], mode="constant", constant_values=max_neg_val)[ - :, :-1 - ] - v1 = v - max_mask = v1 >= v0 - v_max = np.where(max_mask, v1, v0) - direction[:, :, j] = max_mask - - index_mask = x_range <= j - v = np.where(index_mask, v_max + value[:, :, j], max_neg_val) - direction = np.where(mask, direction, 1) - - path = np.zeros(value.shape, dtype=np.float32) - index = mask[:, :, 0].sum(1).astype(np.int64) - 1 - index_range = np.arange(b) - for j in reversed(range(t_y)): - path[index_range, index, j] = 1 - index = index + direction[index_range, index, j] - 1 - path = path * mask.astype(np.float32) - path = torch.from_numpy(path).to(device=device, dtype=dtype) - return path - - -def generate_path(duration, mask): - """ - duration: [b, t_x] - mask: [b, t_x, t_y] - """ - device = duration.device - - b, t_x, t_y = mask.shape - cum_duration = torch.cumsum(duration, 1) - path = torch.zeros(b, t_x, t_y, dtype=mask.dtype).to(device=device) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path * mask - return path - - -class Adam: - def __init__( - self, - params, - scheduler, - dim_model, - warmup_steps=4000, - lr=1e0, - betas=(0.9, 0.98), - eps=1e-9, - ): - self.params = params - self.scheduler = scheduler - self.dim_model = dim_model - self.warmup_steps = warmup_steps - self.lr = lr - self.betas = betas - self.eps = eps - - self.step_num = 1 - self.cur_lr = lr * self._get_lr_scale() - - self._optim = torch.optim.Adam(params, lr=self.cur_lr, betas=betas, eps=eps) - - def _get_lr_scale(self): - if self.scheduler == "noam": - return np.power(self.dim_model, -0.5) * np.min( - [ - np.power(self.step_num, -0.5), - self.step_num * np.power(self.warmup_steps, -1.5), - ] - ) - else: - return 1 - - def _update_learning_rate(self): - self.step_num += 1 - if self.scheduler == "noam": - self.cur_lr = self.lr * self._get_lr_scale() - for param_group in self._optim.param_groups: - param_group["lr"] = self.cur_lr - - def get_lr(self): - return self.cur_lr - - def step(self): - self._optim.step() - self._update_learning_rate() - - def zero_grad(self): - self._optim.zero_grad() - - def load_state_dict(self, d): - self._optim.load_state_dict(d) - - def state_dict(self): - return self._optim.state_dict() - - -class TacotronSTFT(nn.Module): - def __init__( - self, - filter_length=1024, - hop_length=256, - win_length=1024, - n_mel_channels=80, - sampling_rate=22050, - mel_fmin=0.0, - mel_fmax=8000.0, - ): - 
super(TacotronSTFT, self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - - def spectral_normalize(self, magnitudes): - output = dynamic_range_compression(magnitudes) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert torch.min(y.data) >= -1 - assert torch.max(y.data) <= 1 - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output) - return mel_output - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm - - -def squeeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - t = (t // n_sqz) * n_sqz - x = x[:, :, :t] - x_sqz = x.view(b, c, t // n_sqz, n_sqz) - x_sqz = x_sqz.permute(0, 3, 1, 2).contiguous().view(b, c * n_sqz, t // n_sqz) - - if x_mask is not None: - x_mask = x_mask[:, :, n_sqz - 1 :: n_sqz] - else: - x_mask = torch.ones(b, 1, t // n_sqz).to(device=x.device, dtype=x.dtype) - return x_sqz * x_mask, x_mask - - -def unsqueeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - x_unsqz = x.view(b, n_sqz, c // n_sqz, t) - x_unsqz = x_unsqz.permute(0, 2, 3, 1).contiguous().view(b, c // n_sqz, t * n_sqz) - - if x_mask is not None: - x_mask = x_mask.unsqueeze(-1).repeat(1, 1, 1, n_sqz).view(b, 1, t * n_sqz) - else: - x_mask = torch.ones(b, 1, t * n_sqz).to(device=x.device, dtype=x.dtype) - return x_unsqz * x_mask, x_mask diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/transliterate/sinhala_transliterator.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/transliterate/sinhala_transliterator.py deleted file mode 100644 index 1e762252a56e93c94cd488a07031f7d7eae8a1d3..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/transliterate/sinhala_transliterator.py +++ /dev/null @@ -1,171 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -class SinhalaDevanagariTransliterator(object): - """ - A Devanagari to Sinhala transliterator based on explicit Unicode Mapping - """ - - sinhala_devnag_map={ - '\u0d82':'\u0902', - '\u0d83':'\u0903', - '\u0d84':'\u0904', - '\u0d85':'\u0905', - '\u0d86':'\u0906', - '\u0d87':'\u090d', - '\u0d88':'\u090d', - '\u0d89':'\u0907', - '\u0d8a':'\u0908', - '\u0d8b':'\u0909', - '\u0d8c':'\u090a', - '\u0d8d':'\u090b', - '\u0d8f':'\u090c', - '\u0d91':'\u090e', - '\u0d92':'\u090f', - '\u0d93':'\u0910', - '\u0d94':'\u0912', - '\u0d95':'\u0913', - '\u0d96':'\u0914', - '\u0d9a':'\u0915', - '\u0d9b':'\u0916', - '\u0d9c':'\u0917', - '\u0d9d':'\u0918', - '\u0d9e':'\u0919', - '\u0d9f':'\u0919', - '\u0da0':'\u091a', - '\u0da1':'\u091b', - '\u0da2':'\u091c', - '\u0da3':'\u091d', - '\u0da4':'\u091e', - '\u0da5':'\u091e', - '\u0da6':'\u091e', - '\u0da7':'\u091f', - '\u0da8':'\u0920', - '\u0da9':'\u0921', - '\u0daa':'\u0922', - '\u0dab':'\u0923', - '\u0dac':'\u0923', - '\u0dad':'\u0924', - '\u0dae':'\u0925', - '\u0daf':'\u0926', - '\u0db0':'\u0927', - '\u0db1':'\u0928', - '\u0db2':'\u0928', - '\u0db3':'\u0928', - '\u0db4':'\u092a', - '\u0db5':'\u092b', - '\u0db6':'\u092c', - '\u0db7':'\u092d', - '\u0db8':'\u092e', - '\u0dba':'\u092f', - '\u0dbb':'\u0930', - '\u0dbd':'\u0932', - '\u0dc5':'\u0933', - '\u0dc0':'\u0935', - '\u0dc1':'\u0936', - '\u0dc2':'\u0937', - '\u0dc3':'\u0938', - '\u0dc4':'\u0939', - '\u0dcf':'\u093e', - '\u0dd0':'\u0949', - '\u0dd1':'\u0949', - '\u0dd2':'\u093f', - '\u0dd3':'\u0940', - '\u0dd4':'\u0941', - '\u0dd6':'\u0942', - '\u0dd8':'\u0943', - '\u0dd9':'\u0946', - '\u0dda':'\u0947', - '\u0ddb':'\u0948', - '\u0ddc':'\u094a', - '\u0ddd':'\u094b', - '\u0dde':'\u094c', - '\u0dca':'\u094d', - } - - devnag_sinhala_map={ - '\u0900':'\u0d82', - '\u0901':'\u0d82', - '\u0902':'\u0d82', - '\u0903':'\u0d83', - '\u0904':'\u0d84', - '\u0905':'\u0d85', - '\u0906':'\u0d86', - '\u0907':'\u0d89', - '\u0908':'\u0d8a', - '\u0909':'\u0d8b', - '\u090a':'\u0d8c', - '\u090b':'\u0d8d', - '\u090c':'\u0d8f', - '\u090d':'\u0d88', - '\u090e':'\u0d91', - '\u090f':'\u0d92', - '\u0910':'\u0d93', - '\u0912':'\u0d94', - '\u0913':'\u0d95', - '\u0914':'\u0d96', - '\u0915':'\u0d9a', - '\u0916':'\u0d9b', - '\u0917':'\u0d9c', - '\u0918':'\u0d9d', - '\u0919':'\u0d9e', - '\u091a':'\u0da0', - '\u091b':'\u0da1', - '\u091c':'\u0da2', - '\u091d':'\u0da3', - '\u091e':'\u0da4', - '\u091f':'\u0da7', - '\u0920':'\u0da8', - '\u0921':'\u0da9', - '\u0922':'\u0daa', - '\u0923':'\u0dab', - '\u0924':'\u0dad', - '\u0925':'\u0dae', - '\u0926':'\u0daf', - '\u0927':'\u0db0', - '\u0928':'\u0db1', - '\u0929':'\u0db1', - '\u092a':'\u0db4', - '\u092b':'\u0db5', - '\u092c':'\u0db6', - '\u092d':'\u0db7', - '\u092e':'\u0db8', - '\u092f':'\u0dba', - '\u0930':'\u0dbb', - '\u0932':'\u0dbd', - '\u0933':'\u0dc5', - '\u0935':'\u0dc0', - '\u0936':'\u0dc1', - '\u0937':'\u0dc2', - '\u0938':'\u0dc3', - '\u0939':'\u0dc4', - '\u093e':'\u0dcf', - '\u0949':'\u0dd1', - '\u093f':'\u0dd2', - '\u0940':'\u0dd3', - '\u0941':'\u0dd4', - '\u0942':'\u0dd6', - '\u0943':'\u0dd8', - '\u0946':'\u0dd9', - '\u0947':'\u0dda', - '\u0948':'\u0ddb', - '\u094a':'\u0ddc', - '\u094b':'\u0ddd', - '\u094c':'\u0dde', - '\u094d':'\u0dca', - - } - - @staticmethod - def devanagari_to_sinhala(text): - return ''.join([ SinhalaDevanagariTransliterator.devnag_sinhala_map.get(c,c) for c in text ]) - - @staticmethod - def sinhala_to_devanagari(text): - return ''.join([ SinhalaDevanagariTransliterator.sinhala_devnag_map.get(c,c) for c in text ]) - diff --git 
a/spaces/HighCWu/GFPGAN-1.3/gfpgan/__init__.py b/spaces/HighCWu/GFPGAN-1.3/gfpgan/__init__.py deleted file mode 100644 index 94daaeebce5604d61999f0b1b354b9a9e299b991..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GFPGAN-1.3/gfpgan/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# flake8: noqa -from .archs import * -from .data import * -from .models import * -from .utils import * - -# from .version import * diff --git a/spaces/Hina4867/bingo/src/components/chat.tsx b/spaces/Hina4867/bingo/src/components/chat.tsx deleted file mode 100644 index fcb6f467e2c773d364d684bdca4c39b4656fe417..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/components/chat.tsx +++ /dev/null @@ -1,92 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
    - -
    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
    - -
    - ) : null} - - ) : null} -
    - - -
    - ) -} diff --git a/spaces/Hina4867/bingo/src/components/ui/button.tsx b/spaces/Hina4867/bingo/src/components/ui/button.tsx deleted file mode 100644 index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/components/ui/button.tsx +++ /dev/null @@ -1,57 +0,0 @@ -import * as React from 'react' -import { Slot } from '@radix-ui/react-slot' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const buttonVariants = cva( - 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50', - { - variants: { - variant: { - default: - 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90', - destructive: - 'bg-destructive text-destructive-foreground hover:bg-destructive/90', - outline: - 'border border-input hover:bg-accent hover:text-accent-foreground', - secondary: - 'bg-secondary text-secondary-foreground hover:bg-secondary/80', - ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground', - link: 'text-primary underline-offset-4 shadow-none hover:underline' - }, - size: { - default: 'h-8 px-4 py-2', - sm: 'h-8 rounded-md px-3', - lg: 'h-11 rounded-md px-8', - icon: 'h-8 w-8 p-0' - } - }, - defaultVariants: { - variant: 'default', - size: 'default' - } - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes, - VariantProps { - asChild?: boolean -} - -const Button = React.forwardRef( - ({ className, variant, size, asChild = false, ...props }, ref) => { - const Comp = asChild ? Slot : 'button' - return ( - - ) - } -) -Button.displayName = 'Button' - -export { Button, buttonVariants } diff --git a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/entry/app.a067d86b.js b/spaces/HugoDzz/spaceship_drift/build/_app/immutable/entry/app.a067d86b.js deleted file mode 100644 index bcb0cb8fe0ed2ad81fd7fb4bbe4714ba0ba278db..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/build/_app/immutable/entry/app.a067d86b.js +++ /dev/null @@ -1 +0,0 @@ -import{S as V,i as q,s as U,a as j,e as h,c as z,b as w,d as p,f as y,g as d,h as g,j as W,o as F,k as G,l as H,m as J,n as N,p as m,q as K,r as M,u as Q,v as L,w as P,x as k,y as v,z as A,A as E,B as R}from"../chunks/index.0d3f7c7a.js";const X="modulepreload",Y=function(a,e){return new URL(a,e).href},B={},S=function(e,n,i){if(!n||n.length===0)return e();const s=document.getElementsByTagName("link");return Promise.all(n.map(f=>{if(f=Y(f,i),f in B)return;B[f]=!0;const t=f.endsWith(".css"),r=t?'[rel="stylesheet"]':"";if(!!i)for(let l=s.length-1;l>=0;l--){const _=s[l];if(_.href===f&&(!t||_.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${f}"]${r}`))return;const o=document.createElement("link");if(o.rel=t?"stylesheet":X,t||(o.as="script",o.crossOrigin=""),o.href=f,document.head.appendChild(o),t)return new Promise((l,_)=>{o.addEventListener("load",l),o.addEventListener("error",()=>_(new Error(`Unable to preload CSS for ${f}`)))})})).then(()=>e())},ie={};function Z(a){let e,n,i;var s=a[1][0];function f(t){return{props:{data:t[3],form:t[2]}}}return s&&(e=k(s,f(a)),a[12](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&A(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),w(t,n,r),i=!0},p(t,r){const u={};if(r&8&&(u.data=t[3]),r&4&&(u.form=t[2]),r&2&&s!==(s=t[1][0])){if(e){L();const 
o=e;p(o.$$.fragment,1,0,()=>{R(o,1)}),y()}s?(e=k(s,f(t)),t[12](e),v(e.$$.fragment),d(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else s&&e.$set(u)},i(t){i||(e&&d(e.$$.fragment,t),i=!0)},o(t){e&&p(e.$$.fragment,t),i=!1},d(t){a[12](null),t&&g(n),e&&R(e,t)}}}function $(a){let e,n,i;var s=a[1][0];function f(t){return{props:{data:t[3],$$slots:{default:[x]},$$scope:{ctx:t}}}}return s&&(e=k(s,f(a)),a[11](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&A(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),w(t,n,r),i=!0},p(t,r){const u={};if(r&8&&(u.data=t[3]),r&8215&&(u.$$scope={dirty:r,ctx:t}),r&2&&s!==(s=t[1][0])){if(e){L();const o=e;p(o.$$.fragment,1,0,()=>{R(o,1)}),y()}s?(e=k(s,f(t)),t[11](e),v(e.$$.fragment),d(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else s&&e.$set(u)},i(t){i||(e&&d(e.$$.fragment,t),i=!0)},o(t){e&&p(e.$$.fragment,t),i=!1},d(t){a[11](null),t&&g(n),e&&R(e,t)}}}function x(a){let e,n,i;var s=a[1][1];function f(t){return{props:{data:t[4],form:t[2]}}}return s&&(e=k(s,f(a)),a[10](e)),{c(){e&&v(e.$$.fragment),n=h()},l(t){e&&A(e.$$.fragment,t),n=h()},m(t,r){e&&E(e,t,r),w(t,n,r),i=!0},p(t,r){const u={};if(r&16&&(u.data=t[4]),r&4&&(u.form=t[2]),r&2&&s!==(s=t[1][1])){if(e){L();const o=e;p(o.$$.fragment,1,0,()=>{R(o,1)}),y()}s?(e=k(s,f(t)),t[10](e),v(e.$$.fragment),d(e.$$.fragment,1),E(e,n.parentNode,n)):e=null}else s&&e.$set(u)},i(t){i||(e&&d(e.$$.fragment,t),i=!0)},o(t){e&&p(e.$$.fragment,t),i=!1},d(t){a[10](null),t&&g(n),e&&R(e,t)}}}function C(a){let e,n=a[6]&&D(a);return{c(){e=G("div"),n&&n.c(),this.h()},l(i){e=H(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var s=J(e);n&&n.l(s),s.forEach(g),this.h()},h(){N(e,"id","svelte-announcer"),N(e,"aria-live","assertive"),N(e,"aria-atomic","true"),m(e,"position","absolute"),m(e,"left","0"),m(e,"top","0"),m(e,"clip","rect(0 0 0 0)"),m(e,"clip-path","inset(50%)"),m(e,"overflow","hidden"),m(e,"white-space","nowrap"),m(e,"width","1px"),m(e,"height","1px")},m(i,s){w(i,e,s),n&&n.m(e,null)},p(i,s){i[6]?n?n.p(i,s):(n=D(i),n.c(),n.m(e,null)):n&&(n.d(1),n=null)},d(i){i&&g(e),n&&n.d()}}}function D(a){let e;return{c(){e=K(a[7])},l(n){e=M(n,a[7])},m(n,i){w(n,e,i)},p(n,i){i&128&&Q(e,n[7])},d(n){n&&g(e)}}}function ee(a){let e,n,i,s,f;const t=[$,Z],r=[];function u(l,_){return l[1][1]?0:1}e=u(a),n=r[e]=t[e](a);let o=a[5]&&C(a);return{c(){n.c(),i=j(),o&&o.c(),s=h()},l(l){n.l(l),i=z(l),o&&o.l(l),s=h()},m(l,_){r[e].m(l,_),w(l,i,_),o&&o.m(l,_),w(l,s,_),f=!0},p(l,[_]){let b=e;e=u(l),e===b?r[e].p(l,_):(L(),p(r[b],1,1,()=>{r[b]=null}),y(),n=r[e],n?n.p(l,_):(n=r[e]=t[e](l),n.c()),d(n,1),n.m(i.parentNode,i)),l[5]?o?o.p(l,_):(o=C(l),o.c(),o.m(s.parentNode,s)):o&&(o.d(1),o=null)},i(l){f||(d(n),f=!0)},o(l){p(n),f=!1},d(l){r[e].d(l),l&&g(i),o&&o.d(l),l&&g(s)}}}function te(a,e,n){let{stores:i}=e,{page:s}=e,{constructors:f}=e,{components:t=[]}=e,{form:r}=e,{data_0:u=null}=e,{data_1:o=null}=e;W(i.page.notify);let l=!1,_=!1,b=null;F(()=>{const c=i.page.subscribe(()=>{l&&(n(6,_=!0),n(7,b=document.title||"untitled page"))});return n(5,l=!0),c});function I(c){P[c?"unshift":"push"](()=>{t[1]=c,n(0,t)})}function O(c){P[c?"unshift":"push"](()=>{t[0]=c,n(0,t)})}function T(c){P[c?"unshift":"push"](()=>{t[0]=c,n(0,t)})}return a.$$set=c=>{"stores"in c&&n(8,i=c.stores),"page"in c&&n(9,s=c.page),"constructors"in c&&n(1,f=c.constructors),"components"in c&&n(0,t=c.components),"form"in c&&n(2,r=c.form),"data_0"in c&&n(3,u=c.data_0),"data_1"in c&&n(4,o=c.data_1)},a.$$.update=()=>{a.$$.dirty&768&&i.page.set(s)},[t,f,r,u,o,l,_,b,i,s,I,O,T]}class se extends 
V{constructor(e){super(),q(this,e,te,ee,U,{stores:8,page:9,constructors:1,components:0,form:2,data_0:3,data_1:4})}}const re=[()=>S(()=>import("../nodes/0.2bc3f307.js"),["../nodes/0.2bc3f307.js","../chunks/index.0d3f7c7a.js","../assets/0.cdd10e73.css"],import.meta.url),()=>S(()=>import("../nodes/1.dac78f11.js"),["../nodes/1.dac78f11.js","../chunks/index.0d3f7c7a.js","../chunks/stores.bd2e29f1.js","../chunks/singletons.afdbe156.js"],import.meta.url),()=>S(()=>import("../nodes/2.1cc72ea4.js"),["../nodes/2.1cc72ea4.js","../chunks/index.0d3f7c7a.js","../chunks/stores.bd2e29f1.js","../chunks/singletons.afdbe156.js"],import.meta.url)],oe=[],ae={"/":[2]},le={handleError:({error:a})=>{console.error(a)}};export{ae as dictionary,le as hooks,ie as matchers,re as nodes,se as root,oe as server_loads}; diff --git a/spaces/ICCV2023/ICCV2023-papers/README.md b/spaces/ICCV2023/ICCV2023-papers/README.md deleted file mode 100644 index 6e42748c86191e848683d94ce87af6747dd60f7c..0000000000000000000000000000000000000000 --- a/spaces/ICCV2023/ICCV2023-papers/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ICCV2023 Papers -emoji: 🦀 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/IDKiro/DehazeFormer_Demo/models/__init__.py b/spaces/IDKiro/DehazeFormer_Demo/models/__init__.py deleted file mode 100644 index 0bea062a3f73766b42b5d640986a1d6af69995e4..0000000000000000000000000000000000000000 --- a/spaces/IDKiro/DehazeFormer_Demo/models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .dehazeformer import MCT as dehazeformer \ No newline at end of file diff --git a/spaces/IXIAOHEII/NB/Dockerfile b/spaces/IXIAOHEII/NB/Dockerfile deleted file mode 100644 index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000 --- a/spaces/IXIAOHEII/NB/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/otherarch/llama-util.h b/spaces/Illumotion/Koboldcpp/otherarch/llama-util.h deleted file mode 100644 index e1986eb268564943486452bda4a2f3cabe441630..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/llama-util.h +++ /dev/null @@ -1,557 +0,0 @@ -// Internal header to be included only by llama.cpp. -// Contains wrappers around OS interfaces. 
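// Editor's illustrative sketch (not part of the original header). The RAII wrappers
// declared below are typically composed roughly like this; the file name is a
// hypothetical placeholder, and errors surface as exceptions from the constructors:
//
//     llama_v3_file  file("model.bin", "rb");   // opens the file and records its size
//     llama_v3_mmap  mapping(&file);            // maps it read-only; unmapped in ~llama_v3_mmap
//     llama_v3_mlock lock;                      // optionally pin the mapping in RAM
//     lock.init(mapping.addr);
//     lock.grow_to(mapping.size);
//     uint32_t magic = file.read_u32();         // buffered reads via fp still work
//
// Each wrapper releases its OS resource in its destructor, so cleanup is automatic.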
-#pragma once -#ifndef LLAMA_V3_UTIL_H -#define LLAMA_V3_UTIL_H - -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -#ifdef __has_include - #if __has_include() - #include - #if defined(_POSIX_MAPPED_FILES) - #include - #endif - #if defined(_POSIX_MEMLOCK_RANGE) - #include - #endif - #endif -#endif - -#if defined(_WIN32) - #define WIN32_LEAN_AND_MEAN - #ifndef NOMINMAX - #define NOMINMAX - #endif - #include - #include - #include // for _fseeki64 -#endif - -#define LLAMA_V3_ASSERT(x) \ - do { \ - if (!(x)) { \ - fprintf(stderr, "LLAMA_V3_ASSERT: %s:%d: %s\n", __FILE__, __LINE__, #x); \ - abort(); \ - } \ - } while (0) - -#ifdef __GNUC__ -#ifdef __MINGW32__ -__attribute__((format_old(gnu_printf, 1, 2))) -#else -__attribute__((format_old(printf, 1, 2))) -#endif -#endif -static std::string format_old(const char * fmt, ...) { - va_list ap, ap2; - va_start(ap, fmt); - va_copy(ap2, ap); - int size = vsnprintf(NULL, 0, fmt, ap); - LLAMA_V3_ASSERT(size >= 0 && size < INT_MAX); - std::vector buf(size + 1); - int size2 = vsnprintf(buf.data(), size + 1, fmt, ap2); - LLAMA_V3_ASSERT(size2 == size); - va_end(ap2); - va_end(ap); - return std::string(buf.data(), size); -} - -struct llama_v3_file { - // use FILE * so we don't have to re-open the file to mmap - FILE * fp; - size_t size; - - llama_v3_file(const char * fname, const char * mode) { - fp = std::fopen(fname, mode); - if (fp == NULL) { - throw std::runtime_error(format_old("failed to open %s: %s", fname, strerror(errno))); - } - seek(0, SEEK_END); - size = tell(); - seek(0, SEEK_SET); - } - - size_t tell() const { -#ifdef _WIN32 - __int64 ret = _ftelli64(fp); -#else - long ret = std::ftell(fp); -#endif - LLAMA_V3_ASSERT(ret != -1); // this really shouldn't fail - return (size_t) ret; - } - - void seek(size_t offset, int whence) { -#ifdef _WIN32 - int ret = _fseeki64(fp, (__int64) offset, whence); -#else - int ret = std::fseek(fp, (long) offset, whence); -#endif - LLAMA_V3_ASSERT(ret == 0); // same - } - - void read_raw(void * ptr, size_t len) const { - if (len == 0) { - return; - } - errno = 0; - std::size_t ret = std::fread(ptr, len, 1, fp); - if (ferror(fp)) { - throw std::runtime_error(format_old("read error: %s", strerror(errno))); - } - if (ret != 1) { - throw std::runtime_error(std::string("unexpectedly reached end of file")); - } - } - - std::uint32_t read_u32() { - std::uint32_t ret; - read_raw(&ret, sizeof(ret)); - return ret; - } - - std::string read_string(std::uint32_t len) { - std::vector chars(len); - read_raw(chars.data(), len); - return std::string(chars.data(), len); - } - - void write_raw(const void * ptr, size_t len) const { - if (len == 0) { - return; - } - errno = 0; - size_t ret = std::fwrite(ptr, len, 1, fp); - if (ret != 1) { - throw std::runtime_error(format_old("write error: %s", strerror(errno))); - } - } - - void write_u32(std::uint32_t val) { - write_raw(&val, sizeof(val)); - } - - ~llama_v3_file() { - if (fp) { - std::fclose(fp); - } - } -}; - -// llama_v3_context_data -struct llama_v3_data_context { - virtual void write(const void * src, size_t size) = 0; - virtual size_t get_size_written() = 0; - virtual ~llama_v3_data_context() = default; -}; - -struct llama_v3_data_buffer_context : llama_v3_data_context { - uint8_t* ptr; - size_t size_written = 0; - - llama_v3_data_buffer_context(uint8_t * p) : ptr(p) {} - - void write(const void * src, size_t size) override { - memcpy(ptr, src, size); - ptr += size; - size_written += size; - } - - size_t get_size_written() 
override { - return size_written; - } -}; - -struct llama_v3_data_file_context : llama_v3_data_context { - llama_v3_file* file; - size_t size_written = 0; - - llama_v3_data_file_context(llama_v3_file * f) : file(f) {} - - void write(const void * src, size_t size) override { - file->write_raw(src, size); - size_written += size; - } - - size_t get_size_written() override { - return size_written; - } -}; - -#if defined(_WIN32) -static std::string llama_v3_format_win_err(DWORD err) { - LPSTR buf; - size_t size = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, - NULL, err, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPSTR)&buf, 0, NULL); - if (!size) { - return "FormatMessageA failed"; - } - std::string ret(buf, size); - LocalFree(buf); - return ret; -} -#endif - -struct llama_v3_mmap { - void * addr; - size_t size; - - llama_v3_mmap(const llama_v3_mmap &) = delete; - -#ifdef _POSIX_MAPPED_FILES - static constexpr bool SUPPORTED = true; - - llama_v3_mmap(struct llama_v3_file * file, size_t prefetch = (size_t) -1 /* -1 = max value */, bool numa = false) { - size = file->size; - int fd = fileno(file->fp); - int flags = MAP_SHARED; - // prefetch/readahead impairs performance on NUMA systems - if (numa) { prefetch = 0; } -#ifdef __linux__ - if (prefetch >= file->size) { flags |= MAP_POPULATE; } -#endif - addr = mmap(NULL, file->size, PROT_READ, flags, fd, 0); - if (addr == MAP_FAILED) { - throw std::runtime_error(format_old("mmap failed: %s", strerror(errno))); - } - - if (prefetch > 0) { - // Advise the kernel to preload the mapped memory - if (madvise(addr, std::min(file->size, prefetch), MADV_WILLNEED)) { - fprintf(stderr, "warning: madvise(.., MADV_WILLNEED) failed: %s\n", - strerror(errno)); - } - } - if (numa) { - // advise the kernel not to use readahead - // (because the next page might not belong on the same node) - if (madvise(addr, file->size, MADV_RANDOM)) { - fprintf(stderr, "warning: madvise(.., MADV_RANDOM) failed: %s\n", - strerror(errno)); - } - } - } - - ~llama_v3_mmap() { - munmap(addr, size); - } -#elif defined(_WIN32) - static constexpr bool SUPPORTED = true; - - llama_v3_mmap(struct llama_v3_file * file, bool prefetch = true, bool numa = false) { - (void) numa; - - size = file->size; - - HANDLE hFile = (HANDLE) _get_osfhandle(_fileno(file->fp)); - - HANDLE hMapping = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL); - DWORD error = GetLastError(); - - if (hMapping == NULL) { - throw std::runtime_error(format_old("CreateFileMappingA failed: %s", llama_v3_format_win_err(error).c_str())); - } - - addr = MapViewOfFile(hMapping, FILE_MAP_READ, 0, 0, 0); - error = GetLastError(); - CloseHandle(hMapping); - - if (addr == NULL) { - throw std::runtime_error(format_old("MapViewOfFile failed: %s", llama_v3_format_win_err(error).c_str())); - } - - #ifndef USE_FAILSAFE - if (prefetch) { - // The PrefetchVirtualMemory API is only present on Windows 8 and above, so we - // will dynamically load it using GetProcAddress. - BOOL (WINAPI *pPrefetchVirtualMemory) (HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG); - HMODULE hKernel32; - - // This call is guaranteed to succeed. - hKernel32 = GetModuleHandleW(L"kernel32.dll"); - - // This call may fail if on a pre-Win8 system. - pPrefetchVirtualMemory = reinterpret_cast (GetProcAddress(hKernel32, "PrefetchVirtualMemory")); - - if (pPrefetchVirtualMemory) { - // Advise the kernel to preload the mapped memory. 
- WIN32_MEMORY_RANGE_ENTRY range; - range.VirtualAddress = addr; - range.NumberOfBytes = (SIZE_T)size; - if (!pPrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0)) { - fprintf(stderr, "warning: PrefetchVirtualMemory failed: %s\n", - llama_v3_format_win_err(GetLastError()).c_str()); - } - } - } - #else - printf("\nPrefetchVirtualMemory skipped in compatibility mode.\n"); - #endif - } - - ~llama_v3_mmap() { - if (!UnmapViewOfFile(addr)) { - fprintf(stderr, "warning: UnmapViewOfFile failed: %s\n", - llama_v3_format_win_err(GetLastError()).c_str()); - } - } -#else - static constexpr bool SUPPORTED = false; - - llama_v3_mmap(struct llama_v3_file *, bool prefetch = true, bool numa = false) { - (void) prefetch; - (void) numa; - - throw std::runtime_error(std::string("mmap not supported")); - } -#endif -}; - -// Represents some region of memory being locked using mlock or VirtualLock; -// will automatically unlock on destruction. -struct llama_v3_mlock { - void * addr = NULL; - size_t size = 0; - bool failed_already = false; - - llama_v3_mlock() {} - llama_v3_mlock(const llama_v3_mlock &) = delete; - - ~llama_v3_mlock() { - if (size) { - raw_unlock(addr, size); - } - } - - void init(void * ptr) { - LLAMA_V3_ASSERT(addr == NULL && size == 0); - addr = ptr; - } - - void grow_to(size_t target_size) { - LLAMA_V3_ASSERT(addr); - if (failed_already) { - return; - } - size_t granularity = lock_granularity(); - target_size = (target_size + granularity - 1) & ~(granularity - 1); - if (target_size > size) { - if (raw_lock((uint8_t *) addr + size, target_size - size)) { - size = target_size; - } else { - failed_already = true; - } - } - } - -#ifdef _POSIX_MEMLOCK_RANGE - static constexpr bool SUPPORTED = true; - - size_t lock_granularity() { - return (size_t) sysconf(_SC_PAGESIZE); - } - - #ifdef __APPLE__ - #define MLOCK_SUGGESTION \ - "Try increasing the sysctl values 'vm.user_wire_limit' and 'vm.global_user_wire_limit' and/or " \ - "decreasing 'vm.global_no_user_wire_amount'. Also try increasing RLIMIT_MLOCK (ulimit -l).\n" - #else - #define MLOCK_SUGGESTION \ - "Try increasing RLIMIT_MLOCK ('ulimit -l' as root).\n" - #endif - - bool raw_lock(const void * addr, size_t size) { - if (!mlock(addr, size)) { - return true; - } else { - char* errmsg = std::strerror(errno); - bool suggest = (errno == ENOMEM); - - // Check if the resource limit is fine after all - struct rlimit lock_limit; - if (suggest && getrlimit(RLIMIT_MEMLOCK, &lock_limit)) - suggest = false; - if (suggest && (lock_limit.rlim_max > lock_limit.rlim_cur + size)) - suggest = false; - - fprintf(stderr, "warning: failed to mlock %zu-byte buffer (after previously locking %zu bytes): %s\n%s", - size, this->size, errmsg, suggest ? 
MLOCK_SUGGESTION : ""); - return false; - } - } - - #undef MLOCK_SUGGESTION - - void raw_unlock(void * addr, size_t size) { - if (munlock(addr, size)) { - fprintf(stderr, "warning: failed to munlock buffer: %s\n", std::strerror(errno)); - } - } -#elif defined(_WIN32) - static constexpr bool SUPPORTED = true; - - size_t lock_granularity() { - SYSTEM_INFO si; - GetSystemInfo(&si); - return (size_t) si.dwPageSize; - } - - bool raw_lock(void * ptr, size_t len) { - for (int tries = 1; ; tries++) { - if (VirtualLock(ptr, len)) { - return true; - } - if (tries == 2) { - fprintf(stderr, "warning: failed to VirtualLock %zu-byte buffer (after previously locking %zu bytes): %s\n", - len, size, llama_v3_format_win_err(GetLastError()).c_str()); - return false; - } - - // It failed but this was only the first try; increase the working - // set size and try again. - SIZE_T min_ws_size, max_ws_size; - if (!GetProcessWorkingSetSize(GetCurrentProcess(), &min_ws_size, &max_ws_size)) { - fprintf(stderr, "warning: GetProcessWorkingSetSize failed: %s\n", - llama_v3_format_win_err(GetLastError()).c_str()); - return false; - } - // Per MSDN: "The maximum number of pages that a process can lock - // is equal to the number of pages in its minimum working set minus - // a small overhead." - // Hopefully a megabyte is enough overhead: - size_t increment = len + 1048576; - // The minimum must be <= the maximum, so we need to increase both: - min_ws_size += increment; - max_ws_size += increment; - if (!SetProcessWorkingSetSize(GetCurrentProcess(), min_ws_size, max_ws_size)) { - fprintf(stderr, "warning: SetProcessWorkingSetSize failed: %s\n", - llama_v3_format_win_err(GetLastError()).c_str()); - return false; - } - } - } - - void raw_unlock(void * ptr, size_t len) { - if (!VirtualUnlock(ptr, len)) { - fprintf(stderr, "warning: failed to VirtualUnlock buffer: %s\n", - llama_v3_format_win_err(GetLastError()).c_str()); - } - } -#else - static constexpr bool SUPPORTED = false; - - size_t lock_granularity() { - return (size_t) 65536; - } - - bool raw_lock(const void * addr, size_t len) { - fprintf(stderr, "warning: mlock not supported on this system\n"); - return false; - } - - void raw_unlock(const void * addr, size_t len) {} -#endif -}; - -// Replacement for std::vector that doesn't require zero-initialization. 
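// Editor's illustrative sketch (not in the original header); n_bytes and src are
// placeholders. Typical use of the buffer below is:
//
//     llama_v3_buffer buf;
//     buf.resize(n_bytes);                 // plain new[]: contents are uninitialized
//     std::memcpy(buf.addr, src, n_bytes); // the caller fills the storage explicitly
//
// (With GGML_USE_METAL the storage is page-aligned and zeroed instead.)
// Copy and move are deleted, so `addr` is owned by exactly one instance.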
-struct llama_v3_buffer { - uint8_t * addr = NULL; - size_t size = 0; - - llama_v3_buffer() = default; - - void resize(size_t len) { -#ifdef GGML_USE_METAL - free(addr); - int result = posix_memalign((void **) &addr, getpagesize(), len); - if (result == 0) { - memset(addr, 0, len); - } - else { - addr = NULL; - } -#else - delete[] addr; - addr = new uint8_t[len]; -#endif - size = len; - } - - ~llama_v3_buffer() { -#ifdef GGML_USE_METAL - free(addr); -#else - delete[] addr; -#endif - addr = NULL; - } - - // disable copy and move - llama_v3_buffer(const llama_v3_buffer&) = delete; - llama_v3_buffer(llama_v3_buffer&&) = delete; - llama_v3_buffer& operator=(const llama_v3_buffer&) = delete; - llama_v3_buffer& operator=(llama_v3_buffer&&) = delete; -}; - -#ifdef GGML_USE_CUBLAS -#include "ggml-cuda.h" -struct llama_v3_ctx_buffer { - uint8_t * addr = NULL; - bool is_cuda; - size_t size = 0; - - llama_v3_ctx_buffer() = default; - - void resize(size_t size) { - free(); - - addr = (uint8_t *) ggml_cuda_host_malloc(size); - if (addr) { - is_cuda = true; - } - else { - // fall back to pageable memory - addr = new uint8_t[size]; - is_cuda = false; - } - this->size = size; - } - - void free() { - if (addr) { - if (is_cuda) { - ggml_cuda_host_free(addr); - } - else { - delete[] addr; - } - } - addr = NULL; - } - - ~llama_v3_ctx_buffer() { - free(); - } - - // disable copy and move - llama_v3_ctx_buffer(const llama_v3_ctx_buffer&) = delete; - llama_v3_ctx_buffer(llama_v3_ctx_buffer&&) = delete; - llama_v3_ctx_buffer& operator=(const llama_v3_ctx_buffer&) = delete; - llama_v3_ctx_buffer& operator=(llama_v3_ctx_buffer&&) = delete; -}; -#else -typedef llama_v3_buffer llama_v3_ctx_buffer; -#endif - -#endif diff --git a/spaces/Illumotion/Koboldcpp/tests/test-llama-grammar.cpp b/spaces/Illumotion/Koboldcpp/tests/test-llama-grammar.cpp deleted file mode 100644 index 73dd33dd286a5c0d3123ed88badd70d1b1f6b975..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/tests/test-llama-grammar.cpp +++ /dev/null @@ -1,403 +0,0 @@ -#ifdef NDEBUG -#undef NDEBUG -#endif - -#include "llama.cpp" // TODO: not great -#include "grammar-parser.h" - -#include - -int main() -{ - grammar_parser::parse_state parsed_grammar; - - std::vector> expected = { - {"expr", 2}, - {"expr_6", 6}, - {"expr_7", 7}, - {"ident", 8}, - {"ident_10", 10}, - {"num", 9}, - {"num_11", 11}, - {"root", 0}, - {"root_1", 1}, - {"root_5", 5}, - {"term", 4}, - {"ws", 3}, - {"ws_12", 12}, - }; - - std::vector> expected_rules = { - {{LLAMA_GRETYPE_RULE_REF, 5}, {LLAMA_GRETYPE_END, 0}}, - { - {LLAMA_GRETYPE_RULE_REF, 2}, - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_RULE_REF, 4}, - {LLAMA_GRETYPE_CHAR, 10}, - {LLAMA_GRETYPE_END, 0}, - }, - {{LLAMA_GRETYPE_RULE_REF, 4}, {LLAMA_GRETYPE_RULE_REF, 7}, {LLAMA_GRETYPE_END, 0}}, - {{LLAMA_GRETYPE_RULE_REF, 12}, {LLAMA_GRETYPE_END, 0}}, - { - {LLAMA_GRETYPE_RULE_REF, 8}, - {LLAMA_GRETYPE_ALT, 0}, - {LLAMA_GRETYPE_RULE_REF, 9}, - {LLAMA_GRETYPE_ALT, 0}, - {LLAMA_GRETYPE_CHAR, 40}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_RULE_REF, 2}, - {LLAMA_GRETYPE_CHAR, 41}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_END, 0}, - }, - {{LLAMA_GRETYPE_RULE_REF, 1}, {LLAMA_GRETYPE_RULE_REF, 5}, {LLAMA_GRETYPE_ALT, 0}, {LLAMA_GRETYPE_RULE_REF, 1}, {LLAMA_GRETYPE_END, 0}}, - { - {LLAMA_GRETYPE_CHAR, 45}, - {LLAMA_GRETYPE_CHAR_ALT, 43}, - {LLAMA_GRETYPE_CHAR_ALT, 42}, - {LLAMA_GRETYPE_CHAR_ALT, 47}, - {LLAMA_GRETYPE_RULE_REF, 4}, - {LLAMA_GRETYPE_END, 0}, - }, - 
{{LLAMA_GRETYPE_RULE_REF, 6}, {LLAMA_GRETYPE_RULE_REF, 7}, {LLAMA_GRETYPE_ALT, 0}, {LLAMA_GRETYPE_END, 0}}, - { - {LLAMA_GRETYPE_CHAR, 97}, - {LLAMA_GRETYPE_CHAR_RNG_UPPER, 122}, - {LLAMA_GRETYPE_RULE_REF, 10}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_END, 0}, - }, - {{LLAMA_GRETYPE_RULE_REF, 11}, {LLAMA_GRETYPE_RULE_REF, 3}, {LLAMA_GRETYPE_END, 0}}, - { - {LLAMA_GRETYPE_CHAR, 97}, - {LLAMA_GRETYPE_CHAR_RNG_UPPER, 122}, - {LLAMA_GRETYPE_CHAR_ALT, 48}, - {LLAMA_GRETYPE_CHAR_RNG_UPPER, 57}, - {LLAMA_GRETYPE_CHAR_ALT, 95}, - {LLAMA_GRETYPE_RULE_REF, 10}, - {LLAMA_GRETYPE_ALT, 0}, - {LLAMA_GRETYPE_END, 0}, - }, - { - {LLAMA_GRETYPE_CHAR, 48}, - {LLAMA_GRETYPE_CHAR_RNG_UPPER, 57}, - {LLAMA_GRETYPE_RULE_REF, 11}, - {LLAMA_GRETYPE_ALT, 0}, - {LLAMA_GRETYPE_CHAR, 48}, - {LLAMA_GRETYPE_CHAR_RNG_UPPER, 57}, - {LLAMA_GRETYPE_END, 0}, - }, - { - {LLAMA_GRETYPE_CHAR, 32}, - {LLAMA_GRETYPE_CHAR_ALT, 9}, - {LLAMA_GRETYPE_CHAR_ALT, 10}, - {LLAMA_GRETYPE_RULE_REF, 12}, - {LLAMA_GRETYPE_ALT, 0}, - {LLAMA_GRETYPE_END, 0}, - }, - }; - - for (auto pair : expected) - { - parsed_grammar.symbol_ids[pair.first] = pair.second; - } - - for (auto rule : expected_rules) - { - parsed_grammar.rules.push_back({}); - for (auto element : rule) - { - parsed_grammar.rules.back().push_back(element); - } - } - - llama_grammar *grammar = NULL; - std::vector grammar_rules(parsed_grammar.c_rules()); - grammar = llama_grammar_init( - grammar_rules.data(), grammar_rules.size(), parsed_grammar.symbol_ids.at("root")); - - std::vector> expected_stacks = { - { - {LLAMA_GRETYPE_RULE_REF, 5}, - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_CHAR, 97}, - }, - { - {LLAMA_GRETYPE_RULE_REF, 5}, - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_CHAR, 48}, - }, - { - {LLAMA_GRETYPE_RULE_REF, 5}, - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_CHAR, 48}, - }, - { - {LLAMA_GRETYPE_RULE_REF, 5}, - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_CHAR, 40}, - }, - { - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_CHAR, 97}, - }, - { - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_CHAR, 48}, - }, - { - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_RULE_REF, 3}, - {LLAMA_GRETYPE_CHAR, 48}, - }, - { - {LLAMA_GRETYPE_CHAR, 61}, - {LLAMA_GRETYPE_RULE_REF, 7}, - {LLAMA_GRETYPE_CHAR, 40}, - }}; - - auto index = 0; - for (auto stack : grammar->stacks) - { - // compare stack to expected_stack - for (uint32_t i = 0; i < stack.size(); i++) - { - auto element = stack[i]; - auto expected_element = expected_stacks[index][i]; - - // pretty print error message before asserting - if (expected_element.type != element->type || expected_element.value != element->value) - { - fprintf(stderr, "index: %d\n", index); - fprintf(stderr, "expected_element: %d, %d\n", expected_element.type, expected_element.value); - fprintf(stderr, "actual_element: %d, %d\n", element->type, element->value); - fprintf(stderr, "expected_element != actual_element\n"); - } - - assert(expected_element.type == element->type && expected_element.value == element->value); - } - index++; - } - - std::vector> next_stacks; - std::vector next_candidates; - next_candidates.resize(24); - - for (size_t i = 0; i < 24; ++i) - { - uint32_t *cp = new uint32_t[2]; // dynamically allocate memory for code_point - cp[0] = 37 + i; 
- cp[1] = 0; - next_candidates[i] = {i, cp, {}}; - } - - std::vector>> expected_reject = { - { - {0, 37}, - {1, 38}, - {2, 39}, - {3, 40}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {11, 48}, - {12, 49}, - {13, 50}, - {14, 51}, - {15, 52}, - {16, 53}, - {17, 54}, - {18, 55}, - {19, 56}, - {20, 57}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {3, 40}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {3, 40}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {11, 48}, - {12, 49}, - {13, 50}, - {14, 51}, - {15, 52}, - {16, 53}, - {17, 54}, - {18, 55}, - {19, 56}, - {20, 57}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {3, 40}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {11, 48}, - {12, 49}, - {13, 50}, - {14, 51}, - {15, 52}, - {16, 53}, - {17, 54}, - {18, 55}, - {19, 56}, - {20, 57}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {3, 40}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {3, 40}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - { - {0, 37}, - {1, 38}, - {2, 39}, - {4, 41}, - {5, 42}, - {6, 43}, - {7, 44}, - {8, 45}, - {9, 46}, - {10, 47}, - {11, 48}, - {12, 49}, - {13, 50}, - {14, 51}, - {15, 52}, - {16, 53}, - {17, 54}, - {18, 55}, - {19, 56}, - {20, 57}, - {21, 58}, - {22, 59}, - {23, 60}, - }, - }; - - std::vector rejects = llama_grammar_reject_candidates_for_stack(grammar->rules, grammar->stacks[0], next_candidates); - - std::vector> all_rejects; - - for (std::size_t count = 0; count < grammar->stacks.size(); ++count) - { - rejects = llama_grammar_reject_candidates_for_stack(grammar->rules, grammar->stacks[count], next_candidates); - all_rejects.push_back(rejects); - } - - index = 0; - for (auto rej : all_rejects) - { - for (uint32_t i = 0; i < rej.size(); i++) - { - auto element = rej[i]; - auto expected_element = expected_reject[index][i]; - assert(element.index == expected_element.first && *element.code_points == expected_element.second); - } - index++; - } - - for (auto &candidate : next_candidates) - { - delete[] candidate.code_points; - candidate.code_points = nullptr; - } - delete grammar; - return 0; -} diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/experimental/rl/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/experimental/rl/__init__.py deleted file mode 100644 index 7b338d3173e12d478b6b6d6fd0e50650a0ab5a4c..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/experimental/rl/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .value_guided_sampling import ValueGuidedRLPipeline diff --git a/spaces/JammyMachina/the-jam-machine-app/generation_utils.py b/spaces/JammyMachina/the-jam-machine-app/generation_utils.py deleted file mode 100644 index 496e71d3602f48275271adcb3bdfdc409353c540..0000000000000000000000000000000000000000 --- a/spaces/JammyMachina/the-jam-machine-app/generation_utils.py +++ /dev/null @@ -1,191 +0,0 @@ -import os -import numpy as np -import 
matplotlib.pyplot as plt -import matplotlib -from utils import writeToFile, get_datetime - -from constants import INSTRUMENT_CLASSES -from playback import get_music, show_piano_roll - -# matplotlib settings -matplotlib.use("Agg") # for server -matplotlib.rcParams["xtick.major.size"] = 0 -matplotlib.rcParams["ytick.major.size"] = 0 -matplotlib.rcParams["axes.facecolor"] = "none" -matplotlib.rcParams["axes.edgecolor"] = "grey" - - -class WriteTextMidiToFile: # utils saving miditext from the class GenerateMidiText to file - def __init__(self, generate_midi, output_path): - self.generated_midi = generate_midi.generated_piece - self.output_path = output_path - self.hyperparameter_and_bars = generate_midi.piece_by_track - - def hashing_seq(self): - self.current_time = get_datetime() - self.output_path_filename = f"{self.output_path}/{self.current_time}.json" - - def wrapping_seq_hyperparameters_in_dict(self): - # assert type(self.generated_midi) is str, "error: generate_midi must be a string" - # assert ( - # type(self.hyperparameter_dict) is dict - # ), "error: feature_dict must be a dictionary" - return { - "generated_midi": self.generated_midi, - "hyperparameters_and_bars": self.hyperparameter_and_bars, - } - - def text_midi_to_file(self): - self.hashing_seq() - output_dict = self.wrapping_seq_hyperparameters_in_dict() - print(f"Token generate_midi written: {self.output_path_filename}") - writeToFile(self.output_path_filename, output_dict) - return self.output_path_filename - - -def define_generation_dir(generation_dir): - if not os.path.exists(generation_dir): - os.makedirs(generation_dir) - return generation_dir - - -def bar_count_check(sequence, n_bars): - """check if the sequence contains the right number of bars""" - sequence = sequence.split(" ") - # find occurrences of "BAR_END" in a "sequence" - # I don't check for "BAR_START" because it is not always included in "sequence" - # e.g. BAR_START is included in the prompt when generating one more bar - bar_count = 0 - for seq in sequence: - if seq == "BAR_END": - bar_count += 1 - bar_count_matches = bar_count == n_bars - if not bar_count_matches: - print(f"Bar count is {bar_count} - but should be {n_bars}") - return bar_count_matches, bar_count - - -def print_inst_classes(INSTRUMENT_CLASSES): - """Print the instrument classes""" - for classe in INSTRUMENT_CLASSES: - print(f"{classe}") - - -def check_if_prompt_inst_in_tokenizer_vocab(tokenizer, inst_prompt_list): - """Check if the prompt instruments are in the tokenizer vocab""" - for inst in inst_prompt_list: - if f"INST={inst}" not in tokenizer.vocab: - instruments_in_dataset = np.sort( - [tok.split("=")[-1] for tok in tokenizer.vocab if "INST" in tok] - ) - print_inst_classes(INSTRUMENT_CLASSES) - raise ValueError( - f"""The instrument {inst} is not in the tokenizer vocabulary. 
- Available Instruments: {instruments_in_dataset}""" - ) - - -# TODO -def check_if_prompt_density_in_tokenizer_vocab(tokenizer, density_prompt_list): - pass - - -def forcing_bar_count(input_prompt, generated, bar_count, expected_length): - """Forcing the generated sequence to have the expected length - expected_length and bar_count refer to the length of newly_generated_only (without input prompt) - """ - - if bar_count - expected_length > 0: # Cut the sequence if too long - full_piece = "" - splited = generated.split("BAR_END ") - for count, spl in enumerate(splited): - if count < expected_length: - full_piece += spl + "BAR_END " - - full_piece += "TRACK_END " - full_piece = input_prompt + full_piece - print(f"Generated sequence truncated at {expected_length} bars") - bar_count_checks = True - - elif bar_count - expected_length < 0: # Do nothing if the sequence is too short - full_piece = input_prompt + generated - bar_count_checks = False - print(f"--- Generated sequence is too short - Force Regeneration ---") - - return full_piece, bar_count_checks - - -def get_max_time(inst_midi): - max_time = 0 - for inst in inst_midi.instruments: - max_time = max(max_time, inst.get_end_time()) - return max_time - - -def plot_piano_roll(inst_midi): - piano_roll_fig = plt.figure(figsize=(25, 3 * len(inst_midi.instruments))) - piano_roll_fig.tight_layout() - piano_roll_fig.patch.set_alpha(0) - inst_count = 0 - beats_per_bar = 4 - sec_per_beat = 0.5 - next_beat = max(inst_midi.get_beats()) + np.diff(inst_midi.get_beats())[0] - bars_time = np.append(inst_midi.get_beats(), (next_beat))[::beats_per_bar].astype( - int - ) - for inst in inst_midi.instruments: - # hardcoded for now - if inst.name == "Drums": - color = "purple" - elif inst.name == "Synth Bass 1": - color = "orange" - else: - color = "green" - - inst_count += 1 - plt.subplot(len(inst_midi.instruments), 1, inst_count) - - for bar in bars_time: - plt.axvline(bar, color="grey", linewidth=0.5) - octaves = np.arange(0, 128, 12) - for octave in octaves: - plt.axhline(octave, color="grey", linewidth=0.5) - plt.yticks(octaves, visible=False) - - p_midi_note_list = inst.notes - note_time = [] - note_pitch = [] - for note in p_midi_note_list: - note_time.append([note.start, note.end]) - note_pitch.append([note.pitch, note.pitch]) - note_pitch = np.array(note_pitch) - note_time = np.array(note_time) - - plt.plot( - note_time.T, - note_pitch.T, - color=color, - linewidth=4, - solid_capstyle="butt", - ) - plt.ylim(0, 128) - xticks = np.array(bars_time)[:-1] - plt.tight_layout() - plt.xlim(min(bars_time), max(bars_time)) - plt.ylim(max([note_pitch.min() - 5, 0]), note_pitch.max() + 5) - plt.xticks( - xticks + 0.5 * beats_per_bar * sec_per_beat, - labels=xticks.argsort() + 1, - visible=False, - ) - plt.text( - 0.2, - note_pitch.max() + 4, - inst.name, - fontsize=20, - color=color, - horizontalalignment="left", - verticalalignment="top", - ) - - return piano_roll_fig diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/loadImage.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/loadImage.ts deleted file mode 100644 index d2e7dcb6a548a9ce1937315486954e66e2c54746..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/lib/loadImage.ts +++ /dev/null @@ -1,14 +0,0 @@ -export async function loadImage(image: string): Promise<HTMLImageElement> { - const img = new Image(); - img.src = image; - - const imgOnLoad = () => { - return new Promise<HTMLImageElement>((resolve, reject) => { - img.onload = () => { resolve(img) }; - img.onerror = (err) => { reject(err) }; - }) - }; - - 
const loadImg = await imgOnLoad(); - return loadImg -} \ No newline at end of file diff --git a/spaces/JosePezantes/Violencia-politica-genero/README.md b/spaces/JosePezantes/Violencia-politica-genero/README.md deleted file mode 100644 index f1b05a60ece59fdd2407d379fb8549a28b610d5c..0000000000000000000000000000000000000000 --- a/spaces/JosePezantes/Violencia-politica-genero/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Violencia Politica Genero -emoji: 📊 -colorFrom: red -colorTo: gray -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- diff --git a/spaces/KalbeDigitalLab/ham1000-skin-classification/utils/page_utils.py b/spaces/KalbeDigitalLab/ham1000-skin-classification/utils/page_utils.py deleted file mode 100644 index 5d3e4e78e97ab27a97c198dfee4df3d0051971f0..0000000000000000000000000000000000000000 --- a/spaces/KalbeDigitalLab/ham1000-skin-classification/utils/page_utils.py +++ /dev/null @@ -1,51 +0,0 @@ -from typing import Optional - - -class ColorPalette: - """Color Palette Container.""" - all = [] - - def __init__( - self, - c50: str, - c100: str, - c200: str, - c300: str, - c400: str, - c500: str, - c600: str, - c700: str, - c800: str, - c900: str, - c950: str, - name: Optional[str] = None, - ): - self.c50 = c50 - self.c100 = c100 - self.c200 = c200 - self.c300 = c300 - self.c400 = c400 - self.c500 = c500 - self.c600 = c600 - self.c700 = c700 - self.c800 = c800 - self.c900 = c900 - self.c950 = c950 - self.name = name - ColorPalette.all.append(self) - - -KALBE_THEME_COLOR = ColorPalette( - name='kalbe', - c50='#f2f9e8', - c100='#dff3c4', - c200='#c2e78d', - c300='#9fd862', - c400='#7fc93f', - c500='#3F831C', - c600='#31661a', - c700='#244c13', - c800='#18340c', - c900='#0c1b06', - c950='#050a02', -) \ No newline at end of file diff --git a/spaces/Kevin676/Raven-with-Voice-Cloning/app.py b/spaces/Kevin676/Raven-with-Voice-Cloning/app.py deleted file mode 100644 index 131f543902c89576ae2451c1e357e2db1bb1d83a..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Raven-with-Voice-Cloning/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import gradio as gr -import os, gc, torch -from datetime import datetime -from huggingface_hub import hf_hub_download -from pynvml import * -nvmlInit() -gpu_h = nvmlDeviceGetHandleByIndex(0) -ctx_limit = 1024 -import whisper -model1 = whisper.load_model("small") -title1 = "RWKV-4-Raven-7B-v8-Eng-20230408-ctx4096" - -os.environ["RWKV_JIT_ON"] = '1' -os.environ["RWKV_CUDA_ON"] = '1' # if '1' then use CUDA kernel for seq mode (much faster) - -from TTS.api import TTS -tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True) - -from rwkv.model import RWKV -model_path = hf_hub_download(repo_id="BlinkDL/rwkv-4-raven", filename=f"{title1}.pth") -model = RWKV(model=model_path, strategy='cuda fp16i8 *8 -> cuda fp16') -from rwkv.utils import PIPELINE, PIPELINE_ARGS -pipeline = PIPELINE(model, "20B_tokenizer.json") - -def generate_prompt(instruction, input=None): - if input: - return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. -# Instruction: -{instruction} -# Input: -{input} -# Response: -""" - else: - return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. 
-# Instruction: -{instruction} -# Response: -""" - -def evaluate( -# instruction, - audio, - upload, - input=None, - token_count=200, - temperature=1.0, - top_p=0.7, - presencePenalty = 0.1, - countPenalty = 0.1, -): - res = [] - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model1 - mel = whisper.log_mel_spectrogram(audio).to(model1.device) - - # detect the spoken language - _, probs = model1.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - - # decode the audio - options = whisper.DecodingOptions() - result = whisper.decode(model1, mel, options) - - args = PIPELINE_ARGS(temperature = max(0.2, float(temperature)), top_p = float(top_p), - alpha_frequency = countPenalty, - alpha_presence = presencePenalty, - token_ban = [], # ban the generation of some tokens - token_stop = [0]) # stop generation whenever you see any token here - - instruction = result.text.strip() - input = input.strip() - ctx = generate_prompt(instruction, input) - - gpu_info = nvmlDeviceGetMemoryInfo(gpu_h) - print(f'vram {gpu_info.total} used {gpu_info.used} free {gpu_info.free}') - - all_tokens = [] - out_last = 0 - out_str = '' - occurrence = {} - state = None - for i in range(int(token_count)): - out, state = model.forward(pipeline.encode(ctx)[-ctx_limit:] if i == 0 else [token], state) - for n in occurrence: - out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency) - - token = pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p) - if token in args.token_stop: - break - all_tokens += [token] - if token not in occurrence: - occurrence[token] = 1 - else: - occurrence[token] += 1 - - tmp = pipeline.decode(all_tokens[out_last:]) - if '\ufffd' not in tmp: - out_str += tmp - yield out_str.strip() - out_last = i + 1 - gc.collect() - torch.cuda.empty_cache() - - res.append(out_str.strip()) - - res1 = ' '.join(str(x) for x in res) - - tts.tts_to_file(res1, speaker_wav = upload, language="en", file_path="output.wav") - -# return out_str.strip() - -# return [result.text, res] - - return "output.wav" - -# yield out_str.strip() - -g = gr.Interface( - fn=evaluate, - inputs=[ -# gr.components.Textbox(lines=2, label="Instruction", value="Tell me about ravens."), - gr.Audio(source="microphone", label = "请开始对话吧!", type="filepath"), - gr.Audio(source="upload", label = "请上传您喜欢的声音(wav文件)", type="filepath"), - gr.components.Textbox(lines=2, label="Input", placeholder="none"), - gr.components.Slider(minimum=10, maximum=200, step=10, value=150), # token_count - gr.components.Slider(minimum=0.2, maximum=2.0, step=0.1, value=1.0), # temperature - gr.components.Slider(minimum=0, maximum=1, step=0.05, value=0.5), # top_p - gr.components.Slider(0.0, 1.0, step=0.1, value=0.4), # presencePenalty - gr.components.Slider(0.0, 1.0, step=0.1, value=0.4), # countPenalty - ], - outputs=[ -# gr.inputs.Textbox( -# lines=5, -# label="Raven Output", -# ), - gr.Audio(label="Audio with Custom Voice"), - ], - title="🥳💬💕 - TalktoAI,随时随地,谈天说地!", - description="🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI - Enable the future!", - article = "Powered by the RWKV Language Model" -) -g.queue(concurrency_count=1, max_size=10) -g.launch(show_error=True) \ No newline at end of file diff --git a/spaces/Kimata/Sanskrit-TTS/utils/normalizer_utils.py b/spaces/Kimata/Sanskrit-TTS/utils/normalizer_utils.py deleted file mode 100644 index 
5c7b2ae387623a2b7b6953b4d7ef6364ad99179a..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/utils/normalizer_utils.py +++ /dev/null @@ -1,103 +0,0 @@ -DEPENDENT_VOWELS = ["ा", "ि", "ी", "ु", "ू", "े", "ै", "ो", "ौ", "ं", "ः", "ृ", "ॄ"] - -dict_num = {'१': 'एकः', - '२': 'द्वौ', - '३': 'त्रयः', - '४': 'चत्वारः', - '५': 'पञ्च', - '६': 'षट्', - '७': 'सप्त', - '८': 'ष्ट', - '९': 'नव', - '१॰': 'दश', - '११': 'एकादशन्', - '१२': 'द्वादशन्', - '१३': 'त्रयोदशन्', - '१४': 'चतुर्दशन्', - '१५': 'पञ्चदशन्', - '१६': 'षोडशन्', - '१७': 'सप्तदशन्', - '१८': 'ष्टादशन्', - '१९': 'नवदशन्', - '२॰': 'विंशति', - '२१': 'एकाविंशति', - '२२': 'द्वाविंशति', - '२३': 'त्रयोविंशति', - '२४': 'चतुर्विंशति', - '२५': 'पञ्चविंशति', - '२६': 'षड्विंशति', - '२७': 'सप्तविंशति', - '२८': 'ष्टाविंशति', - '२९': 'नवविंशति', - '३॰': 'त्रिंशत्', - '३१': 'एकत्रिंशत्', - '३२': 'द्वात्रिंशत्', - '३३': 'त्रयत्रिंशत्', - '३४': 'चतुस्त्रिंशत्', - '३५': 'पञ्चत्रिंशत्', - '३६': 'षट्त्रिंशत्', - '३७': 'सप्तत्रिंशत्', - '३८': 'ष्टात्रिंशत्', - '३९': 'एकोनचत्वारिंशत्', - '४॰': 'चत्वारिंशत्', - '४१': 'एकचत्वारिंशत्', - '४२': 'द्विचत्वारिंशत्', - '४३': 'त्रिचत्वारिंशत्', - '४४': 'चतुश्चत्वारिंशत्', - '४५': 'पञ्चचत्वारिंशत्', - '४६': 'षट्चत्वारिंशत्', - '४७': 'सप्तचत्वारिंशत्', - '४८': 'ष्टचत्वारिंशत्', - '४९': 'एकोनपञ्चाशत्', - '५॰': 'पञ्चाशत्', - '५१': 'एकपञ्चाशत्', - '५२': 'द्विपञ्चाशत्', - '५३': 'त्रिपञ्चाशत्', - '५४': 'चतुःपञ्चाशत्', - '५५': 'पञ्चपञ्चाशत्', - '५६': 'षट्पञ्चाशत्', - '५७': 'सप्तपञ्चाशत्', - '५८': 'ष्टपञ्चाशत्', - '५९': 'एकोनषष्ठिः', - '६॰': 'षष्ठिः', - '६१': 'एकषष्ठिः', - '६२': 'द्विषष्ठिः', - '६३': 'त्रिषष्ठिः', - '६४': 'चतुःषष्ठिः', - '६५': 'पञ्चषष्ठिः', - '६६': 'षट्षष्ठिः', - '६७': 'सप्तषष्ठिः', - '६८': 'ष्टषष्ठिः', - '६९': 'एकोनसप्ततिः', - '७॰': 'सप्ततिः', - '७१': 'एकसप्ततिः', - '७२': 'द्विसप्ततिः', - '७३': 'त्रिसप्ततिः', - '७४': 'चतुःसप्ततिः', - '७५': 'पञ्चसप्ततिः', - '७६': 'षट्सप्ततिः', - '७७': 'सप्तसप्ततिः', - '७८': 'ष्टसप्ततिः', - '७९': 'एकोनाशीतिः', - '८॰': 'शीतिः', - '८१': 'एकाशीतिः', - '८२': 'द्वशीतिः', - '८३': 'त्र्यशीतिः', - '८४': 'चतुरशीतिः', - '८५': 'पञ्चाशीतिः', - '८६': 'षडशीतिः', - '८७': 'सप्ताशीतिः', - '८८': 'ष्टाशीतिः', - '८९': 'एकोननवतिः', - '९॰': 'नवतिः', - '९१': 'एकनवतिः', - '९२': 'द्विनवतिः', - '९३': 'त्रिनवतिः', - '९४': 'चतुर्नवतिः', - '९५': 'पञ्चनवतिः', - '९६': 'षण्णवतिः', - '९७': 'सप्तनवतिः', - '९८': 'ष्टनवतिः', - '९९': 'एकोनशतम्', - '१॰॰': 'शतम्' -} diff --git a/spaces/KyanChen/FunSR/tools/paper_vis_tools/get_feature_map_vis.py b/spaces/KyanChen/FunSR/tools/paper_vis_tools/get_feature_map_vis.py deleted file mode 100644 index df500a566a5a1394dd5311a3853b3e719d799de9..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/tools/paper_vis_tools/get_feature_map_vis.py +++ /dev/null @@ -1,43 +0,0 @@ -import os -# import sys -# sys.path.append(sys.path[0]+'/../../') - -exp = 'EXP20221219_1' -model_name = 'FunSR-RDN' # bicubic, SRCNN, FSRCNN, LGCNet -dataset_name = 'AID' # UC, AID - -for cp in ['epoch-last.pth']: - for scale_ratio in [4.0]: - # os.system(f'CUDA_VISIBLE_DEVICES=2 python test_cnn_sr.py ' - # f'--config tools/paper_tools/vis_fixed_scale_UC_INR_diinn_arbrcan_funsr_overnet.yaml ' - # f'--model checkpoints/{exp}/{cp} ' - # f'--scale_ratio {scale_ratio} ' - # f'--save_fig True ' - # f'--save_path vis_{model_name}_{dataset_name}_4x_testset ' - # f'--cal_metrics True ' - # f'--dataset_name {dataset_name}' - # ) - - os.system(f'CUDA_VISIBLE_DEVICES=2 python test_inr_diinn_arbrcan_sadnarc_funsr_overnet.py ' - f'--config 
tools/paper_tools/vis_fixed_scale_UC_INR_diinn_arbrcan_funsr_overnet.yaml ' - f'--model checkpoints/{exp}/{cp} ' - f'--scale_ratio {scale_ratio} ' - f'--save_fig False ' - f'--save_featmap True ' - f'--save_path vis_{model_name}_{dataset_name}_4x_testset_featmap ' - f'--cal_metrics True ' - f'--dataset_name {dataset_name}' - ) - - # os.system(f'CUDA_VISIBLE_DEVICES=5 python test_inr_liif_metasr_aliif.py ' - # f'--config tools/paper_tools/vis_fixed_scale_UC_INR_liif_metasr_aliif.yaml ' - # f'--model checkpoints/{exp}/{cp} ' - # f'--scale_ratio {scale_ratio} ' - # f'--save_fig True ' - # f'--save_path vis_{model_name}_{dataset_name}_4x_testset ' - # f'--cal_metrics True ' - # f'--dataset_name {dataset_name}' - # ) - -# os.system(f'zip -q -r vis_{model_name}_{dataset_name}_4x_testset_featmap.zip vis_{model_name}_{dataset_name}_4x_testset') -# os.system(f'aws s3 cp vis_{model_name}_{dataset_name}_4x_testset_featmap.zip s3://xhs.bravo/user/kyanchen/tmp/') diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/engine/hooks/precise_bn_hook.py b/spaces/KyanChen/RSPrompter/mmpretrain/engine/hooks/precise_bn_hook.py deleted file mode 100644 index 4fb0e4c419e4ed2af23574769815aaecbcd629c0..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/engine/hooks/precise_bn_hook.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Adapted from https://github.com/facebookresearch/pycls/blob/f8cd962737e33ce9e19b3083a33551da95c2d9c0/pycls/core/net.py # noqa: E501 -# Original licence: Copyright (c) 2019 Facebook, Inc under the Apache License 2.0 # noqa: E501 - -import itertools -import logging -from typing import List, Optional, Sequence, Union - -import mmengine -import torch -import torch.nn as nn -from mmengine.hooks import Hook -from mmengine.logging import print_log -from mmengine.model import is_model_wrapper -from mmengine.runner import EpochBasedTrainLoop, IterBasedTrainLoop, Runner -from mmengine.utils import ProgressBar -from torch.functional import Tensor -from torch.nn import GroupNorm -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.modules.instancenorm import _InstanceNorm -from torch.utils.data import DataLoader - -from mmpretrain.registry import HOOKS - -DATA_BATCH = Optional[Sequence[dict]] - - -def scaled_all_reduce(tensors: List[Tensor], num_gpus: int) -> List[Tensor]: - """Performs the scaled all_reduce operation on the provided tensors. - - The input tensors are modified in-place. Currently supports only the sum - reduction operator. The reduced values are scaled by the inverse size of - the process group. - - Args: - tensors (List[torch.Tensor]): The tensors to process. - num_gpus (int): The number of gpus to use - Returns: - List[torch.Tensor]: The processed tensors. - """ - # There is no need for reduction in the single-proc case - if num_gpus == 1: - return tensors - # Queue the reductions - reductions = [] - for tensor in tensors: - reduction = torch.distributed.all_reduce(tensor, async_op=True) - reductions.append(reduction) - # Wait for reductions to finish - for reduction in reductions: - reduction.wait() - # Scale the results - for tensor in tensors: - tensor.mul_(1.0 / num_gpus) - return tensors - - -@torch.no_grad() -def update_bn_stats( - model: nn.Module, - loader: DataLoader, - num_samples: int = 8192, - logger: Optional[Union[logging.Logger, str]] = None) -> None: - """Computes precise BN stats on training data. - - Args: - model (nn.module): The model whose bn stats will be recomputed. 
- loader (DataLoader): PyTorch dataloader._dataloader - num_samples (int): The number of samples to update the bn stats. - Defaults to 8192. - logger (logging.Logger or str, optional): If the type of logger is - ``logging.Logger``, we directly use logger to log messages. - Some special loggers are: - - "silent": No message will be printed. - - "current": Use latest created logger to log message. - - other str: Instance name of logger. The corresponding logger - will log message if it has been created, otherwise will raise a - `ValueError`. - - None: The `print()` method will be used to print log messages. - """ - if is_model_wrapper(model): - model = model.module - - # get dist info - rank, world_size = mmengine.dist.get_dist_info() - # Compute the number of mini-batches to use, if the size of dataloader is - # less than num_iters, use all the samples in dataloader. - num_iter = num_samples // (loader.batch_size * world_size) - num_iter = min(num_iter, len(loader)) - # Retrieve the BN layers - bn_layers = [ - m for m in model.modules() - if m.training and isinstance(m, (_BatchNorm)) - ] - if len(bn_layers) == 0: - print_log('No BN found in model', logger=logger, level=logging.WARNING) - return - print_log( - f'{len(bn_layers)} BN found, run {num_iter} iters...', logger=logger) - - # Finds all the other norm layers with training=True. - other_norm_layers = [ - m for m in model.modules() - if m.training and isinstance(m, (_InstanceNorm, GroupNorm)) - ] - if len(other_norm_layers) > 0: - print_log( - 'IN/GN stats will not be updated in PreciseHook.', - logger=logger, - level=logging.INFO) - - # Initialize BN stats storage for computing - # mean(mean(batch)) and mean(var(batch)) - running_means = [torch.zeros_like(bn.running_mean) for bn in bn_layers] - running_vars = [torch.zeros_like(bn.running_var) for bn in bn_layers] - # Remember momentum values - momentums = [bn.momentum for bn in bn_layers] - # Set momentum to 1.0 to compute BN stats that reflect the current batch - for bn in bn_layers: - bn.momentum = 1.0 - # Average the BN stats for each BN layer over the batches - if rank == 0: - prog_bar = ProgressBar(num_iter) - - for data in itertools.islice(loader, num_iter): - data = model.data_preprocessor(data, False) - model(**data) - - for i, bn in enumerate(bn_layers): - running_means[i] += bn.running_mean / num_iter - running_vars[i] += bn.running_var / num_iter - if rank == 0: - prog_bar.update() - - # Sync BN stats across GPUs (no reduction if 1 GPU used) - running_means = scaled_all_reduce(running_means, world_size) - running_vars = scaled_all_reduce(running_vars, world_size) - # Set BN stats and restore original momentum values - for i, bn in enumerate(bn_layers): - bn.running_mean = running_means[i] - bn.running_var = running_vars[i] - bn.momentum = momentums[i] - - -@HOOKS.register_module() -class PreciseBNHook(Hook): - """Precise BN hook. - - Recompute and update the batch norm stats to make them more precise. During - training both BN stats and the weight are changing after every iteration, - so the running average can not precisely reflect the actual stats of the - current model. - - With this hook, the BN stats are recomputed with fixed weights, to make the - running average more precise. Specifically, it computes the true average of - per-batch mean/variance instead of the running average. See Sec. 3 of the - paper `Rethinking Batch in BatchNorm ` - for details. 
- - This hook will update BN stats, so it should be executed before - ``CheckpointHook`` and ``EMAHook``, generally set its priority to - "ABOVE_NORMAL". - - Args: - num_samples (int): The number of samples to update the bn stats. - Defaults to 8192. - interval (int): Perform precise bn interval. If the train loop is - `EpochBasedTrainLoop` or `by_epoch=True`, its unit is 'epoch'; if the - train loop is `IterBasedTrainLoop` or `by_epoch=False`, its unit is - 'iter'. Defaults to 1. - """ - - def __init__(self, num_samples: int = 8192, interval: int = 1) -> None: - assert interval > 0 and num_samples > 0, "'interval' and " \ - "'num_samples' must be bigger than 0." - - self.interval = interval - self.num_samples = num_samples - - def _perform_precise_bn(self, runner: Runner) -> None: - """perform precise bn.""" - print_log( - f'Running Precise BN for {self.num_samples} samples...', - logger=runner.logger) - update_bn_stats( - runner.model, - runner.train_loop.dataloader, - self.num_samples, - logger=runner.logger) - print_log('Finish Precise BN, BN stats updated.', logger=runner.logger) - - def after_train_epoch(self, runner: Runner) -> None: - """Calculate prcise BN and broadcast BN stats across GPUs. - - Args: - runner (obj:`Runner`): The runner of the training process. - """ - # if use `EpochBasedTrainLoop``, do perform precise every - # `self.interval` epochs. - if isinstance(runner.train_loop, - EpochBasedTrainLoop) and self.every_n_epochs( - runner, self.interval): - self._perform_precise_bn(runner) - - def after_train_iter(self, - runner, - batch_idx: int, - data_batch: DATA_BATCH = None, - outputs: Optional[dict] = None) -> None: - """Calculate prcise BN and broadcast BN stats across GPUs. - - Args: - runner (obj:`Runner`): The runner of the training process. - batch_idx (int): The index of the current batch in the train loop. - data_batch (Sequence[dict], optional): Data from dataloader. - Defaults to None. - """ - # if use `IterBasedTrainLoop``, do perform precise every - # `self.interval` iters. - if isinstance(runner.train_loop, - IterBasedTrainLoop) and self.every_n_train_iters( - runner, self.interval): - self._perform_precise_bn(runner) diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/util/misc.py b/spaces/MLVKU/Human_Object_Interaction/hotr/util/misc.py deleted file mode 100644 index 7eb86a9bb399af732ada22bde2fe9285b1397f32..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/util/misc.py +++ /dev/null @@ -1,401 +0,0 @@ -# ------------------------------------------------------------------------ -# HOTR official code : hotr/util/misc.py -# Copyright (c) Kakao Brain, Inc. and its affiliates. 
All Rights Reserved -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# ------------------------------------------------------------------------ -""" -Misc functions, including distributed helpers. -Mostly copy-paste from torchvision references. -""" -import os -import subprocess -from collections import deque -import pickle -import socket -from typing import Optional, List -import ast -import torch -import torch.distributed as dist -from torch import Tensor - -# needed due to empty tensor bug in pytorch and torchvision 0.5 -import torchvision -if float(torchvision.__version__.split('.',2)[1]) < 5: - from torchvision.ops import _new_empty_tensor - from torchvision.ops.misc import _output_size - -os.environ['MASTER_PORT']='8993' -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! - """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda') - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - return self.total / self.count - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value) - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device="cuda") - size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda")) - if local_size != max_size: - padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = 
tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that all processes - have the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.all_reduce(values) - if average: - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -def get_sha(): - cwd = os.path.dirname(os.path.abspath(__file__)) - - def _run(command): - return subprocess.check_output(command, cwd=cwd).decode('ascii').strip() - sha = 'N/A' - diff = "clean" - branch = 'N/A' - try: - sha = _run(['git', 'rev-parse', 'HEAD']) - subprocess.check_output(['git', 'diff'], cwd=cwd) - diff = _run(['git', 'diff-index', 'HEAD']) - diff = "has uncommited changes" if diff else "clean" - branch = _run(['git', 'rev-parse', '--abbrev-ref', 'HEAD']) - except Exception: - pass - message = f"sha: {sha}, status: {diff}, branch: {branch}" - return message - - -def collate_fn(batch): - batch = list(zip(*batch)) - batch[0] = nested_tensor_from_tensor_list(batch[0]) - return tuple(batch) - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], :img.shape[2]] = False - else: - raise ValueError('not supported') - return NestedTensor(tensor, mask) - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop('force', False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def 
is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - - -def _check_if_valid_ip(ip): - try: - socket.inet_aton(ip) - # legal - except socket.error: - # Not legal - return False - return True - -def arg_as_list(s): - v = ast.literal_eval(s) - if type(v) is not list: - raise argparse.ArgumentTypeError("List should be given.") - return v - -def _maybe_gethostbyname(addr): - """to be compatible with Braincloud on which one can access the nodes by their task names. - Each node has to wait until all the tasks in the group are up on the cloud.""" - if _check_if_valid_ip(addr): - # If IP address is given, do nothing - return addr - - # Otherwise, find the IP address by hostname - done = False - retry = 0 - print(f"Get URL by the given hostname '{addr}' in Braincloud..") - while not done: - try: - addr = socket.gethostbyname(addr) - done = True - except: - retry += 1 - print(f"Retrying count: {retry}") - time.sleep(3) - print(f"Found the host by IP address: {addr}") - return addr - - -def init_distributed_mode(args): - - if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ: - os.environ["MASTER_ADDR"] = _maybe_gethostbyname(os.environ["MASTER_ADDR"]) - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ['WORLD_SIZE']) - args.gpu = int(os.environ['LOCAL_RANK']) - args.dist_url = 'env://' - os.environ['LOCAL_SIZE'] = str(torch.cuda.device_count()) - elif 'SLURM_PROCID' in os.environ: - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - addr = subprocess.getoutput( - 'scontrol show hostname {} | head -n1'.format(node_list)) - os.environ['MASTER_PORT'] = os.environ.get('MASTER_PORT', '29500') - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['RANK'] = str(proc_id) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['LOCAL_SIZE'] = str(num_gpus) - args.dist_url = 'env://' - args.world_size = ntasks - args.rank = proc_id - args.gpu = proc_id % num_gpus - else: - print('Not using distributed mode') - args.distributed = False - return - - args.distributed = True - - torch.cuda.set_device(args.gpu) - args.dist_backend = 'nccl' - print('| distributed init (rank {}): {}'.format( - args.rank, args.dist_url), flush=True) - torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url, - world_size=args.world_size, rank=args.rank) - torch.distributed.barrier() - setup_for_distributed(args.rank == 0) - - -@torch.no_grad() -def accuracy(output, target, topk=(1,)): - """Computes the precision@k for the specified values of k""" - if target.numel() == 0: - return [torch.zeros([], device=output.device)] - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].view(-1).float().sum(0) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -def interpolate(input, size=None, 
scale_factor=None, mode="nearest", align_corners=None): - # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor - """ - Equivalent to nn.functional.interpolate, but with support for empty batch sizes. - This will eventually be supported natively by PyTorch, and this - class can go away. - """ - if float(torchvision.__version__.split('.',2)[1]) < 5: - if input.numel() > 0: - return torch.nn.functional.interpolate( - input, size, scale_factor, mode, align_corners - ) - - output_shape = _output_size(2, input, size, scale_factor) - output_shape = list(input.shape[:-2]) + list(output_shape) - return _new_empty_tensor(input, output_shape) - else: - return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners) \ No newline at end of file diff --git a/spaces/MMMMQZ/MQZGPT/custom.css b/spaces/MMMMQZ/MQZGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/MMMMQZ/MQZGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { 
color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* 
Literal.Number.Integer.Long */ diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Dockerfile b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Dockerfile deleted file mode 100644 index d91373cf495a8e625128ddd189b3045fdc3c64d7..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Dockerfile +++ /dev/null @@ -1,73 +0,0 @@ -FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 -ENV DEBIAN_FRONTEND=noninteractive -RUN apt-get update && \ - apt-get upgrade -y && \ - apt-get install -y --no-install-recommends \ - git \ - git-lfs \ - wget \ - curl \ - # ffmpeg \ - ffmpeg \ - x264 \ - # python build dependencies \ - build-essential \ - libssl-dev \ - zlib1g-dev \ - libbz2-dev \ - libreadline-dev \ - libsqlite3-dev \ - libncursesw5-dev \ - xz-utils \ - tk-dev \ - libxml2-dev \ - libxmlsec1-dev \ - libffi-dev \ - liblzma-dev && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:${PATH} -WORKDIR ${HOME}/app - -RUN curl https://pyenv.run | bash -ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH} -ENV PYTHON_VERSION=3.10.9 -RUN pyenv install ${PYTHON_VERSION} && \ - pyenv global ${PYTHON_VERSION} && \ - pyenv rehash && \ - pip install --no-cache-dir -U pip setuptools wheel - -# RUN pip install --no-cache-dir -U torch==1.13.1 torchvision==0.14.1 -RUN pip install --no-cache-dir -U torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117 -RUN pip install --no-cache-dir -U xformers==0.0.17 -COPY --chown=1000 requirements.txt /tmp/requirements.txt -RUN pip install --no-cache-dir -U -r /tmp/requirements.txt - -COPY --chown=1000 requirements.txt /tmp/requirements.txt - -# WORKDIR ${HOME}/app/Make-A-Protagonist/experts/GroundedSAM/segment_anything -# RUN pip install dist/segment_anything-1.0-py3-none-any.whl -# WORKDIR ${HOME}/app/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO -# RUN pip install dist/groundingdino-0.1.0-cp310-cp310-linux_x86_64.whl -# WORKDIR ${HOME}/app - -COPY --chown=1000 checkpoints/*.whl /tmp/ -RUN pip install --no-cache-dir -U /tmp/groundingdino-0.1.0-cp310-cp310-linux_x86_64.whl -RUN pip install --no-cache-dir -U /tmp/segment_anything-1.0-py3-none-any.whl -# RUN pip install -e Make-A-Protagonist/experts/GroundedSAM/segment_anything -# RUN pip install -e Make-A-Protagonist/experts/GroundedSAM/GroundingDINO - -COPY --chown=1000 . 
${HOME}/app -# RUN cd Make-A-Protagonist && patch -p1 < ../patch -ENV PYTHONPATH=${HOME}/app \ - PYTHONUNBUFFERED=1 \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces -CMD ["python", "app.py"] diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/encoder/model.py b/spaces/Marne/MockingBird/mockingbirdforuse/encoder/model.py deleted file mode 100644 index f5d8ae2ffebf8ad2c8694b0ed02332849989a696..0000000000000000000000000000000000000000 --- a/spaces/Marne/MockingBird/mockingbirdforuse/encoder/model.py +++ /dev/null @@ -1,145 +0,0 @@ -import torch -import numpy as np -from torch import nn -from scipy.optimize import brentq -from sklearn.metrics import roc_curve -from scipy.interpolate import interp1d -from torch.nn.parameter import Parameter -from torch.nn.utils.clip_grad import clip_grad_norm_ - -from .hparams import hparams as hp - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM( - input_size=hp.mel_n_channels, - hidden_size=hp.model_hidden_size, - num_layers=hp.model_num_layers, - batch_first=True, - ).to(device) - self.linear = nn.Linear( - in_features=hp.model_hidden_size, out_features=hp.model_embedding_size - ).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = Parameter(torch.tensor([10.0])).to(loss_device) - self.similarity_bias = Parameter(torch.tensor([-5.0])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. - :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). 
Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / ( - torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5 - ) - - # Exclusive centroids (1 per utterance) - centroids_excl = torch.sum(embeds, dim=1, keepdim=True) - embeds - centroids_excl /= utterances_per_speaker - 1 - centroids_excl = centroids_excl.clone() / ( - torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5 - ) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros( - speakers_per_batch, utterances_per_speaker, speakers_per_batch - ).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int32) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. 
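        Shape example (illustrative numbers, not values taken from hparams): with
        speakers_per_batch = 4 and utterances_per_speaker = 5, similarity_matrix()
        yields a (4, 5, 4) tensor, which is reshaped to (20, 4) and scored with
        cross-entropy against the targets [0]*5 + [1]*5 + [2]*5 + [3]*5.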
- """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape( - (speakers_per_batch * utterances_per_speaker, speakers_per_batch) - ) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int32)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1.0 - x - interp1d(fpr, tpr)(x), 0.0, 1.0) - - return loss, eer diff --git a/spaces/Marshalls/testmtd/analysis/fix_scale.sh b/spaces/Marshalls/testmtd/analysis/fix_scale.sh deleted file mode 100644 index 9525ad116f9b15aa0c97fd9c5127216ba65138d8..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/fix_scale.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash - -find $1 -name "*.bvh" -print0 | xargs -0 -I{} python3 analysis/fix_scale.py {} diff --git a/spaces/Marshalls/testmtd/training/options/task_options.py b/spaces/Marshalls/testmtd/training/options/task_options.py deleted file mode 100644 index b25c241be090a36a8ef3148f17c4497ca69686be..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/training/options/task_options.py +++ /dev/null @@ -1,37 +0,0 @@ -import argparse -import importlib - - -class TaskOptions: - """ - Base class to be inherited from task instances when they want to add task-dependent options. - E.g. segmentation options for images. - The options from this object are added to the options in BaseOptions - """ - def __init__(self): - self.parser = argparse.ArgumentParser(add_help=False) - - def add_actions(self, parser): - self.actions = self.parser._actions - for action in self.actions: - for i, ex_action in enumerate(parser._actions): - if action.option_strings == ex_action.option_strings: - parser._actions[i] = action - return parser - - -def get_task_options(task_name): - - task_module = importlib.import_module(task_name) - options_filename = task_name + ".options." + task_name.lower() + "_options" - optionslib = importlib.import_module(options_filename, package=task_module) - options = None - target_options_name = task_name.replace('_', '') + 'options' - for name, cls in optionslib.__dict__.items(): - if name.lower() == target_options_name.lower() \ - and next(iter(cls.__bases__)).__module__.endswith(TaskOptions.__module__): # check that base class is BaseModel - options = cls - - if options is None: - raise NotImplementedError("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." 
% (options_filename, target_options_name)) - return options() \ No newline at end of file diff --git a/spaces/Matthijs/mms-tts-demo/uroman/lib/JSON.pm b/spaces/Matthijs/mms-tts-demo/uroman/lib/JSON.pm deleted file mode 100644 index 8bac7eb5b90b530b828b25d41cec812d2dc2cf8f..0000000000000000000000000000000000000000 --- a/spaces/Matthijs/mms-tts-demo/uroman/lib/JSON.pm +++ /dev/null @@ -1,2317 +0,0 @@ -package JSON; - - -use strict; -use Carp (); -use base qw(Exporter); -@JSON::EXPORT = qw(from_json to_json jsonToObj objToJson encode_json decode_json); - -BEGIN { - $JSON::VERSION = '2.90'; - $JSON::DEBUG = 0 unless (defined $JSON::DEBUG); - $JSON::DEBUG = $ENV{ PERL_JSON_DEBUG } if exists $ENV{ PERL_JSON_DEBUG }; -} - -my $Module_XS = 'JSON::XS'; -my $Module_PP = 'JSON::PP'; -my $Module_bp = 'JSON::backportPP'; # included in JSON distribution -my $PP_Version = '2.27203'; -my $XS_Version = '2.34'; - - -# XS and PP common methods - -my @PublicMethods = qw/ - ascii latin1 utf8 pretty indent space_before space_after relaxed canonical allow_nonref - allow_blessed convert_blessed filter_json_object filter_json_single_key_object - shrink max_depth max_size encode decode decode_prefix allow_unknown -/; - -my @Properties = qw/ - ascii latin1 utf8 indent space_before space_after relaxed canonical allow_nonref - allow_blessed convert_blessed shrink max_depth max_size allow_unknown -/; - -my @XSOnlyMethods = qw/allow_tags/; # Currently nothing - -my @PPOnlyMethods = qw/ - indent_length sort_by - allow_singlequote allow_bignum loose allow_barekey escape_slash as_nonblessed -/; # JSON::PP specific - - -# used in _load_xs and _load_pp ($INSTALL_ONLY is not used currently) -my $_INSTALL_DONT_DIE = 1; # When _load_xs fails to load XS, don't die. -my $_INSTALL_ONLY = 2; # Don't call _set_methods() -my $_ALLOW_UNSUPPORTED = 0; -my $_UNIV_CONV_BLESSED = 0; -my $_USSING_bpPP = 0; - - -# Check the environment variable to decide worker module. - -unless ($JSON::Backend) { - $JSON::DEBUG and Carp::carp("Check used worker module..."); - - my $backend = exists $ENV{PERL_JSON_BACKEND} ? $ENV{PERL_JSON_BACKEND} : 1; - - if ($backend eq '1' or $backend =~ /JSON::XS\s*,\s*JSON::PP/) { - _load_xs($_INSTALL_DONT_DIE) or _load_pp(); - } - elsif ($backend eq '0' or $backend eq 'JSON::PP') { - _load_pp(); - } - elsif ($backend eq '2' or $backend eq 'JSON::XS') { - _load_xs(); - } - elsif ($backend eq 'JSON::backportPP') { - $_USSING_bpPP = 1; - _load_pp(); - } - else { - Carp::croak "The value of environmental variable 'PERL_JSON_BACKEND' is invalid."; - } -} - - -sub import { - my $pkg = shift; - my @what_to_export; - my $no_export; - - for my $tag (@_) { - if ($tag eq '-support_by_pp') { - if (!$_ALLOW_UNSUPPORTED++) { - JSON::Backend::XS - ->support_by_pp(@PPOnlyMethods) if ($JSON::Backend eq $Module_XS); - } - next; - } - elsif ($tag eq '-no_export') { - $no_export++, next; - } - elsif ( $tag eq '-convert_blessed_universally' ) { - eval q| - require B; - *UNIVERSAL::TO_JSON = sub { - my $b_obj = B::svref_2object( $_[0] ); - return $b_obj->isa('B::HV') ? { %{ $_[0] } } - : $b_obj->isa('B::AV') ? [ @{ $_[0] } ] - : undef - ; - } - | if ( !$_UNIV_CONV_BLESSED++ ); - next; - } - push @what_to_export, $tag; - } - - return if ($no_export); - - __PACKAGE__->export_to_level(1, $pkg, @what_to_export); -} - - -# OBSOLETED - -sub jsonToObj { - my $alternative = 'from_json'; - if (defined $_[0] and UNIVERSAL::isa($_[0], 'JSON')) { - shift @_; $alternative = 'decode'; - } - Carp::carp "'jsonToObj' will be obsoleted. 
Please use '$alternative' instead."; - return JSON::from_json(@_); -}; - -sub objToJson { - my $alternative = 'to_json'; - if (defined $_[0] and UNIVERSAL::isa($_[0], 'JSON')) { - shift @_; $alternative = 'encode'; - } - Carp::carp "'objToJson' will be obsoleted. Please use '$alternative' instead."; - JSON::to_json(@_); -}; - - -# INTERFACES - -sub to_json ($@) { - if ( - ref($_[0]) eq 'JSON' - or (@_ > 2 and $_[0] eq 'JSON') - ) { - Carp::croak "to_json should not be called as a method."; - } - my $json = JSON->new; - - if (@_ == 2 and ref $_[1] eq 'HASH') { - my $opt = $_[1]; - for my $method (keys %$opt) { - $json->$method( $opt->{$method} ); - } - } - - $json->encode($_[0]); -} - - -sub from_json ($@) { - if ( ref($_[0]) eq 'JSON' or $_[0] eq 'JSON' ) { - Carp::croak "from_json should not be called as a method."; - } - my $json = JSON->new; - - if (@_ == 2 and ref $_[1] eq 'HASH') { - my $opt = $_[1]; - for my $method (keys %$opt) { - $json->$method( $opt->{$method} ); - } - } - - return $json->decode( $_[0] ); -} - - - -sub true { $JSON::true } - -sub false { $JSON::false } - -sub null { undef; } - - -sub require_xs_version { $XS_Version; } - -sub backend { - my $proto = shift; - $JSON::Backend; -} - -#*module = *backend; - - -sub is_xs { - return $_[0]->backend eq $Module_XS; -} - - -sub is_pp { - return not $_[0]->is_xs; -} - - -sub pureperl_only_methods { @PPOnlyMethods; } - - -sub property { - my ($self, $name, $value) = @_; - - if (@_ == 1) { - my %props; - for $name (@Properties) { - my $method = 'get_' . $name; - if ($name eq 'max_size') { - my $value = $self->$method(); - $props{$name} = $value == 1 ? 0 : $value; - next; - } - $props{$name} = $self->$method(); - } - return \%props; - } - elsif (@_ > 3) { - Carp::croak('property() can take only the option within 2 arguments.'); - } - elsif (@_ == 2) { - if ( my $method = $self->can('get_' . $name) ) { - if ($name eq 'max_size') { - my $value = $self->$method(); - return $value == 1 ? 0 : $value; - } - $self->$method(); - } - } - else { - $self->$name($value); - } - -} - - - -# INTERNAL - -sub _load_xs { - my $opt = shift; - - $JSON::DEBUG and Carp::carp "Load $Module_XS."; - - # if called after install module, overload is disable.... why? - JSON::Boolean::_overrride_overload($Module_XS); - JSON::Boolean::_overrride_overload($Module_PP); - - eval qq| - use $Module_XS $XS_Version (); - |; - - if ($@) { - if (defined $opt and $opt & $_INSTALL_DONT_DIE) { - $JSON::DEBUG and Carp::carp "Can't load $Module_XS...($@)"; - return 0; - } - Carp::croak $@; - } - - unless (defined $opt and $opt & $_INSTALL_ONLY) { - _set_module( $JSON::Backend = $Module_XS ); - my $data = join("", ); # this code is from Jcode 2.xx. - close(DATA); - eval $data; - JSON::Backend::XS->init; - } - - return 1; -}; - - -sub _load_pp { - my $opt = shift; - my $backend = $_USSING_bpPP ? $Module_bp : $Module_PP; - - $JSON::DEBUG and Carp::carp "Load $backend."; - - # if called after install module, overload is disable.... why? - JSON::Boolean::_overrride_overload($Module_XS); - JSON::Boolean::_overrride_overload($backend); - - if ( $_USSING_bpPP ) { - eval qq| require $backend |; - } - else { - eval qq| use $backend $PP_Version () |; - } - - if ($@) { - if ( $backend eq $Module_PP ) { - $JSON::DEBUG and Carp::carp "Can't load $Module_PP ($@), so try to load $Module_bp"; - $_USSING_bpPP++; - $backend = $Module_bp; - JSON::Boolean::_overrride_overload($backend); - local $^W; # if PP installed but invalid version, backportPP redefines methods. 
- eval qq| require $Module_bp |; - } - Carp::croak $@ if $@; - } - - unless (defined $opt and $opt & $_INSTALL_ONLY) { - _set_module( $JSON::Backend = $Module_PP ); # even if backportPP, set $Backend with 'JSON::PP' - JSON::Backend::PP->init; - } -}; - - -sub _set_module { - return if defined $JSON::true; - - my $module = shift; - - local $^W; - no strict qw(refs); - - $JSON::true = ${"$module\::true"}; - $JSON::false = ${"$module\::false"}; - - push @JSON::ISA, $module; - if ( JSON->is_xs and JSON->backend->VERSION < 3 ) { - eval 'package JSON::PP::Boolean'; - push @{"$module\::Boolean::ISA"}, qw(JSON::PP::Boolean); - } - - *{"JSON::is_bool"} = \&{"$module\::is_bool"}; - - for my $method ($module eq $Module_XS ? @PPOnlyMethods : @XSOnlyMethods) { - *{"JSON::$method"} = sub { - Carp::carp("$method is not supported in $module."); - $_[0]; - }; - } - - return 1; -} - - - -# -# JSON Boolean -# - -package JSON::Boolean; - -my %Installed; - -sub _overrride_overload { - return; # this function is currently disable. - return if ($Installed{ $_[0] }++); - - my $boolean = $_[0] . '::Boolean'; - - eval sprintf(q| - package %s; - use overload ( - '""' => sub { ${$_[0]} == 1 ? 'true' : 'false' }, - 'eq' => sub { - my ($obj, $op) = ref ($_[0]) ? ($_[0], $_[1]) : ($_[1], $_[0]); - if ($op eq 'true' or $op eq 'false') { - return "$obj" eq 'true' ? 'true' eq $op : 'false' eq $op; - } - else { - return $obj ? 1 == $op : 0 == $op; - } - }, - ); - |, $boolean); - - if ($@) { Carp::croak $@; } - - if ( exists $INC{'JSON/XS.pm'} and $boolean eq 'JSON::XS::Boolean' ) { - local $^W; - my $true = do { bless \(my $dummy = 1), $boolean }; - my $false = do { bless \(my $dummy = 0), $boolean }; - *JSON::XS::true = sub () { $true }; - *JSON::XS::false = sub () { $false }; - } - elsif ( exists $INC{'JSON/PP.pm'} and $boolean eq 'JSON::PP::Boolean' ) { - local $^W; - my $true = do { bless \(my $dummy = 1), $boolean }; - my $false = do { bless \(my $dummy = 0), $boolean }; - *JSON::PP::true = sub { $true }; - *JSON::PP::false = sub { $false }; - } - - return 1; -} - - -# -# Helper classes for Backend Module (PP) -# - -package JSON::Backend::PP; - -sub init { - local $^W; - no strict qw(refs); # this routine may be called after JSON::Backend::XS init was called. - *{"JSON::decode_json"} = \&{"JSON::PP::decode_json"}; - *{"JSON::encode_json"} = \&{"JSON::PP::encode_json"}; - *{"JSON::PP::is_xs"} = sub { 0 }; - *{"JSON::PP::is_pp"} = sub { 1 }; - return 1; -} - -# -# To save memory, the below lines are read only when XS backend is used. 
-# - -package JSON; - -1; -__DATA__ - - -# -# Helper classes for Backend Module (XS) -# - -package JSON::Backend::XS; - -use constant INDENT_LENGTH_FLAG => 15 << 12; - -use constant UNSUPPORTED_ENCODE_FLAG => { - ESCAPE_SLASH => 0x00000010, - ALLOW_BIGNUM => 0x00000020, - AS_NONBLESSED => 0x00000040, - EXPANDED => 0x10000000, # for developer's -}; - -use constant UNSUPPORTED_DECODE_FLAG => { - LOOSE => 0x00000001, - ALLOW_BIGNUM => 0x00000002, - ALLOW_BAREKEY => 0x00000004, - ALLOW_SINGLEQUOTE => 0x00000008, - EXPANDED => 0x20000000, # for developer's -}; - - -sub init { - local $^W; - no strict qw(refs); - *{"JSON::decode_json"} = \&{"JSON::XS::decode_json"}; - *{"JSON::encode_json"} = \&{"JSON::XS::encode_json"}; - *{"JSON::XS::is_xs"} = sub { 1 }; - *{"JSON::XS::is_pp"} = sub { 0 }; - return 1; -} - - -sub support_by_pp { - my ($class, @methods) = @_; - - local $^W; - no strict qw(refs); - - my $JSON_XS_encode_orignal = \&JSON::XS::encode; - my $JSON_XS_decode_orignal = \&JSON::XS::decode; - my $JSON_XS_incr_parse_orignal = \&JSON::XS::incr_parse; - - *JSON::XS::decode = \&JSON::Backend::XS::Supportable::_decode; - *JSON::XS::encode = \&JSON::Backend::XS::Supportable::_encode; - *JSON::XS::incr_parse = \&JSON::Backend::XS::Supportable::_incr_parse; - - *{JSON::XS::_original_decode} = $JSON_XS_decode_orignal; - *{JSON::XS::_original_encode} = $JSON_XS_encode_orignal; - *{JSON::XS::_original_incr_parse} = $JSON_XS_incr_parse_orignal; - - push @JSON::Backend::XS::Supportable::ISA, 'JSON'; - - my $pkg = 'JSON::Backend::XS::Supportable'; - - *{JSON::new} = sub { - my $proto = JSON::XS->new; $$proto = 0; - bless $proto, $pkg; - }; - - - for my $method (@methods) { - my $flag = uc($method); - my $type |= (UNSUPPORTED_ENCODE_FLAG->{$flag} || 0); - $type |= (UNSUPPORTED_DECODE_FLAG->{$flag} || 0); - - next unless($type); - - $pkg->_make_unsupported_method($method => $type); - } - -# push @{"JSON::XS::Boolean::ISA"}, qw(JSON::PP::Boolean); -# push @{"JSON::PP::Boolean::ISA"}, qw(JSON::Boolean); - - $JSON::DEBUG and Carp::carp("set -support_by_pp mode."); - - return 1; -} - - - - -# -# Helper classes for XS -# - -package JSON::Backend::XS::Supportable; - -$Carp::Internal{'JSON::Backend::XS::Supportable'} = 1; - -sub _make_unsupported_method { - my ($pkg, $method, $type) = @_; - - local $^W; - no strict qw(refs); - - *{"$pkg\::$method"} = sub { - local $^W; - if (defined $_[1] ? $_[1] : 1) { - ${$_[0]} |= $type; - } - else { - ${$_[0]} &= ~$type; - } - $_[0]; - }; - - *{"$pkg\::get_$method"} = sub { - ${$_[0]} & $type ? 1 : ''; - }; - -} - - -sub _set_for_pp { - JSON::_load_pp( $_INSTALL_ONLY ); - - my $type = shift; - my $pp = JSON::PP->new; - my $prop = $_[0]->property; - - for my $name (keys %$prop) { - $pp->$name( $prop->{$name} ? $prop->{$name} : 0 ); - } - - my $unsupported = $type eq 'encode' ? JSON::Backend::XS::UNSUPPORTED_ENCODE_FLAG - : JSON::Backend::XS::UNSUPPORTED_DECODE_FLAG; - my $flags = ${$_[0]} || 0; - - for my $name (keys %$unsupported) { - next if ($name eq 'EXPANDED'); # for developer's - my $enable = ($flags & $unsupported->{$name}) ? 
1 : 0; - my $method = lc $name; - $pp->$method($enable); - } - - $pp->indent_length( $_[0]->get_indent_length ); - - return $pp; -} - -sub _encode { # using with PP encode - if (${$_[0]}) { - _set_for_pp('encode' => @_)->encode($_[1]); - } - else { - $_[0]->_original_encode( $_[1] ); - } -} - - -sub _decode { # if unsupported-flag is set, use PP - if (${$_[0]}) { - _set_for_pp('decode' => @_)->decode($_[1]); - } - else { - $_[0]->_original_decode( $_[1] ); - } -} - - -sub decode_prefix { # if unsupported-flag is set, use PP - _set_for_pp('decode' => @_)->decode_prefix($_[1]); -} - - -sub _incr_parse { - if (${$_[0]}) { - _set_for_pp('decode' => @_)->incr_parse($_[1]); - } - else { - $_[0]->_original_incr_parse( $_[1] ); - } -} - - -sub get_indent_length { - ${$_[0]} << 4 >> 16; -} - - -sub indent_length { - my $length = $_[1]; - - if (!defined $length or $length > 15 or $length < 0) { - Carp::carp "The acceptable range of indent_length() is 0 to 15."; - } - else { - local $^W; - $length <<= 12; - ${$_[0]} &= ~ JSON::Backend::XS::INDENT_LENGTH_FLAG; - ${$_[0]} |= $length; - *JSON::XS::encode = \&JSON::Backend::XS::Supportable::_encode; - } - - $_[0]; -} - - -1; -__END__ - -=head1 NAME - -JSON - JSON (JavaScript Object Notation) encoder/decoder - -=head1 SYNOPSIS - - use JSON; # imports encode_json, decode_json, to_json and from_json. - - # simple and fast interfaces (expect/generate UTF-8) - - $utf8_encoded_json_text = encode_json $perl_hash_or_arrayref; - $perl_hash_or_arrayref = decode_json $utf8_encoded_json_text; - - # OO-interface - - $json = JSON->new->allow_nonref; - - $json_text = $json->encode( $perl_scalar ); - $perl_scalar = $json->decode( $json_text ); - - $pretty_printed = $json->pretty->encode( $perl_scalar ); # pretty-printing - - # If you want to use PP only support features, call with '-support_by_pp' - # When XS unsupported feature is enable, using PP (de|en)code instead of XS ones. - - use JSON -support_by_pp; - - # option-acceptable interfaces (expect/generate UNICODE by default) - - $json_text = to_json( $perl_scalar, { ascii => 1, pretty => 1 } ); - $perl_scalar = from_json( $json_text, { utf8 => 1 } ); - - # Between (en|de)code_json and (to|from)_json, if you want to write - # a code which communicates to an outer world (encoded in UTF-8), - # recommend to use (en|de)code_json. - -=head1 VERSION - - 2.90 - -This version is compatible with JSON::XS B<2.34> and later. -(Not yet compatble to JSON::XS B<3.0x>.) - - -=head1 NOTE - -JSON::PP was earlier included in the C distribution, but -has since Perl 5.14 been a core module. For this reason, -L was removed from the JSON distribution and can now -be found also in the Perl5 repository at - -=over - -=item * L - -=back - -(The newest JSON::PP version still exists in CPAN.) - -Instead, the C distribution will include JSON::backportPP -for backwards computability. JSON.pm should thus work as it did -before. - -=head1 DESCRIPTION - - *************************** CAUTION ************************************** - * * - * INCOMPATIBLE CHANGE (JSON::XS version 2.90) * - * * - * JSON.pm had patched JSON::XS::Boolean and JSON::PP::Boolean internally * - * on loading time for making these modules inherit JSON::Boolean. * - * But since JSON::XS v3.0 it use Types::Serialiser as boolean class. * - * Then now JSON.pm breaks boolean classe overload features and * - * -support_by_pp if JSON::XS v3.0 or later is installed. * - * * - * JSON::true and JSON::false returned JSON::Boolean objects. 
* - * For workaround, they return JSON::PP::Boolean objects in this version. * - * * - * isa_ok(JSON::true, 'JSON::PP::Boolean'); * - * * - * And it discards a feature: * - * * - * ok(JSON::true eq 'true'); * - * * - * In other word, JSON::PP::Boolean overload numeric only. * - * * - * ok( JSON::true == 1 ); * - * * - ************************************************************************** - - ************************** CAUTION ******************************** - * This is 'JSON module version 2' and there are many differences * - * to version 1.xx * - * Please check your applications using old version. * - * See to 'INCOMPATIBLE CHANGES TO OLD VERSION' * - ******************************************************************* - -JSON (JavaScript Object Notation) is a simple data format. -See to L and C(L). - -This module converts Perl data structures to JSON and vice versa using either -L or L. - -JSON::XS is the fastest and most proper JSON module on CPAN which must be -compiled and installed in your environment. -JSON::PP is a pure-Perl module which is bundled in this distribution and -has a strong compatibility to JSON::XS. - -This module try to use JSON::XS by default and fail to it, use JSON::PP instead. -So its features completely depend on JSON::XS or JSON::PP. - -See to L. - -To distinguish the module name 'JSON' and the format type JSON, -the former is quoted by CEE (its results vary with your using media), -and the latter is left just as it is. - -Module name : C - -Format type : JSON - -=head2 FEATURES - -=over - -=item * correct unicode handling - -This module (i.e. backend modules) knows how to handle Unicode, documents -how and when it does so, and even documents what "correct" means. - -Even though there are limitations, this feature is available since Perl version 5.6. - -JSON::XS requires Perl 5.8.2 (but works correctly in 5.8.8 or later), so in older versions -C should call JSON::PP as the backend which can be used since Perl 5.005. - -With Perl 5.8.x JSON::PP works, but from 5.8.0 to 5.8.2, because of a Perl side problem, -JSON::PP works slower in the versions. And in 5.005, the Unicode handling is not available. -See to L for more information. - -See also to L -and L. - - -=item * round-trip integrity - -When you serialise a perl data structure using only data types supported -by JSON and Perl, the deserialised data structure is identical on the Perl -level. (e.g. the string "2.0" doesn't suddenly become "2" just because -it looks like a number). There I minor exceptions to this, read the -L section below to learn about those. - - -=item * strict checking of JSON correctness - -There is no guessing, no generating of illegal JSON texts by default, -and only JSON is accepted as input by default (the latter is a security -feature). - -See to L and L. - -=item * fast - -This module returns a JSON::XS object itself if available. -Compared to other JSON modules and other serialisers such as Storable, -JSON::XS usually compares favorably in terms of speed, too. - -If not available, C returns a JSON::PP object instead of JSON::XS and -it is very slow as pure-Perl. - -=item * simple to use - -This module has both a simple functional interface as well as an -object oriented interface interface. 
- -=item * reasonably versatile output formats - -You can choose between the most compact guaranteed-single-line format possible -(nice for simple line-based protocols), a pure-ASCII format (for when your transport -is not 8-bit clean, still supports the whole Unicode range), or a pretty-printed -format (for when you want to read that stuff). Or you can combine those features -in whatever way you like. - -=back - -=head1 FUNCTIONAL INTERFACE - -Some documents are copied and modified from L. -C and C are additional functions. - -=head2 encode_json - - $json_text = encode_json $perl_scalar - -Converts the given Perl data structure to a UTF-8 encoded, binary string. - -This function call is functionally identical to: - - $json_text = JSON->new->utf8->encode($perl_scalar) - -=head2 decode_json - - $perl_scalar = decode_json $json_text - -The opposite of C: expects an UTF-8 (binary) string and tries -to parse that as an UTF-8 encoded JSON text, returning the resulting -reference. - -This function call is functionally identical to: - - $perl_scalar = JSON->new->utf8->decode($json_text) - - -=head2 to_json - - $json_text = to_json($perl_scalar) - -Converts the given Perl data structure to a json string. - -This function call is functionally identical to: - - $json_text = JSON->new->encode($perl_scalar) - -Takes a hash reference as the second. - - $json_text = to_json($perl_scalar, $flag_hashref) - -So, - - $json_text = to_json($perl_scalar, {utf8 => 1, pretty => 1}) - -equivalent to: - - $json_text = JSON->new->utf8(1)->pretty(1)->encode($perl_scalar) - -If you want to write a modern perl code which communicates to outer world, -you should use C (supposed that JSON data are encoded in UTF-8). - -=head2 from_json - - $perl_scalar = from_json($json_text) - -The opposite of C: expects a json string and tries -to parse it, returning the resulting reference. - -This function call is functionally identical to: - - $perl_scalar = JSON->decode($json_text) - -Takes a hash reference as the second. - - $perl_scalar = from_json($json_text, $flag_hashref) - -So, - - $perl_scalar = from_json($json_text, {utf8 => 1}) - -equivalent to: - - $perl_scalar = JSON->new->utf8(1)->decode($json_text) - -If you want to write a modern perl code which communicates to outer world, -you should use C (supposed that JSON data are encoded in UTF-8). - -=head2 JSON::is_bool - - $is_boolean = JSON::is_bool($scalar) - -Returns true if the passed scalar represents either JSON::true or -JSON::false, two constants that act like C<1> and C<0> respectively -and are also used to represent JSON C and C in Perl strings. - -=head2 JSON::true - -Returns JSON true value which is blessed object. -It C JSON::Boolean object. - -=head2 JSON::false - -Returns JSON false value which is blessed object. -It C JSON::Boolean object. - -=head2 JSON::null - -Returns C. - -See L, below, for more information on how JSON values are mapped to -Perl. - -=head1 HOW DO I DECODE A DATA FROM OUTER AND ENCODE TO OUTER - -This section supposes that your perl version is 5.8 or later. - -If you know a JSON text from an outer world - a network, a file content, and so on, -is encoded in UTF-8, you should use C or C module object -with C enable. And the decoded result will contain UNICODE characters. 
- - # from network - my $json = JSON->new->utf8; - my $json_text = CGI->new->param( 'json_data' ); - my $perl_scalar = $json->decode( $json_text ); - - # from file content - local $/; - open( my $fh, '<', 'json.data' ); - $json_text = <$fh>; - $perl_scalar = decode_json( $json_text ); - -If an outer data is not encoded in UTF-8, firstly you should C it. - - use Encode; - local $/; - open( my $fh, '<', 'json.data' ); - my $encoding = 'cp932'; - my $unicode_json_text = decode( $encoding, <$fh> ); # UNICODE - - # or you can write the below code. - # - # open( my $fh, "<:encoding($encoding)", 'json.data' ); - # $unicode_json_text = <$fh>; - -In this case, C<$unicode_json_text> is of course UNICODE string. -So you B use C nor C module object with C enable. -Instead of them, you use C module object with C disable or C. - - $perl_scalar = $json->utf8(0)->decode( $unicode_json_text ); - # or - $perl_scalar = from_json( $unicode_json_text ); - -Or C and C: - - $perl_scalar = decode_json( encode( 'utf8', $unicode_json_text ) ); - # this way is not efficient. - -And now, you want to convert your C<$perl_scalar> into JSON data and -send it to an outer world - a network or a file content, and so on. - -Your data usually contains UNICODE strings and you want the converted data to be encoded -in UTF-8, you should use C or C module object with C enable. - - print encode_json( $perl_scalar ); # to a network? file? or display? - # or - print $json->utf8->encode( $perl_scalar ); - -If C<$perl_scalar> does not contain UNICODE but C<$encoding>-encoded strings -for some reason, then its characters are regarded as B for perl -(because it does not concern with your $encoding). -You B use C nor C module object with C enable. -Instead of them, you use C module object with C disable or C. -Note that the resulted text is a UNICODE string but no problem to print it. - - # $perl_scalar contains $encoding encoded string values - $unicode_json_text = $json->utf8(0)->encode( $perl_scalar ); - # or - $unicode_json_text = to_json( $perl_scalar ); - # $unicode_json_text consists of characters less than 0x100 - print $unicode_json_text; - -Or C all string values and C: - - $perl_scalar->{ foo } = decode( $encoding, $perl_scalar->{ foo } ); - # ... do it to each string values, then encode_json - $json_text = encode_json( $perl_scalar ); - -This method is a proper way but probably not efficient. - -See to L, L. - - -=head1 COMMON OBJECT-ORIENTED INTERFACE - -=head2 new - - $json = JSON->new - -Returns a new C object inherited from either JSON::XS or JSON::PP -that can be used to de/encode JSON strings. - -All boolean flags described below are by default I. - -The mutators for flags all return the JSON object again and thus calls can -be chained: - - my $json = JSON->new->utf8->space_after->encode({a => [1,2]}) - => {"a": [1, 2]} - -=head2 ascii - - $json = $json->ascii([$enable]) - - $enabled = $json->get_ascii - -If $enable is true (or missing), then the encode method will not generate characters outside -the code range 0..127. Any Unicode characters outside that range will be escaped using either -a single \uXXXX or a double \uHHHH\uLLLLL escape sequence, as per RFC4627. - -If $enable is false, then the encode method will not escape Unicode characters unless -required by the JSON syntax or other flags. This results in a faster and more compact format. - -This feature depends on the used Perl version and environment. - -See to L if the backend is PP. 
- - JSON->new->ascii(1)->encode([chr 0x10401]) - => ["\ud801\udc01"] - -=head2 latin1 - - $json = $json->latin1([$enable]) - - $enabled = $json->get_latin1 - -If $enable is true (or missing), then the encode method will encode the resulting JSON -text as latin1 (or iso-8859-1), escaping any characters outside the code range 0..255. - -If $enable is false, then the encode method will not escape Unicode characters -unless required by the JSON syntax or other flags. - - JSON->new->latin1->encode (["\x{89}\x{abc}"] - => ["\x{89}\\u0abc"] # (perl syntax, U+abc escaped, U+89 not) - -=head2 utf8 - - $json = $json->utf8([$enable]) - - $enabled = $json->get_utf8 - -If $enable is true (or missing), then the encode method will encode the JSON result -into UTF-8, as required by many protocols, while the decode method expects to be handled -an UTF-8-encoded string. Please note that UTF-8-encoded strings do not contain any -characters outside the range 0..255, they are thus useful for bytewise/binary I/O. - -In future versions, enabling this option might enable autodetection of the UTF-16 and UTF-32 -encoding families, as described in RFC4627. - -If $enable is false, then the encode method will return the JSON string as a (non-encoded) -Unicode string, while decode expects thus a Unicode string. Any decoding or encoding -(e.g. to UTF-8 or UTF-16) needs to be done yourself, e.g. using the Encode module. - - -Example, output UTF-16BE-encoded JSON: - - use Encode; - $jsontext = encode "UTF-16BE", JSON::XS->new->encode ($object); - -Example, decode UTF-32LE-encoded JSON: - - use Encode; - $object = JSON::XS->new->decode (decode "UTF-32LE", $jsontext); - -See to L if the backend is PP. - - -=head2 pretty - - $json = $json->pretty([$enable]) - -This enables (or disables) all of the C, C and -C (and in the future possibly more) flags in one call to -generate the most readable (or most compact) form possible. - -Equivalent to: - - $json->indent->space_before->space_after - -The indent space length is three and JSON::XS cannot change the indent -space length. - -=head2 indent - - $json = $json->indent([$enable]) - - $enabled = $json->get_indent - -If C<$enable> is true (or missing), then the C method will use a multiline -format as output, putting every array member or object/hash key-value pair -into its own line, identifying them properly. - -If C<$enable> is false, no newlines or indenting will be produced, and the -resulting JSON text is guaranteed not to contain any C. - -This setting has no effect when decoding JSON texts. - -The indent space length is three. -With JSON::PP, you can also access C to change indent space length. - - -=head2 space_before - - $json = $json->space_before([$enable]) - - $enabled = $json->get_space_before - -If C<$enable> is true (or missing), then the C method will add an extra -optional space before the C<:> separating keys from values in JSON objects. - -If C<$enable> is false, then the C method will not add any extra -space at those places. - -This setting has no effect when decoding JSON texts. - -Example, space_before enabled, space_after and indent disabled: - - {"key" :"value"} - - -=head2 space_after - - $json = $json->space_after([$enable]) - - $enabled = $json->get_space_after - -If C<$enable> is true (or missing), then the C method will add an extra -optional space after the C<:> separating keys from values in JSON objects -and extra whitespace after the C<,> separating key-value pairs and array -members. 
- -If C<$enable> is false, then the C method will not add any extra -space at those places. - -This setting has no effect when decoding JSON texts. - -Example, space_before and indent disabled, space_after enabled: - - {"key": "value"} - - -=head2 relaxed - - $json = $json->relaxed([$enable]) - - $enabled = $json->get_relaxed - -If C<$enable> is true (or missing), then C will accept some -extensions to normal JSON syntax (see below). C will not be -affected in anyway. I. I suggest only to use this option to -parse application-specific files written by humans (configuration files, -resource files etc.) - -If C<$enable> is false (the default), then C will only accept -valid JSON texts. - -Currently accepted extensions are: - -=over 4 - -=item * list items can have an end-comma - -JSON I array elements and key-value pairs with commas. This -can be annoying if you write JSON texts manually and want to be able to -quickly append elements, so this extension accepts comma at the end of -such items not just between them: - - [ - 1, - 2, <- this comma not normally allowed - ] - { - "k1": "v1", - "k2": "v2", <- this comma not normally allowed - } - -=item * shell-style '#'-comments - -Whenever JSON allows whitespace, shell-style comments are additionally -allowed. They are terminated by the first carriage-return or line-feed -character, after which more white-space and comments are allowed. - - [ - 1, # this comment not allowed in JSON - # neither this one... - ] - -=back - - -=head2 canonical - - $json = $json->canonical([$enable]) - - $enabled = $json->get_canonical - -If C<$enable> is true (or missing), then the C method will output JSON objects -by sorting their keys. This is adding a comparatively high overhead. - -If C<$enable> is false, then the C method will output key-value -pairs in the order Perl stores them (which will likely change between runs -of the same script). - -This option is useful if you want the same data structure to be encoded as -the same JSON text (given the same overall settings). If it is disabled, -the same hash might be encoded differently even if contains the same data, -as key-value pairs have no inherent ordering in Perl. - -This setting has no effect when decoding JSON texts. - -=head2 allow_nonref - - $json = $json->allow_nonref([$enable]) - - $enabled = $json->get_allow_nonref - -If C<$enable> is true (or missing), then the C method can convert a -non-reference into its corresponding string, number or null JSON value, -which is an extension to RFC4627. Likewise, C will accept those JSON -values instead of croaking. - -If C<$enable> is false, then the C method will croak if it isn't -passed an arrayref or hashref, as JSON texts must either be an object -or array. Likewise, C will croak if given something that is not a -JSON object or array. - - JSON->new->allow_nonref->encode ("Hello, World!") - => "Hello, World!" - -=head2 allow_unknown - - $json = $json->allow_unknown ([$enable]) - - $enabled = $json->get_allow_unknown - -If $enable is true (or missing), then "encode" will *not* throw an -exception when it encounters values it cannot represent in JSON (for -example, filehandles) but instead will encode a JSON "null" value. -Note that blessed objects are not included here and are handled -separately by c. - -If $enable is false (the default), then "encode" will throw an -exception when it encounters anything it cannot encode as JSON. 
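For example (a minimal sketch; the hash contents below are made up), with C<allow_unknown> enabled a filehandle value should come out as C<null> instead of triggering an exception:

    my $json = JSON->new->allow_unknown->canonical;
    print $json->encode( { handle => \*STDIN, name => "log" } );
    # typically prints {"handle":null,"name":"log"} instead of croaking on the glob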
- -This option does not affect "decode" in any way, and it is -recommended to leave it off unless you know your communications -partner. - -=head2 allow_blessed - - $json = $json->allow_blessed([$enable]) - - $enabled = $json->get_allow_blessed - -If C<$enable> is true (or missing), then the C method will not -barf when it encounters a blessed reference. Instead, the value of the -B option will decide whether C (C -disabled or no C method found) or a representation of the -object (C enabled and C method found) is being -encoded. Has no effect on C. - -If C<$enable> is false (the default), then C will throw an -exception when it encounters a blessed object. - - -=head2 convert_blessed - - $json = $json->convert_blessed([$enable]) - - $enabled = $json->get_convert_blessed - -If C<$enable> is true (or missing), then C, upon encountering a -blessed object, will check for the availability of the C method -on the object's class. If found, it will be called in scalar context -and the resulting scalar will be encoded instead of the object. If no -C method is found, the value of C will decide what -to do. - -The C method may safely call die if it wants. If C -returns other blessed objects, those will be handled in the same -way. C must take care of not causing an endless recursion cycle -(== crash) in this case. The name of C was chosen because other -methods called by the Perl core (== not by the user of the object) are -usually in upper case letters and to avoid collisions with the C -function or method. - -This setting does not yet influence C in any way. - -If C<$enable> is false, then the C setting will decide what -to do when a blessed object is found. - -=over - -=item convert_blessed_universally mode - -If use C with C<-convert_blessed_universally>, the C -subroutine is defined as the below code: - - *UNIVERSAL::TO_JSON = sub { - my $b_obj = B::svref_2object( $_[0] ); - return $b_obj->isa('B::HV') ? { %{ $_[0] } } - : $b_obj->isa('B::AV') ? [ @{ $_[0] } ] - : undef - ; - } - -This will cause that C method converts simple blessed objects into -JSON objects as non-blessed object. - - JSON -convert_blessed_universally; - $json->allow_blessed->convert_blessed->encode( $blessed_object ) - -This feature is experimental and may be removed in the future. - -=back - -=head2 filter_json_object - - $json = $json->filter_json_object([$coderef]) - -When C<$coderef> is specified, it will be called from C each -time it decodes a JSON object. The only argument passed to the coderef -is a reference to the newly-created hash. If the code references returns -a single scalar (which need not be a reference), this value -(i.e. a copy of that scalar to avoid aliasing) is inserted into the -deserialised data structure. If it returns an empty list -(NOTE: I C, which is a valid scalar), the original deserialised -hash will be inserted. This setting can slow down decoding considerably. - -When C<$coderef> is omitted or undefined, any existing callback will -be removed and C will not change the deserialised hash in any -way. - -Example, convert all JSON objects into the integer 5: - - my $js = JSON->new->filter_json_object (sub { 5 }); - # returns [5] - $js->decode ('[{}]'); # the given subroutine takes a hash reference. - # throw an exception because allow_nonref is not enabled - # so a lone 5 is not allowed. 
- $js->decode ('{"a":1, "b":2}'); - - -=head2 filter_json_single_key_object - - $json = $json->filter_json_single_key_object($key [=> $coderef]) - -Works remotely similar to C, but is only called for -JSON objects having a single key named C<$key>. - -This C<$coderef> is called before the one specified via -C, if any. It gets passed the single value in the JSON -object. If it returns a single value, it will be inserted into the data -structure. If it returns nothing (not even C but the empty list), -the callback from C will be called next, as if no -single-key callback were specified. - -If C<$coderef> is omitted or undefined, the corresponding callback will be -disabled. There can only ever be one callback for a given key. - -As this callback gets called less often then the C -one, decoding speed will not usually suffer as much. Therefore, single-key -objects make excellent targets to serialise Perl objects into, especially -as single-key JSON objects are as close to the type-tagged value concept -as JSON gets (it's basically an ID/VALUE tuple). Of course, JSON does not -support this in any way, so you need to make sure your data never looks -like a serialised Perl hash. - -Typical names for the single object key are C<__class_whatever__>, or -C<$__dollars_are_rarely_used__$> or C<}ugly_brace_placement>, or even -things like C<__class_md5sum(classname)__>, to reduce the risk of clashing -with real hashes. - -Example, decode JSON objects of the form C<< { "__widget__" => } >> -into the corresponding C<< $WIDGET{} >> object: - - # return whatever is in $WIDGET{5}: - JSON - ->new - ->filter_json_single_key_object (__widget__ => sub { - $WIDGET{ $_[0] } - }) - ->decode ('{"__widget__": 5') - - # this can be used with a TO_JSON method in some "widget" class - # for serialisation to json: - sub WidgetBase::TO_JSON { - my ($self) = @_; - - unless ($self->{id}) { - $self->{id} = ..get..some..id..; - $WIDGET{$self->{id}} = $self; - } - - { __widget__ => $self->{id} } - } - - -=head2 shrink - - $json = $json->shrink([$enable]) - - $enabled = $json->get_shrink - -With JSON::XS, this flag resizes strings generated by either -C or C to their minimum size possible. This can save -memory when your JSON texts are either very very long or you have many -short strings. It will also try to downgrade any strings to octet-form -if possible: perl stores strings internally either in an encoding called -UTF-X or in octet-form. The latter cannot store everything but uses less -space in general (and some buggy Perl or C code might even rely on that -internal representation being used). - -With JSON::PP, it is noop about resizing strings but tries -C to the returned string by C. See to L. - -See to L and L. - -=head2 max_depth - - $json = $json->max_depth([$maximum_nesting_depth]) - - $max_depth = $json->get_max_depth - -Sets the maximum nesting level (default C<512>) accepted while encoding -or decoding. If a higher nesting level is detected in JSON text or a Perl -data structure, then the encoder and decoder will stop and croak at that -point. - -Nesting level is defined by number of hash- or arrayrefs that the encoder -needs to traverse to reach a given point or the number of C<{> or C<[> -characters without their matching closing parenthesis crossed to reach a -given character in a string. - -If no argument is given, the highest possible setting will be used, which -is rarely useful. - -Note that nesting is implemented by recursion in C. 
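A minimal sketch of the limit in action (the depth value is arbitrary):

    my $json = JSON->new->max_depth(4);
    eval { $json->decode('[[[[[1]]]]]') };   # five nested arrays, limit is four
    print "nesting too deep\n" if $@;        # decode croaks, so $@ is set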
The default value has -been chosen to be as large as typical operating systems allow without -crashing. (JSON::XS) - -With JSON::PP as the backend, when a large value (100 or more) was set and -it de/encodes a deep nested object/text, it may raise a warning -'Deep recursion on subroutine' at the perl runtime phase. - -See L for more info on why this is useful. - -=head2 max_size - - $json = $json->max_size([$maximum_string_size]) - - $max_size = $json->get_max_size - -Set the maximum length a JSON text may have (in bytes) where decoding is -being attempted. The default is C<0>, meaning no limit. When C -is called on a string that is longer then this many bytes, it will not -attempt to decode the string but throw an exception. This setting has no -effect on C (yet). - -If no argument is given, the limit check will be deactivated (same as when -C<0> is specified). - -See L, below, for more info on why this is useful. - -=head2 encode - - $json_text = $json->encode($perl_scalar) - -Converts the given Perl data structure (a simple scalar or a reference -to a hash or array) to its JSON representation. Simple scalars will be -converted into JSON string or number sequences, while references to arrays -become JSON arrays and references to hashes become JSON objects. Undefined -Perl values (e.g. C) become JSON C values. -References to the integers C<0> and C<1> are converted into C and C. - -=head2 decode - - $perl_scalar = $json->decode($json_text) - -The opposite of C: expects a JSON text and tries to parse it, -returning the resulting simple scalar or reference. Croaks on error. - -JSON numbers and strings become simple Perl scalars. JSON arrays become -Perl arrayrefs and JSON objects become Perl hashrefs. C becomes -C<1> (C), C becomes C<0> (C) and -C becomes C. - -=head2 decode_prefix - - ($perl_scalar, $characters) = $json->decode_prefix($json_text) - -This works like the C method, but instead of raising an exception -when there is trailing garbage after the first JSON object, it will -silently stop parsing there and return the number of characters consumed -so far. - - JSON->new->decode_prefix ("[1] the tail") - => ([], 3) - -See to L - -=head2 property - - $boolean = $json->property($property_name) - -Returns a boolean value about above some properties. - -The available properties are C, C, C, -C,C, C, C, C, -C, C, C, C, -C, C and C. - - $boolean = $json->property('utf8'); - => 0 - $json->utf8; - $boolean = $json->property('utf8'); - => 1 - -Sets the property with a given boolean value. - - $json = $json->property($property_name => $boolean); - -With no argument, it returns all the above properties as a hash reference. - - $flag_hashref = $json->property(); - -=head1 INCREMENTAL PARSING - -Most of this section are copied and modified from L. - -In some cases, there is the need for incremental parsing of JSON texts. -This module does allow you to parse a JSON stream incrementally. -It does so by accumulating text until it has a full JSON object, which -it then can decode. This process is similar to using C -to see if a full JSON object is available, but is much more efficient -(and can be implemented with a minimum of method calls). - -The backend module will only attempt to parse the JSON text once it is sure it -has enough text to get a decisive result, using a very simple but -truly incremental parser. This means that it sometimes won't stop as -early as the full parser, for example, it doesn't detect parenthesis -mismatches. 
The only thing it guarantees is that it starts decoding as -soon as a syntactically valid JSON text has been seen. This means you need -to set resource limits (e.g. C) to ensure the parser will stop -parsing in the presence if syntax errors. - -The following methods implement this incremental parser. - -=head2 incr_parse - - $json->incr_parse( [$string] ) # void context - - $obj_or_undef = $json->incr_parse( [$string] ) # scalar context - - @obj_or_empty = $json->incr_parse( [$string] ) # list context - -This is the central parsing function. It can both append new text and -extract objects from the stream accumulated so far (both of these -functions are optional). - -If C<$string> is given, then this string is appended to the already -existing JSON fragment stored in the C<$json> object. - -After that, if the function is called in void context, it will simply -return without doing anything further. This can be used to add more text -in as many chunks as you want. - -If the method is called in scalar context, then it will try to extract -exactly I JSON object. If that is successful, it will return this -object, otherwise it will return C. If there is a parse error, -this method will croak just as C would do (one can then use -C to skip the erroneous part). This is the most common way of -using the method. - -And finally, in list context, it will try to extract as many objects -from the stream as it can find and return them, or the empty list -otherwise. For this to work, there must be no separators between the JSON -objects or arrays, instead they must be concatenated back-to-back. If -an error occurs, an exception will be raised as in the scalar context -case. Note that in this case, any previously-parsed JSON texts will be -lost. - -Example: Parse some JSON arrays/objects in a given string and return them. - - my @objs = JSON->new->incr_parse ("[5][7][1,2]"); - -=head2 incr_text - - $lvalue_string = $json->incr_text - -This method returns the currently stored JSON fragment as an lvalue, that -is, you can manipulate it. This I works when a preceding call to -C in I successfully returned an object. Under -all other circumstances you must not call this function (I mean it. -although in simple tests it might actually work, it I fail under -real world conditions). As a special exception, you can also call this -method before having parsed anything. - -This function is useful in two cases: a) finding the trailing text after a -JSON object or b) parsing multiple JSON objects separated by non-JSON text -(such as commas). - - $json->incr_text =~ s/\s*,\s*//; - -In Perl 5.005, C attribute is not available. -You must write codes like the below: - - $string = $json->incr_text; - $string =~ s/\s*,\s*//; - $json->incr_text( $string ); - -=head2 incr_skip - - $json->incr_skip - -This will reset the state of the incremental parser and will remove the -parsed text from the input buffer. This is useful after C -died, in which case the input buffer and incremental parser state is left -unchanged, to skip the text parsed so far and to reset the parse state. - -=head2 incr_reset - - $json->incr_reset - -This completely resets the incremental parser, that is, after this call, -it will be as if the parser had never parsed anything. - -This is useful if you want to repeatedly parse JSON objects and want to -ignore any trailing data, which means you have to reset the parser after -each successful decode. - -See to L for examples. 
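Putting the incremental interface together (the chunk boundaries below are arbitrary), text can be appended in void context and complete objects extracted later in scalar context:

    my $json = JSON->new;

    $json->incr_parse('[1,2');         # void context: text is only accumulated
    $json->incr_parse(',3]{"a"');      # still accumulating

    my $array = $json->incr_parse;     # scalar context: returns [1,2,3]
    $json->incr_parse(':1}');
    my $hash  = $json->incr_parse;     # returns { a => 1 }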
- - -=head1 JSON::PP SUPPORT METHODS - -The below methods are JSON::PP own methods, so when C works -with JSON::PP (i.e. the created object is a JSON::PP object), available. -See to L in detail. - -If you use C with additional C<-support_by_pp>, some methods -are available even with JSON::XS. See to L. - - BEING { $ENV{PERL_JSON_BACKEND} = 'JSON::XS' } - - use JSON -support_by_pp; - - my $json = JSON->new; - $json->allow_nonref->escape_slash->encode("/"); - - # functional interfaces too. - print to_json(["/"], {escape_slash => 1}); - print from_json('["foo"]', {utf8 => 1}); - -If you do not want to all functions but C<-support_by_pp>, -use C<-no_export>. - - use JSON -support_by_pp, -no_export; - # functional interfaces are not exported. - -=head2 allow_singlequote - - $json = $json->allow_singlequote([$enable]) - -If C<$enable> is true (or missing), then C will accept -any JSON strings quoted by single quotations that are invalid JSON -format. - - $json->allow_singlequote->decode({"foo":'bar'}); - $json->allow_singlequote->decode({'foo':"bar"}); - $json->allow_singlequote->decode({'foo':'bar'}); - -As same as the C option, this option may be used to parse -application-specific files written by humans. - -=head2 allow_barekey - - $json = $json->allow_barekey([$enable]) - -If C<$enable> is true (or missing), then C will accept -bare keys of JSON object that are invalid JSON format. - -As same as the C option, this option may be used to parse -application-specific files written by humans. - - $json->allow_barekey->decode('{foo:"bar"}'); - -=head2 allow_bignum - - $json = $json->allow_bignum([$enable]) - -If C<$enable> is true (or missing), then C will convert -the big integer Perl cannot handle as integer into a L -object and convert a floating number (any) into a L. - -On the contrary, C converts C objects and C -objects into JSON numbers with C enable. - - $json->allow_nonref->allow_blessed->allow_bignum; - $bigfloat = $json->decode('2.000000000000000000000000001'); - print $json->encode($bigfloat); - # => 2.000000000000000000000000001 - -See to L about the conversion of JSON number. - -=head2 loose - - $json = $json->loose([$enable]) - -The unescaped [\x00-\x1f\x22\x2f\x5c] strings are invalid in JSON strings -and the module doesn't allow to C to these (except for \x2f). -If C<$enable> is true (or missing), then C will accept these -unescaped strings. - - $json->loose->decode(qq|["abc - def"]|); - -See to L. - -=head2 escape_slash - - $json = $json->escape_slash([$enable]) - -According to JSON Grammar, I (U+002F) is escaped. But by default -JSON backend modules encode strings without escaping slash. - -If C<$enable> is true (or missing), then C will escape slashes. - -=head2 indent_length - - $json = $json->indent_length($length) - -With JSON::XS, The indent space length is 3 and cannot be changed. -With JSON::PP, it sets the indent space length with the given $length. -The default is 3. The acceptable range is 0 to 15. - -=head2 sort_by - - $json = $json->sort_by($function_name) - $json = $json->sort_by($subroutine_ref) - -If $function_name or $subroutine_ref are set, its sort routine are used. 
- - $js = $pc->sort_by(sub { $JSON::PP::a cmp $JSON::PP::b })->encode($obj); - # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|); - - $js = $pc->sort_by('own_sort')->encode($obj); - # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|); - - sub JSON::PP::own_sort { $JSON::PP::a cmp $JSON::PP::b } - -As the sorting routine runs in the JSON::PP scope, the given -subroutine name and the special variables C<$a>, C<$b> will begin -with 'JSON::PP::'. - -If $integer is set, then the effect is same as C on. - -See to L. - -=head1 MAPPING - -This section is copied from JSON::XS and modified to C. -JSON::XS and JSON::PP mapping mechanisms are almost equivalent. - -See to L. - -=head2 JSON -> PERL - -=over 4 - -=item object - -A JSON object becomes a reference to a hash in Perl. No ordering of object -keys is preserved (JSON does not preserver object key ordering itself). - -=item array - -A JSON array becomes a reference to an array in Perl. - -=item string - -A JSON string becomes a string scalar in Perl - Unicode codepoints in JSON -are represented by the same codepoints in the Perl string, so no manual -decoding is necessary. - -=item number - -A JSON number becomes either an integer, numeric (floating point) or -string scalar in perl, depending on its range and any fractional parts. On -the Perl level, there is no difference between those as Perl handles all -the conversion details, but an integer may take slightly less memory and -might represent more values exactly than floating point numbers. - -If the number consists of digits only, C will try to represent -it as an integer value. If that fails, it will try to represent it as -a numeric (floating point) value if that is possible without loss of -precision. Otherwise it will preserve the number as a string value (in -which case you lose roundtripping ability, as the JSON number will be -re-encoded to a JSON string). - -Numbers containing a fractional or exponential part will always be -represented as numeric (floating point) values, possibly at a loss of -precision (in which case you might lose perfect roundtripping ability, but -the JSON number will still be re-encoded as a JSON number). - -Note that precision is not accuracy - binary floating point values cannot -represent most decimal fractions exactly, and when converting from and to -floating point, C only guarantees precision up to but not including -the least significant bit. - -If the backend is JSON::PP and C is enable, the big integers -and the numeric can be optionally converted into L and -L objects. - -=item true, false - -These JSON atoms become C and C, -respectively. They are overloaded to act almost exactly like the numbers -C<1> and C<0>. You can check whether a scalar is a JSON boolean by using -the C function. - - print JSON::true + 1; - => 1 - - ok(JSON::true eq '1'); - ok(JSON::true == 1); - -C will install these missing overloading features to the backend modules. - - -=item null - -A JSON null atom becomes C in Perl. - -C returns C. - -=back - - -=head2 PERL -> JSON - -The mapping from Perl to JSON is slightly more difficult, as Perl is a -truly typeless language, so we can only guess which JSON type is meant by -a Perl value. - -=over 4 - -=item hash references - -Perl hash references become JSON objects. 
As there is no inherent ordering -in hash keys (or JSON objects), they will usually be encoded in a -pseudo-random order that can change between runs of the same program but -stays generally the same within a single run of a program. C -optionally sort the hash keys (determined by the I flag), so -the same data structure will serialise to the same JSON text (given same -settings and version of JSON::XS), but this incurs a runtime overhead -and is only rarely useful, e.g. when you want to compare some JSON text -against another for equality. - -In future, the ordered object feature will be added to JSON::PP using C mechanism. - - -=item array references - -Perl array references become JSON arrays. - -=item other references - -Other unblessed references are generally not allowed and will cause an -exception to be thrown, except for references to the integers C<0> and -C<1>, which get turned into C and C atoms in JSON. You can -also use C and C to improve readability. - - to_json [\0,JSON::true] # yields [false,true] - -=item JSON::true, JSON::false, JSON::null - -These special values become JSON true and JSON false values, -respectively. You can also use C<\1> and C<\0> directly if you want. - -JSON::null returns C. - -=item blessed objects - -Blessed objects are not directly representable in JSON. See the -C and C methods on various options on -how to deal with this: basically, you can choose between throwing an -exception, encoding the reference as if it weren't blessed, or provide -your own serialiser method. - -With C mode, C converts blessed -hash references or blessed array references (contains other blessed references) -into JSON members and arrays. - - use JSON -convert_blessed_universally; - JSON->new->allow_blessed->convert_blessed->encode( $blessed_object ); - -See to L. - -=item simple scalars - -Simple Perl scalars (any scalar that is not a reference) are the most -difficult objects to encode: JSON::XS and JSON::PP will encode undefined scalars as -JSON C values, scalars that have last been used in a string context -before encoding as JSON strings, and anything else as number value: - - # dump as number - encode_json [2] # yields [2] - encode_json [-3.0e17] # yields [-3e+17] - my $value = 5; encode_json [$value] # yields [5] - - # used as string, so dump as string - print $value; - encode_json [$value] # yields ["5"] - - # undef becomes null - encode_json [undef] # yields [null] - -You can force the type to be a string by stringifying it: - - my $x = 3.1; # some variable containing a number - "$x"; # stringified - $x .= ""; # another, more awkward way to stringify - print $x; # perl does it for you, too, quite often - -You can force the type to be a number by numifying it: - - my $x = "3"; # some variable containing a string - $x += 0; # numify it, ensuring it will be dumped as a number - $x *= 1; # same thing, the choice is yours. - -You can not currently force the type in other, less obscure, ways. - -Note that numerical precision has the same meaning as under Perl (so -binary to decimal conversion follows the same rules as in Perl, which -can differ to other languages). Also, your perl interpreter might expose -extensions to the floating point numbers of your platform, such as -infinities or NaN's - these cannot be represented in JSON, and it is an -error to pass those in. - -=item Big Number - -If the backend is JSON::PP and C is enable, -C converts C objects and C -objects into JSON numbers. - - -=back - -=head1 JSON and ECMAscript - -See to L. 
- -=head1 JSON and YAML - -JSON is not a subset of YAML. -See to L. - - -=head1 BACKEND MODULE DECISION - -When you use C, C tries to C JSON::XS. If this call failed, it will -C JSON::PP. The required JSON::XS version is I<2.2> or later. - -The C constructor method returns an object inherited from the backend module, -and JSON::XS object is a blessed scalar reference while JSON::PP is a blessed hash -reference. - -So, your program should not depend on the backend module, especially -returned objects should not be modified. - - my $json = JSON->new; # XS or PP? - $json->{stash} = 'this is xs object'; # this code may raise an error! - -To check the backend module, there are some methods - C, C and C. - - JSON->backend; # 'JSON::XS' or 'JSON::PP' - - JSON->backend->is_pp: # 0 or 1 - - JSON->backend->is_xs: # 1 or 0 - - $json->is_xs; # 1 or 0 - - $json->is_pp; # 0 or 1 - - -If you set an environment variable C, the calling action will be changed. - -=over - -=item PERL_JSON_BACKEND = 0 or PERL_JSON_BACKEND = 'JSON::PP' - -Always use JSON::PP - -=item PERL_JSON_BACKEND == 1 or PERL_JSON_BACKEND = 'JSON::XS,JSON::PP' - -(The default) Use compiled JSON::XS if it is properly compiled & installed, -otherwise use JSON::PP. - -=item PERL_JSON_BACKEND == 2 or PERL_JSON_BACKEND = 'JSON::XS' - -Always use compiled JSON::XS, die if it isn't properly compiled & installed. - -=item PERL_JSON_BACKEND = 'JSON::backportPP' - -Always use JSON::backportPP. -JSON::backportPP is JSON::PP back port module. -C includes JSON::backportPP instead of JSON::PP. - -=back - -These ideas come from L mechanism. - -example: - - BEGIN { $ENV{PERL_JSON_BACKEND} = 'JSON::PP' } - use JSON; # always uses JSON::PP - -In future, it may be able to specify another module. - -=head1 USE PP FEATURES EVEN THOUGH XS BACKEND - -Many methods are available with either JSON::XS or JSON::PP and -when the backend module is JSON::XS, if any JSON::PP specific (i.e. JSON::XS unsupported) -method is called, it will C and be noop. - -But If you C C passing the optional string C<-support_by_pp>, -it makes a part of those unsupported methods available. -This feature is achieved by using JSON::PP in C. - - BEGIN { $ENV{PERL_JSON_BACKEND} = 2 } # with JSON::XS - use JSON -support_by_pp; - my $json = JSON->new; - $json->allow_nonref->escape_slash->encode("/"); - -At this time, the returned object is a C -object (re-blessed XS object), and by checking JSON::XS unsupported flags -in de/encoding, can support some unsupported methods - C, C, -C, C, C and C. - -When any unsupported methods are not enable, C will be -used as is. The switch is achieved by changing the symbolic tables. - -C<-support_by_pp> is effective only when the backend module is JSON::XS -and it makes the de/encoding speed down a bit. - -See to L. - -=head1 INCOMPATIBLE CHANGES TO OLD VERSION - -There are big incompatibility between new version (2.00) and old (1.xx). -If you use old C 1.xx in your code, please check it. - -See to L - -=over - -=item jsonToObj and objToJson are obsoleted. - -Non Perl-style name C and C are obsoleted -(but not yet deleted from the source). -If you use these functions in your code, please replace them -with C and C. - - -=item Global variables are no longer available. - -C class variables - C<$JSON::AUTOCONVERT>, C<$JSON::BareKey>, etc... -- are not available any longer. -Instead, various features can be used through object methods. - - -=item Package JSON::Converter and JSON::Parser are deleted. 
- -Now C bundles with JSON::PP which can handle JSON more properly than them. - -=item Package JSON::NotString is deleted. - -There was C class which represents JSON value C, C, C -and numbers. It was deleted and replaced by C. - -C represents C and C. - -C does not represent C. - -C returns C. - -C makes L and L is-a relation -to L. - -=item function JSON::Number is obsoleted. - -C is now needless because JSON::XS and JSON::PP have -round-trip integrity. - -=item JSONRPC modules are deleted. - -Perl implementation of JSON-RPC protocol - C, C -and C are deleted in this distribution. -Instead of them, there is L which supports JSON-RPC protocol version 1.1. - -=back - -=head2 Transition ways from 1.xx to 2.xx. - -You should set C mode firstly, because -it is always successful for the below codes even with JSON::XS. - - use JSON -support_by_pp; - -=over - -=item Exported jsonToObj (simple) - - from_json($json_text); - -=item Exported objToJson (simple) - - to_json($perl_scalar); - -=item Exported jsonToObj (advanced) - - $flags = {allow_barekey => 1, allow_singlequote => 1}; - from_json($json_text, $flags); - -equivalent to: - - $JSON::BareKey = 1; - $JSON::QuotApos = 1; - jsonToObj($json_text); - -=item Exported objToJson (advanced) - - $flags = {allow_blessed => 1, allow_barekey => 1}; - to_json($perl_scalar, $flags); - -equivalent to: - - $JSON::BareKey = 1; - objToJson($perl_scalar); - -=item jsonToObj as object method - - $json->decode($json_text); - -=item objToJson as object method - - $json->encode($perl_scalar); - -=item new method with parameters - -The C method in 2.x takes any parameters no longer. -You can set parameters instead; - - $json = JSON->new->pretty; - -=item $JSON::Pretty, $JSON::Indent, $JSON::Delimiter - -If C is enable, that means C<$JSON::Pretty> flag set. And -C<$JSON::Delimiter> was substituted by C and C. -In conclusion: - - $json->indent->space_before->space_after; - -Equivalent to: - - $json->pretty; - -To change indent length, use C. - -(Only with JSON::PP, if C<-support_by_pp> is not used.) - - $json->pretty->indent_length(2)->encode($perl_scalar); - -=item $JSON::BareKey - -(Only with JSON::PP, if C<-support_by_pp> is not used.) - - $json->allow_barekey->decode($json_text) - -=item $JSON::ConvBlessed - -use C<-convert_blessed_universally>. See to L. - -=item $JSON::QuotApos - -(Only with JSON::PP, if C<-support_by_pp> is not used.) - - $json->allow_singlequote->decode($json_text) - -=item $JSON::SingleQuote - -Disable. C does not make such a invalid JSON string any longer. - -=item $JSON::KeySort - - $json->canonical->encode($perl_scalar) - -This is the ascii sort. - -If you want to use with your own sort routine, check the C method. - -(Only with JSON::PP, even if C<-support_by_pp> is used currently.) - - $json->sort_by($sort_routine_ref)->encode($perl_scalar) - - $json->sort_by(sub { $JSON::PP::a <=> $JSON::PP::b })->encode($perl_scalar) - -Can't access C<$a> and C<$b> but C<$JSON::PP::a> and C<$JSON::PP::b>. - -=item $JSON::SkipInvalid - - $json->allow_unknown - -=item $JSON::AUTOCONVERT - -Needless. C backend modules have the round-trip integrity. - -=item $JSON::UTF8 - -Needless because C (JSON::XS/JSON::PP) sets -the UTF8 flag on properly. - - # With UTF8-flagged strings - - $json->allow_nonref; - $str = chr(1000); # UTF8-flagged - - $json_text = $json->utf8(0)->encode($str); - utf8::is_utf8($json_text); - # true - $json_text = $json->utf8(1)->encode($str); - utf8::is_utf8($json_text); - # false - - $str = '"' . chr(1000) . 
'"'; # UTF8-flagged - - $perl_scalar = $json->utf8(0)->decode($str); - utf8::is_utf8($perl_scalar); - # true - $perl_scalar = $json->utf8(1)->decode($str); - # died because of 'Wide character in subroutine' - -See to L. - -=item $JSON::UnMapping - -Disable. See to L. - -=item $JSON::SelfConvert - -This option was deleted. -Instead of it, if a given blessed object has the C method, -C will be executed with C. - - $json->convert_blessed->encode($blessed_hashref_or_arrayref) - # if need, call allow_blessed - -Note that it was C in old version, but now not C but C. - -=back - -=head1 TODO - -=over - -=item example programs - -=back - -=head1 THREADS - -No test with JSON::PP. If with JSON::XS, See to L. - - -=head1 BUGS - -Please report bugs relevant to C to Emakamaka[at]cpan.orgE. - - -=head1 SEE ALSO - -Most of the document is copied and modified from JSON::XS doc. - -L, L - -C(L) - -=head1 AUTHOR - -Makamaka Hannyaharamitu, Emakamaka[at]cpan.orgE - -JSON::XS was written by Marc Lehmann - -The release of this new version owes to the courtesy of Marc Lehmann. - - -=head1 COPYRIGHT AND LICENSE - -Copyright 2005-2013 by Makamaka Hannyaharamitu - -This library is free software; you can redistribute it and/or modify -it under the same terms as Perl itself. - -=cut - diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/api.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/api.py deleted file mode 100644 index b58ebbffd942a2fc22264f0ab47e400c26b9f41c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/api.py +++ /dev/null @@ -1,170 +0,0 @@ -# based on https://github.com/isl-org/MiDaS - -import cv2 -import torch -import torch.nn as nn -from torchvision.transforms import Compose - -from ldm.modules.midas.midas.dpt_depth import DPTDepthModel -from ldm.modules.midas.midas.midas_net import MidasNet -from ldm.modules.midas.midas.midas_net_custom import MidasNet_small -from ldm.modules.midas.midas.transforms import Resize, NormalizeImage, PrepareForNet - - -ISL_PATHS = { - "dpt_large": "midas_models/dpt_large-midas-2f21e586.pt", - "dpt_hybrid": "midas_models/dpt_hybrid-midas-501f0c75.pt", - "midas_v21": "", - "midas_v21_small": "", -} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def load_midas_transform(model_type): - # https://github.com/isl-org/MiDaS/blob/master/run.py - # load transform only - if model_type == "dpt_large": # DPT-Large - net_w, net_h = 384, 384 - resize_mode = "minimal" - normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - - elif model_type == "dpt_hybrid": # DPT-Hybrid - net_w, net_h = 384, 384 - resize_mode = "minimal" - normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - - elif model_type == "midas_v21": - net_w, net_h = 384, 384 - resize_mode = "upper_bound" - normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - - elif model_type == "midas_v21_small": - net_w, net_h = 256, 256 - resize_mode = "upper_bound" - normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - - else: - assert False, f"model_type '{model_type}' not implemented, use: --model_type large" - - transform = Compose( - [ - Resize( - net_w, - net_h, - resize_target=None, - keep_aspect_ratio=True, - ensure_multiple_of=32, - resize_method=resize_mode, - image_interpolation_method=cv2.INTER_CUBIC, - ), - normalization, - 
PrepareForNet(), - ] - ) - - return transform - - -def load_model(model_type): - # https://github.com/isl-org/MiDaS/blob/master/run.py - # load network - model_path = ISL_PATHS[model_type] - if model_type == "dpt_large": # DPT-Large - model = DPTDepthModel( - path=model_path, - backbone="vitl16_384", - non_negative=True, - ) - net_w, net_h = 384, 384 - resize_mode = "minimal" - normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - - elif model_type == "dpt_hybrid": # DPT-Hybrid - model = DPTDepthModel( - path=model_path, - backbone="vitb_rn50_384", - non_negative=True, - ) - net_w, net_h = 384, 384 - resize_mode = "minimal" - normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - - elif model_type == "midas_v21": - model = MidasNet(model_path, non_negative=True) - net_w, net_h = 384, 384 - resize_mode = "upper_bound" - normalization = NormalizeImage( - mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] - ) - - elif model_type == "midas_v21_small": - model = MidasNet_small(model_path, features=64, backbone="efficientnet_lite3", exportable=True, - non_negative=True, blocks={'expand': True}) - net_w, net_h = 256, 256 - resize_mode = "upper_bound" - normalization = NormalizeImage( - mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] - ) - - else: - print(f"model_type '{model_type}' not implemented, use: --model_type large") - assert False - - transform = Compose( - [ - Resize( - net_w, - net_h, - resize_target=None, - keep_aspect_ratio=True, - ensure_multiple_of=32, - resize_method=resize_mode, - image_interpolation_method=cv2.INTER_CUBIC, - ), - normalization, - PrepareForNet(), - ] - ) - - return model.eval(), transform - - -class MiDaSInference(nn.Module): - MODEL_TYPES_TORCH_HUB = [ - "DPT_Large", - "DPT_Hybrid", - "MiDaS_small" - ] - MODEL_TYPES_ISL = [ - "dpt_large", - "dpt_hybrid", - "midas_v21", - "midas_v21_small", - ] - - def __init__(self, model_type): - super().__init__() - assert (model_type in self.MODEL_TYPES_ISL) - model, _ = load_model(model_type) - self.model = model - self.model.train = disabled_train - - def forward(self, x): - # x in 0..1 as produced by calling self.transform on a 0..1 float64 numpy array - # NOTE: we expect that the correct transform has been called during dataloading. 
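-        # The depth map below is predicted without tracking gradients and then
-        # resized bicubically back to the input's spatial resolution, so the
-        # returned tensor has shape (N, 1, H, W) matching x.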
- with torch.no_grad(): - prediction = self.model(x) - prediction = torch.nn.functional.interpolate( - prediction.unsqueeze(1), - size=x.shape[2:], - mode="bicubic", - align_corners=False, - ) - assert prediction.shape == (x.shape[0], 1, x.shape[2], x.shape[3]) - return prediction - diff --git a/spaces/Mohamed90/Geoappfolium/README.md b/spaces/Mohamed90/Geoappfolium/README.md deleted file mode 100644 index 51cf4406a4a9a58ccb6ca930fc81f9594ef182c3..0000000000000000000000000000000000000000 --- a/spaces/Mohamed90/Geoappfolium/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Geoappfolium -emoji: 🚀 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/kie/_base_/datasets/wildreceipt-openset.py b/spaces/Mountchicken/MAERec-Gradio/configs/kie/_base_/datasets/wildreceipt-openset.py deleted file mode 100644 index f82512839cdea57e559bd375be2a3f4146558af3..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/kie/_base_/datasets/wildreceipt-openset.py +++ /dev/null @@ -1,26 +0,0 @@ -wildreceipt_openset_data_root = 'data/wildreceipt/' - -wildreceipt_openset_train = dict( - type='WildReceiptDataset', - data_root=wildreceipt_openset_data_root, - metainfo=dict(category=[ - dict(id=0, name='bg'), - dict(id=1, name='key'), - dict(id=2, name='value'), - dict(id=3, name='other') - ]), - ann_file='openset_train.txt', - pipeline=None) - -wildreceipt_openset_test = dict( - type='WildReceiptDataset', - data_root=wildreceipt_openset_data_root, - metainfo=dict(category=[ - dict(id=0, name='bg'), - dict(id=1, name='key'), - dict(id=2, name='value'), - dict(id=3, name='other') - ]), - ann_file='openset_test.txt', - test_mode=True, - pipeline=None) diff --git a/spaces/Nultx/VITS-TTS/text/english.py b/spaces/Nultx/VITS-TTS/text/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = 
mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/OAOA/DifFace/basicsr/archs/vgg_arch.py b/spaces/OAOA/DifFace/basicsr/archs/vgg_arch.py deleted file mode 100644 index 05200334e477e59feefd1e4a0b5e94204e4eb2fa..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/archs/vgg_arch.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -import torch -from collections import OrderedDict -from torch import nn as nn -from torchvision.models import vgg as vgg - -from basicsr.utils.registry import ARCH_REGISTRY - -VGG_PRETRAIN_PATH = 'experiments/pretrained_models/vgg19-dcbb9e9d.pth' -NAMES = { - 'vgg11': [ - 'conv1_1', 'relu1_1', 'pool1', 'conv2_1', 'relu2_1', 'pool2', 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', - 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'pool4', 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', - 'pool5' - ], - 'vgg13': [ - 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2', - 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'pool4', - 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'pool5' - ], - 'vgg16': [ - 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2', - 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'pool3', 'conv4_1', 'relu4_1', 'conv4_2', - 'relu4_2', 'conv4_3', 'relu4_3', 'pool4', 'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', - 'pool5' - ], - 'vgg19': [ - 'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1', 'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2', - 'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'conv3_4', 'relu3_4', 'pool3', 'conv4_1', - 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3', 'relu4_3', 'conv4_4', 'relu4_4', 'pool4', 'conv5_1', 'relu5_1', - 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', 'conv5_4', 'relu5_4', 'pool5' - ] -} - - -def insert_bn(names): - """Insert bn layer after each conv. - - Args: - names (list): The list of layer names. - - Returns: - list: The list of layer names with bn layers. - """ - names_bn = [] - for name in names: - names_bn.append(name) - if 'conv' in name: - position = name.replace('conv', '') - names_bn.append('bn' + position) - return names_bn - - -@ARCH_REGISTRY.register() -class VGGFeatureExtractor(nn.Module): - """VGG network for feature extraction. - - In this implementation, we allow users to choose whether use normalization - in the input feature and the type of vgg network. Note that the pretrained - path must fit the vgg type. - - Args: - layer_name_list (list[str]): Forward function returns the corresponding - features according to the layer_name_list. - Example: {'relu1_1', 'relu2_1', 'relu3_1'}. - vgg_type (str): Set the type of vgg network. Default: 'vgg19'. - use_input_norm (bool): If True, normalize the input image. Importantly, - the input feature must in the range [0, 1]. Default: True. - range_norm (bool): If True, norm images with range [-1, 1] to [0, 1]. - Default: False. - requires_grad (bool): If true, the parameters of VGG network will be - optimized. Default: False. - remove_pooling (bool): If true, the max pooling operations in VGG net - will be removed. Default: False. 
- pooling_stride (int): The stride of max pooling operation. Default: 2. - """ - - def __init__(self, - layer_name_list, - vgg_type='vgg19', - use_input_norm=True, - range_norm=False, - requires_grad=False, - remove_pooling=False, - pooling_stride=2): - super(VGGFeatureExtractor, self).__init__() - - self.layer_name_list = layer_name_list - self.use_input_norm = use_input_norm - self.range_norm = range_norm - - self.names = NAMES[vgg_type.replace('_bn', '')] - if 'bn' in vgg_type: - self.names = insert_bn(self.names) - - # only borrow layers that will be used to avoid unused params - max_idx = 0 - for v in layer_name_list: - idx = self.names.index(v) - if idx > max_idx: - max_idx = idx - - if os.path.exists(VGG_PRETRAIN_PATH): - vgg_net = getattr(vgg, vgg_type)(pretrained=False) - state_dict = torch.load(VGG_PRETRAIN_PATH, map_location=lambda storage, loc: storage) - vgg_net.load_state_dict(state_dict) - else: - vgg_net = getattr(vgg, vgg_type)(pretrained=True) - - features = vgg_net.features[:max_idx + 1] - - modified_net = OrderedDict() - for k, v in zip(self.names, features): - if 'pool' in k: - # if remove_pooling is true, pooling operation will be removed - if remove_pooling: - continue - else: - # in some cases, we may want to change the default stride - modified_net[k] = nn.MaxPool2d(kernel_size=2, stride=pooling_stride) - else: - modified_net[k] = v - - self.vgg_net = nn.Sequential(modified_net) - - if not requires_grad: - self.vgg_net.eval() - for param in self.parameters(): - param.requires_grad = False - else: - self.vgg_net.train() - for param in self.parameters(): - param.requires_grad = True - - if self.use_input_norm: - # the mean is for image with range [0, 1] - self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - # the std is for image with range [0, 1] - self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - def forward(self, x): - """Forward function. - - Args: - x (Tensor): Input tensor with shape (n, c, h, w). - - Returns: - Tensor: Forward results. - """ - if self.range_norm: - x = (x + 1) / 2 - if self.use_input_norm: - x = (x - self.mean) / self.std - - output = {} - for key, layer in self.vgg_net._modules.items(): - x = layer(x) - if key in self.layer_name_list: - output[key] = x.clone() - - return output diff --git a/spaces/OAOA/DifFace/utils/util_sisr.py b/spaces/OAOA/DifFace/utils/util_sisr.py deleted file mode 100644 index 68d69f9297264ef232da071eeffecd78bf03c1ec..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/utils/util_sisr.py +++ /dev/null @@ -1,187 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 -*- -# Power by Zongsheng Yue 2021-12-07 21:37:58 - -import sys -import math -import torch -import numpy as np -import scipy.ndimage as snd -from scipy.special import softmax -from scipy.interpolate import interp2d - -import torch.nn.functional as F - - -from . import util_image -from ResizeRight.resize_right import resize - -def modcrop(im, sf): - h, w = im.shape[:2] - h -= (h % sf) - w -= (w % sf) - return im[:h, :w,] - -#--------------------------------------------Kernel----------------------------------------------- -def sigma2kernel(sigma, k_size=21, sf=3, shift=False): - ''' - Generate Gaussian kernel according to cholesky decomposion. 
- Input: - sigma: N x 1 x 2 x 2 torch tensor, covariance matrix - k_size: integer, kernel size - sf: scale factor - Output: - kernel: N x 1 x k x k torch tensor - ''' - try: - sigma_inv = torch.inverse(sigma) - except: - sigma_disturb = sigma + torch.eye(2, dtype=sigma.dtype, device=sigma.device).unsqueeze(0).unsqueeze(0) * 1e-5 - sigma_inv = torch.inverse(sigma_disturb) - - # Set expectation position (shifting kernel for aligned image) - if shift: - center = k_size // 2 + 0.5 * (sf - k_size % 2) # + 0.5 * (sf - k_size % 2) - else: - center = k_size // 2 - - # Create meshgrid for Gaussian - X, Y = torch.meshgrid(torch.arange(k_size), torch.arange(k_size)) - Z = torch.stack((X, Y), dim=2).to(device=sigma.device, dtype=sigma.dtype).view(1, -1, 2, 1) # 1 x k^2 x 2 x 1 - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - center # 1 x k^2 x 2 x 1 - ZZ_t = ZZ.permute(0, 1, 3, 2) # 1 x k^2 x 1 x 2 - ZZZ = -0.5 * ZZ_t.matmul(sigma_inv).matmul(ZZ).squeeze(-1).squeeze(-1) # N x k^2 - kernel = F.softmax(ZZZ, dim=1) # N x k^2 - - return kernel.view(-1, 1, k_size, k_size) # N x 1 x k x k - -def shifted_anisotropic_Gaussian(k_size=21, sf=4, lambda_1=1.2, lambda_2=5., theta=0, shift=True): - ''' - # modified version of https://github.com/cszn/USRNet/blob/master/utils/utils_sisr.py - ''' - # set covariance matrix - Lam = np.diag([lambda_1, lambda_2]) - U = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - sigma = U @ Lam @ U.T # 2 x 2 - inv_sigma = np.linalg.inv(sigma)[None, None, :, :] # 1 x 1 x 2 x 2 - - # set expectation position (shifting kernel for aligned image) - if shift: - center = k_size // 2 + 0.5*(sf - k_size % 2) - else: - center = k_size // 2 - - # Create meshgrid for Gaussian - X, Y = np.meshgrid(range(k_size), range(k_size)) - Z = np.stack([X, Y], 2).astype(np.float32)[:, :, :, None] # k x k x 2 x 1 - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - center - ZZ_t = ZZ.transpose(0,1,3,2) - ZZZ = -0.5 * np.squeeze(ZZ_t @ inv_sigma @ ZZ).reshape([1, -1]) - kernel = softmax(ZZZ, axis=1).reshape([k_size, k_size]) # k x k - - # The convariance of the marginal distributions along x and y axis - s1, s2 = sigma[0, 0], sigma[1, 1] - # Pearson corrleation coefficient - rho = sigma[0, 1] / (math.sqrt(s1) * math.sqrt(s2)) - kernel_infos = np.array([s1, s2, rho]) # (3,) - - return kernel, kernel_infos - -#------------------------------------------Degradation------------------------------------------- -def imconv_np(im, kernel, padding_mode='reflect', correlate=False): - ''' - Image convolution or correlation. - Input: - im: h x w x c numpy array - kernel: k x k numpy array - padding_mode: 'reflect', 'constant' or 'wrap' - ''' - if kernel.ndim != im.ndim: kernel = kernel[:, :, np.newaxis] - - if correlate: - out = snd.correlate(im, kernel, mode=padding_mode) - else: - out = snd.convolve(im, kernel, mode=padding_mode) - - return out - -def conv_multi_kernel_tensor(im_hr, kernel, sf, downsampler): - ''' - Degradation model by Pytorch. - Input: - im_hr: N x c x h x w - kernel: N x 1 x k x k - sf: scale factor - ''' - im_hr_pad = F.pad(im_hr, (kernel.shape[-1] // 2,)*4, mode='reflect') - im_blur = F.conv3d(im_hr_pad.unsqueeze(0), kernel.unsqueeze(1), groups=im_hr.shape[0]) - if downsampler.lower() == 'direct': - im_blur = im_blur[0, :, :, ::sf, ::sf] # N x c x ... 
- elif downsampler.lower() == 'bicubic': - im_blur = resize(im_blur, scale_factors=1/sf) - else: - sys.exit('Please input the corrected downsampler: Direct or Bicubic!') - - return im_blur - -def tidy_kernel(kernel, expect_size=21): - ''' - Input: - kernel: p x p numpy array - ''' - k_size = kernel.shape[-1] - kernel_new = np.zeros([expect_size, expect_size], dtype=kernel.dtype) - if expect_size >= k_size: - start_ind = expect_size // 2 - k_size // 2 - end_ind = start_ind + k_size - kernel_new[start_ind:end_ind, start_ind:end_ind] = kernel - elif expect_size < k_size: - start_ind = k_size // 2 - expect_size // 2 - end_ind = start_ind + expect_size - kernel_new = kernel[start_ind:end_ind, start_ind:end_ind] - kernel_new /= kernel_new.sum() - - return kernel_new - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf-1)*0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w-1) - y1 = np.clip(y1, 0, h-1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - -#-----------------------------------------Transform-------------------------------------------- -class Bicubic: - def __init__(self, scale=0.25): - self.scale = scale - - def __call__(self, im, scale=None, out_shape=None): - scale = self.scale if scale is None else scale - out = resize(im, scale_factors=scale, out_shape=None) - return out diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/sentence_ranking.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/sentence_ranking.py deleted file mode 100644 index bed44f34e5f8e506b6ae7ba30ddaa661bf4a7522..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/sentence_ranking.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import numpy as np -from fairseq import utils -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("sentence_ranking") -class SentenceRankingTask(LegacyFairseqTask): - """ - Ranking task on multiple sentences. 
- - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", metavar="FILE", help="file prefix for data") - parser.add_argument( - "--num-classes", type=int, help="number of sentences to be ranked" - ) - parser.add_argument( - "--init-token", - type=int, - help="add token at the beginning of each batch item", - ) - parser.add_argument( - "--separator-token", type=int, help="add separator token between inputs" - ) - parser.add_argument("--no-shuffle", action="store_true") - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - parser.add_argument( - "--max-option-length", type=int, help="max length for each option" - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - - @classmethod - def load_dictionary(cls, args, filename, source=True): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert ( - args.criterion == "sentence_ranking" - ), "Must set --criterion=sentence_ranking" - - # load data dictionary - data_dict = cls.load_dictionary( - args, - os.path.join(args.data, "input0", "dict.txt"), - source=True, - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - return SentenceRankingTask(args, data_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(type, split): - return os.path.join(self.args.data, type, split) - - def make_dataset(type, dictionary): - split_path = get_path(type, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - input_options = [ - make_dataset("input{idx}".format(idx=idx + 1), self.source_dictionary) - for idx in range(self.args.num_classes) - ] - - if self.args.separator_token is not None: - input0 = PrependTokenDataset(input0, self.args.separator_token) - - src_tokens = [] - for input_option in input_options: - if self.args.init_token is not None: - input_option = PrependTokenDataset(input_option, self.args.init_token) - if self.args.max_option_length is not None: - input_option = TruncateDataset( - input_option, self.args.max_option_length - ) - src_token = ConcatSentencesDataset(input_option, input0) - src_token = maybe_shorten_dataset( - src_token, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.max_positions, - self.args.seed, - ) - src_tokens.append(src_token) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(src_tokens[0])) - - dataset = { - "id": IdDataset(), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens[0], reduce=True), - } - - for src_token_idx in range(len(src_tokens)): - dataset.update( - { - "net_input{idx}".format(idx=src_token_idx + 1): { - "src_tokens": 
RightPadDataset( - src_tokens[src_token_idx], - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset( - src_tokens[src_token_idx], reduce=False - ), - } - } - ) - - label_path = "{}.label".format(get_path("label", split)) - if os.path.exists(label_path): - with open(label_path) as h: - dataset.update( - target=RawLabelDataset([int(x.strip()) for x in h.readlines()]) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])], - ) - - if self.args.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, args): - from fairseq import models - - model = models.build_model(args, self) - - model.register_classification_head( - getattr(args, "ranking_head_name", "sentence_classification_head"), - num_classes=1, - ) - - return model - - def max_positions(self): - return self.args.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/byte_level_bpe/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/byte_level_bpe/README.md deleted file mode 100644 index 657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/byte_level_bpe/README.md +++ /dev/null @@ -1,88 +0,0 @@ -# Neural Machine Translation with Byte-Level Subwords - -https://arxiv.org/abs/1909.03341 - -We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as -example. 
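-
-The core idea, sketched in the rough Python snippet below (an illustration only,
-not the tokenizer used in this example), is that text is first mapped to its
-UTF-8 byte sequence and BPE merges are then learned over byte symbols rather
-than characters, so any input can be segmented without out-of-vocabulary symbols.
-
-```python
-# Illustrative only: the byte-level symbols that BBPE merges are built from.
-text = "déjà vu"
-byte_symbols = [f"<{b:02x}>" for b in text.encode("utf-8")]
-print(byte_symbols)
-# ['<64>', '<c3>', '<a9>', '<6a>', '<c3>', '<a0>', '<20>', '<76>', '<75>']
-```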
- -## Data -Get data and generate fairseq binary dataset: -```bash -bash ./get_data.sh -``` - -## Model Training -Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`): -```bash -# VOCAB=bytes -# VOCAB=chars -VOCAB=bbpe2048 -# VOCAB=bpe2048 -# VOCAB=bbpe4096 -# VOCAB=bpe4096 -# VOCAB=bpe16384 -``` -```bash -fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \ - --batch-size 100 --max-update 100000 --update-freq 2 -``` - -## Generation -`fairseq-generate` requires bytes (BBPE) decoder to convert byte-level representation back to characters: -```bash -# BPE=--bpe bytes -# BPE=--bpe characters -BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model -# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model -``` - -```bash -fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \ - --tokenizer moses --moses-target-lang en ${BPE} -``` -When using `fairseq-interactive`, bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions: -```bash -fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \ - --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000 -``` - -## Results -| Vocabulary | Model | BLEU | -|:-------------:|:-------------:|:-------------:| -| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 | -| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) | -| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) | -| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) | -| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) | -| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) | -| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) | -| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) | - - -## Citation -``` -@misc{wang2019neural, - title={Neural Machine Translation with Byte-Level Subwords}, - author={Changhan Wang and Kyunghyun Cho and Jiatao Gu}, - year={2019}, - eprint={1909.03341}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - - -## Contact -Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)), -Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)), -Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com)) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multilingual/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multilingual/__init__.py deleted file mode 100644 index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000 --- 
a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/multilingual/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py deleted file mode 100644 index 7c257c2700f015cb123a976584aef72f0429eb0c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .discriminative_reranking_criterion import KLDivergenceRerankingCriterion - - -__all__ = [ - "KLDivergenceRerankingCriterion", -] diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/subsample_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/subsample_dataset.py deleted file mode 100644 index 48feaf883f87dc95f8637c24d3c96f3f9fd8bd1d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/subsample_dataset.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import numpy as np - -from . import BaseWrapperDataset - - -logger = logging.getLogger(__name__) - - -class SubsampleDataset(BaseWrapperDataset): - """Subsamples a given dataset by a specified ratio. Subsampling is done on the number of examples - - Args: - dataset (~torch.utils.data.Dataset): dataset to subsample - size_ratio(float): the ratio to subsample to. must be between 0 and 1 (exclusive) - """ - - def __init__(self, dataset, size_ratio, shuffle=False): - super().__init__(dataset) - assert size_ratio < 1 - self.actual_size = np.ceil(len(dataset) * size_ratio).astype(int) - self.indices = np.random.choice( - list(range(len(self.dataset))), self.actual_size, replace=False - ) - self.shuffle = shuffle - logger.info( - "subsampled dataset from {} to {} (ratio={})".format( - len(self.dataset), self.actual_size, size_ratio - ) - ) - - def __getitem__(self, index): - return self.dataset[self.indices[index]] - - def __len__(self): - return self.actual_size - - def collater(self, samples): - return self.dataset.collater(samples) - - @property - def sizes(self): - return self.dataset.sizes[self.indices] - - @property - def name(self): - return self.dataset.name - - def num_tokens(self, index): - return self.dataset.num_tokens(self.indices[index]) - - def size(self, index): - return self.dataset.size(self.indices[index]) - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - order.append(self.sizes) - return np.lexsort(order) - - def prefetch(self, indices): - self.dataset.prefetch(self.indices[indices]) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamic_crf_layer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamic_crf_layer.py deleted file mode 100644 index 8fcc6b8d2672d2eacc6d01b9688bac44d5e1ce26..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamic_crf_layer.py +++ /dev/null @@ -1,189 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -This file is to re-implemented the low-rank and beam approximation of CRF layer -Proposed by: - -Sun, Zhiqing, et al. -Fast Structured Decoding for Sequence Models -https://arxiv.org/abs/1910.11555 - -The CRF implementation is mainly borrowed from -https://github.com/kmkurn/pytorch-crf/blob/master/torchcrf/__init__.py - -""" - -import numpy as np -import torch -import torch.nn as nn - - -def logsumexp(x, dim=1): - return torch.logsumexp(x.float(), dim=dim).type_as(x) - - -class DynamicCRF(nn.Module): - """Dynamic CRF layer is used to approximate the traditional - Conditional Random Fields (CRF) - $P(y | x) = 1/Z(x) exp(sum_i s(y_i, x) + sum_i t(y_{i-1}, y_i, x))$ - - where in this function, we assume the emition scores (s) are given, - and the transition score is a |V| x |V| matrix $M$ - - in the following two aspects: - (1) it used a low-rank approximation for the transition matrix: - $M = E_1 E_2^T$ - (2) it used a beam to estimate the normalizing factor Z(x) - """ - - def __init__(self, num_embedding, low_rank=32, beam_size=64): - super().__init__() - - self.E1 = nn.Embedding(num_embedding, low_rank) - self.E2 = nn.Embedding(num_embedding, low_rank) - - self.vocb = num_embedding - self.rank = low_rank - self.beam = beam_size - - def extra_repr(self): - return "vocab_size={}, low_rank={}, beam_size={}".format( - self.vocb, self.rank, self.beam - ) - - def forward(self, emissions, targets, masks, beam=None): - """ - Compute the conditional log-likelihood of a sequence of target tokens given emission scores - - Args: - emissions (`~torch.Tensor`): Emission score are usually the unnormalized decoder output - ``(batch_size, seq_len, vocab_size)``. We assume batch-first - targets (`~torch.LongTensor`): Sequence of target token indices - ``(batch_size, seq_len) - masks (`~torch.ByteTensor`): Mask tensor with the same size as targets - - Returns: - `~torch.Tensor`: approximated log-likelihood - """ - numerator = self._compute_score(emissions, targets, masks) - denominator = self._compute_normalizer(emissions, targets, masks, beam) - return numerator - denominator - - def forward_decoder(self, emissions, masks=None, beam=None): - """ - Find the most likely output sequence using Viterbi algorithm. - - Args: - emissions (`~torch.Tensor`): Emission score are usually the unnormalized decoder output - ``(batch_size, seq_len, vocab_size)``. We assume batch-first - masks (`~torch.ByteTensor`): Mask tensor with the same size as targets - - Returns: - `~torch.LongTensor`: decoded sequence from the CRF model - """ - return self._viterbi_decode(emissions, masks, beam) - - def _compute_score(self, emissions, targets, masks=None): - batch_size, seq_len = targets.size() - emission_scores = emissions.gather(2, targets[:, :, None])[:, :, 0] # B x T - transition_scores = (self.E1(targets[:, :-1]) * self.E2(targets[:, 1:])).sum(2) - - scores = emission_scores - scores[:, 1:] += transition_scores - - if masks is not None: - scores = scores * masks.type_as(scores) - return scores.sum(-1) - - def _compute_normalizer(self, emissions, targets=None, masks=None, beam=None): - # HACK: we include "target" which is a hueristic for training - # HACK: we use a beam of tokens to approximate the normalizing factor (which is bad?) 
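-        # In other words: only the top-`beam` candidate tokens are kept per
-        # position (during training the gold target is forced into the beam by
-        # setting its emission score to +inf), pairwise transition scores
-        # between consecutive beam candidates are rebuilt from the low-rank
-        # factors E1/E2, and the normalizer is computed by a log-sum-exp
-        # forward recursion over this reduced lattice.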
- - beam = beam if beam is not None else self.beam - batch_size, seq_len = emissions.size()[:2] - if targets is not None: - _emissions = emissions.scatter(2, targets[:, :, None], np.float("inf")) - beam_targets = _emissions.topk(beam, 2)[1] - beam_emission_scores = emissions.gather(2, beam_targets) - else: - beam_emission_scores, beam_targets = emissions.topk(beam, 2) - beam_transition_score1 = self.E1(beam_targets[:, :-1]) # B x (T-1) x K x D - beam_transition_score2 = self.E2(beam_targets[:, 1:]) # B x (T-1) x K x D - beam_transition_matrix = torch.bmm( - beam_transition_score1.view(-1, beam, self.rank), - beam_transition_score2.view(-1, beam, self.rank).transpose(1, 2), - ) - beam_transition_matrix = beam_transition_matrix.view(batch_size, -1, beam, beam) - - # compute the normalizer in the log-space - score = beam_emission_scores[:, 0] # B x K - for i in range(1, seq_len): - next_score = score[:, :, None] + beam_transition_matrix[:, i - 1] - next_score = logsumexp(next_score, dim=1) + beam_emission_scores[:, i] - - if masks is not None: - score = torch.where(masks[:, i : i + 1], next_score, score) - else: - score = next_score - - # Sum (log-sum-exp) over all possible tags - return logsumexp(score, dim=1) - - def _viterbi_decode(self, emissions, masks=None, beam=None): - # HACK: we use a beam of tokens to approximate the normalizing factor (which is bad?) - - beam = beam if beam is not None else self.beam - batch_size, seq_len = emissions.size()[:2] - beam_emission_scores, beam_targets = emissions.topk(beam, 2) - beam_transition_score1 = self.E1(beam_targets[:, :-1]) # B x (T-1) x K x D - beam_transition_score2 = self.E2(beam_targets[:, 1:]) # B x (T-1) x K x D - beam_transition_matrix = torch.bmm( - beam_transition_score1.view(-1, beam, self.rank), - beam_transition_score2.view(-1, beam, self.rank).transpose(1, 2), - ) - beam_transition_matrix = beam_transition_matrix.view(batch_size, -1, beam, beam) - - traj_tokens, traj_scores = [], [] - finalized_tokens, finalized_scores = [], [] - - # compute the normalizer in the log-space - score = beam_emission_scores[:, 0] # B x K - dummy = ( - torch.arange(beam, device=score.device).expand(*score.size()).contiguous() - ) - - for i in range(1, seq_len): - traj_scores.append(score) - _score = score[:, :, None] + beam_transition_matrix[:, i - 1] - _score, _index = _score.max(dim=1) - _score = _score + beam_emission_scores[:, i] - - if masks is not None: - score = torch.where(masks[:, i : i + 1], _score, score) - index = torch.where(masks[:, i : i + 1], _index, dummy) - else: - score, index = _score, _index - traj_tokens.append(index) - - # now running the back-tracing and find the best - best_score, best_index = score.max(dim=1) - finalized_tokens.append(best_index[:, None]) - finalized_scores.append(best_score[:, None]) - - for idx, scs in zip(reversed(traj_tokens), reversed(traj_scores)): - previous_index = finalized_tokens[-1] - finalized_tokens.append(idx.gather(1, previous_index)) - finalized_scores.append(scs.gather(1, previous_index)) - - finalized_tokens.reverse() - finalized_tokens = torch.cat(finalized_tokens, 1) - finalized_tokens = beam_targets.gather(2, finalized_tokens[:, :, None])[:, :, 0] - - finalized_scores.reverse() - finalized_scores = torch.cat(finalized_scores, 1) - finalized_scores[:, 1:] = finalized_scores[:, 1:] - finalized_scores[:, :-1] - - return finalized_scores, finalized_tokens diff --git a/spaces/ORI-Muchim/MinamiTTS/app.py b/spaces/ORI-Muchim/MinamiTTS/app.py deleted file mode 100644 index 
1bf850720c7273f3b65d30a79f2f5c6c20437412..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/MinamiTTS/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import json -import os -import re - -import librosa -import numpy as np -import torch -from torch import no_grad, LongTensor -import commons -import utils -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from mel_processing import spectrogram_torch - -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - - -def get_text(text, hps, is_phoneme): - text_norm = text_to_sequence(text, hps.symbols, [] if is_phoneme else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, speed, is_phoneme): - if limitation: - text_len = len(text) - max_len = 700 - if is_phoneme: - max_len *= 3 - else: - if len(hps.data.text_cleaners) > 0 and hps.data.text_cleaners[0] == "zh_ja_mixture_cleaners": - text_len = len(re.sub("(\[ZH\]|\[JA\])", "", text)) - if text_len > max_len: - return "Error: Text is too long", None - - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, is_phoneme) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - sid = LongTensor([speaker_id]) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - - - - - -def create_to_phoneme_fn(hps): - def to_phoneme_fn(text): - return _clean_text(text, hps.data.text_cleaners) if text != "" else "" - - return to_phoneme_fn - - -css = """ - #advanced-btn { - color: white; - border-color: black; - background: black; - font-size: .7rem !important; - line-height: 19px; - margin-top: 24px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } -""" - -if __name__ == '__main__': - models_tts = [] - name = 'MinamiTTS' - lang = '日本語 (Japanese)' - example = 'こんにちは。' - config_path = f"saved_model/config.json" - model_path = f"saved_model/model.pth" - cover_path = f"saved_model/cover.jpg" - hps = utils.get_hparams_from_file(config_path) - model = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) - utils.load_checkpoint(model_path, model, None) - model.eval() - speaker_ids = [0] - speakers = [name] - - t = 'vits' - models_tts.append((name, cover_path, speakers, lang, example, - hps.symbols, create_tts_fn(model, hps, speaker_ids), - create_to_phoneme_fn(hps))) - - app = gr.Blocks(css=css) - - with app: - gr.Markdown("# Happiness Double Room MinamiTTS Using Vits Model\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ORI-Muchim.MinamiTTS)\n\n") - - for i, (name, cover_path, speakers, lang, example, symbols, tts_fn, - to_phoneme_fn) in enumerate(models_tts): - - with gr.Column(): - gr.Markdown(f"## {name}\n\n" - f"![cover](file/{cover_path})\n\n" - f"lang: {lang}") - tts_input1 = gr.TextArea(label="Text (700 words limitation)", value=example, - elem_id=f"tts-input{i}") - tts_input2 = gr.Dropdown(label="Speaker", choices=speakers, - type="index", 
value=speakers[0]) - tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.1, maximum=2, step=0.1) - with gr.Accordion(label="Advanced Options", open=False): - phoneme_input = gr.Checkbox(value=False, label="Phoneme input") - to_phoneme_btn = gr.Button("Covert text to phoneme") - phoneme_list = gr.Dataset(label="Phoneme list", components=[tts_input1], - samples=[[x] for x in symbols], - elem_id=f"phoneme-list{i}") - phoneme_list_json = gr.Json(value=symbols, visible=False) - tts_submit = gr.Button("Generate", variant="primary") - tts_output1 = gr.Textbox(label="Output Message") - tts_output2 = gr.Audio(label="Output Audio") - tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, phoneme_input], - [tts_output1, tts_output2]) - to_phoneme_btn.click(to_phoneme_fn, [tts_input1], [tts_input1]) - phoneme_list.click(None, [phoneme_list, phoneme_list_json], [], - _js=f""" - (i,phonemes) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#tts-input{i}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + phonemes[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + phonemes[i].length; - text_input.selectionEnd = startPos + phonemes[i].length; - text_input.blur(); - window.scrollTo(x, y); - return []; - }}""") - - app.queue(concurrency_count=3).launch(show_api=False) diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/bark/model.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/bark/model.py deleted file mode 100644 index 457b49e749f396c47c6b35f44955fd512d233d79..0000000000000000000000000000000000000000 --- a/spaces/OkamiFeng/Bark-with-Voice-Cloning/bark/model.py +++ /dev/null @@ -1,218 +0,0 @@ -""" -Much of this code is adapted from Andrej Karpathy's NanoGPT -(https://github.com/karpathy/nanoGPT) -""" -import math -from dataclasses import dataclass - -import torch -import torch.nn as nn -from torch.nn import functional as F - -class LayerNorm(nn.Module): - """ LayerNorm but with an optional bias. PyTorch doesn't support simply bias=False """ - - def __init__(self, ndim, bias): - super().__init__() - self.weight = nn.Parameter(torch.ones(ndim)) - self.bias = nn.Parameter(torch.zeros(ndim)) if bias else None - - def forward(self, input): - return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5) - -class CausalSelfAttention(nn.Module): - - def __init__(self, config): - super().__init__() - assert config.n_embd % config.n_head == 0 - # key, query, value projections for all heads, but in a batch - self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias) - # output projection - self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias) - # regularization - self.attn_dropout = nn.Dropout(config.dropout) - self.resid_dropout = nn.Dropout(config.dropout) - self.n_head = config.n_head - self.n_embd = config.n_embd - self.dropout = config.dropout - # flash attention make GPU go brrrrr but support is only in PyTorch nightly and still a bit scary - self.flash = hasattr(torch.nn.functional, 'scaled_dot_product_attention') - if not self.flash: - # print("WARNING: using slow attention. 
Flash Attention atm needs PyTorch nightly and dropout=0.0") - # causal mask to ensure that attention is only applied to the left in the input sequence - self.register_buffer("bias", torch.tril(torch.ones(config.block_size, config.block_size)) - .view(1, 1, config.block_size, config.block_size)) - - def forward(self, x, past_kv=None, use_cache=False): - B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd) - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - q, k ,v = self.c_attn(x).split(self.n_embd, dim=2) - k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) - - if past_kv is not None: - past_key = past_kv[0] - past_value = past_kv[1] - k = torch.cat((past_key, k), dim=-2) - v = torch.cat((past_value, v), dim=-2) - - FULL_T = k.shape[-2] - - if use_cache is True: - present = (k, v) - else: - present = None - - # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T) - if self.flash: - # efficient attention using Flash Attention CUDA kernels - if past_kv is not None: - # When `past_kv` is provided, we're doing incremental decoding and `q.shape[2] == 1`: q only contains - # the query for the last token. scaled_dot_product_attention interprets this as the first token in the - # sequence, so if is_causal=True it will mask out all attention from it. This is not what we want, so - # to work around this we set is_causal=False. - is_causal = False - else: - is_causal = True - - y = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=self.dropout, is_causal=is_causal) - else: - # manual implementation of attention - att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))) - att = att.masked_fill(self.bias[:,:,FULL_T-T:FULL_T,:FULL_T] == 0, float('-inf')) - att = F.softmax(att, dim=-1) - att = self.attn_dropout(att) - y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs) - y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side - - # output projection - y = self.resid_dropout(self.c_proj(y)) - return (y, present) - -class MLP(nn.Module): - - def __init__(self, config): - super().__init__() - self.c_fc = nn.Linear(config.n_embd, 4 * config.n_embd, bias=config.bias) - self.c_proj = nn.Linear(4 * config.n_embd, config.n_embd, bias=config.bias) - self.dropout = nn.Dropout(config.dropout) - self.gelu = nn.GELU() - - def forward(self, x): - x = self.c_fc(x) - x = self.gelu(x) - x = self.c_proj(x) - x = self.dropout(x) - return x - -class Block(nn.Module): - - def __init__(self, config, layer_idx): - super().__init__() - self.ln_1 = LayerNorm(config.n_embd, bias=config.bias) - self.attn = CausalSelfAttention(config) - self.ln_2 = LayerNorm(config.n_embd, bias=config.bias) - self.mlp = MLP(config) - self.layer_idx = layer_idx - - def forward(self, x, past_kv=None, use_cache=False): - attn_output, prev_kvs = self.attn(self.ln_1(x), past_kv=past_kv, use_cache=use_cache) - x = x + attn_output - x = x + self.mlp(self.ln_2(x)) - return (x, prev_kvs) - -@dataclass -class GPTConfig: - block_size: int = 1024 - input_vocab_size: int = 10_048 - output_vocab_size: int = 10_048 - n_layer: int = 12 - n_head: int = 12 - n_embd: int = 768 - dropout: float = 0.0 - bias: bool = True # True: bias in Linears and LayerNorms, like GPT-2. 
False: a bit better and faster - -class GPT(nn.Module): - - def __init__(self, config): - super().__init__() - assert config.input_vocab_size is not None - assert config.output_vocab_size is not None - assert config.block_size is not None - self.config = config - - self.transformer = nn.ModuleDict(dict( - wte = nn.Embedding(config.input_vocab_size, config.n_embd), - wpe = nn.Embedding(config.block_size, config.n_embd), - drop = nn.Dropout(config.dropout), - h = nn.ModuleList([Block(config, idx) for idx in range(config.n_layer)]), - ln_f = LayerNorm(config.n_embd, bias=config.bias), - )) - self.lm_head = nn.Linear(config.n_embd, config.output_vocab_size, bias=False) - - def get_num_params(self, non_embedding=True): - """ - Return the number of parameters in the model. - For non-embedding count (default), the position embeddings get subtracted. - The token embeddings would too, except due to the parameter sharing these - params are actually used as weights in the final layer, so we include them. - """ - n_params = sum(p.numel() for p in self.parameters()) - if non_embedding: - n_params -= self.transformer.wte.weight.numel() - n_params -= self.transformer.wpe.weight.numel() - return n_params - - def forward(self, idx, merge_context=False, past_kv=None, position_ids=None, use_cache=False): - device = idx.device - b, t = idx.size() - if past_kv is not None: - assert t == 1 - tok_emb = self.transformer.wte(idx) # token embeddings of shape (b, t, n_embd) - else: - if merge_context: - assert(idx.shape[1] >= 256+256+1) - t = idx.shape[1] - 256 - else: - assert t <= self.config.block_size, f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}" - - # forward the GPT model itself - if merge_context: - tok_emb = torch.cat([ - self.transformer.wte(idx[:,:256]) + self.transformer.wte(idx[:,256:256+256]), - self.transformer.wte(idx[:,256+256:]) - ], dim=1) - else: - tok_emb = self.transformer.wte(idx) # token embeddings of shape (b, t, n_embd) - - if past_kv is None: - past_length = 0 - past_kv = tuple([None] * len(self.transformer.h)) - else: - past_length = past_kv[0][0].size(-2) - - if position_ids is None: - position_ids = torch.arange(past_length, t + past_length, dtype=torch.long, device=device) - position_ids = position_ids.unsqueeze(0) # shape (1, t) - assert position_ids.shape == (1, t) - - pos_emb = self.transformer.wpe(position_ids) # position embeddings of shape (1, t, n_embd) - - x = self.transformer.drop(tok_emb + pos_emb) - - new_kv = () if use_cache else None - - for i, (block, past_layer_kv) in enumerate(zip(self.transformer.h, past_kv)): - x, kv = block(x, past_kv=past_layer_kv, use_cache=use_cache) - - if use_cache: - new_kv = new_kv + (kv,) - - x = self.transformer.ln_f(x) - - # inference-time mini-optimization: only forward the lm_head on the very last position - logits = self.lm_head(x[:, [-1], :]) # note: using list [-1] to preserve the time dim - - return (logits, new_kv) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/configs.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/configs.md deleted file mode 100644 index 751e4eb638baeae0e8ff5c65869163a1d64e6b66..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/configs.md +++ /dev/null @@ -1,62 +0,0 @@ -# Yacs Configs - -Detectron2 provides a key-value based config system that can be -used to obtain standard, common behaviors. 
- -This system uses YAML and [yacs](https://github.com/rbgirshick/yacs). -Yaml is a very limited language, -so we do not expect all features in detectron2 to be available through configs. -If you need something that's not available in the config space, -please write code using detectron2's API. - -With the introduction of a more powerful [LazyConfig system](lazyconfigs.md), -we no longer add functionality / new keys to the Yacs/Yaml-based config system. - -### Basic Usage - -Some basic usage of the `CfgNode` object is shown here. See more in [documentation](../modules/config.html#detectron2.config.CfgNode). -```python -from detectron2.config import get_cfg -cfg = get_cfg() # obtain detectron2's default config -cfg.xxx = yyy # add new configs for your own custom components -cfg.merge_from_file("my_cfg.yaml") # load values from a file - -cfg.merge_from_list(["MODEL.WEIGHTS", "weights.pth"]) # can also load values from a list of str -print(cfg.dump()) # print formatted configs -with open("output.yaml", "w") as f: - f.write(cfg.dump()) # save config to file -``` - -In addition to the basic Yaml syntax, the config file can -define a `_BASE_: base.yaml` field, which will load a base config file first. -Values in the base config will be overwritten in sub-configs, if there are any conflicts. -We provided several base configs for standard model architectures. - -Many builtin tools in detectron2 accept command line config overwrite: -Key-value pairs provided in the command line will overwrite the existing values in the config file. -For example, [demo.py](../../demo/demo.py) can be used with -``` -./demo.py --config-file config.yaml [--other-options] \ - --opts MODEL.WEIGHTS /path/to/weights INPUT.MIN_SIZE_TEST 1000 -``` - -To see a list of available configs in detectron2 and what they mean, -check [Config References](../modules/config.html#config-references) - -### Configs in Projects - -A project that lives outside the detectron2 library may define its own configs, which will need to be added -for the project to be functional, e.g.: -```python -from detectron2.projects.point_rend import add_pointrend_config -cfg = get_cfg() # obtain detectron2's default config -add_pointrend_config(cfg) # add pointrend's default config -# ... ... -``` - -### Best Practice with Configs - -1. Treat the configs you write as "code": avoid copying them or duplicating them; use `_BASE_` - to share common parts between configs. - -2. Keep the configs you write simple: don't include keys that do not affect the experimental setting. 
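
As a minimal sketch of this precedence chain (base file, then sub-config, then command-line overrides), assuming detectron2 is installed and using hypothetical `base.yaml` / `my_cfg.yaml` files whose contents are given only in the comments:

```python
# Hypothetical files (not shipped with detectron2):
#   base.yaml:    SOLVER: {BASE_LR: 0.02}
#   my_cfg.yaml:  _BASE_: "base.yaml"
#                 SOLVER: {BASE_LR: 0.01}
from detectron2.config import get_cfg

cfg = get_cfg()                                    # start from detectron2's default config
cfg.merge_from_file("my_cfg.yaml")                 # _BASE_ is merged first, then my_cfg.yaml overrides it (0.01)
cfg.merge_from_list(["SOLVER.BASE_LR", "0.001"])   # command-line style overrides are applied last
print(cfg.SOLVER.BASE_LR)                          # 0.001
```
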
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/sampler.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/sampler.py deleted file mode 100644 index 7aa8d853f5867974aceae5b50ac9ae4b99f1e686..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/sampler.py +++ /dev/null @@ -1,15 +0,0 @@ -import numpy as np - -def get_frameidx(*, mode, nframes, exact_frame, frames_to_keep): - if mode == "sequence": - frameidx = np.linspace(0, nframes - 1, frames_to_keep) - frameidx = np.round(frameidx).astype(int) - frameidx = list(frameidx) - elif mode == "frame": - index_frame = int(exact_frame*nframes) - frameidx = [index_frame] - elif mode == "video": - frameidx = range(0, nframes) - else: - raise ValueError(f"Not support {mode} render mode") - return frameidx diff --git a/spaces/OptimalScale/Robin-7b/lmflow/models/interfaces/__init__.py b/spaces/OptimalScale/Robin-7b/lmflow/models/interfaces/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/README.md b/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/README.md deleted file mode 100644 index 428dd6620539ac90bd20f2fd1d264df391b5425d..0000000000000000000000000000000000000000 --- a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/composite_demo/README.md +++ /dev/null @@ -1,85 +0,0 @@ -# ChatGLM3 Web Demo - -![Demo webpage](assets/demo.png) - -## Installation - -We recommend managing the environment with [Conda](https://docs.conda.io/en/latest/). - -Run the following commands to create a new conda environment and install the required dependencies: - -```bash -conda create -n chatglm3-demo python=3.10 -conda activate chatglm3-demo -pip install -r requirements.txt -``` - -Please note that this project requires Python 3.10 or higher. - -In addition, using the Code Interpreter also requires installing a Jupyter kernel: - -```bash -ipython kernel install --name chatglm3-demo --user -``` - -## Running - -Run the following command to load the model locally and start the demo: - -```bash -streamlit run main.py -``` - -The demo's address is then shown on the command line; click it to open the page. The first visit needs to download and load the model, which may take some time. - -If the model has already been downloaded locally, you can load it from a local path with `export MODEL_PATH=/path/to/model`. To use a custom Jupyter kernel, specify it with `export IPYKERNEL=`. - -## Usage - -The ChatGLM3 demo has three modes: - -- Chat: conversation mode, in which you can chat with the model. -- Tool: tool mode, in which the model can call tools to perform operations beyond plain conversation. -- Code Interpreter: code-interpreter mode, in which the model can execute code in a Jupyter environment and read the results to complete complex tasks. - -### Chat mode - -In chat mode, users can adjust the model's behavior by editing top_p, temperature, System Prompt and other parameters directly in the sidebar. For example - -![The model responses following system prompt](assets/emojis.png) - -### Tool mode - -You can extend the model's abilities by registering new tools in `tool_registry.py`. A tool is registered simply by decorating a function with `@register_tool`. For the tool declaration, the function name is the tool's name and the function docstring is its description; for the tool's parameters, use `Annotated[typ: type, description: str, required: bool]` to annotate each parameter's type, description, and whether it is required. - -For example, the `get_weather` tool is registered as follows: - -```python -@register_tool -def get_weather( - city_name: Annotated[str, 'The name of the city to be queried', True], -) -> str: - """ - Get the weather for `city_name` in the following week - """ - ... -``` - -![The model uses tool to query the weather of pairs.](assets/tool.png) - -You can also enter manual mode through `Manual mode` on the page; in this mode you specify the tool list directly in YAML, but you have to feed the tools' outputs back to the model yourself. - -### Code Interpreter mode - -Because it has a code-execution environment, the model in this mode can carry out more complex tasks, such as drawing charts or performing symbolic computation. Based on its understanding of how the task is progressing, the model automatically executes several code blocks in a row until the task is finished, so in this mode you only need to state the task you want the model to perform. - -For example, we can ask ChatGLM3 to draw a heart: - -![The code interpreter draws a heart according to the user's instructions.](assets/heart.png) - -### Extra tips - -- While the model is generating text, you can interrupt it with the `Stop` button in the top-right corner of the page. -- Refreshing the page clears the conversation history. - -# Enjoy! 
\ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/fpn.py deleted file mode 100644 index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/fpn.py +++ /dev/null @@ -1,212 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init - -from ..builder import NECKS - - -@NECKS.register_module() -class FPN(nn.Module): - """Feature Pyramid Network. - - This is an implementation of - Feature Pyramid Networks for Object - Detection (https://arxiv.org/abs/1612.03144) - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool | str): If bool, it decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - If str, it specifies the source feature map of the extra convs. - Only the following options are allowed - - - 'on_input': Last feat map of neck inputs (i.e. backbone feature). - - 'on_lateral': Last feature map after lateral convs. - - 'on_output': The last output feature map after fpn convs. - extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs - on the original feature from the backbone. If True, - it is equivalent to `add_extra_convs='on_input'`. If False, it is - equivalent to set `add_extra_convs='on_output'`. Default to True. - relu_before_extra_convs (bool): Whether to apply relu before the extra - conv. Default: False. - no_norm_on_lateral (bool): Whether to apply norm on lateral. - Default: False. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (str): Config dict for activation layer in ConvModule. - Default: None. - upsample_cfg (dict): Config dict for interpolate layer. - Default: `dict(mode='nearest')` - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = FPN(in_channels, 11, len(in_channels)).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... 
print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=0, - end_level=-1, - add_extra_convs=False, - extra_convs_on_inputs=False, - relu_before_extra_convs=False, - no_norm_on_lateral=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None, - upsample_cfg=dict(mode='nearest')): - super(FPN, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.relu_before_extra_convs = relu_before_extra_convs - self.no_norm_on_lateral = no_norm_on_lateral - self.fp16_enabled = False - self.upsample_cfg = upsample_cfg.copy() - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - assert isinstance(add_extra_convs, (str, bool)) - if isinstance(add_extra_convs, str): - # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output' - assert add_extra_convs in ('on_input', 'on_lateral', 'on_output') - elif add_extra_convs: # True - if extra_convs_on_inputs: - # For compatibility with previous release - # TODO: deprecate `extra_convs_on_inputs` - self.add_extra_convs = 'on_input' - else: - self.add_extra_convs = 'on_output' - - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - - for i in range(self.start_level, self.backbone_end_level): - l_conv = ConvModule( - in_channels[i], - out_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg if not self.no_norm_on_lateral else None, - act_cfg=act_cfg, - inplace=False) - fpn_conv = ConvModule( - out_channels, - out_channels, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - # add extra conv layers (e.g., RetinaNet) - extra_levels = num_outs - self.backbone_end_level + self.start_level - if self.add_extra_convs and extra_levels >= 1: - for i in range(extra_levels): - if i == 0 and self.add_extra_convs == 'on_input': - in_channels = self.in_channels[self.backbone_end_level - 1] - else: - in_channels = out_channels - extra_fpn_conv = ConvModule( - in_channels, - out_channels, - 3, - stride=2, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - inplace=False) - self.fpn_convs.append(extra_fpn_conv) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build laterals - laterals = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - # In some cases, fixing `scale factor` (e.g. 2) is preferred, but - # it cannot co-exist with `size` in `F.interpolate`. 
- if 'scale_factor' in self.upsample_cfg: - laterals[i - 1] += F.interpolate(laterals[i], - **self.upsample_cfg) - else: - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += F.interpolate( - laterals[i], size=prev_shape, **self.upsample_cfg) - - # build outputs - # part 1: from original levels - outs = [ - self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels) - ] - # part 2: add extra levels - if self.num_outs > len(outs): - # use max pool to get more levels on top of outputs - # (e.g., Faster R-CNN, Mask R-CNN) - if not self.add_extra_convs: - for i in range(self.num_outs - used_backbone_levels): - outs.append(F.max_pool2d(outs[-1], 1, stride=2)) - # add conv layers on top of original feature maps (RetinaNet) - else: - if self.add_extra_convs == 'on_input': - extra_source = inputs[self.backbone_end_level - 1] - elif self.add_extra_convs == 'on_lateral': - extra_source = laterals[-1] - elif self.add_extra_convs == 'on_output': - extra_source = outs[-1] - else: - raise NotImplementedError - outs.append(self.fpn_convs[used_backbone_levels](extra_source)) - for i in range(used_backbone_levels + 1, self.num_outs): - if self.relu_before_extra_convs: - outs.append(self.fpn_convs[i](F.relu(outs[-1]))) - else: - outs.append(self.fpn_convs[i](outs[-1])) - return tuple(outs) diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/vtoonify.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/vtoonify.py deleted file mode 100644 index 6556a0a6c734be5f413f4683eb63c44f449c6af8..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/VToonify/vtoonify/model/vtoonify.py +++ /dev/null @@ -1,286 +0,0 @@ -import torch -import numpy as np -import math -from torch import nn -from model.stylegan.model import ConvLayer, EqualLinear, Generator, ResBlock -from model.dualstylegan import AdaptiveInstanceNorm, AdaResBlock, DualStyleGAN -import torch.nn.functional as F - -# IC-GAN: stylegan discriminator -class ConditionalDiscriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], use_condition=False, style_num=None): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - self.use_condition = use_condition - - if self.use_condition: - self.condition_dim = 128 - # map style degree to 64-dimensional vector - self.label_mapper = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, self.condition_dim//2), - ) - # map style code index to 64-dimensional vector - self.style_mapper = nn.Embedding(style_num, self.condition_dim-self.condition_dim//2) - else: - self.condition_dim = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], self.condition_dim), - ) - - def forward(self, input, degree_label=None, style_ind=None): - out = 
self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - out = out.view(batch, -1) - - if self.use_condition: - h = self.final_linear(out) - condition = torch.cat((self.label_mapper(degree_label), self.style_mapper(style_ind)), dim=1) - out = (h * condition).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.condition_dim)) - else: - out = self.final_linear(out) - - return out - - -class VToonifyResBlock(nn.Module): - def __init__(self, fin): - super().__init__() - - self.conv = nn.Conv2d(fin, fin, 3, 1, 1) - self.conv2 = nn.Conv2d(fin, fin, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - out = self.lrelu(self.conv(x)) - out = self.lrelu(self.conv2(out)) - out = (out + x) / math.sqrt(2) - return out - -class Fusion(nn.Module): - def __init__(self, in_channels, skip_channels, out_channels): - super().__init__() - - # create conv layers - self.conv = nn.Conv2d(in_channels + skip_channels, out_channels, 3, 1, 1, bias=True) - self.norm = AdaptiveInstanceNorm(in_channels + skip_channels, 128) - self.conv2 = nn.Conv2d(in_channels + skip_channels, 1, 3, 1, 1, bias=True) - #''' - self.linear = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 128), - nn.LeakyReLU(negative_slope=0.2, inplace=True) - ) - - def forward(self, f_G, f_E, d_s=1): - # label of style degree - label = self.linear(torch.zeros(f_G.size(0),1).to(f_G.device) + d_s) - out = torch.cat([f_G, abs(f_G-f_E)], dim=1) - m_E = (F.relu(self.conv2(self.norm(out, label)))).tanh() - f_out = self.conv(torch.cat([f_G, f_E * m_E], dim=1)) - return f_out, m_E - -class VToonify(nn.Module): - def __init__(self, - in_size=256, - out_size=1024, - img_channels=3, - style_channels=512, - num_mlps=8, - channel_multiplier=2, - num_res_layers=6, - backbone = 'dualstylegan', - ): - - super().__init__() - - self.backbone = backbone - if self.backbone == 'dualstylegan': - # DualStyleGAN, with weights being fixed - self.generator = DualStyleGAN(out_size, style_channels, num_mlps, channel_multiplier) - else: - # StyleGANv2, with weights being fixed - self.generator = Generator(out_size, style_channels, num_mlps, channel_multiplier) - - self.in_size = in_size - self.style_channels = style_channels - channels = self.generator.channels - - # encoder - num_styles = int(np.log2(out_size)) * 2 - 2 - encoder_res = [2**i for i in range(int(np.log2(in_size)), 4, -1)] - self.encoder = nn.ModuleList() - self.encoder.append( - nn.Sequential( - nn.Conv2d(img_channels+19, 32, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(32, channels[in_size], 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True))) - - for res in encoder_res: - in_channels = channels[res] - if res > 32: - out_channels = channels[res // 2] - block = nn.Sequential( - nn.Conv2d(in_channels, out_channels, 3, 2, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True)) - self.encoder.append(block) - else: - layers = [] - for _ in range(num_res_layers): - 
layers.append(VToonifyResBlock(in_channels)) - self.encoder.append(nn.Sequential(*layers)) - block = nn.Conv2d(in_channels, img_channels, 1, 1, 0, bias=True) - self.encoder.append(block) - - # trainable fusion module - self.fusion_out = nn.ModuleList() - self.fusion_skip = nn.ModuleList() - for res in encoder_res[::-1]: - num_channels = channels[res] - if self.backbone == 'dualstylegan': - self.fusion_out.append( - Fusion(num_channels, num_channels, num_channels)) - else: - self.fusion_out.append( - nn.Conv2d(num_channels * 2, num_channels, 3, 1, 1, bias=True)) - - self.fusion_skip.append( - nn.Conv2d(num_channels + 3, 3, 3, 1, 1, bias=True)) - - # Modified ModRes blocks in DualStyleGAN, with weights being fixed - if self.backbone == 'dualstylegan': - self.res = nn.ModuleList() - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1, no use in this model - for i in range(3, 6): - out_channel = self.generator.channels[2 ** i] - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - - - def forward(self, x, style, d_s=None, return_mask=False, return_feat=False): - # map style to W+ space - if style is not None and style.ndim < 3: - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = style.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - elif style is not None: - nB, nL, nD = style.shape - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = style - if self.backbone == 'dualstylegan': - adastyles = adastyles.clone() - for i in range(7, self.generator.n_latent): - adastyles[:, i] = self.generator.res[i](adastyles[:, i]) - - # obtain multi-scale content features - feat = x - encoder_features = [] - # downsampling conv parts of E - for block in self.encoder[:-2]: - feat = block(feat) - encoder_features.append(feat) - encoder_features = encoder_features[::-1] - # Resblocks in E - for ii, block in enumerate(self.encoder[-2]): - feat = block(feat) - # adjust Resblocks with ModRes blocks - if self.backbone == 'dualstylegan': - feat = self.res[ii+1](feat, resstyles[:, ii+1], d_s) - # the last-layer feature of E (inputs of backbone) - out = feat - skip = self.encoder[-1](feat) - if return_feat: - return out, skip - - # 32x32 ---> higher res - _index = 1 - m_Es = [] - for conv1, conv2, to_rgb in zip( - self.stylegan().convs[6::2], self.stylegan().convs[7::2], self.stylegan().to_rgbs[3:]): - - # pass the mid-layer features of E to the corresponding resolution layers of G - if 2 ** (5+((_index-1)//2)) <= self.in_size: - fusion_index = (_index - 1) // 2 - f_E = encoder_features[fusion_index] - - if self.backbone == 'dualstylegan': - out, m_E = self.fusion_out[fusion_index](out, f_E, d_s) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E*m_E], dim=1)) - m_Es += [m_E] - else: - out = self.fusion_out[fusion_index](torch.cat([out, f_E], dim=1)) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E], dim=1)) - - # remove the noise input - batch, _, height, width = out.shape - noise = x.new_empty(batch, 1, height * 2, width * 2).normal_().detach() * 0.0 - - out = conv1(out, adastyles[:, _index+6], noise=noise) - out = conv2(out, adastyles[:, _index+7], noise=noise) - skip = to_rgb(out, adastyles[:, _index+8], skip) - _index += 2 - - image = skip - if return_mask and self.backbone == 'dualstylegan': - return image, m_Es - return image 
- - def stylegan(self): - if self.backbone == 'dualstylegan': - return self.generator.generator - else: - return self.generator - - def zplus2wplus(self, zplus): - return self.stylegan().style(zplus.reshape(zplus.shape[0]*zplus.shape[1], zplus.shape[2])).reshape(zplus.shape) \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/canonicalize.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/canonicalize.go deleted file mode 100644 index a0a0b4d013faad882874a2ae61ae51534e6d36fd..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/canonicalize.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/paper-system.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/paper-system.go deleted file mode 100644 index 81903c0e3f03a3378737e90bee49d084074a963a..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/paper-system.go and /dev/null differ diff --git a/spaces/Pennywise881/wiki-chat/QuestionAnswer.py b/spaces/Pennywise881/wiki-chat/QuestionAnswer.py deleted file mode 100644 index f27742aed5b5edba85a9a6a0d9c5f0038747afc2..0000000000000000000000000000000000000000 --- a/spaces/Pennywise881/wiki-chat/QuestionAnswer.py +++ /dev/null @@ -1,129 +0,0 @@ -import torch -import numpy as np -# # from transformers import AutoTokenizer, AutoModelForQuestionAnswering - - -class QuestionAnswer: - - def __init__(self, data, model, tokenizer, torch_device): - - self.max_length = 384 - self.doc_stride = 128 - - self.tokenizer = tokenizer - self.model = model - self.data = data - self.torch_device = torch_device - - self.output = None - self.features = None - self.results = None - - def get_output_from_model(self): - # data = {'question': question, 'context': context} - - with torch.no_grad(): - tokenized_data = self.tokenizer( - self.data['question'], - self.data['context'], - truncation='only_second', - max_length=self.max_length, - stride=self.doc_stride, - return_overflowing_tokens=True, - return_offsets_mapping=True, - padding='max_length', - return_tensors='pt' - ).to(self.torch_device) - - output = self.model(tokenized_data['input_ids'], tokenized_data['attention_mask']) - - return output - - # print(output.keys()) - # print(output['start_logits'].shape) - # print(output['end_logits'].shape) - # print(tokenized_data.keys()) - - def prepare_features(self, example): - tokenized_example = self.tokenizer( - example['question'], - example['context'], - truncation='only_second', - max_length=self.max_length, - stride=self.doc_stride, - return_overflowing_tokens=True, - return_offsets_mapping=True, - padding='max_length', - ) - - # sample_mapping = tokenized_example.pop("overflow_to_sample_mapping") - - for i in range(len(tokenized_example['input_ids'])): - sequence_ids = tokenized_example.sequence_ids(i) - # print(sequence_ids) - context_index = 1 - - # sample_index = sample_mapping[i] - - tokenized_example["offset_mapping"][i] = [ - (o if sequence_ids[k] == context_index else None) - for k, o in enumerate(tokenized_example["offset_mapping"][i]) - ] - - return tokenized_example - - def postprocess_qa_predictions(self, data, features, raw_predictions, top_n_answers=5, max_answer_length=30): - all_start_logits, all_end_logits = 
raw_predictions.start_logits, raw_predictions.end_logits - - # print(all_start_logits) - - results = [] - context = data['context'] - - # print(len(features['input_ids'])) - for i in range(len(features['input_ids'])): - start_logits = all_start_logits[i].cpu().numpy() - end_logits = all_end_logits[i].cpu().numpy() - - # print(start_logits) - - offset_mapping = features['offset_mapping'][i] - - start_indices = np.argsort(start_logits)[-1: -top_n_answers - 1: -1].tolist() - end_indices = np.argsort(end_logits)[-1: -top_n_answers - 1: -1].tolist() - - for start_index in start_indices: - for end_index in end_indices: - if ( - start_index >= len(offset_mapping) - or end_index >= len(offset_mapping) - or offset_mapping[start_index] is None - or offset_mapping[end_index] is None - or end_index < start_index - or end_index - start_index + 1 > max_answer_length - ): - continue - - start_char = offset_mapping[start_index][0] - end_char = offset_mapping[end_index][1] - - # print(start_logits[start_index]) - # print(end_logits[end_index]) - score = start_logits[start_index] + end_logits[end_index] - results.append( - { - 'score': float('%.*g' % (3, score)), - 'text': context[start_char: end_char] - } - ) - - results = sorted(results, key=lambda x: x["score"], reverse=True)[:top_n_answers] - return results - - - def get_results(self): - self.output = self.get_output_from_model() - self.features = self.prepare_features(self.data) - self.results = self.postprocess_qa_predictions(self.data, self.features, self.output) - - return self.results \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/fpn_head.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/fpn_head.py deleted file mode 100644 index 1241c55b0813d1ecdddf1e66e7c5031fbf78ed50..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/fpn_head.py +++ /dev/null @@ -1,68 +0,0 @@ -import numpy as np -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FPNHead(BaseDecodeHead): - """Panoptic Feature Pyramid Networks. - - This head is the implementation of `Semantic FPN - `_. - - Args: - feature_strides (tuple[int]): The strides for input feature maps. - stack_lateral. All strides suppose to be power of 2. The first - one is of largest resolution. 
- """ - - def __init__(self, feature_strides, **kwargs): - super(FPNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(feature_strides) == len(self.in_channels) - assert min(feature_strides) == feature_strides[0] - self.feature_strides = feature_strides - - self.scale_heads = nn.ModuleList() - for i in range(len(feature_strides)): - head_length = max( - 1, - int(np.log2(feature_strides[i]) - np.log2(feature_strides[0]))) - scale_head = [] - for k in range(head_length): - scale_head.append( - ConvModule( - self.in_channels[i] if k == 0 else self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if feature_strides[i] != feature_strides[0]: - scale_head.append( - nn.Upsample( - scale_factor=2, - mode='bilinear', - align_corners=self.align_corners)) - self.scale_heads.append(nn.Sequential(*scale_head)) - - def forward(self, inputs): - - x = self._transform_inputs(inputs) - - output = self.scale_heads[0](x[0]) - for i in range(1, len(self.feature_strides)): - # non inplace - output = output + resize( - self.scale_heads[i](x[i]), - size=output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - - output = self.cls_seg(output) - return output diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/dyrelu.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/dyrelu.py deleted file mode 100644 index 3170a9efedfa05988242e04d2c204992a2dcd3f8..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/layers/dyrelu.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -def _make_divisible(v, divisor, min_value=None): - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. 
- if new_v < 0.9 * v: - new_v += divisor - return new_v - - -class swish(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class h_swish(nn.Module): - def __init__(self, inplace=False): - super(h_swish, self).__init__() - self.inplace = inplace - - def forward(self, x): - return x * F.relu6(x + 3.0, inplace=self.inplace) / 6.0 - - -class h_sigmoid(nn.Module): - def __init__(self, inplace=True, h_max=1): - super(h_sigmoid, self).__init__() - self.relu = nn.ReLU6(inplace=inplace) - self.h_max = h_max - - def forward(self, x): - return self.relu(x + 3) * self.h_max / 6 - - -class DYReLU(nn.Module): - def __init__(self, inp, oup, reduction=4, lambda_a=1.0, K2=True, use_bias=True, use_spatial=False, - init_a=[1.0, 0.0], init_b=[0.0, 0.0]): - super(DYReLU, self).__init__() - self.oup = oup - self.lambda_a = lambda_a * 2 - self.K2 = K2 - self.avg_pool = nn.AdaptiveAvgPool2d(1) - - self.use_bias = use_bias - if K2: - self.exp = 4 if use_bias else 2 - else: - self.exp = 2 if use_bias else 1 - self.init_a = init_a - self.init_b = init_b - - # determine squeeze - if reduction == 4: - squeeze = inp // reduction - else: - squeeze = _make_divisible(inp // reduction, 4) - # print('reduction: {}, squeeze: {}/{}'.format(reduction, inp, squeeze)) - # print('init_a: {}, init_b: {}'.format(self.init_a, self.init_b)) - - self.fc = nn.Sequential( - nn.Linear(inp, squeeze), - nn.ReLU(inplace=True), - nn.Linear(squeeze, oup * self.exp), - h_sigmoid() - ) - if use_spatial: - self.spa = nn.Sequential( - nn.Conv2d(inp, 1, kernel_size=1), - nn.BatchNorm2d(1), - ) - else: - self.spa = None - - def forward(self, x): - if isinstance(x, list): - x_in = x[0] - x_out = x[1] - else: - x_in = x - x_out = x - b, c, h, w = x_in.size() - y = self.avg_pool(x_in).view(b, c) - y = self.fc(y).view(b, self.oup * self.exp, 1, 1) - if self.exp == 4: - a1, b1, a2, b2 = torch.split(y, self.oup, dim=1) - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - a2 = (a2 - 0.5) * self.lambda_a + self.init_a[1] - - b1 = b1 - 0.5 + self.init_b[0] - b2 = b2 - 0.5 + self.init_b[1] - out = torch.max(x_out * a1 + b1, x_out * a2 + b2) - elif self.exp == 2: - if self.use_bias: # bias but not PL - a1, b1 = torch.split(y, self.oup, dim=1) - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - b1 = b1 - 0.5 + self.init_b[0] - out = x_out * a1 + b1 - - else: - a1, a2 = torch.split(y, self.oup, dim=1) - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - a2 = (a2 - 0.5) * self.lambda_a + self.init_a[1] - out = torch.max(x_out * a1, x_out * a2) - - elif self.exp == 1: - a1 = y - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - out = x_out * a1 - - if self.spa: - ys = self.spa(x_in).view(b, -1) - ys = F.softmax(ys, dim=1).view(b, 1, h, w) * h * w - ys = F.hardtanh(ys, 0, 3, inplace=True)/3 - out = out * ys - - return out diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/miscellaneous.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/miscellaneous.py deleted file mode 100644 index 08d9a5b0090241cb57df2e1360354249efbab293..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/miscellaneous.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import errno -import os -from .comm import is_main_process - -def mkdir(path): - try: - os.makedirs(path) - except OSError as e: - if e.errno != errno.EEXIST: - raise - - -def save_config(cfg, path): - if is_main_process(): - with open(path, 'w') as f: - f.write(cfg.dump()) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/__init__.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/__init__.py deleted file mode 100644 index ae19f3a8c51abf469697d6affa91449d668716ba..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Solvers. A Solver is a training recipe, combining the dataloaders, models, -optimizer, losses etc into a single convenient object. -""" - -# flake8: noqa -from .audiogen import AudioGenSolver -from .builders import get_solver -from .base import StandardSolver -from .compression import CompressionSolver -from .musicgen import MusicGenSolver -from .diffusion import DiffusionSolver diff --git a/spaces/Purple11/Grounded-Diffusion/src/CLIP/clip/model.py b/spaces/Purple11/Grounded-Diffusion/src/CLIP/clip/model.py deleted file mode 100644 index 232b7792eb97440642547bd462cf128df9243933..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/CLIP/clip/model.py +++ /dev/null @@ -1,436 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. 
an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.relu1 = nn.ReLU(inplace=True) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.relu2 = nn.ReLU(inplace=True) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu3 = nn.ReLU(inplace=True) - - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu1(self.bn1(self.conv1(x))) - out = self.relu2(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu3(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.flatten(start_dim=2).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x[:1], key=x, value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - return x.squeeze(0) - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. 
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.relu3 = nn.ReLU(inplace=True) - self.avgpool = nn.AvgPool2d(2) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - def stem(x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - - def forward(self, x: torch.Tensor): - return 
self.resblocks(x) - - -class VisionTransformer(nn.Module): - def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int - ): - super().__init__() - - self.context_length = context_length - - if isinstance(vision_layers, (tuple, list)): - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width - ) - else: - vision_heads = vision_width // 64 - self.visual = VisionTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim - ) - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - if isinstance(self.visual, ModifiedResNet): - if self.visual.attnpool is not None: - std = self.visual.attnpool.c_proj.in_features ** -0.5 - nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.visual.layer1, self.visual.layer2, 
self.visual.layer3, self.visual.layer4]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - def encode_image(self, image): - return self.visual(image.type(self.dtype)) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - - # normalized features - image_features = image_features / image_features.norm(dim=1, keepdim=True) - text_features = text_features / text_features.norm(dim=1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_image = logit_scale * image_features @ text_features.t() - logits_per_text = logits_per_image.t() - - # shape = [global_batch_size, global_batch_size] - return logits_per_image, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - else: - counts: list = 
[len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_resolution = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith("transformer.resblocks"))) - - model = CLIP( - embed_dim, - image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - if key in state_dict: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict) - return model.eval() diff --git a/spaces/QINGCHE/TSA/app.py b/spaces/QINGCHE/TSA/app.py deleted file mode 100644 index db0ecef01067fd66c9958a62098d8f2d561d0cff..0000000000000000000000000000000000000000 --- a/spaces/QINGCHE/TSA/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import numpy as np -import gradio as gr -import textInput -from BERT_inference import BertClassificationModel - - -output = [] -keys = [] - -# css = ".output {min-height: 500px}" - - -with gr.Blocks(css = ".output {min-height: 500px}") as demo: - #用markdown语法编辑输出一段话 - gr.Markdown("# TSA - 文本整理助手") - gr.Markdown("请选择要输入的文件或填入文本") - topic_num = gr.Number(label="主题个数") - max_length = gr.Number(label="摘要最大长度") - with gr.Tabs(): - with gr.Tab("文本输入"): - text_input = gr.TextArea(lines=10) - text_button = gr.Button("生成") - - with gr.Tab("文件输入"): - gr.Markdown("目前支持的格式有PDF、Word、txt") - file_input = gr.File(file_types=["text", ".pdf", ".docx"]) - file_button = gr.Button("生成") - # 设置tab选项卡 - with gr.Tabs(): - with gr.Tab("分类页"): - text_keys_output = gr.TextArea(lines=30) - - with gr.Tab("摘要页",): - #Blocks特有组件,设置所有子组件按水平排列 - text_ab_output = gr.TextArea(lines=30) - - with gr.Tab("下载页"): - file_txt_output = gr.File(label="txt格式") - file_docx_output = gr.File(label="docx格式") - file_pdf_output = gr.File(label="pdf格式") - - text_button.click(textInput.text_dump_to_lines, inputs=[text_input,topic_num,max_length], outputs=[text_keys_output,text_ab_output,file_txt_output,file_docx_output,file_pdf_output]) - file_button.click(textInput.file_dump_to_lines,inputs=[file_input,topic_num,max_length], outputs=[text_keys_output,text_ab_output,file_txt_output,file_docx_output,file_pdf_output]) - - -demo.queue().launch() \ No newline at end of file diff --git a/spaces/QINGFNEG/Real-CUGAN/app.py b/spaces/QINGFNEG/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/QINGFNEG/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = 
int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
    ' - 'Thanks to bilibili for open-sourcing this project. Very large images can exceed the memory limit, so I crop the image down to a smaller size; to try the effect on a large image, please go to the link above.
    ' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/Rahmat/Phishing-Detect/README.md b/spaces/Rahmat/Phishing-Detect/README.md deleted file mode 100644 index 26216ddf7cbf1a567b5743916e038df25dca4945..0000000000000000000000000000000000000000 --- a/spaces/Rahmat/Phishing-Detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Phishing Detect -emoji: 🐨 -colorFrom: green -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: bigscience-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ramse/TTS_Hindi/modules/commons/espnet_positional_embedding.py b/spaces/Ramse/TTS_Hindi/modules/commons/espnet_positional_embedding.py deleted file mode 100644 index 74decb6ab300951490ae08a4b93041a0542b5bb7..0000000000000000000000000000000000000000 --- a/spaces/Ramse/TTS_Hindi/modules/commons/espnet_positional_embedding.py +++ /dev/null @@ -1,113 +0,0 @@ -import math -import torch - - -class PositionalEncoding(torch.nn.Module): - """Positional encoding. - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - reverse (bool): Whether to reverse the input position. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False): - """Construct an PositionalEncoding object.""" - super(PositionalEncoding, self).__init__() - self.d_model = d_model - self.reverse = reverse - self.xscale = math.sqrt(self.d_model) - self.dropout = torch.nn.Dropout(p=dropout_rate) - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - if self.pe.size(1) >= x.size(1): - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - pe = torch.zeros(x.size(1), self.d_model) - if self.reverse: - position = torch.arange( - x.size(1) - 1, -1, -1.0, dtype=torch.float32 - ).unsqueeze(1) - else: - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x * self.xscale + self.pe[:, : x.size(1)] - return self.dropout(x) - - -class ScaledPositionalEncoding(PositionalEncoding): - """Scaled positional encoding module. - See Sec. 3.2 https://arxiv.org/abs/1809.08895 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model=d_model, dropout_rate=dropout_rate, max_len=max_len) - self.alpha = torch.nn.Parameter(torch.tensor(1.0)) - - def reset_parameters(self): - """Reset parameters.""" - self.alpha.data = torch.tensor(1.0) - - def forward(self, x): - """Add positional encoding. 
- Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x + self.alpha * self.pe[:, : x.size(1)] - return self.dropout(x) - - -class RelPositionalEncoding(PositionalEncoding): - """Relative positional encoding module. - See : Appendix B in https://arxiv.org/abs/1901.02860 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model, dropout_rate, max_len, reverse=True) - - def forward(self, x): - """Compute positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - torch.Tensor: Positional embedding tensor (1, time, `*`). - """ - self.extend_pe(x) - x = x * self.xscale - pos_emb = self.pe[:, : x.size(1)] - return self.dropout(x) + self.dropout(pos_emb) \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/hloc/utils/viz.py b/spaces/Realcat/image-matching-webui/hloc/utils/viz.py deleted file mode 100644 index 2c67d59619ff2f7e4e5fc1a222b864ea46e2d534..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/utils/viz.py +++ /dev/null @@ -1,153 +0,0 @@ -""" -2D visualization primitives based on Matplotlib. - -1) Plot images with `plot_images`. -2) Call `plot_keypoints` or `plot_matches` any number of times. -3) Optionally: save a .png or .pdf plot (nice in papers!) with `save_plot`. -""" - -import matplotlib -import matplotlib.pyplot as plt -import matplotlib.patheffects as path_effects -import numpy as np - - -def cm_RdGn(x): - """Custom colormap: red (0) -> yellow (0.5) -> green (1).""" - x = np.clip(x, 0, 1)[..., None] * 2 - c = x * np.array([[0, 1.0, 0]]) + (2 - x) * np.array([[1.0, 0, 0]]) - return np.clip(c, 0, 1) - - -def plot_images( - imgs, titles=None, cmaps="gray", dpi=100, pad=0.5, adaptive=True -): - """Plot a set of images horizontally. - Args: - imgs: a list of NumPy or PyTorch images, RGB (H, W, 3) or mono (H, W). - titles: a list of strings, as titles for each image. - cmaps: colormaps for monochrome images. - adaptive: whether the figure size should fit the image aspect ratios. - """ - n = len(imgs) - if not isinstance(cmaps, (list, tuple)): - cmaps = [cmaps] * n - - if adaptive: - ratios = [i.shape[1] / i.shape[0] for i in imgs] # W / H - else: - ratios = [4 / 3] * n - figsize = [sum(ratios) * 4.5, 4.5] - fig, ax = plt.subplots( - 1, n, figsize=figsize, dpi=dpi, gridspec_kw={"width_ratios": ratios} - ) - if n == 1: - ax = [ax] - for i in range(n): - ax[i].imshow(imgs[i], cmap=plt.get_cmap(cmaps[i])) - ax[i].get_yaxis().set_ticks([]) - ax[i].get_xaxis().set_ticks([]) - ax[i].set_axis_off() - for spine in ax[i].spines.values(): # remove frame - spine.set_visible(False) - if titles: - ax[i].set_title(titles[i]) - fig.tight_layout(pad=pad) - - -def plot_keypoints(kpts, colors="lime", ps=4): - """Plot keypoints for existing images. - Args: - kpts: list of ndarrays of size (N, 2). - colors: string, or list of list of tuples (one for each keypoints). - ps: size of the keypoints as float. 
- """ - if not isinstance(colors, list): - colors = [colors] * len(kpts) - axes = plt.gcf().axes - for a, k, c in zip(axes, kpts, colors): - a.scatter(k[:, 0], k[:, 1], c=c, s=ps, linewidths=0) - - -def plot_matches(kpts0, kpts1, color=None, lw=1.5, ps=4, indices=(0, 1), a=1.0): - """Plot matches for a pair of existing images. - Args: - kpts0, kpts1: corresponding keypoints of size (N, 2). - color: color of each match, string or RGB tuple. Random if not given. - lw: width of the lines. - ps: size of the end points (no endpoint if ps=0) - indices: indices of the images to draw the matches on. - a: alpha opacity of the match lines. - """ - fig = plt.gcf() - ax = fig.axes - assert len(ax) > max(indices) - ax0, ax1 = ax[indices[0]], ax[indices[1]] - fig.canvas.draw() - - assert len(kpts0) == len(kpts1) - if color is None: - color = matplotlib.cm.hsv(np.random.rand(len(kpts0))).tolist() - elif len(color) > 0 and not isinstance(color[0], (tuple, list)): - color = [color] * len(kpts0) - - if lw > 0: - # transform the points into the figure coordinate system - transFigure = fig.transFigure.inverted() - fkpts0 = transFigure.transform(ax0.transData.transform(kpts0)) - fkpts1 = transFigure.transform(ax1.transData.transform(kpts1)) - fig.lines += [ - matplotlib.lines.Line2D( - (fkpts0[i, 0], fkpts1[i, 0]), - (fkpts0[i, 1], fkpts1[i, 1]), - zorder=1, - transform=fig.transFigure, - c=color[i], - linewidth=lw, - alpha=a, - ) - for i in range(len(kpts0)) - ] - - # freeze the axes to prevent the transform to change - ax0.autoscale(enable=False) - ax1.autoscale(enable=False) - - if ps > 0: - ax0.scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps) - ax1.scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps) - - -def add_text( - idx, - text, - pos=(0.01, 0.99), - fs=15, - color="w", - lcolor="k", - lwidth=2, - ha="left", - va="top", -): - ax = plt.gcf().axes[idx] - t = ax.text( - *pos, - text, - fontsize=fs, - ha=ha, - va=va, - color=color, - transform=ax.transAxes - ) - if lcolor is not None: - t.set_path_effects( - [ - path_effects.Stroke(linewidth=lwidth, foreground=lcolor), - path_effects.Normal(), - ] - ) - - -def save_plot(path, **kw): - """Save the current figure without any white margin.""" - plt.savefig(path, bbox_inches="tight", pad_inches=0, **kw) diff --git a/spaces/Reha2704/VToonify/vtoonify/model/encoder/encoders/helpers.py b/spaces/Reha2704/VToonify/vtoonify/model/encoder/encoders/helpers.py deleted file mode 100644 index b51fdf97141407fcc1c9d249a086ddbfd042469f..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/encoder/encoders/helpers.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import namedtuple -import torch -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. 
""" - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut diff --git a/spaces/Riksarkivet/htr_demo/tabs/stepwise_htr_tool.py b/spaces/Riksarkivet/htr_demo/tabs/stepwise_htr_tool.py deleted file mode 100644 index 7a63fb37e4fe467487e6f006ffba1d7139f0281f..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/tabs/stepwise_htr_tool.py +++ /dev/null @@ -1,406 +0,0 @@ -import os -import shutil -from difflib import Differ - -import evaluate -import gradio as gr - -from helper.examples.examples import DemoImages -from helper.utils import TrafficDataHandler -from src.htr_pipeline.gradio_backend import CustomTrack, 
SingletonModelLoader - -model_loader = SingletonModelLoader() - -custom_track = CustomTrack(model_loader) - -images_for_demo = DemoImages() - -cer_metric = evaluate.load("cer") - - -with gr.Blocks() as stepwise_htr_tool_tab: - with gr.Tabs(): - with gr.Tab("1. Region segmentation"): - with gr.Row(): - with gr.Column(scale=1): - vis_data_folder_placeholder = gr.Markdown(visible=False) - name_files_placeholder = gr.Markdown(visible=False) - - with gr.Group(): - input_region_image = gr.Image( - label="Image to region segment", - # type="numpy", - tool="editor", - height=500, - ) - with gr.Accordion("Settings", open=False): - with gr.Group(): - reg_pred_score_threshold_slider = gr.Slider( - minimum=0.4, - maximum=1, - value=0.5, - step=0.05, - label="P-threshold", - info="""Filter the confidence score for a prediction score to be considered""", - ) - reg_containments_threshold_slider = gr.Slider( - minimum=0, - maximum=1, - value=0.5, - step=0.05, - label="C-threshold", - info="""The minimum required overlap or similarity - for a detected region or object to be considered valid""", - ) - - region_segment_model_dropdown = gr.Dropdown( - choices=["Riksarkivet/rtm_region"], - value="Riksarkivet/rtm_region", - label="Region segmentation model", - info="More models will be added", - ) - - with gr.Row(): - clear_button = gr.Button("Clear", variant="secondary", elem_id="clear_button") - - region_segment_button = gr.Button( - "Run", - variant="primary", - elem_id="region_segment_button", - ) - - region_segment_button_var = gr.State(value="region_segment_button") - - with gr.Column(scale=2): - with gr.Box(): - with gr.Row(): - with gr.Column(scale=2): - gr.Examples( - examples=images_for_demo.examples_list, - inputs=[name_files_placeholder, input_region_image], - label="Example images", - examples_per_page=5, - ) - with gr.Column(scale=3): - output_region_image = gr.Image(label="Segmented regions", type="numpy", height=600) - - ############################################## - with gr.Tab("2. 
Line segmentation"): - image_placeholder_lines = gr.Image( - label="Segmented lines", - # type="numpy", - interactive="False", - visible=True, - height=600, - ) - - with gr.Row(visible=False) as control_line_segment: - with gr.Column(scale=2): - with gr.Group(): - with gr.Box(): - regions_cropped_gallery = gr.Gallery( - label="Segmented regions", - elem_id="gallery", - columns=[2], - rows=[2], - # object_fit="contain", - height=450, - preview=True, - container=False, - ) - - input_region_from_gallery = gr.Image( - label="Region segmentation to line segment", interactive="False", visible=False, height=400 - ) - - with gr.Row(): - with gr.Accordion("Settings", open=False): - with gr.Row(): - line_pred_score_threshold_slider = gr.Slider( - minimum=0.3, - maximum=1, - value=0.4, - step=0.05, - label="Pred_score threshold", - info="""Filter the confidence score for a prediction score to be considered""", - ) - line_containments_threshold_slider = gr.Slider( - minimum=0, - maximum=1, - value=0.5, - step=0.05, - label="Containments threshold", - info="""The minimum required overlap or similarity - for a detected region or object to be considered valid""", - ) - with gr.Row(equal_height=False): - line_segment_model_dropdown = gr.Dropdown( - choices=["Riksarkivet/rtmdet_lines"], - value="Riksarkivet/rtmdet_lines", - label="Line segment model", - info="More models will be added", - ) - with gr.Row(): - # placeholder_line_button = gr.Button( - # "", - # variant="secondary", - # scale=1, - # ) - gr.Markdown(" ") - - line_segment_button = gr.Button( - "Run", - variant="primary", - # elem_id="center_button", - scale=1, - ) - - with gr.Column(scale=3): - output_line_from_region = gr.Image( - label="Segmented lines", type="numpy", interactive="False", height=600 - ) - - ############################################### - with gr.Tab("3. Text recognition"): - image_placeholder_htr = gr.Image( - label="Transcribed lines", - # type="numpy", - interactive="False", - visible=True, - height=600, - ) - - with gr.Row(visible=False) as control_htr: - inputs_lines_to_transcribe = gr.Variable() - - with gr.Column(scale=2): - with gr.Group(): - image_inputs_lines_to_transcribe = gr.Image( - label="Transcribed lines", type="numpy", interactive="False", visible=False, height=470 - ) - with gr.Row(): - with gr.Accordion("Settings", open=False): - transcriber_model = gr.Dropdown( - choices=["Riksarkivet/satrn_htr", "microsoft/trocr-base-handwritten"], - value="Riksarkivet/satrn_htr", - label="Text recognition model", - info="More models will be added", - ) - - gr.Slider( - value=0.6, - minimum=0.5, - maximum=1, - label="HTR threshold", - info="Prediction score threshold for transcribed lines", - scale=1, - ) - - with gr.Row(): - copy_textarea = gr.Button("Copy text", variant="secondary", visible=True, scale=1) - - transcribe_button = gr.Button("Run", variant="primary", visible=True, scale=1) - - with gr.Column(scale=3): - with gr.Row(): - transcribed_text = gr.Textbox( - label="Transcribed text", - info="Transcribed text is being streamed back from the Text recognition model", - lines=26, - value="", - show_copy_button=True, - elem_id="textarea_stepwise_3", - ) - - ##################################### - with gr.Tab("4. 
Explore results"): - image_placeholder_explore_results = gr.Image( - label="Cropped transcribed lines", - # type="numpy", - interactive="False", - visible=True, - height=600, - ) - - with gr.Row(visible=False, equal_height=False) as control_results_transcribe: - with gr.Column(scale=1, visible=True): - with gr.Group(): - with gr.Box(): - temp_gallery_input = gr.Variable() - - gallery_inputs_lines_to_transcribe = gr.Gallery( - label="Cropped transcribed lines", - elem_id="gallery_lines", - columns=[3], - rows=[3], - # object_fit="contain", - height=150, - preview=True, - container=False, - ) - with gr.Row(): - dataframe_text_index = gr.Textbox( - label="Text from DataFrame selection", - placeholder="Select row from the DataFrame.", - interactive=False, - ) - with gr.Row(): - gt_text_index = gr.Textbox( - label="Ground Truth", - placeholder="Provide the ground truth, if available.", - interactive=True, - ) - with gr.Row(): - diff_token_output = gr.HighlightedText( - label="Text diff", - combine_adjacent=True, - show_legend=True, - color_map={"+": "red", "-": "green"}, - ) - - with gr.Row(equal_height=False): - cer_output = gr.Textbox(label="Character Error Rate") - gr.Markdown("") - calc_cer_button = gr.Button("Calculate CER", variant="primary", visible=True) - - with gr.Column(scale=1, visible=True): - mapping_dict = gr.Variable() - transcribed_text_df_finish = gr.Dataframe( - headers=["Transcribed text", "Prediction score"], - max_rows=14, - col_count=(2, "fixed"), - wrap=True, - interactive=False, - overflow_row_behaviour="paginate", - height=600, - ) - - # custom track - - def diff_texts(text1, text2): - d = Differ() - return [(token[2:], token[0] if token[0] != " " else None) for token in d.compare(text1, text2)] - - def compute_cer(dataframe_text_index, gt_text_index): - if gt_text_index is not None and gt_text_index.strip() != "": - return round(cer_metric.compute(predictions=[dataframe_text_index], references=[gt_text_index]), 4) - else: - return "Ground truth not provided" - - calc_cer_button.click( - compute_cer, - inputs=[dataframe_text_index, gt_text_index], - outputs=cer_output, - api_name=False, - ) - - calc_cer_button.click( - diff_texts, - inputs=[dataframe_text_index, gt_text_index], - outputs=[diff_token_output], - api_name=False, - ) - - region_segment_button.click( - custom_track.region_segment, - inputs=[input_region_image, reg_pred_score_threshold_slider, reg_containments_threshold_slider], - outputs=[output_region_image, regions_cropped_gallery, image_placeholder_lines, control_line_segment], - api_name=False, - ) - - regions_cropped_gallery.select( - custom_track.get_select_index_image, - regions_cropped_gallery, - input_region_from_gallery, - api_name=False, - ) - - transcribed_text_df_finish.select( - fn=custom_track.get_select_index_df, - inputs=[transcribed_text_df_finish, mapping_dict], - outputs=[gallery_inputs_lines_to_transcribe, dataframe_text_index], - api_name=False, - ) - - line_segment_button.click( - custom_track.line_segment, - inputs=[input_region_from_gallery, line_pred_score_threshold_slider, line_containments_threshold_slider], - outputs=[ - output_line_from_region, - image_inputs_lines_to_transcribe, - inputs_lines_to_transcribe, - gallery_inputs_lines_to_transcribe, - temp_gallery_input, - # Hide - transcribe_button, - image_inputs_lines_to_transcribe, - image_placeholder_htr, - control_htr, - ], - api_name=False, - ) - - copy_textarea.click( - fn=None, - _js="""document.querySelector("#textarea_stepwise_3 > label > button").click()""", - 
api_name=False, - ) - - transcribe_button.click( - custom_track.transcribe_text, - inputs=[inputs_lines_to_transcribe], - outputs=[ - transcribed_text, - transcribed_text_df_finish, - mapping_dict, - # Hide - control_results_transcribe, - image_placeholder_explore_results, - ], - api_name=False, - ) - - clear_button.click( - lambda: ( - (shutil.rmtree("./vis_data") if os.path.exists("./vis_data") else None, None)[1], - None, - None, - None, - gr.update(visible=False), - None, - None, - None, - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True), - None, - gr.update(visible=False), - gr.update(visible=False), - gr.update(visible=True), - gr.update(visible=True), - ), - inputs=[], - outputs=[ - vis_data_folder_placeholder, - input_region_image, - regions_cropped_gallery, - input_region_from_gallery, - control_line_segment, - output_line_from_region, - inputs_lines_to_transcribe, - transcribed_text, - control_htr, - inputs_lines_to_transcribe, - image_placeholder_htr, - output_region_image, - image_inputs_lines_to_transcribe, - control_results_transcribe, - image_placeholder_explore_results, - image_placeholder_lines, - ], - api_name=False, - ) - - SECRET_KEY = os.environ.get("AM_I_IN_A_DOCKER_CONTAINER", False) - if SECRET_KEY: - region_segment_button.click(fn=TrafficDataHandler.store_metric_data, inputs=region_segment_button_var) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/schedules/schedule_20k.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/schedules/schedule_20k.py deleted file mode 100644 index bf780a1b6f6521833c6a5859675147824efa599d..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/schedules/schedule_20k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=20000) -checkpoint_config = dict(by_epoch=False, interval=2000) -evaluation = dict(interval=2000, metric='mIoU') diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/image/geometric.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/image/geometric.py deleted file mode 100644 index cf97c201cb4e43796c911919d03fb26a07ed817d..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/image/geometric.py +++ /dev/null @@ -1,728 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers - -import cv2 -import numpy as np - -from ..utils import to_2tuple -from .io import imread_backend - -try: - from PIL import Image -except ImportError: - Image = None - - -def _scale_size(size, scale): - """Rescale a size by a ratio. - - Args: - size (tuple[int]): (w, h). - scale (float | tuple(float)): Scaling factor. - - Returns: - tuple[int]: scaled size. 
- """ - if isinstance(scale, (float, int)): - scale = (scale, scale) - w, h = size - return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5) - - -cv2_interp_codes = { - 'nearest': cv2.INTER_NEAREST, - 'bilinear': cv2.INTER_LINEAR, - 'bicubic': cv2.INTER_CUBIC, - 'area': cv2.INTER_AREA, - 'lanczos': cv2.INTER_LANCZOS4 -} - -if Image is not None: - pillow_interp_codes = { - 'nearest': Image.NEAREST, - 'bilinear': Image.BILINEAR, - 'bicubic': Image.BICUBIC, - 'box': Image.BOX, - 'lanczos': Image.LANCZOS, - 'hamming': Image.HAMMING - } - - -def imresize(img, - size, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image to a given size. - - Args: - img (ndarray): The input image. - size (tuple[int]): Target size (w, h). - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if backend is None: - backend = imread_backend - if backend not in ['cv2', 'pillow']: - raise ValueError(f'backend: {backend} is not supported for resize.' - f"Supported backends are 'cv2', 'pillow'") - - if backend == 'pillow': - assert img.dtype == np.uint8, 'Pillow backend only support uint8 type' - pil_image = Image.fromarray(img) - pil_image = pil_image.resize(size, pillow_interp_codes[interpolation]) - resized_img = np.array(pil_image) - else: - resized_img = cv2.resize( - img, size, dst=out, interpolation=cv2_interp_codes[interpolation]) - if not return_scale: - return resized_img - else: - w_scale = size[0] / w - h_scale = size[1] / h - return resized_img, w_scale, h_scale - - -def imresize_to_multiple(img, - divisor, - size=None, - scale_factor=None, - keep_ratio=False, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image according to a given size or scale factor and then rounds - up the the resized or rescaled image size to the nearest value that can be - divided by the divisor. - - Args: - img (ndarray): The input image. - divisor (int | tuple): Resized image size will be a multiple of - divisor. If divisor is a tuple, divisor should be - (w_divisor, h_divisor). - size (None | int | tuple[int]): Target size (w, h). Default: None. - scale_factor (None | float | tuple[float]): Multiplier for spatial - size. Should match input size if it is a tuple and the 2D style is - (w_scale_factor, h_scale_factor). Default: None. - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: False. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. 
- - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if size is not None and scale_factor is not None: - raise ValueError('only one of size or scale_factor should be defined') - elif size is None and scale_factor is None: - raise ValueError('one of size or scale_factor should be defined') - elif size is not None: - size = to_2tuple(size) - if keep_ratio: - size = rescale_size((w, h), size, return_scale=False) - else: - size = _scale_size((w, h), scale_factor) - - divisor = to_2tuple(divisor) - size = tuple([int(np.ceil(s / d)) * d for s, d in zip(size, divisor)]) - resized_img, w_scale, h_scale = imresize( - img, - size, - return_scale=True, - interpolation=interpolation, - out=out, - backend=backend) - if return_scale: - return resized_img, w_scale, h_scale - else: - return resized_img - - -def imresize_like(img, - dst_img, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image to the same size of a given image. - - Args: - img (ndarray): The input image. - dst_img (ndarray): The target image. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = dst_img.shape[:2] - return imresize(img, (w, h), return_scale, interpolation, backend=backend) - - -def rescale_size(old_size, scale, return_scale=False): - """Calculate the new size to be rescaled to. - - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. - """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. 
- """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. - interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. - """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. 
- """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] - else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] - patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the - last value on the edge. For example, padding [1, 2, 3, 4] - with 2 elements on both sides in reflect mode will result - in [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last - value on the edge. 
For example, padding [1, 2, 3, 4] with - 2 elements on both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - padding = (0, 0, shape[1] - img.shape[1], shape[0] - img.shape[0]) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. - pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] = patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. 
- - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. 
Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/mask/utils.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/mask/utils.py deleted file mode 100644 index c88208291ab2a605bee9fe6c1a28a443b74c6372..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/mask/utils.py +++ /dev/null @@ -1,63 +0,0 @@ -import mmcv -import numpy as np -import pycocotools.mask as mask_util - - -def split_combined_polys(polys, poly_lens, polys_per_mask): - """Split the combined 1-D polys into masks. - - A mask is represented as a list of polys, and a poly is represented as - a 1-D array. In dataset, all masks are concatenated into a single 1-D - tensor. Here we need to split the tensor into original representations. - - Args: - polys (list): a list (length = image num) of 1-D tensors - poly_lens (list): a list (length = image num) of poly length - polys_per_mask (list): a list (length = image num) of poly number - of each mask - - Returns: - list: a list (length = image num) of list (length = mask num) of \ - list (length = poly num) of numpy array. - """ - mask_polys_list = [] - for img_id in range(len(polys)): - polys_single = polys[img_id] - polys_lens_single = poly_lens[img_id].tolist() - polys_per_mask_single = polys_per_mask[img_id].tolist() - - split_polys = mmcv.slice_list(polys_single, polys_lens_single) - mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single) - mask_polys_list.append(mask_polys) - return mask_polys_list - - -# TODO: move this function to more proper place -def encode_mask_results(mask_results): - """Encode bitmap mask to RLE code. - - Args: - mask_results (list | tuple[list]): bitmap mask results. - In mask scoring rcnn, mask_results is a tuple of (segm_results, - segm_cls_score). - - Returns: - list | tuple: RLE encoded mask. 
- """ - if isinstance(mask_results, tuple): # mask scoring - cls_segms, cls_mask_scores = mask_results - else: - cls_segms = mask_results - num_classes = len(cls_segms) - encoded_mask_results = [[] for _ in range(num_classes)] - for i in range(len(cls_segms)): - for cls_segm in cls_segms[i]: - encoded_mask_results[i].append( - mask_util.encode( - np.array( - cls_segm[:, :, np.newaxis], order='F', - dtype='uint8'))[0]) # encoded with RLE - if isinstance(mask_results, tuple): - return encoded_mask_results, cls_mask_scores - else: - return encoded_mask_results diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/mse_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/mse_loss.py deleted file mode 100644 index 68d05752a245548862f4c9919448d4fb8dc1b8ca..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/mse_loss.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weighted_loss - - -@weighted_loss -def mse_loss(pred, target): - """Warpper of mse loss.""" - return F.mse_loss(pred, target, reduction='none') - - -@LOSSES.register_module() -class MSELoss(nn.Module): - """MSELoss. - - Args: - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super().__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, pred, target, weight=None, avg_factor=None): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): Weight of the loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - - Returns: - torch.Tensor: The calculated loss - """ - loss = self.loss_weight * mse_loss( - pred, - target, - weight, - reduction=self.reduction, - avg_factor=avg_factor) - return loss diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/losses/loss.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/losses/loss.py deleted file mode 100644 index 8812f883045afedfcfbf6cc37be39959af96fcb4..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/losses/loss.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -import numpy as np -import torch -from torch_utils import training_stats -from torch_utils import misc -from torch_utils.ops import conv2d_gradfix -from losses.pcp import PerceptualLoss - -#---------------------------------------------------------------------------- - -class Loss: - def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, sync, gain): # to be overridden by subclass - raise NotImplementedError() - -#---------------------------------------------------------------------------- - -class TwoStageLoss(Loss): - def __init__(self, device, G_mapping, G_synthesis, D, augment_pipe=None, style_mixing_prob=0.9, r1_gamma=10, pl_batch_shrink=2, pl_decay=0.01, pl_weight=2, truncation_psi=1, pcp_ratio=1.0): - super().__init__() - self.device = device - self.G_mapping = G_mapping - self.G_synthesis = G_synthesis - self.D = D - self.augment_pipe = augment_pipe - self.style_mixing_prob = style_mixing_prob - self.r1_gamma = r1_gamma - self.pl_batch_shrink = pl_batch_shrink - self.pl_decay = pl_decay - self.pl_weight = pl_weight - self.pl_mean = torch.zeros([], device=device) - self.truncation_psi = truncation_psi - self.pcp = PerceptualLoss(layer_weights=dict(conv4_4=1/4, conv5_4=1/2)).to(device) - self.pcp_ratio = pcp_ratio - - def run_G(self, img_in, mask_in, z, c, sync): - with misc.ddp_sync(self.G_mapping, sync): - ws = self.G_mapping(z, c, truncation_psi=self.truncation_psi) - if self.style_mixing_prob > 0: - with torch.autograd.profiler.record_function('style_mixing'): - cutoff = torch.empty([], dtype=torch.int64, device=ws.device).random_(1, ws.shape[1]) - cutoff = torch.where(torch.rand([], device=ws.device) < self.style_mixing_prob, cutoff, torch.full_like(cutoff, ws.shape[1])) - ws[:, cutoff:] = self.G_mapping(torch.randn_like(z), c, truncation_psi=self.truncation_psi, skip_w_avg_update=True)[:, cutoff:] - with misc.ddp_sync(self.G_synthesis, sync): - img, img_stg1 = self.G_synthesis(img_in, mask_in, ws, return_stg1=True) - return img, ws, img_stg1 - - def run_D(self, img, mask, img_stg1, c, sync): - # if self.augment_pipe is not None: - # # img = self.augment_pipe(img) - # # !!!!! have to remove the color transform - # tmp_img = torch.cat([img, mask], dim=1) - # tmp_img = self.augment_pipe(tmp_img) - # img, mask = torch.split(tmp_img, [3, 1]) - with misc.ddp_sync(self.D, sync): - logits, logits_stg1 = self.D(img, mask, img_stg1, c) - return logits, logits_stg1 - - def accumulate_gradients(self, phase, real_img, mask, real_c, gen_z, gen_c, sync, gain): - assert phase in ['Gmain', 'Greg', 'Gboth', 'Dmain', 'Dreg', 'Dboth'] - do_Gmain = (phase in ['Gmain', 'Gboth']) - do_Dmain = (phase in ['Dmain', 'Dboth']) - do_Gpl = (phase in ['Greg', 'Gboth']) and (self.pl_weight != 0) - do_Dr1 = (phase in ['Dreg', 'Dboth']) and (self.r1_gamma != 0) - - # Gmain: Maximize logits for generated images. - if do_Gmain: - with torch.autograd.profiler.record_function('Gmain_forward'): - gen_img, _gen_ws, gen_img_stg1 = self.run_G(real_img, mask, gen_z, gen_c, sync=(sync and not do_Gpl)) # May get synced by Gpl. 
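# Aside, purely illustrative: the style-mixing step inside run_G above swaps the tail of the
# per-layer latent codes with codes mapped from a second z, starting at a random cutoff, e.g.:
import torch
ws_a = torch.zeros(2, 14, 512)                # hypothetical [batch, num_ws, w_dim]
ws_b = torch.ones_like(ws_a)
cutoff = 5                                    # drawn at random in run_G
ws_a[:, cutoff:] = ws_b[:, cutoff:]           # layers >= cutoff take the second style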
- gen_logits, gen_logits_stg1 = self.run_D(gen_img, mask, gen_img_stg1, gen_c, sync=False) - training_stats.report('Loss/scores/fake', gen_logits) - training_stats.report('Loss/signs/fake', gen_logits.sign()) - training_stats.report('Loss/scores/fake_s1', gen_logits_stg1) - training_stats.report('Loss/signs/fake_s1', gen_logits_stg1.sign()) - loss_Gmain = torch.nn.functional.softplus(-gen_logits) # -log(sigmoid(gen_logits)) - training_stats.report('Loss/G/loss', loss_Gmain) - loss_Gmain_stg1 = torch.nn.functional.softplus(-gen_logits_stg1) - training_stats.report('Loss/G/loss_s1', loss_Gmain_stg1) - # just for showing - l1_loss = torch.mean(torch.abs(gen_img - real_img)) - training_stats.report('Loss/G/l1_loss', l1_loss) - pcp_loss, _ = self.pcp(gen_img, real_img) - training_stats.report('Loss/G/pcp_loss', pcp_loss) - with torch.autograd.profiler.record_function('Gmain_backward'): - loss_Gmain_all = loss_Gmain + loss_Gmain_stg1 + pcp_loss * self.pcp_ratio - loss_Gmain_all.mean().mul(gain).backward() - - # # Gpl: Apply path length regularization. - # if do_Gpl: - # with torch.autograd.profiler.record_function('Gpl_forward'): - # batch_size = gen_z.shape[0] // self.pl_batch_shrink - # gen_img, gen_ws = self.run_G(real_img[:batch_size], mask[:batch_size], gen_z[:batch_size], gen_c[:batch_size], sync=sync) - # pl_noise = torch.randn_like(gen_img) / np.sqrt(gen_img.shape[2] * gen_img.shape[3]) - # with torch.autograd.profiler.record_function('pl_grads'), conv2d_gradfix.no_weight_gradients(): - # pl_grads = torch.autograd.grad(outputs=[(gen_img * pl_noise).sum()], inputs=[gen_ws], create_graph=True, only_inputs=True)[0] - # pl_lengths = pl_grads.square().sum(2).mean(1).sqrt() - # pl_mean = self.pl_mean.lerp(pl_lengths.mean(), self.pl_decay) - # self.pl_mean.copy_(pl_mean.detach()) - # pl_penalty = (pl_lengths - pl_mean).square() - # training_stats.report('Loss/pl_penalty', pl_penalty) - # loss_Gpl = pl_penalty * self.pl_weight - # training_stats.report('Loss/G/reg', loss_Gpl) - # with torch.autograd.profiler.record_function('Gpl_backward'): - # (gen_img[:, 0, 0, 0] * 0 + loss_Gpl).mean().mul(gain).backward() - - # Dmain: Minimize logits for generated images. - loss_Dgen = 0 - loss_Dgen_stg1 = 0 - if do_Dmain: - with torch.autograd.profiler.record_function('Dgen_forward'): - gen_img, _gen_ws, gen_img_stg1 = self.run_G(real_img, mask, gen_z, gen_c, sync=False) - gen_logits, gen_logits_stg1 = self.run_D(gen_img, mask, gen_img_stg1, gen_c, sync=False) # Gets synced by loss_Dreal. - training_stats.report('Loss/scores/fake', gen_logits) - training_stats.report('Loss/signs/fake', gen_logits.sign()) - loss_Dgen = torch.nn.functional.softplus(gen_logits) # -log(1 - sigmoid(gen_logits)) - training_stats.report('Loss/scores/fake_s1', gen_logits_stg1) - training_stats.report('Loss/signs/fake_s1', gen_logits_stg1.sign()) - loss_Dgen_stg1 = torch.nn.functional.softplus(gen_logits_stg1) # -log(1 - sigmoid(gen_logits)) - with torch.autograd.profiler.record_function('Dgen_backward'): - loss_Dgen_all = loss_Dgen + loss_Dgen_stg1 - loss_Dgen_all.mean().mul(gain).backward() - - # Dmain: Maximize logits for real images. - # Dr1: Apply R1 regularization. 
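# A quick numerical aside, independent of the training loop: the softplus terms used for the
# adversarial losses above are the standard non-saturating GAN objectives,
#   softplus(-x) == -log(sigmoid(x))    and    softplus(x) == -log(1 - sigmoid(x)),
# and the R1 term computed below is the gradient penalty (r1_gamma / 2) * ||dD(x)/dx||^2 on reals.
import torch
import torch.nn.functional as F
x = torch.randn(8)
assert torch.allclose(F.softplus(-x), -torch.log(torch.sigmoid(x)), atol=1e-6)
assert torch.allclose(F.softplus(x), -torch.log(1 - torch.sigmoid(x)), atol=1e-6)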
- if do_Dmain or do_Dr1: - name = 'Dreal_Dr1' if do_Dmain and do_Dr1 else 'Dreal' if do_Dmain else 'Dr1' - with torch.autograd.profiler.record_function(name + '_forward'): - real_img_tmp = real_img.detach().requires_grad_(do_Dr1) - mask_tmp = mask.detach().requires_grad_(do_Dr1) - real_img_tmp_stg1 = real_img.detach().requires_grad_(do_Dr1) - real_logits, real_logits_stg1 = self.run_D(real_img_tmp, mask_tmp, real_img_tmp_stg1, real_c, sync=sync) - training_stats.report('Loss/scores/real', real_logits) - training_stats.report('Loss/signs/real', real_logits.sign()) - training_stats.report('Loss/scores/real_s1', real_logits_stg1) - training_stats.report('Loss/signs/real_s1', real_logits_stg1.sign()) - - loss_Dreal = 0 - loss_Dreal_stg1 = 0 - if do_Dmain: - loss_Dreal = torch.nn.functional.softplus(-real_logits) # -log(sigmoid(real_logits)) - loss_Dreal_stg1 = torch.nn.functional.softplus(-real_logits_stg1) # -log(sigmoid(real_logits)) - training_stats.report('Loss/D/loss', loss_Dgen + loss_Dreal) - training_stats.report('Loss/D/loss_s1', loss_Dgen_stg1 + loss_Dreal_stg1) - - loss_Dr1 = 0 - loss_Dr1_stg1 = 0 - if do_Dr1: - with torch.autograd.profiler.record_function('r1_grads'), conv2d_gradfix.no_weight_gradients(): - r1_grads = torch.autograd.grad(outputs=[real_logits.sum()], inputs=[real_img_tmp], create_graph=True, only_inputs=True)[0] - r1_grads_stg1 = torch.autograd.grad(outputs=[real_logits_stg1.sum()], inputs=[real_img_tmp_stg1], create_graph=True, only_inputs=True)[0] - r1_penalty = r1_grads.square().sum([1,2,3]) - loss_Dr1 = r1_penalty * (self.r1_gamma / 2) - training_stats.report('Loss/r1_penalty', r1_penalty) - training_stats.report('Loss/D/reg', loss_Dr1) - - r1_penalty_stg1 = r1_grads_stg1.square().sum([1, 2, 3]) - loss_Dr1_stg1 = r1_penalty_stg1 * (self.r1_gamma / 2) - training_stats.report('Loss/r1_penalty_s1', r1_penalty_stg1) - training_stats.report('Loss/D/reg_s1', loss_Dr1_stg1) - - with torch.autograd.profiler.record_function(name + '_backward'): - ((real_logits + real_logits_stg1) * 0 + loss_Dreal + loss_Dreal_stg1 + loss_Dr1 + loss_Dr1_stg1).mean().mul(gain).backward() - -#---------------------------------------------------------------------------- diff --git a/spaces/SWHL/RapidOCRDemo/utils.py b/spaces/SWHL/RapidOCRDemo/utils.py deleted file mode 100644 index 34d1a23724dce518197975b4b5dac60684e769ee..0000000000000000000000000000000000000000 --- a/spaces/SWHL/RapidOCRDemo/utils.py +++ /dev/null @@ -1,78 +0,0 @@ -# -*- encoding: utf-8 -*- -# @Author: SWHL -# @Contact: liekkaskono@163.com -import math -import random -from pathlib import Path - -import numpy as np -from PIL import Image, ImageDraw, ImageFont - - -def draw_ocr_box_txt(image, boxes, txts, font_path, scores=None, text_score=0.5): - h, w = image.height, image.width - img_left = image.copy() - img_right = Image.new("RGB", (w, h), (255, 255, 255)) - - random.seed(0) - draw_left = ImageDraw.Draw(img_left) - draw_right = ImageDraw.Draw(img_right) - for idx, (box, txt) in enumerate(zip(boxes, txts)): - if scores is not None and float(scores[idx]) < text_score: - continue - - color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)) - - box = [tuple(v) for v in box] - draw_left.polygon(box, fill=color) - draw_right.text([box[3][0], box[3][1]], str(idx), fill=color) - - draw_right.polygon( - [ - box[0][0], - box[0][1], - box[1][0], - box[1][1], - box[2][0], - box[2][1], - box[3][0], - box[3][1], - ], - outline=color, - ) - - box_height = math.sqrt( - (box[0][0] - box[3][0]) ** 2 + (box[0][1] - 
box[3][1]) ** 2 - ) - - box_width = math.sqrt( - (box[0][0] - box[1][0]) ** 2 + (box[0][1] - box[1][1]) ** 2 - ) - - if box_height > 2 * box_width: - font_size = max(int(box_width * 0.9), 10) - font = ImageFont.truetype(font_path, font_size, encoding="utf-8") - cur_y = box[0][1] - for c in txt: - char_size = font.getsize(c) - draw_right.text((box[0][0] + 3, cur_y), c, fill=(0, 0, 0), font=font) - cur_y += char_size[1] - else: - font_size = max(int(box_height * 0.8), 10) - font = ImageFont.truetype(font_path, font_size, encoding="utf-8") - draw_right.text([box[0][0], box[0][1]], txt, fill=(0, 0, 0), font=font) - - img_left = Image.blend(image, img_left, 0.5) - img_show = Image.new("RGB", (w * 2, h), (255, 255, 255)) - img_show.paste(img_left, (0, 0, w, h)) - img_show.paste(img_right, (w, 0, w * 2, h)) - return np.array(img_show) - - -def visualize(image, boxes, txts, scores, font_path="./fonts/FZYTK.TTF"): - draw_img = draw_ocr_box_txt(image, boxes, txts, font_path, scores, text_score=0.5) - - draw_img_save = Path("./inference_results/") - if not draw_img_save.exists(): - draw_img_save.mkdir(parents=True, exist_ok=True) - return draw_img[:, :, ::-1] diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/pndm/__init__.py b/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/pndm/__init__.py deleted file mode 100644 index 6fc46aaab9fa26e83b49c26843d854e217742664..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/pndm/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_pndm import PNDMPipeline diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/utils/eval.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/utils/eval.py deleted file mode 100644 index 881b69bf1cd46e4448751550b08f75fd8902cb3e..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/utils/eval.py +++ /dev/null @@ -1,208 +0,0 @@ -# ----------------------------------------------------- -# Copyright (c) Shanghai Jiao Tong University. All rights reserved. 
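One portability note on the OCR drawing helper above: it calls ImageFont.getsize, which Pillow 10 removed. A rough getbbox-based stand-in, assuming the same bundled font path, could look like this:

from PIL import ImageFont

font = ImageFont.truetype("./fonts/FZYTK.TTF", 24)           # hypothetical size; path as in visualize()
left, top, right, bottom = font.getbbox("字")
char_w, char_h = right - left, bottom - top                  # approximate replacement for font.getsize(c)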
-# Written by Jiefeng Li (jeff.lee.sjtu@gmail.com) -# ----------------------------------------------------- - -from opt import opt -import sys -import numpy as np - -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval - -from .img import transformBoxInvert - - -class DataLogger(object): - def __init__(self): - self.clear() - - def clear(self): - self.value = 0 - self.sum = 0 - self.cnt = 0 - self.avg = 0 - - def update(self, value, n=1): - self.value = value - self.sum += value * n - self.cnt += n - self._cal_avg() - - def _cal_avg(self): - self.avg = self.sum / self.cnt - - -class NullWriter(object): - def write(self, arg): - pass - - -def accuracy(output, label, dataset, out_offset=None): - if type(output) == list: - return accuracy(output[opt.nStack - 1], label[opt.nStack - 1], dataset, out_offset) - else: - return heatmapAccuracy(output.cpu().data, label.cpu().data, dataset.accIdxs) - - -def heatmapAccuracy(output, label, idxs): - preds = getPreds(output) - gt = getPreds(label) - - norm = torch.ones(preds.size(0)) * opt.outputResH / 10 - dists = calc_dists(preds, gt, norm) - - acc = torch.zeros(len(idxs) + 1) - avg_acc = 0 - cnt = 0 - for i in range(len(idxs)): - acc[i + 1] = dist_acc(dists[idxs[i] - 1]) - if acc[i + 1] >= 0: - avg_acc = avg_acc + acc[i + 1] - cnt += 1 - if cnt != 0: - acc[0] = avg_acc / cnt - return acc - - -def getPreds(hm): - ''' get predictions from score maps in torch Tensor - return type: torch.LongTensor - ''' - assert hm.dim() == 4, 'Score maps should be 4-dim' - maxval, idx = torch.max(hm.view(hm.size(0), hm.size(1), -1), 2) - - maxval = maxval.view(hm.size(0), hm.size(1), 1) - idx = idx.view(hm.size(0), hm.size(1), 1) + 1 - - preds = idx.repeat(1, 1, 2).float() - - preds[:, :, 0] = (preds[:, :, 0] - 1) % hm.size(3) - preds[:, :, 1] = torch.floor((preds[:, :, 1] - 1) / hm.size(3)) - - # pred_mask = maxval.gt(0).repeat(1, 1, 2).float() - # preds *= pred_mask - return preds - - -def calc_dists(preds, target, normalize): - preds = preds.float().clone() - target = target.float().clone() - dists = torch.zeros(preds.size(1), preds.size(0)) - for n in range(preds.size(0)): - for c in range(preds.size(1)): - if target[n, c, 0] > 0 and target[n, c, 1] > 0: - dists[c, n] = torch.dist( - preds[n, c, :], target[n, c, :]) / normalize[n] - else: - dists[c, n] = -1 - return dists - - -def dist_acc(dists, thr=0.5): - ''' Return percentage below threshold while ignoring values with a -1 ''' - if dists.ne(-1).sum() > 0: - return dists.le(thr).eq(dists.ne(-1)).float().sum() * 1.0 / dists.ne(-1).float().sum() - else: - return - 1 - - -def postprocess(output): - p = getPreds(output) - - for i in range(p.size(0)): - for j in range(p.size(1)): - hm = output[i][j] - pX, pY = int(round(p[i][j][0])), int(round(p[i][j][1])) - if 0 < pX < opt.outputResW - 1 and 0 < pY < opt.outputResH - 1: - diff = torch.Tensor( - (hm[pY][pX + 1] - hm[pY][pX - 1], hm[pY + 1][pX] - hm[pY - 1][pX])) - p[i][j] += diff.sign() * 0.25 - p -= 0.5 - - return p - - -def getPrediction(hms, pt1, pt2, inpH, inpW, resH, resW): - assert hms.dim() == 4, 'Score maps should be 4-dim' - maxval, idx = torch.max(hms.view(hms.size(0), hms.size(1), -1), 2) - - maxval = maxval.view(hms.size(0), hms.size(1), 1) - idx = idx.view(hms.size(0), hms.size(1), 1) + 1 - - preds = idx.repeat(1, 1, 2).float() - - preds[:, :, 0] = (preds[:, :, 0] - 1) % hms.size(3) - preds[:, :, 1] = torch.floor((preds[:, :, 1] - 1) / hms.size(3)) - - pred_mask = maxval.gt(0).repeat(1, 1, 2).float() - preds *= 
pred_mask - - # Very simple post-processing step to improve performance at tight PCK thresholds - for i in range(preds.size(0)): - for j in range(preds.size(1)): - hm = hms[i][j] - pX, pY = int(round(float(preds[i][j][0]))), int( - round(float(preds[i][j][1]))) - if 1 < pX < opt.outputResW - 2 and 1 < pY < opt.outputResH - 2: - diff = torch.Tensor( - (hm[pY][pX + 1] - hm[pY][pX - 1], hm[pY + 1][pX] - hm[pY - 1][pX])) - diff = diff.sign() * 0.25 - diff[1] = diff[1] * inpH / inpW - preds[i][j] += diff - - preds_tf = torch.zeros(preds.size()) - for i in range(hms.size(0)): # Number of samples - for j in range(hms.size(1)): # Number of output heatmaps for one sample - preds_tf[i][j] = transformBoxInvert( - preds[i][j], pt1[i], pt2[i], inpH, inpW, resH, resW) - - return preds, preds_tf, maxval - - -def getmap(JsonDir='./val/alphapose-results.json'): - ListDir = '../coco-minival500_images.txt' - - annType = ['segm', 'bbox', 'keypoints'] - annType = annType[2] # specify type here - prefix = 'person_keypoints' if annType == 'keypoints' else 'instances' - print('Running evaluation for *%s* results.' % (annType)) - - # load Ground_truth - dataType = 'val2014' - annFile = '../%s_%s.json' % (prefix, dataType) - cocoGt = COCO(annFile) - - # load Answer(json) - resFile = JsonDir - cocoDt = cocoGt.loadRes(resFile) - - # load List - fin = open(ListDir, 'r') - imgIds_str = fin.readline() - if imgIds_str[-1] == '\n': - imgIds_str = imgIds_str[:-1] - imgIds_str = imgIds_str.split(',') - - imgIds = [] - for x in imgIds_str: - imgIds.append(int(x)) - - # running evaluation - iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True) - t = np.where(0.5 == iouThrs)[0] - - cocoEval = COCOeval(cocoGt, cocoDt, annType) - cocoEval.params.imgIds = imgIds - cocoEval.evaluate() - cocoEval.accumulate() - - score = cocoEval.eval['precision'][:, :, :, 0, :] - mApAll, mAp5 = 0.01, 0.01 - if len(score[score > -1]) != 0: - score2 = score[t] - mApAll = np.mean(score[score > -1]) - mAp5 = np.mean(score2[score2 > -1]) - cocoEval.summarize() - return mApAll, mAp5 diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ServerX/PorcoDiaz/lib/uvr5_pack/lib_v5/dataset.py b/spaces/ServerX/PorcoDiaz/lib/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/lib/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from . 
import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = 
np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/Shakeb100/GroomingGenie_AI/clipseg/score.py b/spaces/Shakeb100/GroomingGenie_AI/clipseg/score.py deleted file mode 100644 index 8db8915b109953931fa2a330a7731db4a51b44f8..0000000000000000000000000000000000000000 --- a/spaces/Shakeb100/GroomingGenie_AI/clipseg/score.py +++ /dev/null @@ -1,453 +0,0 @@ -from torch.functional import Tensor - -import torch -import inspect -import json -import yaml -import time -import sys - -from general_utils import log - -import numpy as np -from os.path import expanduser, join, isfile, realpath - -from torch.utils.data import DataLoader - -from metrics import FixedIntervalMetrics - -from general_utils import load_model, log, score_config_from_cli_args, AttributeDict, get_attribute, filter_args - - -DATASET_CACHE = dict() - -def load_model(checkpoint_id, weights_file=None, strict=True, model_args='from_config', with_config=False, ignore_weights=False): - - config = json.load(open(join('logs', checkpoint_id, 'config.json'))) - - if model_args != 'from_config' and type(model_args) != dict: - raise ValueError('model_args must either be "from_config" or a dictionary of values') - - model_cls = get_attribute(config['model']) - - # load model - if model_args == 'from_config': - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - - model = model_cls(**model_args) - - if weights_file is None: - weights_file = realpath(join('logs', checkpoint_id, 'weights.pth')) - else: - weights_file = realpath(join('logs', checkpoint_id, weights_file)) - - if isfile(weights_file) and not ignore_weights: - weights = torch.load(weights_file) - for _, w in weights.items(): - assert not torch.any(torch.isnan(w)), 'weights contain NaNs' - model.load_state_dict(weights, strict=strict) - else: - if not ignore_weights: - raise FileNotFoundError(f'model checkpoint {weights_file} was not found') - - if with_config: - return model, config - - return model - - -def compute_shift2(model, datasets, seed=123, repetitions=1): - """ computes shift """ - - model.eval() - model.cuda() - - import random - random.seed(seed) - - preds, gts = [], [] - for i_dataset, dataset in enumerate(datasets): - - loader = 
DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - max_iterations = int(repetitions * len(dataset.dataset.data_list)) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if v is not None else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if v is not None else v for v in data_y] - - pred, = model(data_x[0], data_x[1], data_x[2]) - preds += [pred.detach()] - gts += [data_y] - - i += 1 - if max_iterations and i >= max_iterations: - break - - from metrics import FixedIntervalMetrics - n_values = 51 - thresholds = np.linspace(0, 1, n_values)[1:-1] - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, n_values=n_values) - - for p, y in zip(preds, gts): - metric.add(p.unsqueeze(1), y) - - best_idx = np.argmax(metric.value()['fgiou_scores']) - best_thresh = thresholds[best_idx] - - return best_thresh - - -def get_cached_pascal_pfe(split, config): - from datasets.pfe_dataset import PFEPascalWrapper - try: - dataset = DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] - except KeyError: - dataset = PFEPascalWrapper(mode='val', split=split, mask=config.mask, image_size=config.image_size, label_support=config.label_support) - DATASET_CACHE[(split, config.image_size, config.label_support, config.mask)] = dataset - return dataset - - - - -def main(): - config, train_checkpoint_id = score_config_from_cli_args() - - metrics = score(config, train_checkpoint_id, None) - - for dataset in metrics.keys(): - for k in metrics[dataset]: - if type(metrics[dataset][k]) in {float, int}: - print(dataset, f'{k:<16} {metrics[dataset][k]:.3f}') - - -def score(config, train_checkpoint_id, train_config): - - config = AttributeDict(config) - - print(config) - - # use training dataset and loss - train_config = AttributeDict(json.load(open(f'logs/{train_checkpoint_id}/config.json'))) - - cp_str = f'_{config.iteration_cp}' if config.iteration_cp is not None else '' - - - model_cls = get_attribute(train_config['model']) - - _, model_args, _ = filter_args(train_config, inspect.signature(model_cls).parameters) - - model_args = {**model_args, **{k: config[k] for k in ['process_cond', 'fix_shift'] if k in config}} - - strict_models = {'ConditionBase4', 'PFENetWrapper'} - model = load_model(train_checkpoint_id, strict=model_cls.__name__ in strict_models, model_args=model_args, - weights_file=f'weights{cp_str}.pth', ) - - - model.eval() - model.cuda() - - metric_args = dict() - - if 'threshold' in config: - if config.metric.split('.')[-1] == 'SkLearnMetrics': - metric_args['threshold'] = config.threshold - - if 'resize_to' in config: - metric_args['resize_to'] = config.resize_to - - if 'sigmoid' in config: - metric_args['sigmoid'] = config.sigmoid - - if 'custom_threshold' in config: - metric_args['custom_threshold'] = config.custom_threshold - - if config.test_dataset == 'pascal': - - loss_fn = get_attribute(train_config.loss) - # assume that if no split is specified in train_config, test on all splits, - - if 'splits' in config: - splits = config.splits - else: - if 'split' in train_config and type(train_config.split) == int: - # unless train_config has a split set, in that case assume train mode in training - splits = [train_config.split] - assert train_config.mode == 'train' - else: - splits = [0,1,2,3] - - log.info('Test on these splits', splits) - - scores = dict() - for split in splits: - - shift = config.shift if 'shift' in config else 0 - - # automatic shift 
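# Aside, schematic only: the 'auto' shift below is chosen by sweeping a fixed threshold grid over
# the held-out splits and keeping the value with the best foreground IoU, conceptually like this
# (the scores here are placeholders for metric.value()['fgiou_scores'] in compute_shift2):
import numpy as np
thresholds = np.linspace(0, 1, 51)[1:-1]                     # same grid as compute_shift2
fgiou_scores = np.random.rand(len(thresholds))               # placeholder scores
best_thresh = thresholds[np.argmax(fgiou_scores)]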
- if shift == 'auto': - shift_compute_t = time.time() - shift = compute_shift2(model, [get_cached_pascal_pfe(s, config) for s in range(4) if s != split], repetitions=config.compute_shift_fac) - log.info(f'Best threshold is {shift}, computed on splits: {[s for s in range(4) if s != split]}, took {time.time() - shift_compute_t:.1f}s') - - dataset = get_cached_pascal_pfe(split, config) - - eval_start_t = time.time() - - loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False, drop_last=False) - - assert config.batch_size is None or config.batch_size == 1, 'When PFE Dataset is used, batch size must be 1' - - metric = FixedIntervalMetrics(resize_pred=True, sigmoid=True, custom_threshold=shift, **metric_args) - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - # loss = loss_fn(pred, data_y[0]) - metric.add(pred.unsqueeze(1) + shift, data_y) - - # losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - #scores[split] = {m: s for m, s in zip(metric.names(), metric.value())} - - log.info(f'Dataset length: {len(dataset)}, took {time.time() - eval_start_t:.1f}s to evaluate.') - - print(metric.value()['mean_iou_scores']) - - scores[split] = metric.scores() - - log.info(f'Completed split {split}') - - key_prefix = config['name'] if 'name' in config else 'pas' - - all_keys = set.intersection(*[set(v.keys()) for v in scores.values()]) - - valid_keys = [k for k in all_keys if all(v[k] is not None and isinstance(v[k], (int, float, np.float)) for v in scores.values())] - - return {key_prefix: {k: np.mean([s[k] for s in scores.values()]) for k in valid_keys}} - - - if config.test_dataset == 'coco': - from datasets.coco_wrapper import COCOWrapper - - coco_dataset = COCOWrapper('test', fold=train_config.fold, image_size=train_config.image_size, mask=config.mask, - with_class_label=True) - - log.info('Dataset length', len(coco_dataset)) - loader = DataLoader(coco_dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - if config.mask == 'separate': # for old CondBase model - pred, = model(data_x[0], data_x[1], data_x[2]) - else: - # assert config.mask in {'text', 'highlight'} - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'coco' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - - if config.test_dataset == 'phrasecut': - from datasets.phrasecut import PhraseCut - - only_visual = 
config.only_visual is not None and config.only_visual - with_visual = config.with_visual is not None and config.with_visual - - dataset = PhraseCut('test', - image_size=train_config.image_size, - mask=config.mask, - with_visual=with_visual, only_visual=only_visual, aug_crop=False, - aug_color=False) - - loader = DataLoader(dataset, batch_size=config.batch_size, num_workers=2, shuffle=False, drop_last=False) - metric = get_attribute(config.metric)(resize_pred=True, **metric_args) - - shift = config.shift if 'shift' in config else 0 - - - with torch.no_grad(): - - i, losses = 0, [] - for i_all, (data_x, data_y) in enumerate(loader): - data_x = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_x] - data_y = [v.cuda(non_blocking=True) if isinstance(v, torch.Tensor) else v for v in data_y] - - pred, _, _, _ = model(data_x[0], data_x[1], return_features=True) - metric.add([pred + shift], data_y) - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - key_prefix = config['name'] if 'name' in config else 'phrasecut' - return {key_prefix: metric.scores()} - #return {key_prefix: {k: v for k, v in zip(metric.names(), metric.value())}} - - if config.test_dataset == 'pascal_zs': - from third_party.JoEm.model.metric import Evaluator - from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC - from datasets.pascal_zeroshot import PascalZeroShot, PASCAL_VOC_CLASSES_ZS - - from models.clipseg import CLIPSegMultiLabel - - n_unseen = train_config.remove_classes[1] - - pz = PascalZeroShot('val', n_unseen, image_size=352) - m = CLIPSegMultiLabel(model=train_config.name).cuda() - m.eval(); - - print(len(pz), n_unseen) - print('training removed', [c for class_set in PASCAL_VOC_CLASSES_ZS[:n_unseen // 2] for c in class_set]) - - print('unseen', [VOC[i] for i in get_unseen_idx(n_unseen)]) - print('seen', [VOC[i] for i in get_seen_idx(n_unseen)]) - - loader = DataLoader(pz, batch_size=8) - evaluator = Evaluator(21, get_unseen_idx(n_unseen), get_seen_idx(n_unseen)) - - for i, (data_x, data_y) in enumerate(loader): - pred = m(data_x[0].cuda()) - evaluator.add_batch(data_y[0].numpy(), pred.argmax(1).cpu().detach().numpy()) - - if config.max_iter is not None and i > config.max_iter: - break - - scores = evaluator.Mean_Intersection_over_Union() - key_prefix = config['name'] if 'name' in config else 'pas_zs' - - return {key_prefix: {k: scores[k] for k in ['seen', 'unseen', 'harmonic', 'overall']}} - - elif config.test_dataset in {'same_as_training', 'affordance'}: - loss_fn = get_attribute(train_config.loss) - - metric_cls = get_attribute(config.metric) - metric = metric_cls(**metric_args) - - if config.test_dataset == 'same_as_training': - dataset_cls = get_attribute(train_config.dataset) - elif config.test_dataset == 'affordance': - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_Affordance') - dataset_name = 'aff' - else: - dataset_cls = get_attribute('datasets.lvis_oneshot3.LVIS_OneShot') - dataset_name = 'lvis' - - _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters) - - dataset_args['image_size'] = train_config.image_size # explicitly use training image size for evaluation - - if model.__class__.__name__ == 'PFENetWrapper': - dataset_args['image_size'] = config.image_size - - log.info('init dataset', str(dataset_cls)) - dataset = dataset_cls(**dataset_args) - - log.info(f'Score on {model.__class__.__name__} on {dataset_cls.__name__}') - - data_loader = torch.utils.data.DataLoader(dataset, 
batch_size=config.batch_size, shuffle=config.shuffle) - - # explicitly set prompts - if config.prompt == 'plain': - model.prompt_list = ['{}'] - elif config.prompt == 'fixed': - model.prompt_list = ['a photo of a {}.'] - elif config.prompt == 'shuffle': - model.prompt_list = ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.'] - elif config.prompt == 'shuffle_clip': - from models.clip_prompts import imagenet_templates - model.prompt_list = imagenet_templates - - config.assume_no_unused_keys(exceptions=['max_iterations']) - - t_start = time.time() - - with torch.no_grad(): # TODO: switch to inference_mode (torch 1.9) - i, losses = 0, [] - for data_x, data_y in data_loader: - - data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x] - data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y] - - if model.__class__.__name__ in {'ConditionBase4', 'PFENetWrapper'}: - pred, = model(data_x[0], data_x[1], data_x[2]) - visual_q = None - else: - pred, visual_q, _, _ = model(data_x[0], data_x[1], return_features=True) - - loss = loss_fn(pred, data_y[0]) - - metric.add([pred], data_y) - - losses += [float(loss)] - - i += 1 - if config.max_iterations and i >= config.max_iterations: - break - - # scores = {m: s for m, s in zip(metric.names(), metric.value())} - scores = metric.scores() - - keys = set(scores.keys()) - if dataset.negative_prob > 0 and 'mIoU' in keys: - keys.remove('mIoU') - - name_mask = dataset.mask.replace('text_label', 'txt')[:3] - name_neg = '' if dataset.negative_prob == 0 else '_' + str(dataset.negative_prob) - - score_name = config.name if 'name' in config else f'{dataset_name}_{name_mask}{name_neg}' - - scores = {score_name: {k: v for k,v in scores.items() if k in keys}} - scores[score_name].update({'test_loss': np.mean(losses)}) - - log.info(f'Evaluation took {time.time() - t_start:.1f}s') - - return scores - else: - raise ValueError('invalid test dataset') - - - - - - - - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/models/__init__.py b/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/models/__init__.py deleted file mode 100644 index 4803ba6b2a0afc8022e756ae5b3f4c7403c3c1bd..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .melgan import * # NOQA -from .parallel_wavegan import * # NOQA diff --git a/spaces/SkidPC/SweetLuna-Aurora/README.md b/spaces/SkidPC/SweetLuna-Aurora/README.md deleted file mode 100644 index 51b88cdbc60f763bb56133f6fb4126a4f81f6225..0000000000000000000000000000000000000000 --- a/spaces/SkidPC/SweetLuna-Aurora/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SweetLuna Aurora -emoji: 💻 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/StanfordAIMI/radiology_report_generation/app.py b/spaces/StanfordAIMI/radiology_report_generation/app.py deleted file mode 100644 index 476459985fd3dbf473ba1e0739e87546e429482b..0000000000000000000000000000000000000000 --- a/spaces/StanfordAIMI/radiology_report_generation/app.py +++ /dev/null @@ -1,177 +0,0 @@ -import os -import torch -import gradio as gr -from vilmedic import AutoModel -from radgraph import RadGraph -import glob - -model, processor = AutoModel.from_pretrained("rrg/baseline-mimic") -device 
= torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") -model = model.to(device) -radgraph = RadGraph(cuda=-1) - -refs = { - '943486a3-b3fa9ff7-50f5a769-7a62fcbb-f39b6da4.jpg': 'Right upper lobe pneumonia or mass. However, given right hilar fullness, a mass resulting in post-obstructive pneumonia is within the differential. Recommend chest CT with intravenous contrast for further assessment. Dr. ___ communicated the above results to Dr. ___ at 8:55 am on ___ by telephone.', - '6ad819bb-bae74eb9-7b663e90-b8deabd7-57f8054a.jpg': 'Mild pulmonary edema with superimposed left upper lung consolidation, potentially more confluent edema versus superimposed infection.', - '54affd39-8bf24209-232bac8a-df6c277a-398ee8a5.jpg': '1. New mild pulmonary edema with persistent small bilateral pleural effusions. 2. Severe cardiomegaly is likely accentuated due to low lung volumes and patient positioning.', - '57a3c797-7272b246-fa226777-e4c7d84c-91ec2e96.jpg': 'Tiny pleural effusions, new. Otherwise unremarkable.', - '4c3fdd2f-79be0bc9-f5a0ed41-3c9dc58e-75a6d19a.jpg': 'No evidence of pneumonia. No acute cardiopulmonary process.', - '66ee3842-a927ac25-a5df697e-f1f36b1f-201b2172.jpg': 'In comparison with the study of ___, the increased opacification at the right base has essentially cleared with better inspiration. Cardiac silhouette remains at the upper limits of normal in size and there is again tortuosity of the aorta without vascular congestion or pleural effusion. Biapical changes, especially on the right, are stable.', - '9b1a8a51-2b8e4a04-1719059d-aa6bc888-7ace612b.jpg': 'In comparison to previous radiograph of 1 day earlier, support and monitoring devices are unchanged in position. Pulmonary vascular congestion has improved. Airspace opacity at the left lung base has worsened, and additional patchy opacities have developed at the right lung base. Findings could potentially be due to aspiration or evolving aspiration pneumonia in the appropriate clinical setting. Exam is otherwise remarkable for probable small bilateral pleural effusions.', - '81bca127-0c416084-67f8033c-ecb26476-6d1ecf60.jpg': 'New moderate left pleural effusion with adjacent atelectasis in the left lung base.', - '3bea0373-0d10dd77-1cac5b90-651be924-d343b184.jpg': 'No significant change in right middle and lower lobe pneumonia. Small increase in left pleural effusion.', - '4b00acf0-a59615c4-6d607a0c-e6ee01b7-177c175f.jpg': 'No previous images. The cardiac silhouette is within upper limits of normal in size and there is no evidence of vascular congestion, pleural effusion, or acute focal', - 'a664e3c4-97f37598-e008ddb5-674d8b24-8a49114f.jpg': 'As compared to the previous radiograph, the lung volumes have slightly decreased. There is minimal fluid overload in both the vascular and interstitial compartment. Normal size of the cardiac silhouette. Moderate tortuosity of the thoracic aorta. No pleural effusions. No pneumonia.', - '5a907c47-9d944216-c8477dd2-95d08914-13239bec.jpg': 'As compared to ___, the patient has received a new nasogastric tube. The tube is located in the middle parts of the stomach. The previous overinflation of the stomach is no longer present. The lung volumes remain low. Moderate cardiomegaly. Moderate bilateral areas of atelectasis and mild to moderate right pleural effusion.', - '828dd9de-ee245eb4-e8513715-9218b1f9-081a1742.jpg': 'ET tube tip is 4.7 cm above the carinal. NG tube tip is 2 proximal, at the gastroesophageal junction and should be advanced. 
Right internal jugular line tip is in the right atrium. The cardiomediastinal silhouette is substantially increased potentially due to very low lung volumes with bibasal consolidations. Nodular opacity in the right lower lobe is not well seen on the previous examination, might represent summation of shadows and should be reassessed on subsequent imaging. The findings might in fact represent a combination of low lung volumes and mild pulmonary edema.', - 'dcdc4bd9-4301b111-2a65a814-ee8e7bc5-7f0b9a5a.jpg': 'No acute cardiopulmonary abnormality.', - '4c028244-47499ecc-3fab489b-15ec1e76-47055a4d.jpg': 'In comparison with the study of scratch then no previous images. Low lung volumes accentuate the enlargement of the cardiac silhouette. Indistinctness of engorged pulmonary vessels most likely reflects elevation of pulmonary venous pressure. No definite acute focal pneumonia, though the retrocardiac area is difficult to assess in the absence of a lateral view.', - '4a72d28f-0e2f3e12-475c7fc7-42e5a7e5-297c09cd.jpg': 'In comparison with study of ___, there has been placement of a single lead pacer that extends to the apex of the right ventricle. No evidence of post procedure pneumothorax. Cardiac silhouette is at the upper limits of normal or mildly enlarged. No evidence of appreciable vascular congestion, pleural effusion, or acute focal pneumonia.', - 'd71a4931-5c0832b8-ae60fd56-1e3658d3-a392959a.jpg': 'Progression of bilateral opacities, now more confluent, particularly on the left. suggesting progression of alveolar edema. In the appropriate clinical setting, underlying infectious infiltrate would be difficult to exclude.', - 'f9c51c13-4a226906-c3daea10-5b1e4027-ae2ed354.jpg': 'In comparison with the study of ___, the monitoring and support devices are essentially unchanged. The patient has taken a somewhat better inspiration. Nevertheless, there is enlargement of the cardiac silhouette with bibasilar opacifications.', - '125cdd3f-57f5c50a-e59e5476-64c27621-f211c385.jpg': 'As compared to ___ radiograph, the patient has been extubated. Cardiomediastinal contours are stable, and pulmonary vascular congestion persists. Interval improved aeration in the left mid and lower lung but slight worsening of right juxta hilar and basilar opacities.', - '2288b20e-56691344-f1f5825a-d8f8976c-662478fc.jpg': '1. Stable small to moderate bilateral pleural effusions. 2. 
Stable mild cardiomegaly and pulmonary artery enlargement.'} - - -def highlight_radgraph_entities(word, entity): - if 'DA' in entity: - color = ((254, 226, 226), (220, 38, 38)) - else: - color = ((219, 234, 254), (37, 99, 235)) - - return '' + word + \ - ' ' + entity + ' ' - - -def get_token_from_strings(s): - if not isinstance(s, str): - return None, None - if s == "": - return None, None - - if ',' in s: - s = s.split(',') - if not isinstance(s, list): - s = [s] - s = [w.strip() for w in s] - return [processor.tokenizer(w, add_special_tokens=False).input_ids for w in s], s - - -def highlight_word(word, entity): - return '' + word + \ - ' ' + entity + ' ' - - -def run(image, beam_size, num_return_sequences, include_words, exclude_words, do_radgraph): - if image is None: - return {}, 'Please select an image' # , "" - if num_return_sequences > beam_size: - return {}, '"Beam size" must be greater or equal than "Number of generated reports"' # , "" - - try: - include_words_ids, include_words = get_token_from_strings(include_words) - exclude_words_ids, exclude_words = get_token_from_strings(exclude_words) - - if include_words_ids is not None and [3] in include_words_ids: - return {}, '"' + include_words[ - include_words_ids.index([3])] + '" is not in the vocabulary"' # , "" - - with torch.no_grad(): - batch = processor.inference(image=[ - [image] - ]) - batch_size = 1 - encoder_output, encoder_attention_mask = model.encode(**batch) - expanded_idx = torch.arange(batch_size).view(-1, 1).repeat(1, beam_size).view(-1) - input_ids = torch.ones((len(batch["images"]), 1), dtype=torch.long) - if torch.cuda.is_available(): - expanded_idx = expanded_idx.cuda() - input_ids = input_ids.cuda() - - # Using huggingface generate method - hyps = model.dec.generate( - input_ids=input_ids * model.dec.config.bos_token_id, - encoder_hidden_states=encoder_output.index_select(0, expanded_idx), - encoder_attention_mask=encoder_attention_mask.index_select(0, expanded_idx), - num_return_sequences=num_return_sequences, - max_length=processor.tokenizer_max_len, - num_beams=beam_size, - bad_words_ids=exclude_words_ids, - force_words_ids=include_words_ids, - ) - - # Decode - hyps = [processor.tokenizer.decode(h, skip_special_tokens=True, clean_up_tokenization_spaces=False) for h in - hyps] - - # RadGraph - if do_radgraph: - radgraph_annots = [radgraph(hyps=[h], refs=[h])[-1][0]["entities"] for h in hyps] - # Find entites : Radgraph - new_hyp_strs = [] - for hyp_str, radgraph_annot in zip(hyps, radgraph_annots): - values = radgraph_annot.values() - new_hyp_str = hyp_str.split() - for v in values: - new_hyp_str[v["start_ix"]] = highlight_radgraph_entities(v["tokens"], v["label"]) - new_hyp_strs.append(' '.join(new_hyp_str)) - else: - new_hyp_strs = hyps - - # Find user entites - if include_words is not None: - for w in include_words: - new_hyp_strs = [h.replace(w, highlight_word(w, "user")) for h in new_hyp_strs] - - # Formating - new_hyp_strs = ["

    Hypothesis {}:
    {}

    " \ - "".format(i + 1, h) for i, h in enumerate(new_hyp_strs)] + ( - ["

    Anat: anatomy
    " - "OBS: observation
    " - "DA: definitely absent
    " - "DP: definitely present
    "] if do_radgraph else [""]) - - # Params - out_json = { - "beam size": beam_size, "number of generated reports": num_return_sequences, - "included words": include_words, "excluded words": exclude_words, "show radgraph": do_radgraph - } - - return out_json, str(''.join(new_hyp_strs)) # , str(refs[os.path.basename(image)]) - - except Exception as e: - print(e) - return {}, "An error occured, try again..." - - -examples = [[i, 8, 1, '', '', True] for i in glob.glob("./images/*")] -demo = gr.Interface(fn=run, - inputs=[gr.Image(type="filepath", - label="Image to run", interactive=True, tool="editor"), - gr.Slider(minimum=2, maximum=16, step=1, value=8, label="Beam size", - optional=False), - gr.Slider(1, 3, step=1, value=1, label="Number of generated reports", - optional=False), - gr.Textbox(placeholder="word1, word2", - label="Words to include (comma separated)", optional=True), - gr.Textbox(placeholder="word1, word2", - label="Words to exclude (comma separated)", optional=True), - gr.Checkbox(value=True, label="Show RadGraph entities") - ], - outputs=["json", "html"], - examples=examples, - cache_examples=False, - allow_flagging="never", - title="Automatic Radiology Report Generation", - description="This demo gives you possibility to select a chest x-ray and ask a trained A.I. " - "to automatically generate the radiology report. Feel free to play with the parameters, or to force words in the generation!" - "

    Trained with ViLMedic by JB ()

    " - ) -if __name__ == "__main__": - demo.launch() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/url/url_3d/url_3d.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/url/url_3d/url_3d.py deleted file mode 100644 index c55c0f954e768698b5fcc1e6bc13224a9d31ddb7..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/url/url_3d/url_3d.py +++ /dev/null @@ -1,52 +0,0 @@ -from abc import ABC -from typing import TYPE_CHECKING, Any, Dict, Optional, TypeVar, Union - -from docarray.typing.proto_register import _register_proto -from docarray.typing.url.any_url import AnyUrl -from docarray.utils._internal.misc import import_library - -if TYPE_CHECKING: - import trimesh - -T = TypeVar('T', bound='Url3D') - - -@_register_proto(proto_type_name='url3d') -class Url3D(AnyUrl, ABC): - """ - URL to a file containing 3D mesh or point cloud information. - Can be remote (web) URL, or a local file path. - """ - - def _load_trimesh_instance( - self: T, - force: Optional[str] = None, - skip_materials: bool = True, - trimesh_args: Optional[Dict[str, Any]] = None, - ) -> Union['trimesh.Trimesh', 'trimesh.Scene']: - """ - Load the data from the url into a trimesh.Mesh or trimesh.Scene object. - - :param force: str or None. For 'mesh' try to coerce scenes into a single mesh. - For 'scene' try to coerce everything into a scene. - :param skip_materials: Skip materials if True, else skip. - :param trimesh_args: dictionary of additional arguments for `trimesh.load()` - or `trimesh.load_remote()`. - :return: trimesh.Mesh or trimesh.Scene object - """ - import urllib.parse - - if TYPE_CHECKING: - import trimesh - else: - trimesh = import_library('trimesh', raise_error=True) - - if not trimesh_args: - trimesh_args = {} - - scheme = urllib.parse.urlparse(self).scheme - loader = trimesh.load_remote if scheme in ['http', 'https'] else trimesh.load - - mesh = loader(self, force=force, skip_materials=skip_materials, **trimesh_args) - - return mesh diff --git a/spaces/Superlang/ImageProcessor/annotator/lineart/__init__.py b/spaces/Superlang/ImageProcessor/annotator/lineart/__init__.py deleted file mode 100644 index 87f45913ee46af9888888db3424c08ea72d42789..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/lineart/__init__.py +++ /dev/null @@ -1,129 +0,0 @@ -import os -import torch -import numpy as np - -import torch.nn as nn -from einops import rearrange -from annotator.base_annotator import BaseProcessor -norm_layer = nn.InstanceNorm2d - - -class ResidualBlock(nn.Module): - def __init__(self, in_features): - super(ResidualBlock, self).__init__() - - conv_block = [nn.ReflectionPad2d(1), - nn.Conv2d(in_features, in_features, 3), - norm_layer(in_features), - nn.ReLU(inplace=True), - nn.ReflectionPad2d(1), - nn.Conv2d(in_features, in_features, 3), - norm_layer(in_features) - ] - - self.conv_block = nn.Sequential(*conv_block) - - def forward(self, x): - return x + self.conv_block(x) - - -class Generator(nn.Module): - def __init__(self, input_nc, output_nc, n_residual_blocks=9, sigmoid=True): - super(Generator, self).__init__() - - # Initial convolution block - model0 = [nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, 64, 7), - norm_layer(64), - nn.ReLU(inplace=True)] - self.model0 = nn.Sequential(*model0) - - # Downsampling - model1 = [] - in_features = 64 - out_features = in_features * 2 - for _ in range(2): - model1 += [nn.Conv2d(in_features, out_features, 3, stride=2, 
padding=1), - norm_layer(out_features), - nn.ReLU(inplace=True)] - in_features = out_features - out_features = in_features * 2 - self.model1 = nn.Sequential(*model1) - - model2 = [] - # Residual blocks - for _ in range(n_residual_blocks): - model2 += [ResidualBlock(in_features)] - self.model2 = nn.Sequential(*model2) - - # Upsampling - model3 = [] - out_features = in_features // 2 - for _ in range(2): - model3 += [nn.ConvTranspose2d(in_features, out_features, 3, stride=2, padding=1, output_padding=1), - norm_layer(out_features), - nn.ReLU(inplace=True)] - in_features = out_features - out_features = in_features // 2 - self.model3 = nn.Sequential(*model3) - - # Output layer - model4 = [nn.ReflectionPad2d(3), - nn.Conv2d(64, output_nc, 7)] - if sigmoid: - model4 += [nn.Sigmoid()] - - self.model4 = nn.Sequential(*model4) - - def forward(self, x, cond=None): - out = self.model0(x) - out = self.model1(out) - out = self.model2(out) - out = self.model3(out) - out = self.model4(out) - - return out - - -class LineArtDetector(BaseProcessor): - model_default = 'sk_model.pth' - model_coarse = 'sk_model2.pth' - - def __init__(self, model_name=model_default, **kwargs): - super().__init__(**kwargs) - self.model = None - self.model_dir = os.path.join(self.models_path, "lineart") - self.model_name = model_name - - def load_model(self, name): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/" + name - model_path = os.path.join(self.model_dir, name) - if not os.path.exists(model_path): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=self.model_dir) - model = Generator(3, 1, 3) - model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - model.eval() - self.model = model.to(self.device) - - def unload_model(self): - if self.model is not None: - self.model.cpu() - - def __call__(self, input_image): - if self.model is None: - self.load_model(self.model_name) - self.model.to(self.device) - - assert input_image.ndim == 3 - image = input_image - with torch.no_grad(): - image = torch.from_numpy(image).float().to(self.device) - image = image / 255.0 - image = rearrange(image, 'h w c -> 1 c h w') - line = self.model(image)[0][0] - - line = line.cpu().numpy() - line = (line * 255.0).clip(0, 255).astype(np.uint8) - - return line diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/register_coco.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/register_coco.py deleted file mode 100644 index e564438d5bf016bcdbb65b4bbdc215d79f579f8a..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/register_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
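A short usage sketch for the LineArtDetector wrapper above; the constructor forwards **kwargs to BaseProcessor, so whether extra arguments such as a models path or device are required depends on that base class and is an assumption here:

import cv2

detector = LineArtDetector()                                 # lazily downloads sk_model.pth on first call
image = cv2.imread("input.png")                              # hypothetical path; H x W x 3 uint8 expected
line_map = detector(image)                                   # H x W uint8 line-art map in [0, 255]
cv2.imwrite("lineart.png", line_map)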
-from .coco import register_coco_instances # noqa -from .coco_panoptic import register_coco_panoptic_separated # noqa diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/apc_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/apc_head.py deleted file mode 100644 index c7038bdbe0edf2a1f184b6899486d2d190dda076..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/apc_head.py +++ /dev/null @@ -1,158 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ACM(nn.Module): - """Adaptive Context Module used in APCNet. - - Args: - pool_scale (int): Pooling scale used in Adaptive Context - Module to extract region features. - fusion (bool): Add one conv to fuse residual feature. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict | None): Config of conv layers. - norm_cfg (dict | None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, pool_scale, fusion, in_channels, channels, conv_cfg, - norm_cfg, act_cfg): - super(ACM, self).__init__() - self.pool_scale = pool_scale - self.fusion = fusion - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.pooled_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.input_redu_conv = ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.global_info = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.gla = nn.Conv2d(self.channels, self.pool_scale**2, 1, 1, 0) - - self.residual_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - if self.fusion: - self.fusion_conv = ConvModule( - self.channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, x): - """Forward function.""" - pooled_x = F.adaptive_avg_pool2d(x, self.pool_scale) - # [batch_size, channels, h, w] - x = self.input_redu_conv(x) - # [batch_size, channels, pool_scale, pool_scale] - pooled_x = self.pooled_redu_conv(pooled_x) - batch_size = x.size(0) - # [batch_size, pool_scale * pool_scale, channels] - pooled_x = pooled_x.view(batch_size, self.channels, - -1).permute(0, 2, 1).contiguous() - # [batch_size, h * w, pool_scale * pool_scale] - affinity_matrix = self.gla(x + resize( - self.global_info(F.adaptive_avg_pool2d(x, 1)), size=x.shape[2:]) - ).permute(0, 2, 3, 1).reshape( - batch_size, -1, self.pool_scale**2) - affinity_matrix = F.sigmoid(affinity_matrix) - # [batch_size, h * w, channels] - z_out = torch.matmul(affinity_matrix, pooled_x) - # [batch_size, channels, h * w] - z_out = z_out.permute(0, 2, 1).contiguous() - # [batch_size, channels, h, w] - z_out = z_out.view(batch_size, self.channels, x.size(2), x.size(3)) - z_out = self.residual_conv(z_out) - z_out = F.relu(z_out + x) - if self.fusion: - z_out = self.fusion_conv(z_out) - - 
return z_out - - -@HEADS.register_module() -class APCHead(BaseDecodeHead): - """Adaptive Pyramid Context Network for Semantic Segmentation. - - This head is the implementation of - `APCNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Adaptive Context - Module. Default: (1, 2, 3, 6). - fusion (bool): Add one conv to fuse residual feature. - """ - - def __init__(self, pool_scales=(1, 2, 3, 6), fusion=True, **kwargs): - super(APCHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.fusion = fusion - acm_modules = [] - for pool_scale in self.pool_scales: - acm_modules.append( - ACM(pool_scale, - self.fusion, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.acm_modules = nn.ModuleList(acm_modules) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - acm_outs = [x] - for acm_module in self.acm_modules: - acm_outs.append(acm_module(x)) - acm_outs = torch.cat(acm_outs, dim=1) - output = self.bottleneck(acm_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/TNR-5/semantic-image-search.img/src/app/app.js b/spaces/TNR-5/semantic-image-search.img/src/app/app.js deleted file mode 100644 index 2fbd623f4d12cfe003682098bfc5266e3081cf81..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/semantic-image-search.img/src/app/app.js +++ /dev/null @@ -1,51 +0,0 @@ -import { AutoTokenizer, CLIPTextModelWithProjection } from "@xenova/transformers"; -import { createClient } from '@supabase/supabase-js' - -// Use the Singleton pattern to enable lazy construction of the pipeline. -// NOTE: We wrap the class in a function to prevent code duplication (see below). -const S = () => class ApplicationSingleton { - static model_id = 'Xenova/clip-vit-base-patch16'; - static tokenizer = null; - static text_model = null; - static database = null; - - static async getInstance() { - // Load tokenizer and text model - if (this.tokenizer === null) { - this.tokenizer = AutoTokenizer.from_pretrained(this.model_id); - } - - if (this.text_model === null) { - this.text_model = CLIPTextModelWithProjection.from_pretrained(this.model_id, { - quantized: false, - }); - } - - if (this.database === null) { - this.database = createClient( - process.env.SUPABASE_URL, - process.env.SUPABASE_ANON_KEY, - ) - } - - return Promise.all([ - this.tokenizer, - this.text_model, - this.database, - ]); - } -} - -let ApplicationSingleton; -if (process.env.NODE_ENV !== 'production') { - // When running in development mode, attach the pipeline to the - // global object so that it's preserved between hot reloads. 
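For orientation, the APCHead defined above is normally instantiated from an mmseg config dict rather than constructed directly; a minimal sketch, assuming a typical ResNet-50 backbone and the usual BaseDecodeHead options (all values below are illustrative, not taken from this repository):

# Sketch of an mmseg-style decode_head config for APCHead; every value here is an
# assumption for illustration (the head itself only adds pool_scales and fusion on
# top of the usual BaseDecodeHead arguments).
decode_head = dict(
    type='APCHead',
    in_channels=2048,            # e.g. the last ResNet-50 stage
    in_index=3,                  # which backbone output to read
    channels=512,                # width used inside each ACM and before cls_seg
    pool_scales=(1, 2, 3, 6),    # one ACM per pooling scale
    fusion=True,                 # extra 1x1 conv after the residual fusion in ACM
    dropout_ratio=0.1,
    num_classes=19,
    norm_cfg=dict(type='SyncBN', requires_grad=True),
    align_corners=False,
    loss_decode=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))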
- // For more information, see https://vercel.com/guides/nextjs-prisma-postgres - if (!global.ApplicationSingleton) { - global.ApplicationSingleton = S(); - } - ApplicationSingleton = global.ApplicationSingleton; -} else { - ApplicationSingleton = S(); -} -export default ApplicationSingleton; diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/timeout.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/timeout.py deleted file mode 100644 index 78e18a6272482e3946de83c0274badc4a5cfcdfa..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/timeout.py +++ /dev/null @@ -1,271 +0,0 @@ -from __future__ import absolute_import - -import time - -# The default socket timeout, used by httplib to indicate that no timeout was; specified by the user -from socket import _GLOBAL_DEFAULT_TIMEOUT, getdefaulttimeout - -from ..exceptions import TimeoutStateError - -# A sentinel value to indicate that no timeout was specified by the user in -# urllib3 -_Default = object() - - -# Use time.monotonic if available. -current_time = getattr(time, "monotonic", time.time) - - -class Timeout(object): - """Timeout configuration. - - Timeouts can be defined as a default for a pool: - - .. code-block:: python - - timeout = Timeout(connect=2.0, read=7.0) - http = PoolManager(timeout=timeout) - response = http.request('GET', 'http://example.com/') - - Or per-request (which overrides the default for the pool): - - .. code-block:: python - - response = http.request('GET', 'http://example.com/', timeout=Timeout(10)) - - Timeouts can be disabled by setting all the parameters to ``None``: - - .. code-block:: python - - no_timeout = Timeout(connect=None, read=None) - response = http.request('GET', 'http://example.com/, timeout=no_timeout) - - - :param total: - This combines the connect and read timeouts into one; the read timeout - will be set to the time leftover from the connect attempt. In the - event that both a connect timeout and a total are specified, or a read - timeout and a total are specified, the shorter timeout will be applied. - - Defaults to None. - - :type total: int, float, or None - - :param connect: - The maximum amount of time (in seconds) to wait for a connection - attempt to a server to succeed. Omitting the parameter will default the - connect timeout to the system default, probably `the global default - timeout in socket.py - `_. - None will set an infinite timeout for connection attempts. - - :type connect: int, float, or None - - :param read: - The maximum amount of time (in seconds) to wait between consecutive - read operations for a response from the server. Omitting the parameter - will default the read timeout to the system default, probably `the - global default timeout in socket.py - `_. - None will set an infinite timeout. - - :type read: int, float, or None - - .. note:: - - Many factors can affect the total amount of time for urllib3 to return - an HTTP response. - - For example, Python's DNS resolver does not obey the timeout specified - on the socket. Other factors that can affect total request time include - high CPU load, high swap, the program running at a low priority level, - or other behaviors. 
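To make the interaction of ``connect``, ``read`` and ``total`` concrete, here is a small sketch based on the ``connect_timeout`` and ``read_timeout`` properties defined further down in this class (the numbers are only illustrative):

# Illustrative only: how `total` caps the per-phase timeouts.
t = Timeout(total=10.0, connect=3.0, read=7.0)
t.connect_timeout     # min(connect, total) -> 3.0
t.start_connect()     # starts the clock used by get_connect_duration()
# If the connect phase took 2 seconds, read_timeout resolves to
# max(0, min(total - 2.0, read)) = min(8.0, 7.0) = 7.0; after 5 seconds it would be 5.0.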
- - In addition, the read and total timeouts only measure the time between - read operations on the socket connecting the client and the server, - not the total amount of time for the request to return a complete - response. For most requests, the timeout is raised because the server - has not sent the first byte in the specified time. This is not always - the case; if a server streams one byte every fifteen seconds, a timeout - of 20 seconds will not trigger, even though the request will take - several minutes to complete. - - If your goal is to cut off any request after a set amount of wall clock - time, consider having a second "watcher" thread to cut off a slow - request. - """ - - #: A sentinel object representing the default timeout value - DEFAULT_TIMEOUT = _GLOBAL_DEFAULT_TIMEOUT - - def __init__(self, total=None, connect=_Default, read=_Default): - self._connect = self._validate_timeout(connect, "connect") - self._read = self._validate_timeout(read, "read") - self.total = self._validate_timeout(total, "total") - self._start_connect = None - - def __repr__(self): - return "%s(connect=%r, read=%r, total=%r)" % ( - type(self).__name__, - self._connect, - self._read, - self.total, - ) - - # __str__ provided for backwards compatibility - __str__ = __repr__ - - @classmethod - def resolve_default_timeout(cls, timeout): - return getdefaulttimeout() if timeout is cls.DEFAULT_TIMEOUT else timeout - - @classmethod - def _validate_timeout(cls, value, name): - """Check that a timeout attribute is valid. - - :param value: The timeout value to validate - :param name: The name of the timeout attribute to validate. This is - used to specify in error messages. - :return: The validated and casted version of the given value. - :raises ValueError: If it is a numeric value less than or equal to - zero, or the type is not an integer, float, or None. - """ - if value is _Default: - return cls.DEFAULT_TIMEOUT - - if value is None or value is cls.DEFAULT_TIMEOUT: - return value - - if isinstance(value, bool): - raise ValueError( - "Timeout cannot be a boolean value. It must " - "be an int, float or None." - ) - try: - float(value) - except (TypeError, ValueError): - raise ValueError( - "Timeout value %s was %s, but it must be an " - "int, float or None." % (name, value) - ) - - try: - if value <= 0: - raise ValueError( - "Attempted to set %s timeout to %s, but the " - "timeout cannot be set to a value less " - "than or equal to 0." % (name, value) - ) - except TypeError: - # Python 3 - raise ValueError( - "Timeout value %s was %s, but it must be an " - "int, float or None." % (name, value) - ) - - return value - - @classmethod - def from_float(cls, timeout): - """Create a new Timeout from a legacy timeout value. - - The timeout value used by httplib.py sets the same timeout on the - connect(), and recv() socket requests. This creates a :class:`Timeout` - object that sets the individual timeouts to the ``timeout`` value - passed to this function. - - :param timeout: The legacy timeout value. - :type timeout: integer, float, sentinel default object, or None - :return: Timeout object - :rtype: :class:`Timeout` - """ - return Timeout(read=timeout, connect=timeout) - - def clone(self): - """Create a copy of the timeout object - - Timeout properties are stored per-pool but each request needs a fresh - Timeout object to ensure each one has its own start/stop configured. 
- - :return: a copy of the timeout object - :rtype: :class:`Timeout` - """ - # We can't use copy.deepcopy because that will also create a new object - # for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to - # detect the user default. - return Timeout(connect=self._connect, read=self._read, total=self.total) - - def start_connect(self): - """Start the timeout clock, used during a connect() attempt - - :raises urllib3.exceptions.TimeoutStateError: if you attempt - to start a timer that has been started already. - """ - if self._start_connect is not None: - raise TimeoutStateError("Timeout timer has already been started.") - self._start_connect = current_time() - return self._start_connect - - def get_connect_duration(self): - """Gets the time elapsed since the call to :meth:`start_connect`. - - :return: Elapsed time in seconds. - :rtype: float - :raises urllib3.exceptions.TimeoutStateError: if you attempt - to get duration for a timer that hasn't been started. - """ - if self._start_connect is None: - raise TimeoutStateError( - "Can't get connect duration for timer that has not started." - ) - return current_time() - self._start_connect - - @property - def connect_timeout(self): - """Get the value to use when setting a connection timeout. - - This will be a positive float or integer, the value None - (never timeout), or the default system timeout. - - :return: Connect timeout. - :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None - """ - if self.total is None: - return self._connect - - if self._connect is None or self._connect is self.DEFAULT_TIMEOUT: - return self.total - - return min(self._connect, self.total) - - @property - def read_timeout(self): - """Get the value for the read timeout. - - This assumes some time has elapsed in the connection timeout and - computes the read timeout appropriately. - - If self.total is set, the read timeout is dependent on the amount of - time taken by the connect timeout. If the connection time has not been - established, a :exc:`~urllib3.exceptions.TimeoutStateError` will be - raised. - - :return: Value to use for the read timeout. - :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None - :raises urllib3.exceptions.TimeoutStateError: If :meth:`start_connect` - has not yet been called on this object. - """ - if ( - self.total is not None - and self.total is not self.DEFAULT_TIMEOUT - and self._read is not None - and self._read is not self.DEFAULT_TIMEOUT - ): - # In case the connect timeout has not yet been established. - if self._start_connect is None: - return self._read - return max(0, min(self.total - self.get_connect_duration(), self._read)) - elif self.total is not None and self.total is not self.DEFAULT_TIMEOUT: - return max(0, self.total - self.get_connect_duration()) - else: - return self._read diff --git a/spaces/Teklia/doc-ufcn/config.py b/spaces/Teklia/doc-ufcn/config.py deleted file mode 100644 index 432f8514bbfe859e753b42b1f8bcb1b6fd5ee0e8..0000000000000000000000000000000000000000 --- a/spaces/Teklia/doc-ufcn/config.py +++ /dev/null @@ -1,28 +0,0 @@ -# -*- coding: utf-8 -*- - -from pathlib import Path - -from teklia_toolbox.config import ConfigParser - - -def parse_configurations(config_path: Path): - """ - Parse multiple YAML configuration files into a single source - of configuration for the HuggingFace app - - :param config_path: pathlib.Path, Path to the .yaml config file - :return: dict, containing the configuration. 
Ensures config is complete and with correct typing - """ - parser = ConfigParser() - - parser.add_option("title") - parser.add_option("description") - parser.add_option("examples", type=list) - model_parser = parser.add_subparser("models", many=True) - - model_parser.add_option("model_name") - model_parser.add_option("title") - model_parser.add_option("description") - model_parser.add_option("classes_colors", type=list) - - return parser.parse(config_path) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/coco_panoptic.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/coco_panoptic.py deleted file mode 100644 index b8dae44317b556610d7fed39017e082d7e855956..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/coco_panoptic.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import json -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.file_io import PathManager - -from .coco import load_coco_json, load_sem_seg - -__all__ = ["register_coco_panoptic", "register_coco_panoptic_separated"] - - -def load_coco_panoptic_json(json_file, image_dir, gt_dir, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/coco/train2017". - gt_dir (str): path to the raw annotations. e.g., "~/coco/panoptic_train2017". - json_file (str): path to the json file. e.g., "~/coco/annotations/panoptic_train2017.json". - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = True - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - segment_info["isthing"] = False - return segment_info - - with PathManager.open(json_file) as f: - json_info = json.load(f) - - ret = [] - for ann in json_info["annotations"]: - image_id = int(ann["image_id"]) - # TODO: currently we assume image and label has the same filename but - # different extension, and images have extension ".jpg" for COCO. Need - # to make image extension a user-provided argument if we extend this - # function to support other COCO-like datasets. - image_file = os.path.join(image_dir, os.path.splitext(ann["file_name"])[0] + ".jpg") - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = [_convert_category_id(x, meta) for x in ann["segments_info"]] - ret.append( - { - "file_name": image_file, - "image_id": image_id, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile(ret[0]["file_name"]), ret[0]["file_name"] - assert PathManager.isfile(ret[0]["pan_seg_file_name"]), ret[0]["pan_seg_file_name"] - return ret - - -def register_coco_panoptic( - name, metadata, image_root, panoptic_root, panoptic_json, instances_json=None -): - """ - Register a "standard" version of COCO panoptic segmentation dataset named `name`. - The dictionaries in this registered dataset follows detectron2's standard format. - Hence it's called "standard". 
- - Args: - name (str): the name that identifies a dataset, - e.g. "coco_2017_train_panoptic" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images in COCO format - panoptic_json (str): path to the json panoptic annotation file in COCO format - sem_seg_root (none): not used, to be consistent with - `register_coco_panoptic_separated`. - instances_json (str): path to the json instance annotation file - """ - panoptic_name = name - DatasetCatalog.register( - panoptic_name, - lambda: load_coco_panoptic_json(panoptic_json, image_root, panoptic_root, metadata), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - json_file=instances_json, - evaluator_type="coco_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **metadata, - ) - - -def register_coco_panoptic_separated( - name, metadata, image_root, panoptic_root, panoptic_json, sem_seg_root, instances_json -): - """ - Register a "separated" version of COCO panoptic segmentation dataset named `name`. - The annotations in this registered dataset will contain both instance annotations and - semantic annotations, each with its own contiguous ids. Hence it's called "separated". - - It follows the setting used by the PanopticFPN paper: - - 1. The instance annotations directly come from polygons in the COCO - instances annotation task, rather than from the masks in the COCO panoptic annotations. - - The two format have small differences: - Polygons in the instance annotations may have overlaps. - The mask annotations are produced by labeling the overlapped polygons - with depth ordering. - - 2. The semantic annotations are converted from panoptic annotations, where - all "things" are assigned a semantic id of 0. - All semantic categories will therefore have ids in contiguous - range [1, #stuff_categories]. - - This function will also register a pure semantic segmentation dataset - named ``name + '_stuffonly'``. - - Args: - name (str): the name that identifies a dataset, - e.g. "coco_2017_train_panoptic" - metadata (dict): extra metadata associated with this dataset. - image_root (str): directory which contains all the images - panoptic_root (str): directory which contains panoptic annotation images - panoptic_json (str): path to the json panoptic annotation file - sem_seg_root (str): directory which contains all the ground truth segmentation annotations. 
- instances_json (str): path to the json instance annotation file - """ - panoptic_name = name + "_separated" - DatasetCatalog.register( - panoptic_name, - lambda: merge_to_panoptic( - load_coco_json(instances_json, image_root, panoptic_name), - load_sem_seg(sem_seg_root, image_root), - ), - ) - MetadataCatalog.get(panoptic_name).set( - panoptic_root=panoptic_root, - image_root=image_root, - panoptic_json=panoptic_json, - sem_seg_root=sem_seg_root, - json_file=instances_json, # TODO rename - evaluator_type="coco_panoptic_seg", - ignore_label=255, - **metadata, - ) - - semantic_name = name + "_stuffonly" - DatasetCatalog.register(semantic_name, lambda: load_sem_seg(sem_seg_root, image_root)) - MetadataCatalog.get(semantic_name).set( - sem_seg_root=sem_seg_root, - image_root=image_root, - evaluator_type="sem_seg", - ignore_label=255, - **metadata, - ) - - -def merge_to_panoptic(detection_dicts, sem_seg_dicts): - """ - Create dataset dicts for panoptic segmentation, by - merging two dicts using "file_name" field to match their entries. - - Args: - detection_dicts (list[dict]): lists of dicts for object detection or instance segmentation. - sem_seg_dicts (list[dict]): lists of dicts for semantic segmentation. - - Returns: - list[dict] (one per input image): Each dict contains all (key, value) pairs from dicts in - both detection_dicts and sem_seg_dicts that correspond to the same image. - The function assumes that the same key in different dicts has the same value. - """ - results = [] - sem_seg_file_to_entry = {x["file_name"]: x for x in sem_seg_dicts} - assert len(sem_seg_file_to_entry) > 0 - - for det_dict in detection_dicts: - dic = copy.copy(det_dict) - dic.update(sem_seg_file_to_entry[dic["file_name"]]) - results.append(dic) - return results - - -if __name__ == "__main__": - """ - Test the COCO panoptic dataset loader. - - Usage: - python -m detectron2.data.datasets.coco_panoptic \ - path/to/image_root path/to/panoptic_root path/to/panoptic_json dataset_name 10 - - "dataset_name" can be "coco_2017_train_panoptic", or other - pre-registered ones - """ - from detectron2.utils.logger import setup_logger - from detectron2.utils.visualizer import Visualizer - import detectron2.data.datasets # noqa # add pre-defined metadata - import sys - from PIL import Image - import numpy as np - - logger = setup_logger(name=__name__) - assert sys.argv[4] in DatasetCatalog.list() - meta = MetadataCatalog.get(sys.argv[4]) - - dicts = load_coco_panoptic_json(sys.argv[3], sys.argv[1], sys.argv[2], meta.as_dict()) - logger.info("Done loading {} samples.".format(len(dicts))) - - dirname = "coco-data-vis" - os.makedirs(dirname, exist_ok=True) - num_imgs_to_vis = int(sys.argv[5]) - for i, d in enumerate(dicts): - img = np.array(Image.open(d["file_name"])) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) - if i + 1 >= num_imgs_to_vis: - break diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/coco_evaluation.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/coco_evaluation.py deleted file mode 100644 index aad7f5a6e79a9047e7eea623ecc761ea9655b8d6..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/coco_evaluation.py +++ /dev/null @@ -1,710 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
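As a usage sketch for the two registration helpers above (the dataset name, paths and metadata dict are placeholders, not values from this repository):

# Hypothetical registration of a custom COCO-panoptic dataset with the helpers above;
# every name and path is a placeholder.
from detectron2.data.datasets.coco_panoptic import register_coco_panoptic

metadata = {}  # would normally carry thing/stuff dataset_id -> contiguous_id mappings
register_coco_panoptic(
    "my_dataset_train_panoptic",
    metadata,
    image_root="datasets/my_dataset/train2017",
    panoptic_root="datasets/my_dataset/panoptic_train2017",
    panoptic_json="datasets/my_dataset/annotations/panoptic_train2017.json",
    instances_json="datasets/my_dataset/annotations/instances_train2017.json",
)
# register_coco_panoptic_separated() takes the same arguments plus sem_seg_root,
# and additionally registers a "<name>_stuffonly" semantic-segmentation dataset.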
-import contextlib -import copy -import io -import itertools -import json -import logging -import numpy as np -import os -import pickle -from collections import OrderedDict -import pycocotools.mask as mask_util -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tabulate import tabulate - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_json -from detectron2.evaluation.fast_eval_api import COCOeval_opt -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - -from .evaluator import DatasetEvaluator - - -class COCOEvaluator(DatasetEvaluator): - """ - Evaluate AR for object proposals, AP for instance detection/segmentation, AP - for keypoint detection outputs using COCO's metrics. - See http://cocodataset.org/#detection-eval and - http://cocodataset.org/#keypoints-eval to understand its metrics. - The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means - the metric cannot be computed (e.g. due to no predictions made). - - In addition to COCO, this evaluator is able to support any bounding box detection, - instance segmentation, or keypoint detection dataset. - """ - - def __init__( - self, - dataset_name, - tasks=None, - distributed=True, - output_dir=None, - *, - max_dets_per_image=None, - use_fast_impl=True, - kpt_oks_sigmas=(), - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. - It must have either the following corresponding metadata: - - "json_file": the path to the COCO format annotation - - Or it must be in detectron2's standard dataset format - so it can be converted to COCO format automatically. - tasks (tuple[str]): tasks that can be evaluated under the given - configuration. A task is one of "bbox", "segm", "keypoints". - By default, will infer this automatically from predictions. - distributed (True): if True, will collect results from all ranks and run evaluation - in the main process. - Otherwise, will only evaluate the results in the current process. - output_dir (str): optional, an output directory to dump all - results predicted on the dataset. The dump contains two files: - - 1. "instances_predictions.pth" a file that can be loaded with `torch.load` and - contains all the results in the format they are produced by the model. - 2. "coco_instances_results.json" a json file in COCO's result format. - max_dets_per_image (int): limit on the maximum number of detections per image. - By default in COCO, this limit is to 100, but this can be customized - to be greater, as is needed in evaluation metrics AP fixed and AP pool - (see https://arxiv.org/pdf/2102.01066.pdf) - This doesn't affect keypoint evaluation. - use_fast_impl (bool): use a fast but **unofficial** implementation to compute AP. - Although the results should be very close to the official implementation in COCO - API, it is still recommended to compute results with the official API for use in - papers. The faster implementation also uses more RAM. - kpt_oks_sigmas (list[float]): The sigmas used to calculate keypoint OKS. - See http://cocodataset.org/#keypoints-eval - When empty, it will use the defaults in COCO. - Otherwise it should be the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS. 
- """ - self._logger = logging.getLogger(__name__) - self._distributed = distributed - self._output_dir = output_dir - self._use_fast_impl = use_fast_impl - - # COCOeval requires the limit on the number of detections per image (maxDets) to be a list - # with at least 3 elements. The default maxDets in COCOeval is [1, 10, 100], in which the - # 3rd element (100) is used as the limit on the number of detections per image when - # evaluating AP. COCOEvaluator expects an integer for max_dets_per_image, so for COCOeval, - # we reformat max_dets_per_image into [1, 10, max_dets_per_image], based on the defaults. - if max_dets_per_image is None: - max_dets_per_image = [1, 10, 100] - else: - max_dets_per_image = [1, 10, max_dets_per_image] - self._max_dets_per_image = max_dets_per_image - - if tasks is not None and isinstance(tasks, CfgNode): - kpt_oks_sigmas = ( - tasks.TEST.KEYPOINT_OKS_SIGMAS if not kpt_oks_sigmas else kpt_oks_sigmas - ) - self._logger.warn( - "COCO Evaluator instantiated using config, this is deprecated behavior." - " Please pass in explicit arguments instead." - ) - self._tasks = None # Infering it from predictions should be better - else: - self._tasks = tasks - - self._cpu_device = torch.device("cpu") - - self._metadata = MetadataCatalog.get(dataset_name) - if not hasattr(self._metadata, "json_file"): - if output_dir is None: - raise ValueError( - "output_dir must be provided to COCOEvaluator " - "for datasets not in COCO format." - ) - self._logger.info(f"Trying to convert '{dataset_name}' to COCO format ...") - - cache_path = os.path.join(output_dir, f"{dataset_name}_coco_format.json") - self._metadata.json_file = cache_path - convert_to_coco_json(dataset_name, cache_path) - - json_file = PathManager.get_local_path(self._metadata.json_file) - with contextlib.redirect_stdout(io.StringIO()): - self._coco_api = COCO(json_file) - - # Test set json files do not contain annotations (evaluation must be - # performed using the COCO evaluation server). - self._do_evaluation = "annotations" in self._coco_api.dataset - if self._do_evaluation: - self._kpt_oks_sigmas = kpt_oks_sigmas - - def reset(self): - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a COCO model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a COCO model. It is a list of dicts with key - "instances" that contains :class:`Instances`. - """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - if len(prediction) > 1: - self._predictions.append(prediction) - - def evaluate(self, img_ids=None): - """ - Args: - img_ids: a list of image IDs to evaluate on. 
Default to None for the whole dataset - """ - if self._distributed: - comm.synchronize() - predictions = comm.gather(self._predictions, dst=0) - predictions = list(itertools.chain(*predictions)) - - if not comm.is_main_process(): - return {} - else: - predictions = self._predictions - - if len(predictions) == 0: - self._logger.warning("[COCOEvaluator] Did not receive valid predictions.") - return {} - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "instances_predictions.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(predictions, f) - - self._results = OrderedDict() - if "proposals" in predictions[0]: - self._eval_box_proposals(predictions) - if "instances" in predictions[0]: - self._eval_predictions(predictions, img_ids=img_ids) - # Copy so the caller can do whatever with results - return copy.deepcopy(self._results) - - def _tasks_from_predictions(self, predictions): - """ - Get COCO API "tasks" (i.e. iou_type) from COCO-format predictions. - """ - tasks = {"bbox"} - for pred in predictions: - if "segmentation" in pred: - tasks.add("segm") - if "keypoints" in pred: - tasks.add("keypoints") - return sorted(tasks) - - def _eval_predictions(self, predictions, img_ids=None): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. - """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(coco_results) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id - all_contiguous_ids = list(dataset_id_to_contiguous_id.values()) - num_classes = len(all_contiguous_ids) - assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1 - - reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()} - for result in coco_results: - category_id = result["category_id"] - assert category_id < num_classes, ( - f"A prediction has class={category_id}, " - f"but the dataset only has {num_classes} classes and " - f"predicted class id should be in [0, {num_classes - 1}]." - ) - result["category_id"] = reverse_id_mapping[category_id] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info( - "Evaluating predictions with {} COCO API...".format( - "unofficial" if self._use_fast_impl else "official" - ) - ) - for task in sorted(tasks): - assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!" - coco_eval = ( - _evaluate_predictions_on_coco( - self._coco_api, - coco_results, - task, - kpt_oks_sigmas=self._kpt_oks_sigmas, - use_fast_impl=self._use_fast_impl, - img_ids=img_ids, - max_dets_per_image=self._max_dets_per_image, - ) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res - - def _eval_box_proposals(self, predictions): - """ - Evaluate the box proposals in predictions. 
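For orientation, a minimal sketch of how this evaluator is usually driven end to end; `cfg`, `model` and the dataset name are assumptions that must come from the surrounding training setup:

# Hypothetical usage; assumes a registered dataset, a built `model` and a `cfg`.
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

evaluator = COCOEvaluator("my_dataset_val", output_dir="./eval_out")
val_loader = build_detection_test_loader(cfg, "my_dataset_val")
results = inference_on_dataset(model, val_loader, evaluator)  # calls reset/process/evaluate
print(results["bbox"]["AP"])  # per-task metric dicts built by _derive_coco_results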
- Fill self._results with the metrics for "box_proposals" task. - """ - if self._output_dir: - # Saving generated box proposals to file. - # Predicted box_proposals are in XYXY_ABS mode. - bbox_mode = BoxMode.XYXY_ABS.value - ids, boxes, objectness_logits = [], [], [] - for prediction in predictions: - ids.append(prediction["image_id"]) - boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy()) - objectness_logits.append(prediction["proposals"].objectness_logits.numpy()) - - proposal_data = { - "boxes": boxes, - "objectness_logits": objectness_logits, - "ids": ids, - "bbox_mode": bbox_mode, - } - with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f: - pickle.dump(proposal_data, f) - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating bbox proposals ...") - res = {} - areas = {"all": "", "small": "s", "medium": "m", "large": "l"} - for limit in [100, 1000]: - for area, suffix in areas.items(): - stats = _evaluate_box_proposals(predictions, self._coco_api, area=area, limit=limit) - key = "AR{}@{:d}".format(suffix, limit) - res[key] = float(stats["ar"].item() * 100) - self._logger.info("Proposal metrics: \n" + create_small_table(res)) - self._results["box_proposals"] = res - - def _derive_coco_results(self, coco_eval, iou_type, class_names=None): - """ - Derive the desired score numbers from summarized COCOeval. - - Args: - coco_eval (None or COCOEval): None represents no predictions from model. - iou_type (str): - class_names (None or list[str]): if provided, will use it to predict - per-category AP. - - Returns: - a dict of {metric name: score} - """ - - metrics = { - "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"], - "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"], - "keypoints": ["AP", "AP50", "AP75", "APm", "APl"], - }[iou_type] - - if coco_eval is None: - self._logger.warn("No predictions from the model!") - return {metric: float("nan") for metric in metrics} - - # the standard metrics - results = { - metric: float(coco_eval.stats[idx] * 100 if coco_eval.stats[idx] >= 0 else "nan") - for idx, metric in enumerate(metrics) - } - self._logger.info( - "Evaluation results for {}: \n".format(iou_type) + create_small_table(results) - ) - if not np.isfinite(sum(results.values())): - self._logger.info("Some metrics cannot be computed and is shown as NaN.") - - if class_names is None or len(class_names) <= 1: - return results - # Compute per-category AP - # from https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L222-L252 # noqa - precisions = coco_eval.eval["precision"] - # precision has dims (iou, recall, cls, area range, max dets) - assert len(class_names) == precisions.shape[2] - - results_per_category = [] - for idx, name in enumerate(class_names): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - ap = np.mean(precision) if precision.size else float("nan") - results_per_category.append(("{}".format(name), float(ap * 100))) - - # tabulate it - N_COLS = min(6, len(results_per_category) * 2) - results_flatten = list(itertools.chain(*results_per_category)) - results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)]) - table = tabulate( - results_2d, - tablefmt="pipe", - floatfmt=".3f", - headers=["category", "AP"] * 
(N_COLS // 2), - numalign="left", - ) - self._logger.info("Per-category {} AP: \n".format(iou_type) + table) - - results.update({"AP-" + name: ap for name, ap in results_per_category}) - return results - - -def instances_to_coco_json(instances, img_id): - """ - Dump an "Instances" object to a COCO-format json that's used for evaluation. - - Args: - instances (Instances): - img_id (int): the image id - - Returns: - list[dict]: list of json annotations in COCO format. - """ - num_instance = len(instances) - if num_instance == 0: - return [] - - boxes = instances.pred_boxes.tensor.numpy() - boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - boxes = boxes.tolist() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - - has_mask = instances.has("pred_masks") - if has_mask: - # use RLE to encode the masks, because they are too large and takes memory - # since this evaluator stores outputs of the entire dataset - rles = [ - mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0] - for mask in instances.pred_masks - ] - for rle in rles: - # "counts" is an array encoded by mask_util as a byte-stream. Python3's - # json writer which always produces strings cannot serialize a bytestream - # unless you decode it. Thankfully, utf-8 works out (which is also what - # the pycocotools/_mask.pyx does). - rle["counts"] = rle["counts"].decode("utf-8") - - has_keypoints = instances.has("pred_keypoints") - if has_keypoints: - keypoints = instances.pred_keypoints - - results = [] - for k in range(num_instance): - result = { - "image_id": img_id, - "category_id": classes[k], - "bbox": boxes[k], - "score": scores[k], - } - if has_mask: - result["segmentation"] = rles[k] - if has_keypoints: - # In COCO annotations, - # keypoints coordinates are pixel indices. - # However our predictions are floating point coordinates. - # Therefore we subtract 0.5 to be consistent with the annotation format. - # This is the inverse of data loading logic in `datasets/coco.py`. - keypoints[k][:, :2] -= 0.5 - result["keypoints"] = keypoints[k].flatten().tolist() - results.append(result) - return results - - -# inspired from Detectron: -# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa -def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=None, area="all", limit=None): - """ - Evaluate detection proposal recall metrics. This function is a much - faster alternative to the official COCO API recall evaluation code. However, - it produces slightly different results. 
- """ - # Record max overlap value for each gt box - # Return vector of overlap values - areas = { - "all": 0, - "small": 1, - "medium": 2, - "large": 3, - "96-128": 4, - "128-256": 5, - "256-512": 6, - "512-inf": 7, - } - area_ranges = [ - [0 ** 2, 1e5 ** 2], # all - [0 ** 2, 32 ** 2], # small - [32 ** 2, 96 ** 2], # medium - [96 ** 2, 1e5 ** 2], # large - [96 ** 2, 128 ** 2], # 96-128 - [128 ** 2, 256 ** 2], # 128-256 - [256 ** 2, 512 ** 2], # 256-512 - [512 ** 2, 1e5 ** 2], - ] # 512-inf - assert area in areas, "Unknown area range: {}".format(area) - area_range = area_ranges[areas[area]] - gt_overlaps = [] - num_pos = 0 - - for prediction_dict in dataset_predictions: - predictions = prediction_dict["proposals"] - - # sort predictions in descending order - # TODO maybe remove this and make it explicit in the documentation - inds = predictions.objectness_logits.sort(descending=True)[1] - predictions = predictions[inds] - - ann_ids = coco_api.getAnnIds(imgIds=prediction_dict["image_id"]) - anno = coco_api.loadAnns(ann_ids) - gt_boxes = [ - BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) - for obj in anno - if obj["iscrowd"] == 0 - ] - gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes - gt_boxes = Boxes(gt_boxes) - gt_areas = torch.as_tensor([obj["area"] for obj in anno if obj["iscrowd"] == 0]) - - if len(gt_boxes) == 0 or len(predictions) == 0: - continue - - valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1]) - gt_boxes = gt_boxes[valid_gt_inds] - - num_pos += len(gt_boxes) - - if len(gt_boxes) == 0: - continue - - if limit is not None and len(predictions) > limit: - predictions = predictions[:limit] - - overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes) - - _gt_overlaps = torch.zeros(len(gt_boxes)) - for j in range(min(len(predictions), len(gt_boxes))): - # find which proposal box maximally covers each gt box - # and get the iou amount of coverage for each gt box - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # find which gt box is 'best' covered (i.e. 'best' = most iou) - gt_ovr, gt_ind = max_overlaps.max(dim=0) - assert gt_ovr >= 0 - # find the proposal box that covers the best covered gt box - box_ind = argmax_overlaps[gt_ind] - # record the iou coverage of this gt box - _gt_overlaps[j] = overlaps[box_ind, gt_ind] - assert _gt_overlaps[j] == gt_ovr - # mark the proposal box and the gt box as used - overlaps[box_ind, :] = -1 - overlaps[:, gt_ind] = -1 - - # append recorded iou coverage level - gt_overlaps.append(_gt_overlaps) - gt_overlaps = ( - torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32) - ) - gt_overlaps, _ = torch.sort(gt_overlaps) - - if thresholds is None: - step = 0.05 - thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32) - recalls = torch.zeros_like(thresholds) - # compute recall for each iou threshold - for i, t in enumerate(thresholds): - recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos) - # ar = 2 * np.trapz(recalls, thresholds) - ar = recalls.mean() - return { - "ar": ar, - "recalls": recalls, - "thresholds": thresholds, - "gt_overlaps": gt_overlaps, - "num_pos": num_pos, - } - - -def _evaluate_predictions_on_coco( - coco_gt, - coco_results, - iou_type, - kpt_oks_sigmas=None, - use_fast_impl=True, - img_ids=None, - max_dets_per_image=None, -): - """ - Evaluate the coco results using COCOEval API. 
- """ - assert len(coco_results) > 0 - - if iou_type == "segm": - coco_results = copy.deepcopy(coco_results) - # When evaluating mask AP, if the results contain bbox, cocoapi will - # use the box area as the area of the instance, instead of the mask area. - # This leads to a different definition of small/medium/large. - # We remove the bbox field to let mask AP use mask area. - for c in coco_results: - c.pop("bbox", None) - - coco_dt = coco_gt.loadRes(coco_results) - coco_eval = (COCOeval_opt if use_fast_impl else COCOeval)(coco_gt, coco_dt, iou_type) - # For COCO, the default max_dets_per_image is [1, 10, 100]. - if max_dets_per_image is None: - max_dets_per_image = [1, 10, 100] # Default from COCOEval - else: - assert ( - len(max_dets_per_image) >= 3 - ), "COCOeval requires maxDets (and max_dets_per_image) to have length at least 3" - # In the case that user supplies a custom input for max_dets_per_image, - # apply COCOevalMaxDets to evaluate AP with the custom input. - if max_dets_per_image[2] != 100: - coco_eval = COCOevalMaxDets(coco_gt, coco_dt, iou_type) - if iou_type != "keypoints": - coco_eval.params.maxDets = max_dets_per_image - - if img_ids is not None: - coco_eval.params.imgIds = img_ids - - if iou_type == "keypoints": - # Use the COCO default keypoint OKS sigmas unless overrides are specified - if kpt_oks_sigmas: - assert hasattr(coco_eval.params, "kpt_oks_sigmas"), "pycocotools is too old!" - coco_eval.params.kpt_oks_sigmas = np.array(kpt_oks_sigmas) - # COCOAPI requires every detection and every gt to have keypoints, so - # we just take the first entry from both - num_keypoints_dt = len(coco_results[0]["keypoints"]) // 3 - num_keypoints_gt = len(next(iter(coco_gt.anns.values()))["keypoints"]) // 3 - num_keypoints_oks = len(coco_eval.params.kpt_oks_sigmas) - assert num_keypoints_oks == num_keypoints_dt == num_keypoints_gt, ( - f"[COCOEvaluator] Prediction contain {num_keypoints_dt} keypoints. " - f"Ground truth contains {num_keypoints_gt} keypoints. " - f"The length of cfg.TEST.KEYPOINT_OKS_SIGMAS is {num_keypoints_oks}. " - "They have to agree with each other. For meaning of OKS, please refer to " - "http://cocodataset.org/#keypoints-eval." 
- ) - - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - - return coco_eval - - -class COCOevalMaxDets(COCOeval): - """ - Modified version of COCOeval for evaluating AP with a custom - maxDets (by default for COCO, maxDets is 100) - """ - - def summarize(self): - """ - Compute and display summary metrics for evaluation results given - a custom value for max_dets_per_image - """ - - def _summarize(ap=1, iouThr=None, areaRng="all", maxDets=100): - p = self.params - iStr = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}" - titleStr = "Average Precision" if ap == 1 else "Average Recall" - typeStr = "(AP)" if ap == 1 else "(AR)" - iouStr = ( - "{:0.2f}:{:0.2f}".format(p.iouThrs[0], p.iouThrs[-1]) - if iouThr is None - else "{:0.2f}".format(iouThr) - ) - - aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng] - mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets] - if ap == 1: - # dimension of precision: [TxRxKxAxM] - s = self.eval["precision"] - # IoU - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:, :, :, aind, mind] - else: - # dimension of recall: [TxKxAxM] - s = self.eval["recall"] - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:, :, aind, mind] - if len(s[s > -1]) == 0: - mean_s = -1 - else: - mean_s = np.mean(s[s > -1]) - print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s)) - return mean_s - - def _summarizeDets(): - stats = np.zeros((12,)) - # Evaluate AP using the custom limit on maximum detections per image - stats[0] = _summarize(1, maxDets=self.params.maxDets[2]) - stats[1] = _summarize(1, iouThr=0.5, maxDets=self.params.maxDets[2]) - stats[2] = _summarize(1, iouThr=0.75, maxDets=self.params.maxDets[2]) - stats[3] = _summarize(1, areaRng="small", maxDets=self.params.maxDets[2]) - stats[4] = _summarize(1, areaRng="medium", maxDets=self.params.maxDets[2]) - stats[5] = _summarize(1, areaRng="large", maxDets=self.params.maxDets[2]) - stats[6] = _summarize(0, maxDets=self.params.maxDets[0]) - stats[7] = _summarize(0, maxDets=self.params.maxDets[1]) - stats[8] = _summarize(0, maxDets=self.params.maxDets[2]) - stats[9] = _summarize(0, areaRng="small", maxDets=self.params.maxDets[2]) - stats[10] = _summarize(0, areaRng="medium", maxDets=self.params.maxDets[2]) - stats[11] = _summarize(0, areaRng="large", maxDets=self.params.maxDets[2]) - return stats - - def _summarizeKps(): - stats = np.zeros((10,)) - stats[0] = _summarize(1, maxDets=20) - stats[1] = _summarize(1, maxDets=20, iouThr=0.5) - stats[2] = _summarize(1, maxDets=20, iouThr=0.75) - stats[3] = _summarize(1, maxDets=20, areaRng="medium") - stats[4] = _summarize(1, maxDets=20, areaRng="large") - stats[5] = _summarize(0, maxDets=20) - stats[6] = _summarize(0, maxDets=20, iouThr=0.5) - stats[7] = _summarize(0, maxDets=20, iouThr=0.75) - stats[8] = _summarize(0, maxDets=20, areaRng="medium") - stats[9] = _summarize(0, maxDets=20, areaRng="large") - return stats - - if not self.eval: - raise Exception("Please run accumulate() first") - iouType = self.params.iouType - if iouType == "segm" or iouType == "bbox": - summarize = _summarizeDets - elif iouType == "keypoints": - summarize = _summarizeKps - self.stats = summarize() - - def __str__(self): - self.summarize() diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/modeling/test_fast_rcnn.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/modeling/test_fast_rcnn.py deleted file 
mode 100644 index e29b944bffca1ccbf5b02be59a753f3188d90a4f..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/modeling/test_fast_rcnn.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import unittest -import torch - -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform, Box2BoxTransformRotated -from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers -from detectron2.modeling.roi_heads.rotated_fast_rcnn import RotatedFastRCNNOutputLayers -from detectron2.structures import Boxes, Instances, RotatedBoxes -from detectron2.utils.events import EventStorage - -logger = logging.getLogger(__name__) - - -class FastRCNNTest(unittest.TestCase): - def test_fast_rcnn(self): - torch.manual_seed(132) - - box_head_output_size = 8 - - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=5, - ) - feature_pooled = torch.rand(2, box_head_output_size) - predictions = box_predictor(feature_pooled) - - proposal_boxes = torch.tensor([[0.8, 1.1, 3.2, 2.8], [2.3, 2.5, 7, 8]], dtype=torch.float32) - gt_boxes = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32) - proposal = Instances((10, 10)) - proposal.proposal_boxes = Boxes(proposal_boxes) - proposal.gt_boxes = Boxes(gt_boxes) - proposal.gt_classes = torch.tensor([1, 2]) - - with EventStorage(): # capture events in a new storage to discard them - losses = box_predictor.losses(predictions, [proposal]) - - expected_losses = { - "loss_cls": torch.tensor(1.7951188087), - "loss_box_reg": torch.tensor(4.0357131958), - } - for name in expected_losses.keys(): - assert torch.allclose(losses[name], expected_losses[name]) - - def test_fast_rcnn_empty_batch(self, device="cpu"): - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=10), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=8, - ).to(device=device) - - logits = torch.randn(0, 100, requires_grad=True, device=device) - deltas = torch.randn(0, 4, requires_grad=True, device=device) - losses = box_predictor.losses([logits, deltas], []) - for value in losses.values(): - self.assertTrue(torch.allclose(value, torch.zeros_like(value))) - sum(losses.values()).backward() - self.assertTrue(logits.grad is not None) - self.assertTrue(deltas.grad is not None) - - predictions, _ = box_predictor.inference([logits, deltas], []) - self.assertEqual(len(predictions), 0) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_fast_rcnn_empty_batch_cuda(self): - self.test_fast_rcnn_empty_batch(device=torch.device("cuda")) - - def test_fast_rcnn_rotated(self): - torch.manual_seed(132) - box_head_output_size = 8 - - box_predictor = RotatedFastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransformRotated(weights=(10, 10, 5, 5, 1)), - num_classes=5, - ) - feature_pooled = torch.rand(2, box_head_output_size) - predictions = box_predictor(feature_pooled) - proposal_boxes = torch.tensor( - [[2, 1.95, 2.4, 1.7, 0], [4.65, 5.25, 4.7, 5.5, 0]], dtype=torch.float32 - ) - gt_boxes = torch.tensor([[2, 2, 2, 2, 0], [4, 4, 4, 4, 0]], dtype=torch.float32) - proposal = Instances((10, 10)) - proposal.proposal_boxes = RotatedBoxes(proposal_boxes) - proposal.gt_boxes = RotatedBoxes(gt_boxes) - proposal.gt_classes = torch.tensor([1, 2]) - - with 
EventStorage(): # capture events in a new storage to discard them - losses = box_predictor.losses(predictions, [proposal]) - - # Note: the expected losses are slightly different even if - # the boxes are essentially the same as in the FastRCNNOutput test, because - # bbox_pred in FastRCNNOutputLayers have different Linear layers/initialization - # between the two cases. - expected_losses = { - "loss_cls": torch.tensor(1.7920907736), - "loss_box_reg": torch.tensor(4.0410838127), - } - for name in expected_losses.keys(): - assert torch.allclose(losses[name], expected_losses[name]) - - def test_predict_boxes_tracing(self): - class Model(torch.nn.Module): - def __init__(self, output_layer): - super(Model, self).__init__() - self._output_layer = output_layer - - def forward(self, proposal_deltas, proposal_boxes): - instances = Instances((10, 10)) - instances.proposal_boxes = Boxes(proposal_boxes) - return self._output_layer.predict_boxes((None, proposal_deltas), [instances]) - - box_head_output_size = 8 - - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=5, - ) - - model = Model(box_predictor) - - from detectron2.export.torchscript_patch import patch_builtin_len - - with torch.no_grad(), patch_builtin_len(): - func = torch.jit.trace(model, (torch.randn(10, 20), torch.randn(10, 4))) - - o = func(torch.randn(10, 20), torch.randn(10, 4)) - self.assertEqual(o[0].shape, (10, 20)) - o = func(torch.randn(5, 20), torch.randn(5, 4)) - self.assertEqual(o[0].shape, (5, 20)) - o = func(torch.randn(20, 20), torch.randn(20, 4)) - self.assertEqual(o[0].shape, (20, 20)) - - def test_predict_probs_tracing(self): - class Model(torch.nn.Module): - def __init__(self, output_layer): - super(Model, self).__init__() - self._output_layer = output_layer - - def forward(self, scores, proposal_boxes): - instances = Instances((10, 10)) - instances.proposal_boxes = Boxes(proposal_boxes) - return self._output_layer.predict_probs((scores, None), [instances]) - - box_head_output_size = 8 - - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=5, - ) - - model = Model(box_predictor) - - from detectron2.export.torchscript_patch import patch_builtin_len - - with torch.no_grad(), patch_builtin_len(): - func = torch.jit.trace(model, (torch.randn(10, 6), torch.rand(10, 4))) - o = func(torch.randn(10, 6), torch.randn(10, 4)) - self.assertEqual(o[0].shape, (10, 6)) - o = func(torch.randn(5, 6), torch.randn(5, 4)) - self.assertEqual(o[0].shape, (5, 6)) - o = func(torch.randn(20, 6), torch.randn(20, 4)) - self.assertEqual(o[0].shape, (20, 6)) - - -if __name__ == "__main__": - unittest.main() diff --git "a/spaces/Tuana/find-the-animal/pages/1_\342\255\220\357\270\217_Info.py" "b/spaces/Tuana/find-the-animal/pages/1_\342\255\220\357\270\217_Info.py" deleted file mode 100644 index bedde25b603c7ad6b8f83c43d661b7a4399931f4..0000000000000000000000000000000000000000 --- "a/spaces/Tuana/find-the-animal/pages/1_\342\255\220\357\270\217_Info.py" +++ /dev/null @@ -1,60 +0,0 @@ -import streamlit as st -from utils.frontend import build_sidebar - -build_sidebar() - -st.markdown(""" -# Better Image Retrieval With Retrieval-Augmented CLIP 🧠 - - -[CLIP](https://openai.com/blog/clip/) is a neural network that can predict how semantically close images and text pairs are. 
-In simpler terms, it can tell that the string "Cat" is closer to images of cats than to images of dogs. - -What makes CLIP so powerful is that it is a zero-shot model: that means it can generalize concepts, -understand text and images it has never seen before. For example, it can tell that the string "an animal with yellow eyes" -is closer to images of cats than to dogs, even though such a pair was not in its training data. - -Why does this matter? Because zero-shot capabilities allow models to understand descriptions. And in fact -CLIP understands that "an animal with pink feathers" matches a flamingo better than a pig. - -However, these descriptions need to be related to what the image shows. CLIP knows nothing about animal features, -history and cultural references: it doesn't know which animals live longer than others, that jaguars were often depicted -in Aztec wall paintings, or that wolves and bears are typical animals that show up in European fairy tales. It doesn't even -know that cheetahs are fast, because it cannot tell that from the image. - -However, Wikipedia contains all this information, and more. Can we make CLIP "look up" the answer to -our questions on Wikipedia before looking for matches? - -In this demo application, we see how we can combine traditional Extractive QA on Wikipedia and CLIP with Haystack.""") - -st.image("diagram.png") - -st.markdown(""" -In the image above you can see what the process looks like. - -First, we download a slice of Wikipedia with information about all the animals in the Lisbon zoo and preprocess, -index, embed and store those pages in a DocumentStore. For this demo we're using -[FAISSDocumentStore](https://docs.haystack.deepset.ai/docs/document_store). - -At this point they are ready to be queried by the text Retriever, in this case an instance of -[EmbeddingRetriever](https://docs.haystack.deepset.ai/docs/retriever#embedding-retrieval-recommended). -It compares the user's question ("The fastest animal") to all the documents indexed earlier and returns the -documents that are most likely to contain an answer to the question. -In this case, it will probably return snippets from the Cheetah Wikipedia entry. - -Once the documents are found, they are handed over to the Reader (in this demo, a -[FARMReader](https://docs.haystack.deepset.ai/docs/reader) node): -a model that is able to locate precisely the answer to a question within a document. -These answers are strings that should now be very easy for CLIP to understand, such as the name of an animal. -In this case, the Reader will return answers such as "Cheetah", "the cheetah", etc. - -These strings are then ranked and the most likely one is sent over to the -[MultiModalRetriever](https://docs.haystack.deepset.ai/docs/retriever#multimodal-retrieval) -that contains CLIP, which will use its own document store of images to find all the pictures that match the string. -Cheetahs are present in the Lisbon zoo, so it will find pictures of them and return them. - -These nodes are chained together using a [Pipeline](https://docs.haystack.deepset.ai/docs/pipelines) object, -so that all you need to do to run a system like this is a single call: `pipeline.run(query="What's the fastest animal?")` -will return the list of images directly. -Have a look at [how we implemented it](https://github.com/TuanaCelik/find-the-animal/blob/main/utils/haystack.py)!
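As a rough sketch of how such a pipeline could be wired up with Haystack (the model names and document store below are illustrative assumptions, not the demo's exact code):

# Illustrative sketch only -- see utils/haystack.py for the demo's real implementation.
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import EmbeddingRetriever, FARMReader

text_store = InMemoryDocumentStore(embedding_dim=768)  # would hold the zoo-animal Wikipedia passages
retriever = EmbeddingRetriever(document_store=text_store,
                               embedding_model="sentence-transformers/multi-qa-mpnet-base-dot-v1")
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

qa = Pipeline()
qa.add_node(component=retriever, name="Retriever", inputs=["Query"])
qa.add_node(component=reader, name="Reader", inputs=["Retriever"])

answers = qa.run(query="What's the fastest animal?")["answers"]
best = answers[0].answer if answers else ""  # e.g. "Cheetah"
# The top answer string is then handed to a CLIP-based MultiModalRetriever over an
# image document store to fetch the matching photos (omitted here).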
-""") \ No newline at end of file diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py deleted file mode 100644 index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000 --- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py +++ /dev/null @@ -1,273 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from torch import Tensor, nn -from torchvision.ops.boxes import nms -from transformers import BertConfig, BertModel, BertPreTrainedModel -from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions - - -class BertModelWarper(nn.Module): - def __init__(self, bert_model): - super().__init__() - # self.bert = bert_modelc - - self.config = bert_model.config - self.embeddings = bert_model.embeddings - self.encoder = bert_model.encoder - self.pooler = bert_model.pooler - - self.get_extended_attention_mask = bert_model.get_extended_attention_mask - self.invert_attention_mask = bert_model.invert_attention_mask - self.get_head_mask = bert_model.get_head_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- """ - output_attentions = ( - output_attentions if output_attentions is not None else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - batch_size, seq_length = input_shape - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = ( - past_key_values[0][0].shape[2] if past_key_values is not None else 0 - ) - - if attention_mask is None: - attention_mask = torch.ones( - ((batch_size, seq_length + past_key_values_length)), device=device - ) - if token_type_ids is None: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask( - attention_mask, input_shape, device - ) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - 
return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class TextEncoderShell(nn.Module): - def __init__(self, text_encoder): - super().__init__() - self.text_encoder = text_encoder - self.config = self.text_encoder.config - - def forward(self, **kw): - # feed into text encoder - return self.text_encoder(**kw) - - -def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer): - """Generate attention mask between each pair of special tokens - Args: - input_ids (torch.Tensor): input ids. Shape: [bs, num_token] - special_tokens_mask (list): special tokens mask. - Returns: - torch.Tensor: attention mask between each special tokens. - """ - input_ids = tokenized["input_ids"] - bs, num_token = input_ids.shape - # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens - special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool() - for special_token in special_tokens_list: - special_tokens_mask |= input_ids == special_token - - # idxs: each row is a list of indices of special tokens - idxs = torch.nonzero(special_tokens_mask) - - # generate attention mask and positional ids - attention_mask = ( - torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1) - ) - position_ids = torch.zeros((bs, num_token), device=input_ids.device) - previous_col = 0 - for i in range(idxs.shape[0]): - row, col = idxs[i] - if (col == 0) or (col == num_token - 1): - attention_mask[row, col, col] = True - position_ids[row, col] = 0 - else: - attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True - position_ids[row, previous_col + 1 : col + 1] = torch.arange( - 0, col - previous_col, device=input_ids.device - ) - - previous_col = col - - # # padding mask - # padding_mask = tokenized['attention_mask'] - # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool() - - return attention_mask, position_ids.to(torch.long) - - -def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer): - """Generate attention mask between each pair of special tokens - Args: - input_ids (torch.Tensor): input ids. Shape: [bs, num_token] - special_tokens_mask (list): special tokens mask. - Returns: - torch.Tensor: attention mask between each special tokens. - """ - input_ids = tokenized["input_ids"] - bs, num_token = input_ids.shape - # special_tokens_mask: bs, num_token. 1 for special tokens. 
0 for normal tokens - special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool() - for special_token in special_tokens_list: - special_tokens_mask |= input_ids == special_token - - # idxs: each row is a list of indices of special tokens - idxs = torch.nonzero(special_tokens_mask) - - # generate attention mask and positional ids - attention_mask = ( - torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1) - ) - position_ids = torch.zeros((bs, num_token), device=input_ids.device) - cate_to_token_mask_list = [[] for _ in range(bs)] - previous_col = 0 - for i in range(idxs.shape[0]): - row, col = idxs[i] - if (col == 0) or (col == num_token - 1): - attention_mask[row, col, col] = True - position_ids[row, col] = 0 - else: - attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True - position_ids[row, previous_col + 1 : col + 1] = torch.arange( - 0, col - previous_col, device=input_ids.device - ) - c2t_maski = torch.zeros((num_token), device=input_ids.device).bool() - c2t_maski[previous_col + 1 : col] = True - cate_to_token_mask_list[row].append(c2t_maski) - previous_col = col - - cate_to_token_mask_list = [ - torch.stack(cate_to_token_mask_listi, dim=0) - for cate_to_token_mask_listi in cate_to_token_mask_list - ] - - # # padding mask - # padding_mask = tokenized['attention_mask'] - # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool() - - return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list diff --git a/spaces/Workhack/chatgpt-prompt-playground/static/js/main.60bfe039.js b/spaces/Workhack/chatgpt-prompt-playground/static/js/main.60bfe039.js deleted file mode 100644 index 1e7c782a78493400c91343b48c0719de1cb9e4f7..0000000000000000000000000000000000000000 --- a/spaces/Workhack/chatgpt-prompt-playground/static/js/main.60bfe039.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! For license information please see main.60bfe039.js.LICENSE.txt */ -!function(){"use strict";var e={463:function(e,t,n){var r=n(791),l=n(296);function a(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n
- - Results in something like this: - - .. sourcecode:: html - -
    - ... -
- - As you can see it automatically prepends a space in front of the item - if the filter returned something unless the second parameter is false. - """ - rv = " ".join( - f'{escape(key)}="{escape(value)}"' - for key, value in d.items() - if value is not None and not isinstance(value, Undefined) - ) - - if autospace and rv: - rv = " " + rv - - if eval_ctx.autoescape: - rv = Markup(rv) - - return rv - - -def do_capitalize(s: str) -> str: - """Capitalize a value. The first character will be uppercase, all others - lowercase. - """ - return soft_str(s).capitalize() - - -_word_beginning_split_re = re.compile(r"([-\s({\[<]+)") - - -def do_title(s: str) -> str: - """Return a titlecased version of the value. I.e. words will start with - uppercase letters, all remaining characters are lowercase. - """ - return "".join( - [ - item[0].upper() + item[1:].lower() - for item in _word_beginning_split_re.split(soft_str(s)) - if item - ] - ) - - -def do_dictsort( - value: t.Mapping[K, V], - case_sensitive: bool = False, - by: 'te.Literal["key", "value"]' = "key", - reverse: bool = False, -) -> t.List[t.Tuple[K, V]]: - """Sort a dict and yield (key, value) pairs. Python dicts may not - be in the order you want to display them in, so sort them first. - - .. sourcecode:: jinja - - {% for key, value in mydict|dictsort %} - sort the dict by key, case insensitive - - {% for key, value in mydict|dictsort(reverse=true) %} - sort the dict by key, case insensitive, reverse order - - {% for key, value in mydict|dictsort(true) %} - sort the dict by key, case sensitive - - {% for key, value in mydict|dictsort(false, 'value') %} - sort the dict by value, case insensitive - """ - if by == "key": - pos = 0 - elif by == "value": - pos = 1 - else: - raise FilterArgumentError('You can only sort by either "key" or "value"') - - def sort_func(item: t.Tuple[t.Any, t.Any]) -> t.Any: - value = item[pos] - - if not case_sensitive: - value = ignore_case(value) - - return value - - return sorted(value.items(), key=sort_func, reverse=reverse) - - -@pass_environment -def do_sort( - environment: "Environment", - value: "t.Iterable[V]", - reverse: bool = False, - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.List[V]": - """Sort an iterable using Python's :func:`sorted`. - - .. sourcecode:: jinja - - {% for city in cities|sort %} - ... - {% endfor %} - - :param reverse: Sort descending instead of ascending. - :param case_sensitive: When sorting strings, sort upper and lower - case separately. - :param attribute: When sorting objects or dicts, an attribute or - key to sort by. Can use dot notation like ``"address.city"``. - Can be a list of attributes like ``"age,name"``. - - The sort is stable, it does not change the relative order of - elements that compare equal. This makes it is possible to chain - sorts on different attributes and ordering. - - .. sourcecode:: jinja - - {% for user in users|sort(attribute="name") - |sort(reverse=true, attribute="age") %} - ... - {% endfor %} - - As a shortcut to chaining when the direction is the same for all - attributes, pass a comma separate list of attributes. - - .. sourcecode:: jinja - - {% for user in users|sort(attribute="age,name") %} - ... - {% endfor %} - - .. versionchanged:: 2.11.0 - The ``attribute`` parameter can be a comma separated list of - attributes, e.g. ``"age,name"``. - - .. versionchanged:: 2.6 - The ``attribute`` parameter was added. 
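A quick way to see the multi-attribute sorting described in the `sort` docstring above is to render it through a throwaway environment; the data below is made up and the expected output is noted in the comment.

```python
from jinja2 import Environment

env = Environment()
template = env.from_string(
    "{{ users | sort(attribute='age,name') | map(attribute='name') | join(', ') }}"
)
print(template.render(users=[
    {"name": "bob", "age": 30},
    {"name": "alice", "age": 30},
    {"name": "carol", "age": 25},
]))
# -> carol, alice, bob   (sorted by age, then by name within equal ages)
```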
- """ - key_func = make_multi_attrgetter( - environment, attribute, postprocess=ignore_case if not case_sensitive else None - ) - return sorted(value, key=key_func, reverse=reverse) - - -@pass_environment -def do_unique( - environment: "Environment", - value: "t.Iterable[V]", - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.Iterator[V]": - """Returns a list of unique items from the given iterable. - - .. sourcecode:: jinja - - {{ ['foo', 'bar', 'foobar', 'FooBar']|unique|list }} - -> ['foo', 'bar', 'foobar'] - - The unique items are yielded in the same order as their first occurrence in - the iterable passed to the filter. - - :param case_sensitive: Treat upper and lower case strings as distinct. - :param attribute: Filter objects with unique values for this attribute. - """ - getter = make_attrgetter( - environment, attribute, postprocess=ignore_case if not case_sensitive else None - ) - seen = set() - - for item in value: - key = getter(item) - - if key not in seen: - seen.add(key) - yield item - - -def _min_or_max( - environment: "Environment", - value: "t.Iterable[V]", - func: "t.Callable[..., V]", - case_sensitive: bool, - attribute: t.Optional[t.Union[str, int]], -) -> "t.Union[V, Undefined]": - it = iter(value) - - try: - first = next(it) - except StopIteration: - return environment.undefined("No aggregated item, sequence was empty.") - - key_func = make_attrgetter( - environment, attribute, postprocess=ignore_case if not case_sensitive else None - ) - return func(chain([first], it), key=key_func) - - -@pass_environment -def do_min( - environment: "Environment", - value: "t.Iterable[V]", - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.Union[V, Undefined]": - """Return the smallest item from the sequence. - - .. sourcecode:: jinja - - {{ [1, 2, 3]|min }} - -> 1 - - :param case_sensitive: Treat upper and lower case strings as distinct. - :param attribute: Get the object with the min value of this attribute. - """ - return _min_or_max(environment, value, min, case_sensitive, attribute) - - -@pass_environment -def do_max( - environment: "Environment", - value: "t.Iterable[V]", - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.Union[V, Undefined]": - """Return the largest item from the sequence. - - .. sourcecode:: jinja - - {{ [1, 2, 3]|max }} - -> 3 - - :param case_sensitive: Treat upper and lower case strings as distinct. - :param attribute: Get the object with the max value of this attribute. - """ - return _min_or_max(environment, value, max, case_sensitive, attribute) - - -def do_default( - value: V, - default_value: V = "", # type: ignore - boolean: bool = False, -) -> V: - """If the value is undefined it will return the passed default value, - otherwise the value of the variable: - - .. sourcecode:: jinja - - {{ my_variable|default('my_variable is not defined') }} - - This will output the value of ``my_variable`` if the variable was - defined, otherwise ``'my_variable is not defined'``. If you want - to use default with variables that evaluate to false you have to - set the second parameter to `true`: - - .. sourcecode:: jinja - - {{ ''|default('the string was empty', true) }} - - .. 
versionchanged:: 2.11 - It's now possible to configure the :class:`~jinja2.Environment` with - :class:`~jinja2.ChainableUndefined` to make the `default` filter work - on nested elements and attributes that may contain undefined values - in the chain without getting an :exc:`~jinja2.UndefinedError`. - """ - if isinstance(value, Undefined) or (boolean and not value): - return default_value - - return value - - -@pass_eval_context -def sync_do_join( - eval_ctx: "EvalContext", - value: t.Iterable, - d: str = "", - attribute: t.Optional[t.Union[str, int]] = None, -) -> str: - """Return a string which is the concatenation of the strings in the - sequence. The separator between elements is an empty string per - default, you can define it with the optional parameter: - - .. sourcecode:: jinja - - {{ [1, 2, 3]|join('|') }} - -> 1|2|3 - - {{ [1, 2, 3]|join }} - -> 123 - - It is also possible to join certain attributes of an object: - - .. sourcecode:: jinja - - {{ users|join(', ', attribute='username') }} - - .. versionadded:: 2.6 - The `attribute` parameter was added. - """ - if attribute is not None: - value = map(make_attrgetter(eval_ctx.environment, attribute), value) - - # no automatic escaping? joining is a lot easier then - if not eval_ctx.autoescape: - return str(d).join(map(str, value)) - - # if the delimiter doesn't have an html representation we check - # if any of the items has. If yes we do a coercion to Markup - if not hasattr(d, "__html__"): - value = list(value) - do_escape = False - - for idx, item in enumerate(value): - if hasattr(item, "__html__"): - do_escape = True - else: - value[idx] = str(item) - - if do_escape: - d = escape(d) - else: - d = str(d) - - return d.join(value) - - # no html involved, to normal joining - return soft_str(d).join(map(soft_str, value)) - - -@async_variant(sync_do_join) # type: ignore -async def do_join( - eval_ctx: "EvalContext", - value: t.Union[t.AsyncIterable, t.Iterable], - d: str = "", - attribute: t.Optional[t.Union[str, int]] = None, -) -> str: - return sync_do_join(eval_ctx, await auto_to_list(value), d, attribute) - - -def do_center(value: str, width: int = 80) -> str: - """Centers the value in a field of a given width.""" - return soft_str(value).center(width) - - -@pass_environment -def sync_do_first( - environment: "Environment", seq: "t.Iterable[V]" -) -> "t.Union[V, Undefined]": - """Return the first item of a sequence.""" - try: - return next(iter(seq)) - except StopIteration: - return environment.undefined("No first item, sequence was empty.") - - -@async_variant(sync_do_first) # type: ignore -async def do_first( - environment: "Environment", seq: "t.Union[t.AsyncIterable[V], t.Iterable[V]]" -) -> "t.Union[V, Undefined]": - try: - return await auto_aiter(seq).__anext__() - except StopAsyncIteration: - return environment.undefined("No first item, sequence was empty.") - - -@pass_environment -def do_last( - environment: "Environment", seq: "t.Reversible[V]" -) -> "t.Union[V, Undefined]": - """Return the last item of a sequence. - - Note: Does not work with generators. You may want to explicitly - convert it to a list: - - .. sourcecode:: jinja - - {{ data | selectattr('name', '==', 'Jinja') | list | last }} - """ - try: - return next(iter(reversed(seq))) - except StopIteration: - return environment.undefined("No last item, sequence was empty.") - - -# No async do_last, it may not be safe in async mode. 
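The `first`, `last`, `default` and `join` filters defined above can be exercised directly from a small standalone environment; the expected output is noted in the comments.

```python
from jinja2 import Environment

env = Environment()

print(env.from_string("{{ seq | first }} / {{ seq | last }}").render(seq=[1, 2, 3]))
# -> 1 / 3

print(env.from_string("{{ missing | default('n/a') }}").render())
# -> n/a   (an undefined value falls back to the default)

print(env.from_string("{{ users | join(', ', attribute='name') }}").render(
    users=[{"name": "ada"}, {"name": "linus"}]
))
# -> ada, linus
```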
- - -@pass_context -def do_random(context: "Context", seq: "t.Sequence[V]") -> "t.Union[V, Undefined]": - """Return a random item from the sequence.""" - try: - return random.choice(seq) - except IndexError: - return context.environment.undefined("No random item, sequence was empty.") - - -def do_filesizeformat(value: t.Union[str, float, int], binary: bool = False) -> str: - """Format the value like a 'human-readable' file size (i.e. 13 kB, - 4.1 MB, 102 Bytes, etc). Per default decimal prefixes are used (Mega, - Giga, etc.), if the second parameter is set to `True` the binary - prefixes are used (Mebi, Gibi). - """ - bytes = float(value) - base = 1024 if binary else 1000 - prefixes = [ - ("KiB" if binary else "kB"), - ("MiB" if binary else "MB"), - ("GiB" if binary else "GB"), - ("TiB" if binary else "TB"), - ("PiB" if binary else "PB"), - ("EiB" if binary else "EB"), - ("ZiB" if binary else "ZB"), - ("YiB" if binary else "YB"), - ] - - if bytes == 1: - return "1 Byte" - elif bytes < base: - return f"{int(bytes)} Bytes" - else: - for i, prefix in enumerate(prefixes): - unit = base ** (i + 2) - - if bytes < unit: - return f"{base * bytes / unit:.1f} {prefix}" - - return f"{base * bytes / unit:.1f} {prefix}" - - -def do_pprint(value: t.Any) -> str: - """Pretty print a variable. Useful for debugging.""" - return pformat(value) - - -_uri_scheme_re = re.compile(r"^([\w.+-]{2,}:(/){0,2})$") - - -@pass_eval_context -def do_urlize( - eval_ctx: "EvalContext", - value: str, - trim_url_limit: t.Optional[int] = None, - nofollow: bool = False, - target: t.Optional[str] = None, - rel: t.Optional[str] = None, - extra_schemes: t.Optional[t.Iterable[str]] = None, -) -> str: - """Convert URLs in text into clickable links. - - This may not recognize links in some situations. Usually, a more - comprehensive formatter, such as a Markdown library, is a better - choice. - - Works on ``http://``, ``https://``, ``www.``, ``mailto:``, and email - addresses. Links with trailing punctuation (periods, commas, closing - parentheses) and leading punctuation (opening parentheses) are - recognized excluding the punctuation. Email addresses that include - header fields are not recognized (for example, - ``mailto:address@example.com?cc=copy@example.com``). - - :param value: Original text containing URLs to link. - :param trim_url_limit: Shorten displayed URL values to this length. - :param nofollow: Add the ``rel=nofollow`` attribute to links. - :param target: Add the ``target`` attribute to links. - :param rel: Add the ``rel`` attribute to links. - :param extra_schemes: Recognize URLs that start with these schemes - in addition to the default behavior. Defaults to - ``env.policies["urlize.extra_schemes"]``, which defaults to no - extra schemes. - - .. versionchanged:: 3.0 - The ``extra_schemes`` parameter was added. - - .. versionchanged:: 3.0 - Generate ``https://`` links for URLs without a scheme. - - .. versionchanged:: 3.0 - The parsing rules were updated. Recognize email addresses with - or without the ``mailto:`` scheme. Validate IP addresses. Ignore - parentheses and brackets in more cases. - - .. versionchanged:: 2.8 - The ``target`` parameter was added. 
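As a small illustration of the `urlize` behaviour documented above (scheme-less URLs become `https://` links), here is a hedged check; the exact attributes on the generated `<a>` tag depend on the environment's `urlize.*` policies.

```python
from jinja2 import Environment

env = Environment()
html = env.from_string("{{ text | urlize }}").render(
    text="see www.example.com for details"
)
print(html)
# "www.example.com" is wrapped in an <a href="https://www.example.com" ...> link;
# the rel/target attributes are taken from the environment's urlize.* policies.
```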
- """ - policies = eval_ctx.environment.policies - rel_parts = set((rel or "").split()) - - if nofollow: - rel_parts.add("nofollow") - - rel_parts.update((policies["urlize.rel"] or "").split()) - rel = " ".join(sorted(rel_parts)) or None - - if target is None: - target = policies["urlize.target"] - - if extra_schemes is None: - extra_schemes = policies["urlize.extra_schemes"] or () - - for scheme in extra_schemes: - if _uri_scheme_re.fullmatch(scheme) is None: - raise FilterArgumentError(f"{scheme!r} is not a valid URI scheme prefix.") - - rv = urlize( - value, - trim_url_limit=trim_url_limit, - rel=rel, - target=target, - extra_schemes=extra_schemes, - ) - - if eval_ctx.autoescape: - rv = Markup(rv) - - return rv - - -def do_indent( - s: str, width: t.Union[int, str] = 4, first: bool = False, blank: bool = False -) -> str: - """Return a copy of the string with each line indented by 4 spaces. The - first line and blank lines are not indented by default. - - :param width: Number of spaces, or a string, to indent by. - :param first: Don't skip indenting the first line. - :param blank: Don't skip indenting empty lines. - - .. versionchanged:: 3.0 - ``width`` can be a string. - - .. versionchanged:: 2.10 - Blank lines are not indented by default. - - Rename the ``indentfirst`` argument to ``first``. - """ - if isinstance(width, str): - indention = width - else: - indention = " " * width - - newline = "\n" - - if isinstance(s, Markup): - indention = Markup(indention) - newline = Markup(newline) - - s += newline # this quirk is necessary for splitlines method - - if blank: - rv = (newline + indention).join(s.splitlines()) - else: - lines = s.splitlines() - rv = lines.pop(0) - - if lines: - rv += newline + newline.join( - indention + line if line else line for line in lines - ) - - if first: - rv = indention + rv - - return rv - - -@pass_environment -def do_truncate( - env: "Environment", - s: str, - length: int = 255, - killwords: bool = False, - end: str = "...", - leeway: t.Optional[int] = None, -) -> str: - """Return a truncated copy of the string. The length is specified - with the first parameter which defaults to ``255``. If the second - parameter is ``true`` the filter will cut the text at length. Otherwise - it will discard the last word. If the text was in fact - truncated it will append an ellipsis sign (``"..."``). If you want a - different ellipsis sign than ``"..."`` you can specify it using the - third parameter. Strings that only exceed the length by the tolerance - margin given in the fourth parameter will not be truncated. - - .. sourcecode:: jinja - - {{ "foo bar baz qux"|truncate(9) }} - -> "foo..." - {{ "foo bar baz qux"|truncate(9, True) }} - -> "foo ba..." - {{ "foo bar baz qux"|truncate(11) }} - -> "foo bar baz qux" - {{ "foo bar baz qux"|truncate(11, False, '...', 0) }} - -> "foo bar..." - - The default leeway on newer Jinja versions is 5 and was 0 before but - can be reconfigured globally. 
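The leeway behaviour of `truncate` described above can be checked quickly with the default leeway of 5; expected output is noted in the comments.

```python
from jinja2 import Environment

env = Environment()

print(env.from_string('{{ "foo bar baz qux" | truncate(9) }}').render())
# -> foo...            (15 chars > 9 + leeway 5, so it is cut at a word boundary)

print(env.from_string('{{ "foo bar baz qux" | truncate(11) }}').render())
# -> foo bar baz qux   (within length + leeway, returned unchanged)
```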
- """ - if leeway is None: - leeway = env.policies["truncate.leeway"] - - assert length >= len(end), f"expected length >= {len(end)}, got {length}" - assert leeway >= 0, f"expected leeway >= 0, got {leeway}" - - if len(s) <= length + leeway: - return s - - if killwords: - return s[: length - len(end)] + end - - result = s[: length - len(end)].rsplit(" ", 1)[0] - return result + end - - -@pass_environment -def do_wordwrap( - environment: "Environment", - s: str, - width: int = 79, - break_long_words: bool = True, - wrapstring: t.Optional[str] = None, - break_on_hyphens: bool = True, -) -> str: - """Wrap a string to the given width. Existing newlines are treated - as paragraphs to be wrapped separately. - - :param s: Original text to wrap. - :param width: Maximum length of wrapped lines. - :param break_long_words: If a word is longer than ``width``, break - it across lines. - :param break_on_hyphens: If a word contains hyphens, it may be split - across lines. - :param wrapstring: String to join each wrapped line. Defaults to - :attr:`Environment.newline_sequence`. - - .. versionchanged:: 2.11 - Existing newlines are treated as paragraphs wrapped separately. - - .. versionchanged:: 2.11 - Added the ``break_on_hyphens`` parameter. - - .. versionchanged:: 2.7 - Added the ``wrapstring`` parameter. - """ - import textwrap - - if wrapstring is None: - wrapstring = environment.newline_sequence - - # textwrap.wrap doesn't consider existing newlines when wrapping. - # If the string has a newline before width, wrap will still insert - # a newline at width, resulting in a short line. Instead, split and - # wrap each paragraph individually. - return wrapstring.join( - [ - wrapstring.join( - textwrap.wrap( - line, - width=width, - expand_tabs=False, - replace_whitespace=False, - break_long_words=break_long_words, - break_on_hyphens=break_on_hyphens, - ) - ) - for line in s.splitlines() - ] - ) - - -_word_re = re.compile(r"\w+") - - -def do_wordcount(s: str) -> int: - """Count the words in that string.""" - return len(_word_re.findall(soft_str(s))) - - -def do_int(value: t.Any, default: int = 0, base: int = 10) -> int: - """Convert the value into an integer. If the - conversion doesn't work it will return ``0``. You can - override this default using the first parameter. You - can also override the default base (10) in the second - parameter, which handles input with prefixes such as - 0b, 0o and 0x for bases 2, 8 and 16 respectively. - The base is ignored for decimal numbers and non-string values. - """ - try: - if isinstance(value, str): - return int(value, base) - - return int(value) - except (TypeError, ValueError): - # this quirk is necessary so that "42.23"|int gives 42. - try: - return int(float(value)) - except (TypeError, ValueError): - return default - - -def do_float(value: t.Any, default: float = 0.0) -> float: - """Convert the value into a floating point number. If the - conversion doesn't work it will return ``0.0``. You can - override this default using the first parameter. - """ - try: - return float(value) - except (TypeError, ValueError): - return default - - -def do_format(value: str, *args: t.Any, **kwargs: t.Any) -> str: - """Apply the given values to a `printf-style`_ format string, like - ``string % values``. - - .. sourcecode:: jinja - - {{ "%s, %s!"|format(greeting, name) }} - Hello, World! - - In most cases it should be more convenient and efficient to use the - ``%`` operator or :meth:`str.format`. - - .. code-block:: text - - {{ "%s, %s!" 
% (greeting, name) }} - {{ "{}, {}!".format(greeting, name) }} - - .. _printf-style: https://docs.python.org/library/stdtypes.html - #printf-style-string-formatting - """ - if args and kwargs: - raise FilterArgumentError( - "can't handle positional and keyword arguments at the same time" - ) - - return soft_str(value) % (kwargs or args) - - -def do_trim(value: str, chars: t.Optional[str] = None) -> str: - """Strip leading and trailing characters, by default whitespace.""" - return soft_str(value).strip(chars) - - -def do_striptags(value: "t.Union[str, HasHTML]") -> str: - """Strip SGML/XML tags and replace adjacent whitespace by one space.""" - if hasattr(value, "__html__"): - value = t.cast("HasHTML", value).__html__() - - return Markup(str(value)).striptags() - - -def sync_do_slice( - value: "t.Collection[V]", slices: int, fill_with: "t.Optional[V]" = None -) -> "t.Iterator[t.List[V]]": - """Slice an iterator and return a list of lists containing - those items. Useful if you want to create a div containing - three ul tags that represent columns: - - .. sourcecode:: html+jinja - -
- {%- for column in items|slice(3) %} -
    - {%- for item in column %} -
  • {{ item }}
  • - {%- endfor %} -
- {%- endfor %} -
- - If you pass it a second argument it's used to fill missing - values on the last iteration. - """ - seq = list(value) - length = len(seq) - items_per_slice = length // slices - slices_with_extra = length % slices - offset = 0 - - for slice_number in range(slices): - start = offset + slice_number * items_per_slice - - if slice_number < slices_with_extra: - offset += 1 - - end = offset + (slice_number + 1) * items_per_slice - tmp = seq[start:end] - - if fill_with is not None and slice_number >= slices_with_extra: - tmp.append(fill_with) - - yield tmp - - -@async_variant(sync_do_slice) # type: ignore -async def do_slice( - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - slices: int, - fill_with: t.Optional[t.Any] = None, -) -> "t.Iterator[t.List[V]]": - return sync_do_slice(await auto_to_list(value), slices, fill_with) - - -def do_batch( - value: "t.Iterable[V]", linecount: int, fill_with: "t.Optional[V]" = None -) -> "t.Iterator[t.List[V]]": - """ - A filter that batches items. It works pretty much like `slice` - just the other way round. It returns a list of lists with the - given number of items. If you provide a second parameter this - is used to fill up missing items. See this example: - - .. sourcecode:: html+jinja - - - {%- for row in items|batch(3, ' ') %} - - {%- for column in row %} - - {%- endfor %} - - {%- endfor %} -
{{ column }}
- """ - tmp: "t.List[V]" = [] - - for item in value: - if len(tmp) == linecount: - yield tmp - tmp = [] - - tmp.append(item) - - if tmp: - if fill_with is not None and len(tmp) < linecount: - tmp += [fill_with] * (linecount - len(tmp)) - - yield tmp - - -def do_round( - value: float, - precision: int = 0, - method: 'te.Literal["common", "ceil", "floor"]' = "common", -) -> float: - """Round the number to a given precision. The first - parameter specifies the precision (default is ``0``), the - second the rounding method: - - - ``'common'`` rounds either up or down - - ``'ceil'`` always rounds up - - ``'floor'`` always rounds down - - If you don't specify a method ``'common'`` is used. - - .. sourcecode:: jinja - - {{ 42.55|round }} - -> 43.0 - {{ 42.55|round(1, 'floor') }} - -> 42.5 - - Note that even if rounded to 0 precision, a float is returned. If - you need a real integer, pipe it through `int`: - - .. sourcecode:: jinja - - {{ 42.55|round|int }} - -> 43 - """ - if method not in {"common", "ceil", "floor"}: - raise FilterArgumentError("method must be common, ceil or floor") - - if method == "common": - return round(value, precision) - - func = getattr(math, method) - return t.cast(float, func(value * (10**precision)) / (10**precision)) - - -class _GroupTuple(t.NamedTuple): - grouper: t.Any - list: t.List - - # Use the regular tuple repr to hide this subclass if users print - # out the value during debugging. - def __repr__(self) -> str: - return tuple.__repr__(self) - - def __str__(self) -> str: - return tuple.__str__(self) - - -@pass_environment -def sync_do_groupby( - environment: "Environment", - value: "t.Iterable[V]", - attribute: t.Union[str, int], - default: t.Optional[t.Any] = None, - case_sensitive: bool = False, -) -> "t.List[_GroupTuple]": - """Group a sequence of objects by an attribute using Python's - :func:`itertools.groupby`. The attribute can use dot notation for - nested access, like ``"address.city"``. Unlike Python's ``groupby``, - the values are sorted first so only one group is returned for each - unique value. - - For example, a list of ``User`` objects with a ``city`` attribute - can be rendered in groups. In this example, ``grouper`` refers to - the ``city`` value of the group. - - .. sourcecode:: html+jinja - -
    {% for city, items in users|groupby("city") %} -
  • {{ city }} -
      {% for user in items %} -
    • {{ user.name }} - {% endfor %}
    -
  • - {% endfor %}
- - ``groupby`` yields namedtuples of ``(grouper, list)``, which - can be used instead of the tuple unpacking above. ``grouper`` is the - value of the attribute, and ``list`` is the items with that value. - - .. sourcecode:: html+jinja - -
    {% for group in users|groupby("city") %} -
  • {{ group.grouper }}: {{ group.list|join(", ") }} - {% endfor %}
- - You can specify a ``default`` value to use if an object in the list - does not have the given attribute. - - .. sourcecode:: jinja - -
    {% for city, items in users|groupby("city", default="NY") %} -
  • {{ city }}: {{ items|map(attribute="name")|join(", ") }}
  • - {% endfor %}
- - Like the :func:`~jinja-filters.sort` filter, sorting and grouping is - case-insensitive by default. The ``key`` for each group will have - the case of the first item in that group of values. For example, if - a list of users has cities ``["CA", "NY", "ca"]``, the "CA" group - will have two values. This can be disabled by passing - ``case_sensitive=True``. - - .. versionchanged:: 3.1 - Added the ``case_sensitive`` parameter. Sorting and grouping is - case-insensitive by default, matching other filters that do - comparisons. - - .. versionchanged:: 3.0 - Added the ``default`` parameter. - - .. versionchanged:: 2.6 - The attribute supports dot notation for nested access. - """ - expr = make_attrgetter( - environment, - attribute, - postprocess=ignore_case if not case_sensitive else None, - default=default, - ) - out = [ - _GroupTuple(key, list(values)) - for key, values in groupby(sorted(value, key=expr), expr) - ] - - if not case_sensitive: - # Return the real key from the first value instead of the lowercase key. - output_expr = make_attrgetter(environment, attribute, default=default) - out = [_GroupTuple(output_expr(values[0]), values) for _, values in out] - - return out - - -@async_variant(sync_do_groupby) # type: ignore -async def do_groupby( - environment: "Environment", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - attribute: t.Union[str, int], - default: t.Optional[t.Any] = None, - case_sensitive: bool = False, -) -> "t.List[_GroupTuple]": - expr = make_attrgetter( - environment, - attribute, - postprocess=ignore_case if not case_sensitive else None, - default=default, - ) - out = [ - _GroupTuple(key, await auto_to_list(values)) - for key, values in groupby(sorted(await auto_to_list(value), key=expr), expr) - ] - - if not case_sensitive: - # Return the real key from the first value instead of the lowercase key. - output_expr = make_attrgetter(environment, attribute, default=default) - out = [_GroupTuple(output_expr(values[0]), values) for _, values in out] - - return out - - -@pass_environment -def sync_do_sum( - environment: "Environment", - iterable: "t.Iterable[V]", - attribute: t.Optional[t.Union[str, int]] = None, - start: V = 0, # type: ignore -) -> V: - """Returns the sum of a sequence of numbers plus the value of parameter - 'start' (which defaults to 0). When the sequence is empty it returns - start. - - It is also possible to sum up only certain attributes: - - .. sourcecode:: jinja - - Total: {{ items|sum(attribute='price') }} - - .. versionchanged:: 2.6 - The ``attribute`` parameter was added to allow summing up over - attributes. Also the ``start`` parameter was moved on to the right. - """ - if attribute is not None: - iterable = map(make_attrgetter(environment, attribute), iterable) - - return sum(iterable, start) # type: ignore[no-any-return, call-overload] - - -@async_variant(sync_do_sum) # type: ignore -async def do_sum( - environment: "Environment", - iterable: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - attribute: t.Optional[t.Union[str, int]] = None, - start: V = 0, # type: ignore -) -> V: - rv = start - - if attribute is not None: - func = make_attrgetter(environment, attribute) - else: - - def func(x: V) -> V: - return x - - async for item in auto_aiter(iterable): - rv += func(item) - - return rv - - -def sync_do_list(value: "t.Iterable[V]") -> "t.List[V]": - """Convert the value into a list. If it was a string the returned list - will be a list of characters. 
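A quick check of the `sum` and `list` filters documented above; expected output is noted in the comments.

```python
from jinja2 import Environment

env = Environment()

print(env.from_string("Total: {{ items | sum(attribute='price') }}").render(
    items=[{"price": 2}, {"price": 3}]
))
# -> Total: 5

print(env.from_string('{{ "abc" | list }}').render())
# -> ['a', 'b', 'c']   (a string becomes a list of its characters)
```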
- """ - return list(value) - - -@async_variant(sync_do_list) # type: ignore -async def do_list(value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]") -> "t.List[V]": - return await auto_to_list(value) - - -def do_mark_safe(value: str) -> Markup: - """Mark the value as safe which means that in an environment with automatic - escaping enabled this variable will not be escaped. - """ - return Markup(value) - - -def do_mark_unsafe(value: str) -> str: - """Mark a value as unsafe. This is the reverse operation for :func:`safe`.""" - return str(value) - - -@typing.overload -def do_reverse(value: str) -> str: - ... - - -@typing.overload -def do_reverse(value: "t.Iterable[V]") -> "t.Iterable[V]": - ... - - -def do_reverse(value: t.Union[str, t.Iterable[V]]) -> t.Union[str, t.Iterable[V]]: - """Reverse the object or return an iterator that iterates over it the other - way round. - """ - if isinstance(value, str): - return value[::-1] - - try: - return reversed(value) # type: ignore - except TypeError: - try: - rv = list(value) - rv.reverse() - return rv - except TypeError as e: - raise FilterArgumentError("argument must be iterable") from e - - -@pass_environment -def do_attr( - environment: "Environment", obj: t.Any, name: str -) -> t.Union[Undefined, t.Any]: - """Get an attribute of an object. ``foo|attr("bar")`` works like - ``foo.bar`` just that always an attribute is returned and items are not - looked up. - - See :ref:`Notes on subscriptions ` for more details. - """ - try: - name = str(name) - except UnicodeError: - pass - else: - try: - value = getattr(obj, name) - except AttributeError: - pass - else: - if environment.sandboxed: - environment = t.cast("SandboxedEnvironment", environment) - - if not environment.is_safe_attribute(obj, name, value): - return environment.unsafe_undefined(obj, name) - - return value - - return environment.undefined(obj=obj, name=name) - - -@typing.overload -def sync_do_map( - context: "Context", value: t.Iterable, name: str, *args: t.Any, **kwargs: t.Any -) -> t.Iterable: - ... - - -@typing.overload -def sync_do_map( - context: "Context", - value: t.Iterable, - *, - attribute: str = ..., - default: t.Optional[t.Any] = None, -) -> t.Iterable: - ... - - -@pass_context -def sync_do_map( - context: "Context", value: t.Iterable, *args: t.Any, **kwargs: t.Any -) -> t.Iterable: - """Applies a filter on a sequence of objects or looks up an attribute. - This is useful when dealing with lists of objects but you are really - only interested in a certain value of it. - - The basic usage is mapping on an attribute. Imagine you have a list - of users but you are only interested in a list of usernames: - - .. sourcecode:: jinja - - Users on this page: {{ users|map(attribute='username')|join(', ') }} - - You can specify a ``default`` value to use if an object in the list - does not have the given attribute. - - .. sourcecode:: jinja - - {{ users|map(attribute="username", default="Anonymous")|join(", ") }} - - Alternatively you can let it invoke a filter by passing the name of the - filter and the arguments afterwards. A good example would be applying a - text conversion filter on a sequence: - - .. sourcecode:: jinja - - Users on this page: {{ titles|map('lower')|join(', ') }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (u.username for u in users) - (getattr(u, "username", "Anonymous") for u in users) - (do_lower(x) for x in titles) - - .. versionchanged:: 2.11.0 - Added the ``default`` parameter. - - .. 
versionadded:: 2.7 - """ - if value: - func = prepare_map(context, args, kwargs) - - for item in value: - yield func(item) - - -@typing.overload -def do_map( - context: "Context", - value: t.Union[t.AsyncIterable, t.Iterable], - name: str, - *args: t.Any, - **kwargs: t.Any, -) -> t.Iterable: - ... - - -@typing.overload -def do_map( - context: "Context", - value: t.Union[t.AsyncIterable, t.Iterable], - *, - attribute: str = ..., - default: t.Optional[t.Any] = None, -) -> t.Iterable: - ... - - -@async_variant(sync_do_map) # type: ignore -async def do_map( - context: "Context", - value: t.Union[t.AsyncIterable, t.Iterable], - *args: t.Any, - **kwargs: t.Any, -) -> t.AsyncIterable: - if value: - func = prepare_map(context, args, kwargs) - - async for item in auto_aiter(value): - yield await auto_await(func(item)) - - -@pass_context -def sync_do_select( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to each object, - and only selecting the objects with the test succeeding. - - If no test is specified, each object will be evaluated as a boolean. - - Example usage: - - .. sourcecode:: jinja - - {{ numbers|select("odd") }} - {{ numbers|select("odd") }} - {{ numbers|select("divisibleby", 3) }} - {{ numbers|select("lessthan", 42) }} - {{ strings|select("equalto", "mystring") }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (n for n in numbers if test_odd(n)) - (n for n in numbers if test_divisibleby(n, 3)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: x, False) - - -@async_variant(sync_do_select) # type: ignore -async def do_select( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: x, False) - - -@pass_context -def sync_do_reject( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to each object, - and rejecting the objects with the test succeeding. - - If no test is specified, each object will be evaluated as a boolean. - - Example usage: - - .. sourcecode:: jinja - - {{ numbers|reject("odd") }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (n for n in numbers if not test_odd(n)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: not x, False) - - -@async_variant(sync_do_reject) # type: ignore -async def do_reject( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: not x, False) - - -@pass_context -def sync_do_selectattr( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to the specified - attribute of each object, and only selecting the objects with the - test succeeding. - - If no test is specified, the attribute's value will be evaluated as - a boolean. - - Example usage: - - .. sourcecode:: jinja - - {{ users|selectattr("is_active") }} - {{ users|selectattr("email", "none") }} - - Similar to a generator comprehension such as: - - .. 
code-block:: python - - (u for user in users if user.is_active) - (u for user in users if test_none(user.email)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: x, True) - - -@async_variant(sync_do_selectattr) # type: ignore -async def do_selectattr( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: x, True) - - -@pass_context -def sync_do_rejectattr( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to the specified - attribute of each object, and rejecting the objects with the test - succeeding. - - If no test is specified, the attribute's value will be evaluated as - a boolean. - - .. sourcecode:: jinja - - {{ users|rejectattr("is_active") }} - {{ users|rejectattr("email", "none") }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (u for user in users if not user.is_active) - (u for user in users if not test_none(user.email)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: not x, True) - - -@async_variant(sync_do_rejectattr) # type: ignore -async def do_rejectattr( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: not x, True) - - -@pass_eval_context -def do_tojson( - eval_ctx: "EvalContext", value: t.Any, indent: t.Optional[int] = None -) -> Markup: - """Serialize an object to a string of JSON, and mark it safe to - render in HTML. This filter is only for use in HTML documents. - - The returned string is safe to render in HTML documents and - ``