diff --git a/spaces/0x90e/ESRGAN-MANGA/README.md b/spaces/0x90e/ESRGAN-MANGA/README.md deleted file mode 100644 index 0c4af47dd0d658f34cc47997043a50157b284ece..0000000000000000000000000000000000000000 --- a/spaces/0x90e/ESRGAN-MANGA/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ESRGAN MANGA -emoji: 🏃 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Wizard Crack v13 With License Key 2020 What You Need to Know Before Downloading.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Wizard Crack v13 With License Key 2020 What You Need to Know Before Downloading.md deleted file mode 100644 index 8ada5492000d574ab3ca4a4cdd39212dab89d85b..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Wizard Crack v13 With License Key 2020 What You Need to Know Before Downloading.md +++ /dev/null @@ -1,159 +0,0 @@ - -

EaseUS Data Recovery Wizard Crack v13 With License Key 2020

-

Have you ever lost your important data due to accidental deletion, formatting, virus attack, system crash, or other reasons? If so, you may have heard of EaseUS Data Recovery Wizard, a powerful and easy-to-use data recovery software that can help you restore your lost files in minutes. But what if you don't want to pay for the full version of this software? Is there a way to get a crack version of EaseUS Data Recovery Wizard v13 with a license key for free? In this article, we will answer these questions and show you how to get a crack version of EaseUS Data Recovery Wizard v13 with a license key 2020. But before that, let's take a look at what EaseUS Data Recovery Wizard is and what it can do for you.

-

EaseUS Data Recovery Wizard Crack v13 With License Key 2020


Download Filehttps://byltly.com/2uKwoC



-

What is EaseUS Data Recovery Wizard?

-

EaseUS Data Recovery Wizard is a professional data recovery software that can help you recover deleted, formatted, or lost data from your PC, laptop, hard drive, USB drive, memory card, digital camera, mobile phone, or other storage devices. It supports data recovery from various scenarios, such as recycle bin recovery, partition recovery, OS crash recovery, virus attack recovery, and more. It also supports different file types and storage devices, such as photos, videos, documents, audio files, emails, NTFS, FAT32, exFAT, etc. Moreover, it allows you to preview and repair corrupted files before recovery.

-

Features of EaseUS Data Recovery Wizard

-

Data recovery from various scenarios

-

EaseUS Data Recovery Wizard can recover data from different data loss situations, such as:

- -

Support for different file types and storage devices

-

EaseUS Data Recovery Wizard can recover more than 1000 file types from various storage devices. Some examples are:

- -

Preview and repair of corrupted files

-

EaseUS Data Recovery Wizard allows you to preview the recoverable files before recovery. You can check the file name, size, type, date, and quality to make sure you are recovering the right files. You can also filter the files by category, path, or keyword to locate them faster. Moreover, EaseUS Data Recovery Wizard can automatically repair corrupted JPEG/JPG/PNG/GIF images during the scanning process. You can preview the repaired images before saving them.

-

Why do you need a license key for EaseUS Data Recovery Wizard?

-

EaseUS Data Recovery Wizard has two versions: free and pro. The free version allows you to scan and recover up to 2GB of data for free. However, if you want to recover more data or enjoy more features, you need to upgrade to the pro version. To do that, you need to buy a license key from the official website of EaseUS Data Recovery Wizard. The license key will activate the pro version and unlock all its benefits.

-

EaseUS Data Recovery Wizard Technician 13.3 + Activator
-EaseUS Data Recovery Wizard WinPE v13.5 + Keygen
-EaseUS Data Recovery Wizard Professional 13.6 + Serial Key
-EaseUS Data Recovery Wizard Free Download Full Version with Crack
-EaseUS Data Recovery Wizard License Code Generator Online
-EaseUS Data Recovery Wizard Crack Reddit
-EaseUS Data Recovery Wizard Activation Key 2020
-EaseUS Data Recovery Wizard Torrent Download
-EaseUS Data Recovery Wizard Full Version with Crack and Keygen
-EaseUS Data Recovery Wizard Crack v13.5 Free Download
-EaseUS Data Recovery Wizard License Code List 2020
-EaseUS Data Recovery Wizard Crack v13.6 Latest Version
-EaseUS Data Recovery Wizard Patch Download
-EaseUS Data Recovery Wizard Crack v13.3 for Windows 10
-EaseUS Data Recovery Wizard Crack v13.2 for Mac OS
-EaseUS Data Recovery Wizard Crack v13.1 for Linux
-EaseUS Data Recovery Wizard Crack v13.4 for Android
-EaseUS Data Recovery Wizard Crack v13.0 for iOS
-EaseUS Data Recovery Wizard Crack v13 with Lifetime Activation
-EaseUS Data Recovery Wizard Crack v13 with Unlimited Usage
-EaseUS Data Recovery Wizard Crack v13 with All File Types Support
-EaseUS Data Recovery Wizard Crack v13 with RAW Partition Recovery
-EaseUS Data Recovery Wizard Crack v13 with OS Crash Recovery
-EaseUS Data Recovery Wizard Crack v13 with Virus Attack Recovery
-EaseUS Data Recovery Wizard Crack v13 with Formatted Drive Recovery
-EaseUS Data Recovery Wizard Crack v13 with Deleted File Recovery
-EaseUS Data Recovery Wizard Crack v13 with Memory Card Recovery
-EaseUS Data Recovery Wizard Crack v13 with USB Drive Recovery
-EaseUS Data Recovery Wizard Crack v13 with SSD Recovery
-EaseUS Data Recovery Wizard Crack v13 with Hard Drive Recovery
-EaseUS Data Recovery Wizard Crack v13 with RAID Recovery
-EaseUS Data Recovery Wizard Crack v13 with Digital Camera Recovery
-EaseUS Data Recovery Wizard Crack v13 with MP3/MP4 Player Recovery
-EaseUS Data Recovery Wizard Crack v13 with Mobile Device Recovery
-EaseUS Data Recovery Wizard Crack v13 with Photo/Video/Music/Document/Email/File/Data/Recovery
-How to Install and Activate EaseUS Data Recovery Wizard Crack v13
-How to Use EaseUS Data Recovery Wizard Crack v13 to Recover Lost Files
-How to Fix Errors and Problems in EaseUS Data Recovery Wizard Crack v13
-How to Update and Upgrade to the Latest Version of EaseUS Data Recovery Wizard Crack v13
-How to Uninstall and Remove EaseUS Data Recovery Wizard Crack v13
-Is it Safe and Legal to Use EaseUS Data Recovery Wizard Crack v13
-What are the Risks and Consequences of Using EaseUS Data Recovery Wizard Crack v13
-What are the Alternatives and Competitors of EaseUS Data Recovery Wizard Crack v13
-What are the Features and Benefits of Using EaseUS Data Recovery Wizard Crack v13
-What are the Limitations and Drawbacks of Using EaseUS Data Recovery Wizard Crack v13
-What are the Reviews and Ratings of Using EaseUS Data Recovery Wizard Crack v13
-What are the Tips and Tricks of Using EaseUS Data Recovery Wizard Crack v13
-What are the FAQs and Solutions of Using EaseUS Data Recovery Wizard Crack v13

-

Limitations of the free version

-

The free version of EaseUS Data Recovery Wizard has some limitations that may affect your data recovery experience. Some of them are:

- -

Benefits of the pro version

-

The pro version of EaseUS Data Recovery Wizard has many advantages that can improve your data recovery experience. Some of them are:

- -

How to get a crack version of EaseUS Data Recovery Wizard v13?

-

If you don't want to pay for the pro version of EaseUS Data Recovery Wizard, you may be tempted to look for a crack version online. A crack version is a modified version of the original software that bypasses its security features and allows you to use it for free. However, using a crack version is illegal and risky. It may cause serious problems for your computer and your data. In this section, we will show you how to get a crack version of EaseUS Data Recovery Wizard v13 with a license key 2020. But we do not recommend you to do so.

-

Risks of using a crack version

-

Using a crack version of EaseUS Data Recovery Wizard v13 may seem like a good idea at first glance. But it actually comes with many risks and dangers that outweigh its benefits. Some of them are:

- -

Steps to download and install a crack version

-

If you still want to try a crack version of EaseUS Data Recovery Wizard v13, you can follow these steps:

-
    -
  1. Search for a crack version of EaseUS Data Recovery Wizard v13 on the internet. You may find some websites that claim to provide the download link and the license key for free.
  2. -
  3. Download the crack version from one of these websites. Be careful of fake or malicious links that may harm your computer or data.
  4. -
  5. Extract the downloaded file and run the setup.exe file to install the crack version on your computer.
  6. -
  7. Follow the instructions on the screen to complete the installation process.
  8. -
-

How to activate the crack version with a license key

-

After installing the crack version of EaseUS Data Recovery Wizard v13, you need to activate it with a license key. You can follow these steps:

-
    -
  1. Launch the crack version of EaseUS Data Recovery Wizard v13 on your computer.
  2. -
  3. Click on the "Activate" button on the main interface.
  4. -
  5. Enter one of the license keys that you have obtained from the internet. You can try some of these license keys:
  6. -
- - - - - - - - - - - -
FUIERUI-REUIE83UW-ERIOE93-TRIOE93
E89237472-20W0W0-2929W-ERIE93I
ERIW8Q8SD-FIIFDUFG-GFIOD-GOSOIW
C8XIP–2YHL2-39UMI-QVR56-4CI6L
JGFT5-YRUHJ-FYT45-TRUGH-GJRTU-YFH
Y7GKK-JIURT-HFJKH-RTHGI-EIJKRY-TRU
EYTUG-HARJU-TYUJHG-RYGHF-TRYGYT
UTIYH-GRD5YH-YRIT7RY-IYEIUG-8756
HRUY5-RJGT87-4TGKR-Y4875Y-TI45YT
SKSKFSD-DKDFTGY-HUJIKOL-SLOSHY
-
    -
  1. Click on the "OK" button to activate the crack version.
  2. -
  3. Enjoy using the crack version of EaseUS Data Recovery Wizard v13 for free.
  4. -
-

Is there a better alternative to EaseUS Data Recovery Wizard crack?

-

The answer is yes. There is a better and safer alternative to EaseUS Data Recovery Wizard crack. That is to buy a genuine license key from the official website of EaseUS Data Recovery Wizard. By doing so, you can enjoy all the benefits of the pro version without any risks or limitations.

-

The official website of EaseUS Data Recovery Wizard

-

The official website of EaseUS Data Recovery Wizard is https://www.easeus.com/data-recovery-software/. On this website, you can find all the information and features about EaseUS Data Recovery Wizard. You can also download the free or trial version of EaseUS Data Recovery Wizard to test its performance and functionality. Moreover, you can buy a genuine license key for EaseUS Data Recovery Wizard from this website. There are different plans and prices for different needs and budgets. For example, you can buy a one-month plan for $69.95, a one-year plan for $99.95, or a lifetime plan for $149.95.

-

The advantages of buying a genuine license key

-

By buying a genuine license key from the official website of EaseUS Data Recovery Wizard, you can get many advantages that a crack version cannot offer. Some of them are:

- -

Conclusion

-

In conclusion, EaseUS Data Recovery Wizard is powerful and easy-to-use data recovery software that can help you recover deleted, formatted, or lost data from various storage devices. However, if you want to use its full features and functions, you need to buy a genuine license key from its official website. Using a crack version of EaseUS Data Recovery Wizard v13 with a license key 2020 may seem tempting, but it is illegal and risky. It may cause more harm than good to your computer and your data. Therefore, we recommend avoiding the crack version and choosing a better, safer alternative instead.

-

Frequently Asked Questions (FAQs)

-

Here are some frequently asked questions about EaseUS Data Recovery Wizard and its crack version:

-
    -
  1. What is EaseUS Data Recovery Wizard?
    EaseUS Data Recovery Wizard is a professional data recovery software that can help you recover deleted, formatted, or lost data from your PC, laptop, hard drive, USB drive, memory card, digital camera, mobile phone, or other storage devices.
  2. -
  3. What is EaseUS Data Recovery Wizard crack?
    EaseUS Data Recovery Wizard crack is a modified version of the original software that bypasses its security features and allows you to use it for free without paying for a license key.
  4. -
  5. Is EaseUS Data Recovery Wizard free?
    EaseUS Data Recovery Wizard has both free and pro versions. The free version allows you to recover up to 2GB of data for free in data loss scenarios. The pro version allows you to recover unlimited lost data like pictures and documents with a 99% success rate.
  6. -
  7. How to get EaseUS Data Recovery Wizard pro for free?
    To get EaseUS Data Recovery Wizard pro for free, you need to use a crack version of EaseUS Data Recovery Wizard v13 with a license key 2020. However, this is illegal and risky. It may cause virus infection, data loss, system damage, or legal issues.
  8. -
  9. How to get a genuine license key for EaseUS Data Recovery Wizard?
    To get a genuine license key for EaseUS Data Recovery Wizard, you need to buy it from its official website at https://www.easeus.com/data-recovery-software/. There are different plans and prices for different needs and budgets.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FordETIS2012zip.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FordETIS2012zip.md deleted file mode 100644 index 765b9dbdc91d715085a8224f15a49e29488e7917..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FordETIS2012zip.md +++ /dev/null @@ -1,34 +0,0 @@ - -`

How to Download and Install Ford ETIS 2012 Zip File

` - -`

Ford ETIS is a web-based service and repair information system that provides access to technical information for Ford vehicles. It includes mechanical repairs, body and paint, wiring diagrams, diagnostic trouble codes, and more. Ford ETIS was decommissioned in 2021 and replaced by different websites for authorized repairers and independent operators. However, some users may still want to use the old version of Ford ETIS that was available in 2012.

-

FordETIS2012zip


DOWNLOADhttps://byltly.com/2uKyCr



` - -`

In this article, we will show you how to download and install Ford ETIS 2012 zip file on your computer. This is a torrent file that contains the installation files for Ford ETIS 2012. You will need a torrent client such as uTorrent or BitTorrent to download it. You will also need a DVD burner and a blank DVD to install it.

` - -`

Step 1: Download Ford ETIS 2012 zip file

` - -`

The first step is to download Ford ETIS 2012 zip file from a torrent website. You can find the link to the torrent file on MHH AUTO forum[^2^]. The file size is about 4.3 GB and the name is Ford Etis (12.2016).torrent. You will need to register on the forum to access the link.

` - -`

Once you have the torrent file, open it with your torrent client and start downloading the zip file. It may take some time depending on your internet speed and the number of seeders. Make sure you have enough space on your hard drive to store the zip file.

` - -`

Step 2: Extract Ford ETIS 2012 zip file

` - -`

The next step is to extract Ford ETIS 2012 zip file to a folder on your computer. You will need a software such as WinRAR or 7-Zip to do this. Right-click on the zip file and choose Extract Here or Extract to Ford Etis (12.2016). You will see a folder named Ford Etis (12.2016) with several subfolders and files inside.

` - -`

Step 3: Burn Ford ETIS 2012 iso file to DVD

` - -`

The final step is to burn Ford ETIS 2012 iso file to a blank DVD. You will need a software such as Nero or ImgBurn to do this. The iso file is located in the folder Ford Etis (12.2016)\FordEtis\DVD\ETIS_1216.iso. It is about 4 GB in size.

-

` - -`

Insert a blank DVD into your DVD burner and launch your burning software. Choose the option to burn an image file and select the iso file as the source. Choose a low burning speed and verify the data after burning. Label the DVD as Ford Etis (12.2016).

` - -`

Step 4: Install Ford ETIS 2012 from DVD

` - -`

The last step is to install Ford ETIS 2012 from the DVD you just burned. Insert the DVD into your DVD drive and run the setup.exe file in the root folder of the DVD. Follow the instructions on the screen to complete the installation.

` - -`

You may need to change the date of your computer to December 2016 or earlier before running the setup.exe file. Some users have reported that they get an error message saying that the DVD is not correct if they use a later date.

` - -`

After installing Ford ETIS 2012, you can launch it from your desktop or start menu. You will need an internet connection to access some of the features of Ford ETIS 2012.

` 7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/7Zip APK How to Compress and Extract Files on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/7Zip APK How to Compress and Extract Files on Android.md deleted file mode 100644 index ef2aab8278e58cc0753fd9ef6feb146813889b06..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/7Zip APK How to Compress and Extract Files on Android.md +++ /dev/null @@ -1,151 +0,0 @@ - -

7 Zip APK: A Powerful Tool for Managing Archive Files on Android

-

Do you need to create, extract or browse archive files like 7Zip (7z), Zip, Rar, Tar, Jar or Apk on your Android device? If so, you might want to check out 7 Zip APK, a free app that lets you do all that and more. In this article, we will explain what 7 Zip APK is, how it works, what features it offers, how to download and install it, and how to use it. We will also answer some frequently asked questions about 7 Zip APK.

-

7 zip apk


DOWNLOADhttps://urlin.us/2uSRQK



-

What is 7 Zip APK?

-

7 Zip APK is an Android app that allows you to manage archive files on your device. Archive files are files that contain multiple files or folders compressed into one smaller file. They are usually used to save disk space, reduce file size, or share files online. Some common archive formats are 7Zip (7z), Zip, Rar, Tar, Jar and Apk.

-

7 Zip APK lets you create your own archive files by compressing files and folders. You can also extract or open existing archive files and view their contents. You can even create encrypted zip files with a password for extra security. 7 Zip APK supports all the popular archive formats and types, as well as some less used ones.

-

How does 7 Zip APK work?

-

7 Zip APK works by using different compression algorithms to reduce the size of files or folders. Archive compression is lossless: the algorithm encodes redundant data more compactly so that the original files can be restored exactly, with no loss of quality or functionality. Different compression algorithms trade off speed, compression ratio, and compatibility in different ways.

-

7Zip APP: Zip & 7Zip Files Manager
-7Zipper: Android file manager and archiver
-7-Zip: Linux command line version of 7-Zip
-7Z: Open, extract or create 7z archives on Android
-7Zipper 2.0: File browser and image viewer for Android
-7Zip & Zip: Zip file extractor and compressor for Android
-7-Zipper - File Explorer (zip, 7zip, rar): File manager and archive tool for Android
-ZArchiver: Archive manager for Android with support for 7z and other formats
-RAR: WinRAR app for Android with support for 7z and other formats
-B1 Archiver zip rar unzip: Archive utility for Android with support for 7z and other formats
-Easy Unrar, Unzip & Zip: Archive extractor and creator for Android with support for 7z and other formats
-AndroZip™ FREE File Manager: File manager and archive tool for Android with support for 7z and other formats
-Zipper - File Management: File manager and archive tool for Android with support for 7z and other formats
-ALZip – File Manager & Unzip & Archive: File manager and archive tool for Android with support for 7z and other formats
-X-plore File Manager: File manager and archive tool for Android with support for 7z and other formats
-WinZip – Zip UnZip Tool: Zip file utility for Android with support for 7z and other formats
-Total Commander - file manager: File manager and archive tool for Android with support for 7z and other formats
-MiXplorer Silver - File Manager: File manager and archive tool for Android with support for 7z and other formats
-Solid Explorer File Manager: File manager and archive tool for Android with support for 7z and other formats
-FX File Explorer: file manager, media manager, root, cloud & Wi-Fi transfer: File manager and archive tool for Android with support for 7z and other formats
-ES File Explorer File Manager: File manager and archive tool for Android with support for 7z and other formats
-Root Explorer: Ultimate file manager for root users with support for 7z and other formats
-ASTRO File Manager & Storage Organizer: File manager and archive tool for Android with support for 7z and other formats
-Amaze File Manager: Open source file manager and archive tool for Android with support for 7z and other formats
-Simple Unrar: Simple app to extract rar files on Android with support for 7z and other formats
-Simple Unzip: Simple app to extract zip files on Android with support for 7z and other formats
-Simple Zip Viewer (zip, rar, jar, apk): Simple app to view zip files on Android with support for 7z and other formats
-APK Extractor - Creator: App to extract apk files from installed apps on Android with support for zip compression
-APK Editor Pro: App to edit apk files on Android with support for zip compression
-APK Installer - the best app manager for Android: App to install apk files on Android with support for zip compression
-APKPure App - Download APK free online downloader: App to download apk files from various sources on Android with support for zip compression
-APKMirror Installer (Official): App to install apk files from APKMirror on Android with support for zip compression
-APK Downloader - Download APK Online Free | APKNite.Com: App to download apk files from various sources on Android with support for zip compression
-APKCombo Installer - Download APK Bundle (Split APKs) Online Free | APKCombo.Com: App to download apk bundle files from various sources on Android with support for zip compression
-Apk Extractor Lite - Extract Apk's easily.: App to extract apk files from installed apps on Android with support for zip compression
-Apk Analyzer - Analyze your installed applications.: App to analyze apk files on Android with support for zip compression
-Apk Share Bluetooth - Send/Backup/Uninstall/Manage.: App to share apk files via Bluetooth on Android with support for zip compression
-Apk Backup - Restore, Extract & Manage your apps.: App to backup apk files on Android with support for zip compression
-Apk Installer / Apk Manager / Apk Share Pro.: App to install, manage and share apk files on Android with support for zip compression
-Apk Editor : Apk Maker : Apk Creator.: App to create apk files on Android with support for zip compression
-Apk Extractor Pro+: App to extract apk files from installed apps on Android with support for zip compression
-Apk Extract

-

7 Zip APK uses the 7z compression algorithm for creating 7Zip files. This algorithm offers high compression ratio, which means it can make files much smaller than other algorithms. However, it also requires more processing power and time to compress and decompress files.

-

For other archive formats, 7 Zip APK uses the compression method that each format defines. For example, it uses the Deflate algorithm when creating Zip files. RAR archives can be opened and extracted, but creating new RAR files is generally not supported in 7-Zip-based tools, because RAR compression is proprietary to WinRAR.
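To make the trade-off concrete, here is a minimal, self-contained Python sketch (not part of the app) that compresses the same data with Deflate, the algorithm used inside Zip archives, and with LZMA, the algorithm family behind the 7z format. The file name is a placeholder; any reasonably large text file will show the pattern: LZMA usually produces a smaller result but takes longer.

```python
import lzma
import time
import zlib

# Placeholder input: substitute any reasonably large file you have on hand.
with open("sample.txt", "rb") as f:
    data = f.read()

for name, compress in [
    ("Deflate (Zip)", lambda d: zlib.compress(d, 9)),      # algorithm used by Zip archives
    ("LZMA (7z)", lambda d: lzma.compress(d, preset=9)),   # algorithm family behind 7z
]:
    start = time.perf_counter()
    packed = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(packed)} bytes in {elapsed:.2f}s")
```

On typical text the LZMA output is noticeably smaller, which is why 7z archives tend to beat Zip on size at the cost of speed and processing power.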

-

What features does 7 Zip APK offer?

-

Some of the features that 7 Zip APK offers are:

- -

How to download and install 7 Zip APK?

-

To download and install 7 Zip APK on your Android device, you can follow these steps:

-
    -
  1. Go to the Google Play Store and search for "7Zipper".
  2. -
  3. Tap on the "Install" button and wait for the app to download and install on your device.
  4. -
  5. Open the app and grant it the necessary permissions to access your files and storage.
  6. -
  7. You can now start using 7 Zip APK to create or extract archive files on your device.
  8. -
-

How to use 7 Zip APK?

-

To use 7 Zip APK to create or extract archive files on your device, you can follow these steps:

-

To create an archive file:

-
    -
    1. Open the app and tap on the "Create" button at the bottom.
    2. -
    3. Select the files or folders that you want to compress and tap on the "OK" button.
    4. -
    5. Choose the archive format that you want to use, such as 7Zip, Zip, Tar, etc.
    6. -
    7. Optionally, you can set a password, a compression level, a split size, and a volume label for your archive file.
    8. -
    9. Tap on the "Create" button and wait for the app to create your archive file.
    10. -
    11. You can find your archive file in the same folder as the original files or folders.
    12. -
    -

    To extract an archive file:

    -
      -
    1. Open the app and tap on the "Extract" button at the bottom.
    2. -
    3. Select the archive file that you want to decompress and tap on the "OK" button.
    4. -
    5. If the archive file is encrypted, enter the password and tap on the "OK" button.
    6. -
    7. Choose the destination folder where you want to extract the files or folders.
    8. -
    9. Tap on the "Extract" button and wait for the app to extract your archive file.
    10. -
    11. You can find your extracted files or folders in the destination folder that you chose.
    12. -
    -

    Conclusion

    -

    7 Zip APK is a powerful tool for managing archive files on your Android device. It allows you to create, extract, browse, encrypt, and decrypt archive files of various formats and types. It also offers a simple and intuitive file manager with standard file operations. 7 Zip APK is free to download and use from the Google Play Store. If you need to work with archive files on your Android device, 7 Zip APK is a great app to have.

    -

    Frequently Asked Questions

    -

    Here are some of the common questions that people ask about 7 Zip APK:

    -

    Q: Is 7 Zip APK safe to use?

    -

    A: Yes, 7 Zip APK is safe to use. It does not contain any malware or viruses. It only requires permissions to access your files and storage. It does not collect or share any personal data or information.

    -

    Q: What is the difference between 7Zipper and 7Zipper 2.0?

    -

    A: 7Zipper is the original version of 7 Zip APK. It has more features and options than 7Zipper 2.0, but it also has more ads and pop-ups. 7Zipper 2.0 is a newer version of 7 Zip APK. It has fewer features and options than 7Zipper, but it also has fewer ads and pop-ups. Both versions are compatible with Android devices running Android 4.0 or higher.

    -

    Q: How can I open a zip file without extracting it?

    -

    A: You can open a zip file without extracting it by using the "Browse" feature of 7 Zip APK. To do this, follow these steps:

    -
      -
    1. Open the app and tap on the "Browse" button at the bottom.
    2. -
    3. Select the zip file that you want to open and tap on it.
    4. -
    5. You will see a list of files or folders inside the zip file. You can tap on any file or folder to view its contents or properties.
    6. -
    7. You can also perform some actions on the files or folders, such as copy, move, delete, rename, etc.
    8. -
    -

    Q: How can I create a self-extracting archive file?

    -

    A: A self-extracting archive file is an archive file that can be opened without using any software or app. It has an executable extension (such as .exe) that allows it to run by itself. To create a self-extracting archive file using 7 Zip APK, follow these steps:

    -
      -
    1. Open the app and tap on the "Create" button at the bottom.
    2. -
    3. Select the files or folders that you want to compress and tap on the "OK" button.
    4. -
    5. Choose the "SFX (Self Extract)" option from the archive format list.
    6. -
    7. Optionally, you can set a password, a compression level, a split size, and a volume label for your archive file.
    8. -
    9. Tap on the "Create" button and wait for the app to create your self-extracting archive file.
    10. -
    11. You can find your self-extracting archive file in the same folder as the original files or folders. It will have an .exe extension and an icon that looks like a 7Zip logo.
    13. -
    -

    Q: How can I update or delete files from an archive file?

    -

    A: You can update or delete files from an archive file by using the "Update" feature of 7 Zip APK. To do this, follow these steps:

    -
      -
    1. Open the app and tap on the "Update" button at the bottom.
    2. -
    3. Select the archive file that you want to update or delete files from and tap on the "OK" button.
    4. -
    5. You will see a list of files or folders inside the archive file. You can tap on any file or folder to select or deselect it.
    6. -
    7. To update a file or folder, tap on the "Add" button at the bottom and select the new file or folder that you want to replace the old one with.
    8. -
    9. To delete a file or folder, tap on the "Delete" button at the bottom and confirm your action.
    10. -
    11. Tap on the "Update" button and wait for the app to update or delete files from your archive file.
    12. -
    13. You can find your updated archive file in the same folder as the original archive file. It will have the same name and extension as before.
    14. -
    -

    Outline of the article

    -

    Here is a table that shows the outline of the article with the headings and subheadings:

    - - - - - - - - - - - - - - -
    H1H2H3H4
    7 Zip APK: A Powerful Tool for Managing Archive Files on Android
    What is 7 Zip APK?
    How does 7 Zip APK work?
    What features does 7 Zip APK offer?
    How to download and install 7 Zip APK?
    How to use 7 Zip APK?To create an archive file:
    To extract an archive file:
    ConclusionFrequently Asked QuestionsQ: Is 7 Zip APK safe to use?
    Q: What is the difference between 7Zipper and 7Zipper 2.0?
    Q: How can I open a zip file without extracting it?
    Q: How can I create a self-extracting archive file?
    Q: How can I update or delete files from an archive file?

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street APK - The Most Realistic Mobile Racing Game Ever - Free Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street APK - The Most Realistic Mobile Racing Game Ever - Free Download.md deleted file mode 100644 index 51dffc717cf3c3d4588ca6cb7638c931e910d354..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street APK - The Most Realistic Mobile Racing Game Ever - Free Download.md +++ /dev/null @@ -1,122 +0,0 @@ -
    -

    How to Download CarX Street APK for Android

    -

    Are you a fan of street racing games? Do you want to experience the thrill of driving in a dynamic open world? If yes, then you should try CarX Street, a new game from the creators of CarX Drift Racing. In this article, we will show you how to download CarX Street APK for Android, and what are the benefits and risks of doing so.

    -

    What is CarX Street?

    -

    CarX Street is a street racing game that lets you customize your car, challenge other racers, and explore a realistic city. You can choose from a variety of cars, from classic muscle cars to modern sports cars, and tune them to your liking. You can also join clubs, participate in events, and earn rewards.

    -

    carx street download apk


    Download Ziphttps://urlin.us/2uT0mT



    -

    Features of CarX Street

    -

    Some of the features of CarX Street are:

    - -

    Requirements for CarX Street

    -

    To play CarX Street on your Android device, you need to have:

    - -

    How to Download and Install CarX Street APK

    -

    If you want to download CarX Street APK for Android, you need to follow these steps:

    -

    Step 1: Enable Unknown Sources

    -

    Before you can install any APK file on your Android device, you need to enable the option to allow installation from unknown sources. To do this, go to your device settings, then security, then toggle on the unknown sources option.

    -

    Step 2: Download CarX Street APK File

    -

    Next, you need to download the CarX Street APK file from a reliable source. You can use one of the links below:

    - - - - - -
| Name | Version | Size | Link |
| --- | --- | --- | --- |
| CarX Street APK (Game) | 0.9.2 | 1.4 GB | Download here |
| CarX Street APK (App) | 9.8 | 14 MB | Download here |
| CarX Street - Apps on Google Play | N/A | N/A | Download here |
    -

    Make sure you download the file that matches your device and preferences. You can also scan the file with an antivirus software before installing it.

    -

    Step 3: Install CarX Street APK File

    -

    After you have downloaded the CarX Street APK file, you need to install it on your device. To do this, locate the file in your file manager or downloads folder, and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for the process to finish.

    -

    Step 4: Launch CarX Street and Enjoy

    -

    Once the installation is complete, you can launch CarX Street from your app drawer or home screen. You will need to grant some permissions and accept the terms and conditions. Then, you can create your account, choose your car, and start racing.

    -

    carx street racing game free download apk
    -carx street mod apk unlimited money download
    -carx street android apk download latest version
    -carx street apk download for pc windows 10
    -carx street online racing apk download
    -carx street apk download apkpure
    -carx street apk download uptodown
    -carx street apk download rexdl
    -carx street apk download no obb
    -carx street apk download highly compressed
    -carx street open world racing apk download
    -carx street sunset city apk download
    -carx street realistic racing apk download
    -carx street offline racing apk download
    -carx street drift racing apk download
    -carx street hack apk download android 1
    -carx street cheats apk download ios
    -carx street beta apk download android
    -carx street update apk download 2023
    -carx street full version apk download 2022
    -carx street cracked apk download 2021
    -carx street premium apk download 2020
    -carx street pro apk download 2019
    -carx street old version apk download 2018
    -carx street new version apk download 2017
    -how to download carx street apk on android phone
    -how to install carx street apk on android tablet
    -how to play carx street apk on android tv
    -how to update carx street apk on android device
    -how to uninstall carx street apk on android emulator
    -where to download carx street apk for android free
    -where to find carx street apk for android safe
    -where to get carx street apk for android fast
    -where to buy carx street apk for android cheap
    -where to sell carx street apk for android best price
    -what is carx street apk for android review
    -what is the size of carx street apk for android file
    -what is the rating of carx street apk for android app
    -what is the genre of carx street apk for android game
    -what is the developer of carx street apk for android studio
    -why download carx street apk for android fun
    -why install carx street apk for android easy
    -why play carx street apk for android addictive
    -why update carx street apk for android improved
    -why uninstall carx street apk for android buggy

    -

    Benefits of Downloading CarX Street APK

    -

    There are some benefits of downloading CarX Street APK instead of using the Google Play Store version. Some of them are:

    -

    Access to Latest Updates and Features

    -

    By downloading CarX Street APK, you can get access to the latest updates and features of the game before they are released on the official platform. This way, you can enjoy the new content and improvements as soon as possible.

    -

    No Need to Use Google Play Store

    -

    If you have problems with using the Google Play Store, such as slow downloads, errors, or restrictions, you can download CarX Street APK without using it. This way, you can avoid any hassle or inconvenience that might occur with the Google Play Store.

    -

    Save Storage Space and Data

    -

    By downloading CarX Street APK, you can save some storage space and data on your device. This is because you can download only the file that you need, and not the whole game package. You can also delete the APK file after installing it, and free up some space.

    -

    Risks of Downloading CarX Street APK

    -

    However, there are also some risks of downloading CarX Street APK that you should be aware of. Some of them are:

    -

    Potential Malware and Viruses

    -

    If you download CarX Street APK from an untrusted source, you might expose your device to malware and viruses that can harm your device or steal your data. Therefore, you should always download CarX Street APK from a reliable source, and scan the file with an antivirus software before installing it.

    -

    Legal Issues and Violations

    -

    If you download CarX Street APK without the permission of the developers or publishers, you might violate their terms of service or intellectual property rights. This could result in legal issues or penalties, such as fines, bans, or lawsuits. Therefore, you should always respect the rights of the creators and follow their rules.

    -

    Compatibility and Stability Issues

    -

    If you download CarX Street APK that is not compatible with your device or version of Android, you might experience compatibility and stability issues, such as crashes, glitches, or errors. Therefore, you should always check the requirements and specifications of the game before downloading it.

    -

    Conclusion

    -

    In conclusion, CarX Street is a street racing game that lets you customize your car, challenge other racers, and explore a realistic city. You can download CarX Street APK for Android from one of the links above, but you should also be aware of the benefits and risks of doing so. We hope this article has helped you learn how to download CarX Street APK for Android.

    -

    FAQs

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Call of Duty Warzone Mobile and Fight Like Never Before.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Call of Duty Warzone Mobile and Fight Like Never Before.md deleted file mode 100644 index 4f328e9f575faff9b986c3e984199a0b422577f7..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Call of Duty Warzone Mobile and Fight Like Never Before.md +++ /dev/null @@ -1,153 +0,0 @@ - -

    Download Call of Duty Warzone Mobile: The Next Era of Battle Royale

    -

    If you are a fan of the Call of Duty franchise, you must have heard about the latest sensation in the mobile gaming world: Call of Duty Warzone Mobile. This is the next generation of mobile battle royale, featuring authentic COD gameplay, shared progression, and up to 120 player count matches on mobile. In this article, we will tell you everything you need to know about this amazing game, including what it is, how to get it, how to play it, and how to optimize your device and performance for it. So, without further ado, let's dive into the new era of fun battle royale!

    -

    What is Call of Duty Warzone Mobile?

    -

    Call of Duty Warzone Mobile is a mobile version of the popular Call of Duty Warzone game, which is a free-to-play online multiplayer battle royale game developed by Activision. It is part of the Call of Duty Modern Warfare II series, which is a reboot of the original Modern Warfare sub-series. Call of Duty Warzone Mobile is built for mobile gamers, with first-class graphics, intuitive controls, and optimized physics, animations, and sound. It also features unified Call of Duty technology, which means that your Battle Pass and friends list sync across platforms for a truly connected multiplayer FPS game experience.

    -

    download call of duty warzone mobile


    Download 🗹 https://urlin.us/2uSSeB



    -

    The features and benefits of Call of Duty Warzone Mobile

    -

    Call of Duty Warzone Mobile offers a lot of features and benefits that make it stand out from other mobile battle royale games. Here are some of them:

    - -

    How to pre-register and pre-order Call of Duty Warzone Mobile

    -

    Call of Duty Warzone Mobile is not yet officially released worldwide, but you can pre-register or pre-order it now to get a chance to unlock rewards at launch. Here's how:

    - -

    How to play Call of Duty Warzone Mobile

    -

    Once you have downloaded and installed Call of Duty Warzone Mobile on your device, you can start playing it right away. Here are some basic steps to follow:

    - -

    The gameplay and modes of Call of Duty Warzone Mobile

    -

    Call of Duty Warzone Mobile has different gameplay and modes that cater to different preferences and styles. Here are some of them:

    - -

    The maps and locations of Call of Duty Warzone Mobile

    -

    Call of Duty Warzone Mobile has stunning maps and locations that offer diverse environments and challenges. Here are some of them:

    -

    How to download call of duty warzone mobile on android
    -Call of duty warzone mobile apk download free
    -Call of duty warzone mobile release date and pre-registration
    -Call of duty warzone mobile gameplay and features
    -Call of duty warzone mobile system requirements and compatibility
    -Call of duty warzone mobile vs call of duty mobile comparison
    -Call of duty warzone mobile tips and tricks for beginners
    -Call of duty warzone mobile best weapons and loadouts
    -Call of duty warzone mobile map and locations guide
    -Call of duty warzone mobile cross-play and cross-progression support
    -Call of duty warzone mobile review and ratings
    -Call of duty warzone mobile cheats and hacks
    -Call of duty warzone mobile controller support and settings
    -Call of duty warzone mobile battle pass and rewards
    -Call of duty warzone mobile zombies mode and easter eggs
    -Call of duty warzone mobile update and patch notes
    -Call of duty warzone mobile download size and installation time
    -Call of duty warzone mobile error codes and fixes
    -Call of duty warzone mobile best graphics settings and performance optimization
    -Call of duty warzone mobile clans and tournaments
    -Call of duty warzone mobile skins and customization options
    -Call of duty warzone mobile solo vs squad mode
    -Call of duty warzone mobile voice chat and communication options
    -Call of duty warzone mobile emulator for PC and Mac
    -Call of duty warzone mobile fan art and wallpapers
    -Call of duty warzone mobile memes and funny moments
    -Call of duty warzone mobile reddit and discord communities
    -Call of duty warzone mobile official website and social media accounts
    -Call of duty warzone mobile feedback and suggestions
    -Call of duty warzone mobile news and rumors
    -Call of duty warzone mobile mod apk download unlimited money
    -Call of duty warzone mobile offline mode and bots
    -Call of duty warzone mobile VPN and region lock bypass
    -Call of duty warzone mobile streamers and influencers to follow
    -Call of duty warzone mobile merchandise and accessories
    -Call of duty warzone mobile wallpapers for iphone and ipad
    -Call of duty warzone mobile challenges and achievements
    -Call of duty warzone mobile best strategies and tactics
    -Call of duty warzone mobile gulag tips and tricks
    -Call of duty warzone mobile killstreaks and contracts guide
    -Call of duty warzone mobile vehicles and transportation options
    -Call of duty warzone mobile ping system and markers guide
    -Call of duty warzone mobile sensitivity settings and aim assist options
    -Call of duty warzone mobile spectate mode and replay feature
    -Call of duty warzone mobile esports scene and competitive events

    - -

    The weapons and vehicles of Call of Duty Warzone Mobile

    -

    Call of Duty Warzone Mobile has a vast arsenal of weapons and vehicles that you can use to dominate the battlefield. Here are some of them:

    - -

    How to optimize your device and performance for Call of Duty Warzone Mobile

    -

    Call of Duty Warzone Mobile is a demanding game that requires a lot of resources and power from your device. Therefore, you need to optimize your device and performance for the best gaming experience. Here are some tips to do that:

    -

    The minimum device specifications for Call of Duty Warzone Mobile

    -

    Before you download and play Call of Duty Warzone Mobile, you need to make sure that your device meets the minimum specifications for the game. Here are the minimum requirements for Android and iOS devices:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
| | Android | iOS |
| --- | --- | --- |
| OS | Android 5.0 or higher | iOS 10 or higher |
| CPU | Snapdragon 625 or equivalent | A10 Fusion or equivalent |
| RAM | 2 GB or higher | 2 GB or higher |
| Storage | 4 GB or higher | 4 GB or higher |
| Internet | Wi-Fi or cellular data (4G or higher) | Wi-Fi or cellular data (4G or higher) |
    -

    The best settings and tips for Call of Duty Warzone Mobile

    -

    Once you have checked your device specifications, you can also adjust the settings and preferences of the game to optimize your performance and gameplay. Here are some suggestions:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Copy Text from Apps Images and More with Universal Copy APK.md b/spaces/1phancelerku/anime-remove-background/Copy Text from Apps Images and More with Universal Copy APK.md deleted file mode 100644 index 540b1c5b3307435cee399c3c788e24f78a7a7519..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Copy Text from Apps Images and More with Universal Copy APK.md +++ /dev/null @@ -1,124 +0,0 @@ - -

    How to Download APK Universal Copy for Android

    -

    Have you ever wanted to copy text from an app or an image that doesn't allow you to do so? Have you ever wished you could extract addresses, phone numbers, emails, hashtags, or other entities from a text without having to type them manually? Have you ever wondered how to perform quick actions on the text you copied, such as translating, locating, sharing, or searching?

    -

    If you answered yes to any of these questions, then you need APK Universal Copy. APK Universal Copy is a powerful and versatile app that lets you copy text from any app or image on your Android device. It also detects and extracts useful information from the text and allows you to perform actions on it in one tap. In this article, we will tell you what APK Universal Copy is, why you should download it, and how to download and install it on your device.

    -

    download apk universal copy


    Download Zip ✶✶✶ https://jinyurl.com/2uNSxJ



    -

    What is APK Universal Copy?

    -

    APK Universal Copy is an app that enables you to copy text from any app or image on your Android device, even from the ones that don't let you or inside images. It uses OCR (optical character recognition) technology to scan and recognize text inside images. It also uses smart detection of entities to identify and extract addresses, emails, phone numbers, @, #, and other useful information from the text. It also allows you to perform quick actions on the text you copied, such as translating, locating, sharing, or searching.

    -

    APK Universal Copy has several modes that you can choose from depending on your needs:

    -

    Features of APK Universal Copy

    -

    Normal mode

    -

    This mode lets you copy text from any app such as Facebook, Twitter, Instagram, YouTube, Chrome, WhatsApp, Tumblr, News Republic, Snapchat, and more. You just need to launch Universal Copy from your notification bar or via a shortcut, select the text you want to copy, and it's done.

    -

    Scanner mode

    -

    This mode lets you copy text inside images using OCR technology. It currently works with Chinese, Devanagari (Hindi...), Japanese, Korean and Latin (English, Portuguese...) character sets. You just need to launch Universal Copy in scanner mode, select the image you want to copy text from, and it's done.

    -

    Smart detection of entities

    -

    This feature automatically detects and extracts addresses, emails, phone numbers, @, #, and other useful information from the text you copied. You can then tap on them to perform quick actions such as opening Google Maps for an address, calling a phone number, sending an email, or searching for a hashtag.
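Universal Copy does not publish its detection code, but the general idea can be sketched with a few naive regular expressions. The snippet below is only an illustration: the sample text and patterns are invented, and real entity extraction needs much more care (for instance, the mention pattern here would also match the "@" inside an email address).

```python
import re

# Invented sample text, roughly like something copied from a social app.
text = "Write to support@example.com or call +1 555 010 7788, and tag @alice with #universalcopy"

patterns = {
    "email":   r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+",
    "phone":   r"\+?\d[\d\s().-]{7,}\d",
    "hashtag": r"#\w+",
    "mention": r"@\w+",   # naive: also matches the '@' inside the email above
}

for label, pattern in patterns.items():
    print(label, re.findall(pattern, text))
```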

    -

    download apk universal copy app
    -download apk universal copy for android
    -download apk universal copy pro
    -download apk universal copy plus
    -download apk universal copy mod
    -download apk universal copy latest version
    -download apk universal copy free
    -download apk universal copy no ads
    -download apk universal copy premium
    -download apk universal copy full
    -download apk universal copy cracked
    -download apk universal copy unlocked
    -download apk universal copy offline
    -download apk universal copy online
    -download apk universal copy update
    -download apk universal copy old version
    -download apk universal copy beta
    -download apk universal copy from play store
    -download apk universal copy from softpedia
    -download apk universal copy from apkpure
    -download apk universal copy from uptodown
    -download apk universal copy from apkmirror
    -download apk universal copy from apksfree
    -download apk universal copy from apktada
    -download apk universal copy from apknite
    -how to download apk universal copy
    -where to download apk universal copy
    -why download apk universal copy
    -what is apk universal copy
    -who made apk universal copy
    -when was apk universal copy released
    -which apps support apk universal copy
    -which languages does apk universal copy support
    -which devices are compatible with apk universal copy
    -which permissions does apk universal copy require
    -what can you do with apk universal copy
    -what are the features of apk universal copy
    -what are the benefits of apk universal copy
    -what are the drawbacks of apk universal copy
    -what are the alternatives to apk universal copy
    -how to use apk universal copy
    -how to install apk universal copy
    -how to update apk universal copy
    -how to uninstall apk universal copy
    -how to activate apk universal copy plus
    -how to disable ads in apk universal copy
    -how to enable smart detection in apk universal copy
    -how to change settings in apk universal copy
    -how to contact developers of apk universal copy

    -

    Copy-Paste in 1-tap

    -

    This feature lets you perform quick actions on the text you copied without having to switch apps. You can translate the text using Google Translate, locate it using Google Maps, share it via social media or messaging apps, or search for it using Google or Wikipedia.

    -

    Scroll mode

    -

    This mode lets you select texts from multiple screens or apps to copy them all at once. You just need to launch Universal Copy in scroll mode, scroll through the screens or apps you want to copy text from, select the texts you want to copy, and it's done.

    -

    Harvest mode

    -

    This mode lets you extract all the texts from a screen or an app and copy them to your clipboard. You just need to launch Universal Copy in harvest mode, select the screen or the app you want to copy text from, and it's done.

    -

    Why Download APK Universal Copy?

    -

    APK Universal Copy is a must-have app for anyone who wants to copy text from any app or image on their Android device. It has many benefits that make it worth downloading and installing:

    -

    Benefits of APK Universal Copy

    -

    Copy text from any app or image

    -

    With APK Universal Copy, you can copy text from any app or image on your device, even from the ones that don't let you or inside images. This means you can copy text from Facebook posts, Instagram captions, YouTube comments, Chrome web pages, WhatsApp messages, Tumblr blogs, News Republic articles, Snapchat stories, and more. You can also copy text inside images such as memes, screenshots, flyers, posters, logos, and more.

    -

    Extract useful information quickly

    -

    With APK Universal Copy, you can extract useful information from the text you copied without having to type them manually. You can extract addresses, emails, phone numbers, @, #, and other entities from the text and perform quick actions on them. This means you can open Google Maps for an address, call a phone number, send an email, or search for a hashtag in one tap.

    -

    Perform actions on the text you copied

    -

    With APK Universal Copy, you can perform quick actions on the text you copied without having to switch apps. You can translate the text using Google Translate, locate it using Google Maps, share it via social media or messaging apps, or search for it using Google or Wikipedia in one tap. This means you can save time and hassle by using APK Universal Copy as your all-in-one tool for copying and pasting.

    -

    Save time and hassle

    -

    With APK Universal Copy, you can save time and hassle by copying and pasting text from any app or image on your device. You don't have to switch apps or type manually to copy and paste text. You don't have to worry about the app or the image not letting you copy text. You don't have to waste time looking for useful information in the text. You don't have to open multiple apps to perform actions on the text. You just need to use APK Universal Copy and enjoy its features.

    -

    How to Download and Install APK Universal Copy?

    -

    If you are convinced by the benefits of APK Universal Copy and want to download and install it on your device, here are the steps you need to follow:

    -

    Steps to Download and Install APK Universal Copy

    -

    Step 1: Enable unknown sources on your device

    -

    Since APK Universal Copy is not available on Google Play Store, you need to enable unknown sources on your device to install it. To do this, go to Settings > Security > Unknown sources and toggle it on. This will allow you to install apps from sources other than Google Play Store.

    -

    Step 2: Download the APK file from a trusted source

    -

    The next step is to download the APK file of APK Universal Copy from a trusted source. You can use this link to download the latest version of APK Universal Copy (version 5.0.5) for free. The file size is about 7 MB and it requires Android 5.0 or higher.

    -

    Step 3: Locate and install the APK file on your device

    -

    The third step is to locate and install the APK file on your device. To do this, go to your Downloads folder or use a file manager app to find the APK file of APK Universal Copy. Tap on it and follow the instructions to install it on your device.

    -

    Step 4: Launch and enjoy APK Universal Copy

    -

    The final step is to launch and enjoy APK Universal Copy on your device. To do this, go to your app drawer or home screen and look for the icon of APK Universal Copy. Tap on it and grant the necessary permissions to access your device's screen content and camera. Then, choose the mode you want to use (normal mode, scanner mode, scroll mode, or harvest mode) and start copying text from any app or image on your device.

    -

    Conclusion

    -

    In conclusion, APK Universal Copy is a powerful and versatile app that lets you copy text from any app or image on your Android device. It also detects and extracts useful information from the text and allows you to perform actions on it in one tap. It has many benefits such as copying text from any app or image, extracting useful information quickly, performing actions on the text you copied, and saving time and hassle. It is easy to download and install on your device by following a few simple steps. If you want to copy text from any app or image on your Android device, you should definitely try APK Universal Copy. You will be amazed by its features and performance.

    -

    FAQs

    -

    Here are some frequently asked questions about APK Universal Copy:

    Question: Is APK Universal Copy safe to use?
    Answer: Yes, APK Universal Copy is safe to use. It does not contain any malware or viruses. It only requires permission to access your device's screen content and camera in order to copy text from any app or image, and it does not collect or share any personal data.

    Question: Is APK Universal Copy free to use?
    Answer: Yes, APK Universal Copy is free to use. It does not have any in-app purchases or ads. You can download and install it on your device without paying anything.

    Question: Does APK Universal Copy work offline?
    Answer: Yes, APK Universal Copy works offline. You can copy text from any app or image on your device without an internet connection. However, some features such as translating, locating, sharing, or searching may require an internet connection.

    Question: How can I contact the developer of APK Universal Copy?
    Answer: You can contact the developer by sending an email to contact@universal-copy.com. You can also visit their website at https://universal-copy.com/ for more information.

    Question: How can I support the development of APK Universal Copy?
    Answer: You can support the development of APK Universal Copy by rating and reviewing it on the source where you downloaded it from. You can also share it with your friends and family who might find it useful.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE 1.8 with MOD APK ESP Aimbot Anti-Ban and Mega Menu Included.md b/spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE 1.8 with MOD APK ESP Aimbot Anti-Ban and Mega Menu Included.md deleted file mode 100644 index 2eae4ab1b79bffed111df01d00d7855b026f3632..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE 1.8 with MOD APK ESP Aimbot Anti-Ban and Mega Menu Included.md +++ /dev/null @@ -1,91 +0,0 @@ -
    -

    PUBG Mobile 1.8 Mod APK Hack Download: Everything You Need to Know

    -

    If you are a fan of battle royale games, you must have heard of PUBG Mobile, one of the most popular and addictive games in the genre. But did you know that there is a way to enjoy the game even more with unlimited resources and features? In this article, we will tell you everything you need to know about PUBG Mobile 1.8 Mod APK Hack, a modified version of the original game that gives you an edge over your opponents. We will also show you how to download and install it on your device, and how to play it like a pro.

    -

    pubg mobile 1.8 mod apk hack download


    DOWNLOAD ✏ ✏ ✏ https://jinyurl.com/2uNQPg



    -

    What is PUBG Mobile?

    -

    A brief introduction to the popular battle royale game

    -

    PUBG Mobile is a mobile version of PlayerUnknown's Battlegrounds, a multiplayer online battle royale game developed by PUBG Corporation. The game was released in 2018 and has since become one of the most downloaded and played games in the world. The game has won several awards and accolades, such as the Google Play Best Game of 2018, the Golden Joystick Award for Mobile Game of the Year, and the Esports Game of the Year.

    -

    The main features and gameplay of PUBG Mobile

    -

    PUBG Mobile is a game where up to 100 players parachute onto an island and fight for survival. The game offers various modes, such as solo, duo, squad, arcade, arena, and classic. The game also features different maps, such as Erangel, Miramar, Sanhok, Vikendi, Livik, and Karakin. The game is updated regularly with new content, such as weapons, vehicles, skins, events, and seasons.

    -

    The gameplay of PUBG Mobile is simple but thrilling. You have to loot weapons, armor, ammo, and other items from buildings, crates, or dead enemies. You have to stay inside the safe zone, a shrinking circle that forces players closer together, and out of the blue zone beyond it, which damages anyone caught outside. You have to kill or avoid other players while staying alive until the end. The last player or team standing wins the match.

    -

    What is PUBG Mobile 1.8 Mod APK Hack?

    -

    A modified version of the original game with unlimited resources and features

    -

    PUBG Mobile 1.8 Mod APK Hack is a hacked version of the original game that gives you access to unlimited resources and features that are not available in the official version. For example, with this modded version, you can get unlimited UC (Unknown Cash), which is the in-game currency that you can use to buy skins, outfits, crates, emotes, and more. You can also get unlimited BP (Battle Points), which are used to level up your account and unlock rewards. You can also get unlimited health, ammo, aimbot, wallhack, speedhack, no recoil, no fog, no grass, and more.

    -

    The benefits and risks of using PUBG Mobile 1.8 Mod APK Hack

    -

    The benefits of using PUBG Mobile 1.8 Mod APK Hack are obvious. You can enjoy the game without any limitations or restrictions. You can customize your character and weapons with any skin or outfit you want. You can dominate every match with your enhanced skills and abilities.

    The risks of using PUBG Mobile 1.8 Mod APK Hack are also evident. You can get banned from the game if the developers detect that you are using a modified version. You can also expose your device to malware or viruses that may harm your data or privacy. You can also ruin the fun and fairness of the game for other players who are playing legitimately. Therefore, you should use PUBG Mobile 1.8 Mod APK Hack at your own risk and discretion.

    -

    How to download and install PUBG Mobile 1.8 Mod APK Hack?

    -

    The steps to download and install the modded version of the game

    -

    If you want to try PUBG Mobile 1.8 Mod APK Hack, you will need to follow these steps:

    -
      -
    1. Download the PUBG Mobile 1.8 Mod APK Hack file from a trusted source. You can search for it on Google or use the link below. One way to check that the file was not corrupted or tampered with is shown in the sketch after this list.
    2. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on. This will allow you to install apps from sources other than Google Play Store.
    3. Locate the downloaded file on your device and tap on it to start the installation process.
    4. Follow the instructions on the screen and wait for the installation to complete.
    5. Launch the game and enjoy PUBG Mobile 1.8 Mod APK Hack.
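    As mentioned in step 1, it is worth checking that the downloaded file matches whatever checksum your source publishes before installing it. The sketch below shows one way to do that with Python's hashlib; both the expected hash and the file name are placeholders, since no official values are published in this article.

    ```python
    # Minimal integrity-check sketch: compare the SHA-256 of the downloaded APK
    # against a checksum published by the source you downloaded it from.
    import hashlib

    # Placeholder value; replace with the checksum published by your source.
    EXPECTED_SHA256 = "replace-with-the-published-checksum"

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        actual = sha256_of("pubg-mobile-1.8-mod.apk")  # placeholder file name
        print("OK" if actual == EXPECTED_SHA256 else "Checksum mismatch: " + actual)
    ```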
    -

    Note: You may need to uninstall the original version of PUBG Mobile before installing the modded version. You may also need to allow some permissions for the game to run properly.

    -


    -

    The precautions and tips to avoid any issues or errors

    -

    To avoid any issues or errors while using PUBG Mobile 1.8 Mod APK Hack, you should take some precautions and keep a few tips in mind:

    - -

    How to play PUBG Mobile 1.8 Mod APK Hack?

    -

    The basic controls and settings of the game

    -

    The basic controls and settings of PUBG Mobile 1.8 Mod APK Hack are similar to those of the original game. You can use the virtual joystick on the left side of the screen to move your character, and the buttons on the right side of the screen to shoot, aim, jump, crouch, go prone, reload, switch weapons, and more. You can also customize your controls and settings by going to Settings > Controls > Customize.

    -

    The best strategies and tips to win every match

    -

    The best strategies and tips to win every match with PUBG Mobile 1.8 Mod APK Hack are as follows:

    - -

    Conclusion

    -

    PUBG Mobile 1.8 Mod APK Hack is a modified version of the original game that gives you unlimited resources and features that are not available in the official version. It can make the game more fun and exciting for some players who want to try something new and different.

    It also comes with some risks and drawbacks, such as getting banned from the game, exposing your device to malware or viruses, and ruining the fun and fairness of the game for other players. Therefore, you should use PUBG Mobile 1.8 Mod APK Hack at your own risk and discretion, and follow the steps and tips we provided in this article to avoid any issues or errors.

    -

    We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and fellow PUBG Mobile fans. Thank you for reading and happy gaming!

    -

    FAQs

    -

    Here are some frequently asked questions about PUBG Mobile 1.8 Mod APK Hack:

    -
      -
    1. Q: Is PUBG Mobile 1.8 Mod APK Hack safe to use?
       A: No. It is a hacked version of the original game that may contain malware or viruses that can harm your device or data, and it may get you banned from the game if the developers detect that you are using a modified version.

    2. Q: Is PUBG Mobile 1.8 Mod APK Hack legal to use?
       A: No. It violates the terms of service and the intellectual property rights of PUBG Corporation, the developer of the game, and it may also infringe on the rights of other players who are playing legitimately.

    3. Q: Where can I download PUBG Mobile 1.8 Mod APK Hack?
       A: You can download it from various sources on the internet, such as websites, blogs, forums, or social media. However, you should be careful and cautious when downloading any file or app from unknown sources, as it may be fake, corrupted, or malicious.

    4. Q: How can I update PUBG Mobile 1.8 Mod APK Hack?
       A: You cannot update it from the Play Store or any other official source, as it is a modded version that is not compatible with the official release. You will have to wait for the developer of the modded version to release a new update that matches the original version.

    5. Q: Can I play PUBG Mobile 1.8 Mod APK Hack with my friends?
       A: Only if they are also using the same modded version of the game. You cannot play with friends who are using the official version of the game, as they will not be able to join your server or match.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/train.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/train.py deleted file mode 100644 index 282e4f51b3825c7f32e628506eb40a98e58e2deb..0000000000000000000000000000000000000000 --- a/spaces/44ov41za8i/FreeVC/speaker_encoder/train.py +++ /dev/null @@ -1,125 +0,0 @@ -from speaker_encoder.visualizations import Visualizations -from speaker_encoder.data_objects import SpeakerVerificationDataLoader, SpeakerVerificationDataset -from speaker_encoder.params_model import * -from speaker_encoder.model import SpeakerEncoder -from utils.profiler import Profiler -from pathlib import Path -import torch - -def sync(device: torch.device): - # FIXME - return - # For correct profiling (cuda operations are async) - if device.type == "cuda": - torch.cuda.synchronize(device) - -def train(run_id: str, clean_data_root: Path, models_dir: Path, umap_every: int, save_every: int, - backup_every: int, vis_every: int, force_restart: bool, visdom_server: str, - no_visdom: bool): - # Create a dataset and a dataloader - dataset = SpeakerVerificationDataset(clean_data_root) - loader = SpeakerVerificationDataLoader( - dataset, - speakers_per_batch, # 64 - utterances_per_speaker, # 10 - num_workers=8, - ) - - # Setup the device on which to run the forward pass and the loss. These can be different, - # because the forward pass is faster on the GPU whereas the loss is often (depending on your - # hyperparameters) faster on the CPU. - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # FIXME: currently, the gradient is None if loss_device is cuda - loss_device = torch.device("cpu") - - # Create the model and the optimizer - model = SpeakerEncoder(device, loss_device) - optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_init) - init_step = 1 - - # Configure file path for the model - state_fpath = models_dir.joinpath(run_id + ".pt") - backup_dir = models_dir.joinpath(run_id + "_backups") - - # Load any existing model - if not force_restart: - if state_fpath.exists(): - print("Found existing model \"%s\", loading it and resuming training." % run_id) - checkpoint = torch.load(state_fpath) - init_step = checkpoint["step"] - model.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - optimizer.param_groups[0]["lr"] = learning_rate_init - else: - print("No model \"%s\" found, starting training from scratch." 
% run_id) - else: - print("Starting the training from scratch.") - model.train() - - # Initialize the visualization environment - vis = Visualizations(run_id, vis_every, server=visdom_server, disabled=no_visdom) - vis.log_dataset(dataset) - vis.log_params() - device_name = str(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU") - vis.log_implementation({"Device": device_name}) - - # Training loop - profiler = Profiler(summarize_every=10, disabled=False) - for step, speaker_batch in enumerate(loader, init_step): - profiler.tick("Blocking, waiting for batch (threaded)") - - # Forward pass - inputs = torch.from_numpy(speaker_batch.data).to(device) - sync(device) - profiler.tick("Data to %s" % device) - embeds = model(inputs) - sync(device) - profiler.tick("Forward pass") - embeds_loss = embeds.view((speakers_per_batch, utterances_per_speaker, -1)).to(loss_device) - loss, eer = model.loss(embeds_loss) - sync(loss_device) - profiler.tick("Loss") - - # Backward pass - model.zero_grad() - loss.backward() - profiler.tick("Backward pass") - model.do_gradient_ops() - optimizer.step() - profiler.tick("Parameter update") - - # Update visualizations - # learning_rate = optimizer.param_groups[0]["lr"] - vis.update(loss.item(), eer, step) - - # Draw projections and save them to the backup folder - if umap_every != 0 and step % umap_every == 0: - print("Drawing and saving projections (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - projection_fpath = backup_dir.joinpath("%s_umap_%06d.png" % (run_id, step)) - embeds = embeds.detach().cpu().numpy() - vis.draw_projections(embeds, utterances_per_speaker, step, projection_fpath) - vis.save() - - # Overwrite the latest version of the model - if save_every != 0 and step % save_every == 0: - print("Saving the model (step %d)" % step) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, state_fpath) - - # Make a backup - if backup_every != 0 and step % backup_every == 0: - print("Making a backup (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - backup_fpath = backup_dir.joinpath("%s_bak_%06d.pt" % (run_id, step)) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, backup_fpath) - - profiler.tick("Extras (visualizations, saving)") - \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/seanet.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. 
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. - """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. 
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. 
- residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/AIFILMS/StyleGANEX/models/encoders/model_irse.py b/spaces/AIFILMS/StyleGANEX/models/encoders/model_irse.py deleted file mode 100644 index bc41ace0ba04cf4285c283a28e6c36113a18e6d6..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - 
self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/AUST001/HDTV/README.md b/spaces/AUST001/HDTV/README.md deleted file mode 100644 index ff8344be920efede58fea029e1e2f8de1a5af321..0000000000000000000000000000000000000000 --- a/spaces/AUST001/HDTV/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: HDTV -emoji: 👁 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: cc-by-nc-nd-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/MagicPrompt-Stable-Diffusion/app.py b/spaces/AchyuthGamer/MagicPrompt-Stable-Diffusion/app.py deleted file mode 100644 index 8644606f60321da5256c0cf7440c0aa06fea5da1..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/MagicPrompt-Stable-Diffusion/app.py +++ /dev/null @@ -1,96 +0,0 @@ -from transformers import pipeline, set_seed -import gradio as grad, random, re -import os -import sys - -gpt2_pipe = pipeline('text-generation', model='Gustavosta/MagicPrompt-Stable-Diffusion', tokenizer='gpt2') - -def generate(starting_text): - with open("ideas.txt", "r") as f: - line = f.readlines() - seed = random.randint(100, 1000000) - set_seed(seed) - - if starting_text == "": - starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").capitalize() - starting_text: str = re.sub(r"\.", '', starting_text) - - response = gpt2_pipe(starting_text, max_length=(len(starting_text) + random.randint(60, 80)), num_return_sequences=1) - response_list = [] - for x in response: - resp = x['generated_text'].strip() - if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False: - response_list.append(resp) - - response_end = "\n".join(response_list) - response_end = re.sub('[^ ]+\.[^ ]+','', response_end) - response_end = response_end.replace("<", "").replace(">", "") - - if response_end != "": - return response_end - -with grad.Blocks(css='style.css') as demo: - grad.HTML( - """ -
    -
    -

    - The Stable Diffusion Prompt Generator - because your text needs a little more visual spice. -

    -
    -

    - Ready to see some magic happen? Simply type in your basic idea. Feeling lazy? No problem, just hit the "Magic Prompt" button and it will randomly pull from a list of thousands of ideas for you. -

    -

    - ❤️ Press the Like Button if you enjoy my space! ❤️ -

    -
    - """ - ) - with grad.Column(elem_id="col-container"): - with grad.Row(variant="compact"): - txt = grad.Textbox( - label="Initial Text", - show_label=False, - max_lines=1, - placeholder="Enter a basic idea", - ).style( - container=False, - ) - run = grad.Button("✨ Magic Prompt ✨").style(full_width=False) - - - - with grad.Row(variant="compact"): - out = grad.Textbox( - label="Generated Text", - show_label=False, - lines=5, - ).style( - container=False, - ) - - run.click(generate, inputs=[txt], outputs=[out]) - - - - with grad.Row(): - grad.HTML( - """ - -
    -

    Transform your boring ideas into creative masterpieces with just one click! Enter a spark of inspiration and let the "Magic Prompt" button work its magic. -

    -
    - """ -) - - - fn=generate, - run=generate, - inputs=txt, - outputs=out - demo.launch(enable_queue=False, inline=True) \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/SelectChess.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/SelectChess.js deleted file mode 100644 index 7e7ea454dd29e0cc8173cc4b4dc1c9bea68fa2c0..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/SelectChess.js +++ /dev/null @@ -1,9 +0,0 @@ -/* -Do nothing -*/ - -var SelectChess = function (chess, board, bejeweled) { - // Do nothing -} - -export default SelectChess; \ No newline at end of file diff --git a/spaces/Akash473/FunkoHairBeard/README.md b/spaces/Akash473/FunkoHairBeard/README.md deleted file mode 100644 index 6168692ad95bf16585e264d4cf01d74c169e9cda..0000000000000000000000000000000000000000 --- a/spaces/Akash473/FunkoHairBeard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FunkoHairBeard -emoji: 🏢 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.44.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Akash473/FunkoHairBeard/app.py b/spaces/Akash473/FunkoHairBeard/app.py deleted file mode 100644 index 6025f0d6aecce54b62ee68e45ec32dc9288d8665..0000000000000000000000000000000000000000 --- a/spaces/Akash473/FunkoHairBeard/app.py +++ /dev/null @@ -1,502 +0,0 @@ -from io import BytesIO -import base64 - -import numpy as np -import torch -import torch.nn as nn -import torch.optim as optim -from torchvision import transforms, models -from PIL import Image -import gradio as gr - -# Combined Code for Beard and Hairstyle Detection and Styling - -male_background_image_paths = [ - "Data/AdobeColorFunko/Outfits/MenOutfits/DummyDress1.png", - "Data/AdobeColorFunko/Outfits/MenOutfits/GlassesDummy.png", - "Data/AdobeColorFunko/Outfits/MenOutfits/DummyDress3.png" -] - -female_background_image_paths = [ - "Data/AdobeColorFunko/Outfits/WomenOutfits/WomenOne.png", - "Data/AdobeColorFunko/Outfits/WomenOutfits/WomenTwo.png", - "Data/AdobeColorFunko/Outfits/WomenOutfits/WomenThree.png" -] - - -class GenderClassifier: - def __init__(self, model_path, class_names): - self.model = models.resnet18(pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image_path): - image = Image.open(image_path).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_gender(self, image_path): - input_image = self.preprocess_image(image_path) - - with torch.no_grad(): - predictions = self.model(input_image) - - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - - return 
predicted_label - -class WomenHairStyleClassifier: - def __init__(self, model_path, class_names): - self.model = models.resnet18(pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image_path): - image = Image.open(image_path).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_hairStyle(self, image_path): - input_image = self.preprocess_image(image_path) - - with torch.no_grad(): - predictions = self.model(input_image) - - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - - return predicted_label - -class WomenHairColorClassifier: - def __init__(self, model_path, class_names): - self.model = models.resnet18(pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image_path): - image = Image.open(image_path).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_hairColor(self, image_path): - input_image = self.preprocess_image(image_path) - - with torch.no_grad(): - predictions = self.model(input_image) - - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - - return predicted_label -# Function to classify beard style -class BeardClassifier: - def __init__(self, model_path, class_names): - self.model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = torch.nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image): - image = Image.open(image).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_beard(self, image): - input_image = self.preprocess_image(image) - with torch.no_grad(): - 
predictions = self.model(input_image) - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - return predicted_label - -# Function to classify beard color -class BeardColorClassifier: - def __init__(self, model_path, class_names): - self.model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = torch.nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image): - image = Image.open(image).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_beard_color(self, image): - input_image = self.preprocess_image(image) - with torch.no_grad(): - predictions = self.model(input_image) - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - return predicted_label - - -# Function to classify hairstyle -class HairStyleClassifier: - def __init__(self, model_path, class_names): - self.model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = torch.nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image): - image = Image.open(image).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_hair(self, image): - input_image = self.preprocess_image(image) - with torch.no_grad(): - predictions = self.model(input_image) - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - return predicted_label - -class MenHairColorClassifier: - def __init__(self, model_path, class_names): - self.model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=False) - num_ftrs = self.model.fc.in_features - self.model.fc = torch.nn.Linear(num_ftrs, len(class_names)) - self.load_model(model_path) - self.model.eval() - self.data_transforms = transforms.Compose([ - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - self.class_names = class_names - - def preprocess_image(self, image): - image = Image.open(image).convert("RGB") - image = self.data_transforms(image) - image = image.unsqueeze(0) - return image - - def load_model(self, model_path): - if 
torch.cuda.is_available(): - self.model.load_state_dict(torch.load(model_path)) - else: - self.model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) - - def classify_menHair_color(self, image): - input_image = self.preprocess_image(image) - with torch.no_grad(): - predictions = self.model(input_image) - probabilities = torch.nn.functional.softmax(predictions[0], dim=0) - predicted_class = torch.argmax(probabilities).item() - predicted_label = self.class_names[predicted_class] - return predicted_label - - -def dummy_eye(background_image, x, y, placeholder_image_path, x_coordinate, y_coordinate): - placeholder_image = Image.open(placeholder_image_path) - target_size = (x, y) - placeholder_image = placeholder_image.resize(target_size, Image.LANCZOS) - placeholder_array = np.array(placeholder_image) - placeholder_width, placeholder_height = placeholder_image.size - region_box = (x_coordinate, y_coordinate, x_coordinate + placeholder_width, y_coordinate + placeholder_height) - placeholder_mask = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - background_image.paste(placeholder_image, region_box, mask=placeholder_mask) - background_array = np.array(background_image) - -# Function to overlay a beard on a background image -def process_image_Beard(background_image, x, placeholder_image_path, x_coordinate, y_coordinate): - placeholder_image = Image.open(placeholder_image_path) - target_size = (x, x) - placeholder_image = placeholder_image.resize(target_size, Image.LANCZOS) - placeholder_array = np.array(placeholder_image) - placeholder_width, placeholder_height = placeholder_image.size - region_box = (x_coordinate, y_coordinate, x_coordinate + placeholder_width, y_coordinate + placeholder_height) - placeholder_mask = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - background_image.paste(placeholder_image, region_box, mask=placeholder_mask) - background_array = np.array(background_image) - placeholder_alpha = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - -def process_image_WomanHair(background_image, x, y, placeholder_image_path, x_coordinate, y_coordinate): - placeholder_image = Image.open(placeholder_image_path) - target_size = (x, y) - placeholder_image = placeholder_image.resize(target_size, Image.LANCZOS) - placeholder_array = np.array(placeholder_image) - placeholder_width, placeholder_height = placeholder_image.size - region_box = (x_coordinate, y_coordinate, x_coordinate + placeholder_width, y_coordinate + placeholder_height) - placeholder_mask = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - background_image.paste(placeholder_image, region_box, mask=placeholder_mask) - background_array = np.array(background_image) - placeholder_alpha = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - - -def add_eyebrow(background_image, x_coordinate, y_coordinate, eyebrow_image_path): - eyebrow_image = Image.open(eyebrow_image_path) - target_size = (200, 200) # Adjust the size as needed - eyebrow_image = eyebrow_image.resize(target_size, Image.LANCZOS) - region_box = (x_coordinate, y_coordinate, x_coordinate + eyebrow_image.width, y_coordinate + eyebrow_image.height) - eyebrow_mask = eyebrow_image.split()[3] if eyebrow_image.mode == 'RGBA' else None - background_image.paste(eyebrow_image, region_box, mask=eyebrow_mask) - background_array = np.array(background_image) - - - - -# Function to overlay a hairstyle on a background image -def 
process_image_menHair(background_image, x, y, placeholder_image_path, x_coordinate, y_coordinate): - placeholder_image = Image.open(placeholder_image_path) - target_size = (x, y) - placeholder_image = placeholder_image.resize(target_size, Image.LANCZOS) - placeholder_array = np.array(placeholder_image) - placeholder_width, placeholder_height = placeholder_image.size - region_box = (x_coordinate, y_coordinate, x_coordinate + placeholder_width, y_coordinate + placeholder_height) - placeholder_mask = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - background_image.paste(placeholder_image, region_box, mask=placeholder_mask) - background_array = np.array(background_image) - placeholder_alpha = placeholder_image.split()[3] if placeholder_image.mode == 'RGBA' else None - -# Function to generate Funko figurines -def Igenerate_funko_figurines(input_image): - - WomenHairStyle_classifier = WomenHairStyleClassifier('Data/FunkoSavedModels/WomenHairStyle.pt', ['MediumLength', 'ShortHair', 'SidePlait']) - predicted_WomenHairStyle = WomenHairStyle_classifier.classify_hairStyle(input_image) - - WomenHairColor_classifier = WomenHairColorClassifier('Data/FunkoSavedModels/WomenHairColor.pt', ['Black', 'Brown', 'Ginger', 'White']) - predicted_WomenHairColor = WomenHairColor_classifier.classify_hairColor(input_image) - # Detect and classify gender - gender_classifier = GenderClassifier('Data/FunkoSavedModels/Gender.pt', ['Female', 'Male']) - predicted_gender = gender_classifier.classify_gender(input_image) - - # Detect and classify beard style - beard_classifier = BeardClassifier('Data/FunkoSavedModels/FunkoResnet18BeardStyle.pt', ['Bandholz', 'CleanShave', 'FullGoatee', 'Moustache', 'RapIndustryStandards', 'ShortBeard']) - predicted_style_label = beard_classifier.classify_beard(input_image) - - # Detect and classify beard color - beard_color_classifier = BeardColorClassifier('Data/FunkoSavedModels/FunkoResnet18BeardColor.pt', ['Black', 'DarkBrown', 'Ginger', 'LightBrown', 'SaltAndPepper', 'White']) - predicted_color_label = beard_color_classifier.classify_beard_color(input_image) - - # Classify hairstyle - hair_style_classifier = HairStyleClassifier('Data/FunkoSavedModels/FunkoResnet18HairStyle.pt', ['Afro', 'Bald', 'Puff', 'Spike']) - predicted_hairStyle_label = hair_style_classifier.classify_hair(input_image) - - #classify menHairColor - menhair_color_classifier = MenHairColorClassifier('Data/FunkoSavedModels/FunkoResnet18MenHairColor.pt', ['Black', 'DarkBrown', 'Ginger', 'LightBrown', 'SaltAndPepper', 'White']) - predicted_menhairColor_label = menhair_color_classifier.classify_menHair_color(input_image) - # Process background images and apply beard style and color along with hair style and color - final_images = [] - - if predicted_gender == 'Male': - background_image_paths = male_background_image_paths - if predicted_gender == 'Female': - background_image_paths = female_background_image_paths - - for background_image_paths in background_image_paths: - background_image = Image.open(background_image_paths) - x_coordinate = 90 - y_coordinate = 50 - add_eyebrow(background_image, 115, 80, "Data/AdobeColorFunko/EyezBrowz/Eyebrow.png") - #dummy_eye(background_image, 245, 345, 'Data/AdobeColorFunko/EyezBrowz/MaleEye.png', x_coordinate, y_coordinate) - if predicted_gender == 'Male': - x = 245 - y = 345 - placeholder_image_path = f"Data/AdobeColorFunko/EyezBrowz/{predicted_gender}Eye.png" - x_coordinate = 90 - y_coordinate = 50 - dummy_eye(background_image, x, y, placeholder_image_path, 
x_coordinate, y_coordinate) - - if predicted_style_label == 'Bandholz': - process_image_Beard(background_image, 320, - f"Data/AdobeColorFunko/Beard/Bandholz/{predicted_color_label}.png", - 50, 142) - - if predicted_style_label == 'ShortBeard': - process_image_Beard(background_image, 300, - f"Data/AdobeColorFunko/Beard/ShortBeard/{predicted_color_label}.png", - 62, 118) - - if predicted_style_label == 'FullGoatee': - process_image_Beard(background_image, 230, - f"Data/AdobeColorFunko/Beard/Goatee/{predicted_color_label}.png", - 96, 168) - - if predicted_style_label == 'RapIndustryStandards': - process_image_Beard(background_image, 290, - f"Data/AdobeColorFunko/Beard/RapIndustry/{predicted_color_label}.png", - 67, 120) - - if predicted_style_label == 'Moustache': - process_image_Beard(background_image, 220, - f"Data/AdobeColorFunko/Beard/Moustache/{predicted_color_label}.png", - 100, 160) - - if predicted_style_label == 'CleanShave': - process_image_Beard(background_image, 220, - f"Data/AdobeColorFunko/Beard/CleanShave/{predicted_color_label}.png", - 100, 160) - - # Add other conditions for different beard styles - - # Overlay hairstyle - if predicted_hairStyle_label == 'Afro': - process_image_menHair(background_image, 336, 420, - f"Data/AdobeColorFunko/MenHairstyle/Afro/{predicted_menhairColor_label}.png", - 41, 76) - - if predicted_hairStyle_label == 'Puff': - process_image_menHair(background_image, 305, 420, - f"Data/AdobeColorFunko/MenHairstyle/Puff/{predicted_menhairColor_label}.png", - 56, 68) - - if predicted_hairStyle_label == 'Spike': - process_image_menHair(background_image, 310, 420, - f"Data/AdobeColorFunko/MenHairstyle/Spike/{predicted_menhairColor_label}.png", - 52, 70) - - if predicted_hairStyle_label == 'Bald': - process_image_menHair(background_image, 310, 420, - f"Data/AdobeColorFunko/MenHairstyle/Bald/{predicted_menhairColor_label}.png", - 67, 120) - - - if predicted_gender == 'Female': - x = 245 - y = 345 - placeholder_image_path = f"Data/AdobeColorFunko/EyezBrowz/{predicted_gender}Eye.png" - x_coordinate = 90 - y_coordinate = 50 - dummy_eye(background_image, x, y, placeholder_image_path, x_coordinate, y_coordinate) - if predicted_WomenHairStyle == 'MediumLength': - process_image_WomanHair(background_image, 300,460, - f"Data/AdobeColorFunko/WomenHairstyle/MediumLength/{predicted_WomenHairColor}.png", - 56, 50) - - if predicted_WomenHairStyle == 'ShortHair': - process_image_WomanHair(background_image, 270,460, - f"Data/AdobeColorFunko/WomenHairstyle/ShortHair/{predicted_WomenHairColor}.png", - 61, 49) - - if predicted_WomenHairStyle == 'SidePlait': - process_image_WomanHair(background_image, 300,450, - f"Data/AdobeColorFunko/WomenHairstyle/SidePlait/{predicted_WomenHairColor}.png", - 54, 56) - - - # Convert the resulting image to base64 - buffered = BytesIO() - background_image.save(buffered, format="PNG") - #base64_image = base64.b64encode(buffered.getvalue()).decode("utf-8") - final_images.append(background_image) - - return final_images -imageComponent = gr.Image(type="filepath") - -# Define Gradio input components -input_image = gr.inputs.Image(type="pil", label="Upload your image") - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Funko POP! 
Figurine Creation - Enabling Streamlined Automation with Generative Artificial Intelligence - """) - imageComponent = gr.Image(type="filepath").style(height=300, width=300) - #MyOutputs=[gr.Image(type="pil", label="Generated Image " + str(i + 1)) for i in range(3)] - with gr.Row(): - MyOutputs = [gr.Image(type="pil", label="Generated Image " + str(i + 1)).style(height=300, width=300) for i in range(3)] - submitButton = gr.Button(value="Submit") - submitButton.click(Igenerate_funko_figurines, inputs=imageComponent, outputs=MyOutputs) - - -if __name__ == "__main__": - demo.launch() - diff --git a/spaces/AlexKoff88/stable_diffusion/app.py b/spaces/AlexKoff88/stable_diffusion/app.py deleted file mode 100644 index 3f471e6ef35ac78b375fdbd4604c0d9919d71fa6..0000000000000000000000000000000000000000 --- a/spaces/AlexKoff88/stable_diffusion/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import gradio as gr -from optimum.intel.openvino import OVStableDiffusionPipeline -from diffusers.training_utils import set_seed -from diffusers import DDPMScheduler, StableDiffusionPipeline -import gc - -import subprocess - -import time - - -def create_pipeline(name): - if name == "svjack/Stable-Diffusion-Pokemon-en": #"valhalla/sd-pokemon-model": - scheduler = DDPMScheduler(beta_start=0.00085, beta_end=0.012, - beta_schedule="scaled_linear", num_train_timesteps=1000) - pipe = StableDiffusionPipeline.from_pretrained(name, scheduler=scheduler) - pipe.safety_checker = lambda images, clip_input: (images, False) - elif name == "OpenVINO/stable-diffusion-pokemons-fp32": #"stable-diffusion-pokemons-valhalla-fp32": - scheduler = DDPMScheduler(beta_start=0.00085, beta_end=0.012, - beta_schedule="scaled_linear", num_train_timesteps=1000) - pipe = OVStableDiffusionPipeline.from_pretrained(name, compile=False, scheduler=scheduler) - pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1) - pipe.compile() - else: - pipe = OVStableDiffusionPipeline.from_pretrained(name, compile=False) - pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1) - pipe.compile() - return pipe - -pipes = { - "Torch fp32": "svjack/Stable-Diffusion-Pokemon-en", #"valhalla/sd-pokemon-model" - "OpenVINO fp32": "OpenVINO/stable-diffusion-pokemons-fp32", #"OpenVINO/stable-diffusion-pokemons-valhalla-fp32" - "OpenVINO 8-bit quantized": "OpenVINO/stable-diffusion-pokemons-quantized-aggressive", #"OpenVINO/stable-diffusion-pokemons-valhalla-quantized-agressive" - "OpenVINO merged and quantized": "OpenVINO/stable-diffusion-pokemons-tome-quantized-aggressive" #"OpenVINO/stable-diffusion-pokemons-valhalla-tome-quantized-agressive" -} - -# prefetch pipelines on start -for v in pipes.values(): - pipe = create_pipeline(v) - del pipe - gc.collect() - -print((subprocess.check_output("lscpu", shell=True).strip()).decode()) - -def generate(prompt, option, seed): - pipe = create_pipeline(pipes[option]) - set_seed(int(seed)) - start_time = time.time() - if "Torch" in option: - output = pipe(prompt, num_inference_steps=50, output_type="pil", height=512, width=512) - else: - output = pipe(prompt, num_inference_steps=50, output_type="pil") - elapsed_time = time.time() - start_time - return (output.images[0], "{:10.4f}".format(elapsed_time)) - -examples = ["cartoon bird", - "a drawing of a green pokemon with red eyes", - "plant pokemon in jungle"] - -model_options = [option for option in pipes.keys()] - -gr.Interface( - fn=generate, - inputs=[gr.inputs.Textbox(default="cartoon bird", label="Prompt", lines=1), - 
gr.inputs.Dropdown(choices=model_options, default=model_options[-1], label="Model version"), - gr.inputs.Textbox(default="42", label="Seed", lines=1) - ], - outputs=[gr.outputs.Image(type="pil", label="Generated Image"), gr.outputs.Textbox(label="Inference time")], - title="OpenVINO-optimized Stable Diffusion", - description="This is the Optimum-based demo for NNCF-optimized Stable Diffusion pipeline trained on 'lambdalabs/pokemon-blip-captions' dataset and running with OpenVINO.\n" - "The pipeline is run using 8 vCPUs (4 cores) only.", - theme="huggingface", -).launch() \ No newline at end of file diff --git a/spaces/Amon1/ChatGPTForAcadamic/show_math.py b/spaces/Amon1/ChatGPTForAcadamic/show_math.py deleted file mode 100644 index 80fa881d1c2ace5813f75b5d8a19ca056a8bfa4f..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/show_math.py +++ /dev/null @@ -1,80 +0,0 @@ -# This program is written by: https://github.com/polarwinkel/mdtex2html - -from latex2mathml.converter import convert as tex2mathml -import re - -incomplete = 'formula incomplete' -convError = 'LaTeX-convert-error' - -def convert(mdtex, extensions=[], splitParagraphs=True): - ''' converts recursively the Markdown-LaTeX-mixture to HTML with MathML ''' - found = False - # handle all paragraphs separately (prevents aftereffects) - if splitParagraphs: - parts = re.split("\n\n", mdtex) - result = '' - for part in parts: - result += convert(part, extensions, splitParagraphs=False) - return result - # find first $$-formula: - parts = re.split('\${2}', mdtex, 2) - if len(parts)>1: - found = True - result = convert(parts[0], extensions, splitParagraphs=False)+'\n' - try: - result += '
<div class="blockformula">'+tex2mathml(parts[1])+'</div>\n' - except: - result += '<div class="blockformula">'+convError+'</div>' - if len(parts)==3: - result += convert(parts[2], extensions, splitParagraphs=False) - else: - result += '<div class="blockformula">'+incomplete+'</div>' -
# else find first $-formulas: - else: - parts = re.split('\${1}', mdtex, 2) - if len(parts)>1 and not found: - found = True - try: - mathml = tex2mathml(parts[1]) - except: - mathml = convError - if parts[0].endswith('\n\n') or parts[0]=='': # make sure textblock starts before formula! - parts[0]=parts[0]+'&#x200b;' - if len(parts)==3: - result = convert(parts[0]+mathml+parts[2], extensions, splitParagraphs=False) - else: - result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False) -
# else find first \[..\]-equation: - else: - parts = re.split(r'\\\[', mdtex, 1) - if len(parts)>1 and not found: - found = True - result = convert(parts[0], extensions, splitParagraphs=False)+'\n' - parts = re.split(r'\\\]', parts[1], 1) - try: - result += '<div class="blockformula">'+tex2mathml(parts[0])+'</div>\n' - except: - result += '<div class="blockformula">'+convError+'</div>' - if len(parts)==2: - result += convert(parts[1], extensions, splitParagraphs=False) - else: - result += '<div class="blockformula">'+incomplete+'</div>
    ' - # else find first \(..\)-equation: - else: - parts = re.split(r'\\\(', mdtex, 1) - if len(parts)>1 and not found: - found = True - subp = re.split(r'\\\)', parts[1], 1) - try: - mathml = tex2mathml(subp[0]) - except: - mathml = convError - if parts[0].endswith('\n\n') or parts[0]=='': # make sure textblock starts before formula! - parts[0]=parts[0]+'​' - if len(subp)==2: - result = convert(parts[0]+mathml+subp[1], extensions, splitParagraphs=False) - else: - result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False) - if not found: - result = mdtex - return result diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/fused_bias_act.cpp b/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/stylegan2/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/utils/alignment.py b/spaces/Amrrs/DragGan-Inversion/PTI/utils/alignment.py deleted file mode 100644 index d1e13a0d70eb0827abca405401f83b9939122f2d..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/utils/alignment.py +++ /dev/null @@ -1,113 +0,0 @@ -import numpy as np -import PIL -import PIL.Image -import scipy -import scipy.ndimage -import dlib - -def get_landmark(img, predictor): - """get landmark with dlib - :return: np.array shape=(68, 2) - """ - detector = dlib.get_frontal_face_detector() - - img = np.array(img) - dets = detector(img, 1) - - for k, d in enumerate(dets): - shape = predictor(img, d) - - t = list(shape.parts()) - a = [] - for tt in t: - a.append([tt.x, tt.y]) - lm = np.array(a) - return lm - - -def align_face(img, predictor, output_size): - """ - :param img: PIL Image - :return: PIL Image - """ - - lm = get_landmark(img, predictor) - - lm_chin = lm[0: 17] # left-right - lm_eyebrow_left = lm[17: 22] # left-right - lm_eyebrow_right = lm[22: 27] # left-right - lm_nose = lm[27: 31] # top-down - lm_nostrils = lm[31: 36] # top-down - lm_eye_left = lm[36: 42] # left-clockwise - lm_eye_right = lm[42: 48] # left-clockwise - lm_mouth_outer = lm[48: 60] # left-clockwise - lm_mouth_inner = lm[60: 68] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = eye_right - eye_left - mouth_left = lm_mouth_outer[0] - mouth_right = lm_mouth_outer[6] - mouth_avg = (mouth_left + mouth_right) * 0.5 - eye_to_mouth = mouth_avg - eye_avg - - # Choose oriented crop rectangle. 
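- # x is the horizontal half-axis of the crop: the eye-to-eye vector minus the 90-degree-rotated
- # eye-to-mouth vector, normalised and then scaled by the larger of the two face spans.
- # y is x rotated 90 degrees (the vertical half-axis), c is the crop centre shifted slightly
- # toward the mouth, and quad holds the four corners of the oriented square (qsize is its side length).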
- x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - y = np.flipud(x) * [-1, 1] - c = eye_avg + eye_to_mouth * 0.1 - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - qsize = np.hypot(*x) * 2 - - # read image - # img = img - - transform_size = output_size - enable_padding = True - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, PIL.Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), - min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB') - quad += pad[:2] - - # Transform. - img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS) - - # Return aligned image. - return img diff --git a/spaces/Amrrs/DragGan-Inversion/gui_utils/__init__.py b/spaces/Amrrs/DragGan-Inversion/gui_utils/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/gui_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -# empty diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/clip_guided_images_mixing_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/clip_guided_images_mixing_stable_diffusion.py deleted file mode 100644 index e4c52fe63f492526cd078950798a6564405e008b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/clip_guided_images_mixing_stable_diffusion.py +++ /dev/null @@ -1,456 +0,0 @@ -# -*- coding: utf-8 -*- -import inspect -from typing import Optional, Union - -import numpy as np -import PIL -import torch -from torch.nn import functional as F -from torchvision import transforms -from transformers import CLIPFeatureExtractor, CLIPModel, CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - UNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput -from diffusers.utils import ( - PIL_INTERPOLATION, - randn_tensor, -) - - -def preprocess(image, w, h): - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - return image - - -def slerp(t, v0, v1, DOT_THRESHOLD=0.9995): - if not isinstance(v0, np.ndarray): - inputs_are_torch = True - input_device = v0.device - v0 = v0.cpu().numpy() - v1 = v1.cpu().numpy() - - dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1))) - if np.abs(dot) > DOT_THRESHOLD: - v2 = (1 - t) * v0 + t * v1 - else: - theta_0 = np.arccos(dot) - sin_theta_0 = np.sin(theta_0) - theta_t = theta_0 * t - sin_theta_t = np.sin(theta_t) - s0 = np.sin(theta_0 - theta_t) / sin_theta_0 - s1 = sin_theta_t / sin_theta_0 - v2 = s0 * v0 + s1 * v1 - - if inputs_are_torch: - v2 = torch.from_numpy(v2).to(input_device) - - return v2 - - -def spherical_dist_loss(x, y): - x = F.normalize(x, dim=-1) - y = F.normalize(y, dim=-1) - return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2) - - -def set_requires_grad(model, value): - for param in model.parameters(): - param.requires_grad = value - - -class CLIPGuidedImagesMixingStableDiffusion(DiffusionPipeline): - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - clip_model: CLIPModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler, DPMSolverMultistepScheduler], - feature_extractor: CLIPFeatureExtractor, - coca_model=None, - coca_tokenizer=None, - coca_transform=None, - ): - super().__init__() - self.register_modules( - vae=vae, - text_encoder=text_encoder, - clip_model=clip_model, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - feature_extractor=feature_extractor, - coca_model=coca_model, - coca_tokenizer=coca_tokenizer, - coca_transform=coca_transform, - ) - self.feature_extractor_size = ( - feature_extractor.size - if isinstance(feature_extractor.size, int) - else feature_extractor.size["shortest_edge"] - 
) - self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std) - set_requires_grad(self.text_encoder, False) - set_requires_grad(self.clip_model, False) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - self.enable_attention_slicing(None) - - def freeze_vae(self): - set_requires_grad(self.vae, False) - - def unfreeze_vae(self): - set_requires_grad(self.vae, True) - - def freeze_unet(self): - set_requires_grad(self.unet, False) - - def unfreeze_unet(self): - set_requires_grad(self.unet, True) - - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, dtype, device, generator=None): - if not isinstance(image, torch.Tensor): - raise ValueError(f"`image` has to be of type `torch.Tensor` but is {type(image)}") - - image = image.to(device=device, dtype=dtype) - - if isinstance(generator, list): - init_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.vae.encode(image).latent_dist.sample(generator) - - # Hardcode 0.18215 because stable-diffusion-2-base has not self.vae.config.scaling_factor - init_latents = 0.18215 * init_latents - init_latents = init_latents.repeat_interleave(batch_size, dim=0) - - noise = randn_tensor(init_latents.shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents - - def get_image_description(self, image): - transformed_image = self.coca_transform(image).unsqueeze(0) - with torch.no_grad(), torch.cuda.amp.autocast(): - generated = self.coca_model.generate(transformed_image.to(device=self.device, dtype=self.coca_model.dtype)) - generated = self.coca_tokenizer.decode(generated[0].cpu().numpy()) - return generated.split("")[0].replace("", "").rstrip(" .,") - - def get_clip_image_embeddings(self, image, batch_size): - clip_image_input = self.feature_extractor.preprocess(image) - clip_image_features = torch.from_numpy(clip_image_input["pixel_values"][0]).unsqueeze(0).to(self.device).half() - image_embeddings_clip = self.clip_model.get_image_features(clip_image_features) - image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True) - image_embeddings_clip = image_embeddings_clip.repeat_interleave(batch_size, dim=0) - return image_embeddings_clip - - @torch.enable_grad() - def cond_fn( - self, - latents, - timestep, - index, - text_embeddings, - noise_pred_original, - original_image_embeddings_clip, - clip_guidance_scale, - ): - latents = latents.detach().requires_grad_() - - latent_model_input = self.scheduler.scale_model_input(latents, timestep) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample 
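- # CLIP guidance: estimate the clean sample from the current noise prediction (scheduler-dependent),
- # decode it with the VAE, and nudge the noise prediction along the gradient of the CLIP distance loss below.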
- - if isinstance(self.scheduler, (PNDMScheduler, DDIMScheduler, DPMSolverMultistepScheduler)): - alpha_prod_t = self.scheduler.alphas_cumprod[timestep] - beta_prod_t = 1 - alpha_prod_t - # compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5) - - fac = torch.sqrt(beta_prod_t) - sample = pred_original_sample * (fac) + latents * (1 - fac) - elif isinstance(self.scheduler, LMSDiscreteScheduler): - sigma = self.scheduler.sigmas[index] - sample = latents - sigma * noise_pred - else: - raise ValueError(f"scheduler type {type(self.scheduler)} not supported") - - # Hardcode 0.18215 because stable-diffusion-2-base has not self.vae.config.scaling_factor - sample = 1 / 0.18215 * sample - image = self.vae.decode(sample).sample - image = (image / 2 + 0.5).clamp(0, 1) - - image = transforms.Resize(self.feature_extractor_size)(image) - image = self.normalize(image).to(latents.dtype) - - image_embeddings_clip = self.clip_model.get_image_features(image) - image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True) - - loss = spherical_dist_loss(image_embeddings_clip, original_image_embeddings_clip).mean() * clip_guidance_scale - - grads = -torch.autograd.grad(loss, latents)[0] - - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = latents.detach() + grads * (sigma**2) - noise_pred = noise_pred_original - else: - noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads - return noise_pred, latents - - @torch.no_grad() - def __call__( - self, - style_image: Union[torch.FloatTensor, PIL.Image.Image], - content_image: Union[torch.FloatTensor, PIL.Image.Image], - style_prompt: Optional[str] = None, - content_prompt: Optional[str] = None, - height: Optional[int] = 512, - width: Optional[int] = 512, - noise_strength: float = 0.6, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - batch_size: Optional[int] = 1, - eta: float = 0.0, - clip_guidance_scale: Optional[float] = 100, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - slerp_latent_style_strength: float = 0.8, - slerp_prompt_style_strength: float = 0.1, - slerp_clip_image_style_strength: float = 0.1, - ): - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError(f"You have passed {batch_size} batch_size, but only {len(generator)} generators.") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if isinstance(generator, torch.Generator) and batch_size > 1: - generator = [generator] + [None] * (batch_size - 1) - - coca_is_none = [ - ("model", self.coca_model is None), - ("tokenizer", self.coca_tokenizer is None), - ("transform", self.coca_transform is None), - ] - coca_is_none = [x[0] for x in coca_is_none if x[1]] - coca_is_none_str = ", ".join(coca_is_none) - # generate prompts with coca model if prompt is None - if content_prompt is None: - if len(coca_is_none): - raise ValueError( - f"Content prompt is None and CoCa [{coca_is_none_str}] is None." - f"Set prompt or pass Coca [{coca_is_none_str}] to DiffusionPipeline." 
- ) - content_prompt = self.get_image_description(content_image) - if style_prompt is None: - if len(coca_is_none): - raise ValueError( - f"Style prompt is None and CoCa [{coca_is_none_str}] is None." - f" Set prompt or pass Coca [{coca_is_none_str}] to DiffusionPipeline." - ) - style_prompt = self.get_image_description(style_image) - - # get prompt text embeddings for content and style - content_text_input = self.tokenizer( - content_prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - content_text_embeddings = self.text_encoder(content_text_input.input_ids.to(self.device))[0] - - style_text_input = self.tokenizer( - style_prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - style_text_embeddings = self.text_encoder(style_text_input.input_ids.to(self.device))[0] - - text_embeddings = slerp(slerp_prompt_style_strength, content_text_embeddings, style_text_embeddings) - - # duplicate text embeddings for each generation per prompt - text_embeddings = text_embeddings.repeat_interleave(batch_size, dim=0) - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - if accepts_offset: - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - # Some schedulers like PNDM have timesteps as arrays - # It's more optimized to move all timesteps to correct device beforehand - self.scheduler.timesteps.to(self.device) - - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, noise_strength, self.device) - latent_timestep = timesteps[:1].repeat(batch_size) - - # Preprocess image - preprocessed_content_image = preprocess(content_image, width, height) - content_latents = self.prepare_latents( - preprocessed_content_image, latent_timestep, batch_size, text_embeddings.dtype, self.device, generator - ) - - preprocessed_style_image = preprocess(style_image, width, height) - style_latents = self.prepare_latents( - preprocessed_style_image, latent_timestep, batch_size, text_embeddings.dtype, self.device, generator - ) - - latents = slerp(slerp_latent_style_strength, content_latents, style_latents) - - if clip_guidance_scale > 0: - content_clip_image_embedding = self.get_clip_image_embeddings(content_image, batch_size) - style_clip_image_embedding = self.get_clip_image_embeddings(style_image, batch_size) - clip_image_embeddings = slerp( - slerp_clip_image_style_strength, content_clip_image_embedding, style_clip_image_embedding - ) - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = content_text_input.input_ids.shape[-1] - uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt") - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - # duplicate unconditional embeddings for each generation per prompt - uncond_embeddings = uncond_embeddings.repeat_interleave(batch_size, dim=0) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - - # Unlike in other pipelines, latents need to be generated in the target device - # for 1-to-1 results reproducibility with the CompVis implementation. - # However this currently doesn't work in `mps`. - latents_shape = (batch_size, self.unet.config.in_channels, height // 8, width // 8) - latents_dtype = text_embeddings.dtype - if latents is None: - if self.device.type == "mps": - # randn does not work reproducibly on mps - latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to( - self.device - ) - else: - latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype) - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - latents = latents.to(self.device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - - with self.progress_bar(total=num_inference_steps): - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform classifier free guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # perform clip guidance - if clip_guidance_scale > 0: - text_embeddings_for_guidance = ( - text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings - ) - noise_pred, latents = self.cond_fn( - latents, - t, - i, - text_embeddings_for_guidance, - noise_pred, - clip_image_embeddings, - clip_guidance_scale, - ) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # Hardcode 0.18215 because stable-diffusion-2-base has not self.vae.config.scaling_factor - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, None) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None) diff --git 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_original_audioldm_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_original_audioldm_to_diffusers.py deleted file mode 100644 index a0d154d7e6baaba90216d5e1f30ad58ae3359d73..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_original_audioldm_to_diffusers.py +++ /dev/null @@ -1,1052 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Conversion script for the AudioLDM checkpoints.""" - -import argparse -import re - -import torch -from transformers import ( - AutoTokenizer, - ClapTextConfig, - ClapTextModelWithProjection, - SpeechT5HifiGan, - SpeechT5HifiGanConfig, -) - -from diffusers import ( - AudioLDMPipeline, - AutoencoderKL, - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - HeunDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, - UNet2DConditionModel, -) -from diffusers.utils import is_omegaconf_available, is_safetensors_available -from diffusers.utils.import_utils import BACKENDS_MAPPING - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments -def shave_segments(path, n_shave_prefix_segments=1): - """ - Removes segments. Positive values shave the first segments, negative shave the last segments. 
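- For example, shave_segments("input_blocks.1.0.op.weight", 2) gives "0.op.weight", while
- shave_segments("input_blocks.1.0.op.weight", -2) keeps the prefix and gives "input_blocks.1.0".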
- """ - if n_shave_prefix_segments >= 0: - return ".".join(path.split(".")[n_shave_prefix_segments:]) - else: - return ".".join(path.split(".")[:n_shave_prefix_segments]) - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_resnet_paths -def renew_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item.replace("in_layers.0", "norm1") - new_item = new_item.replace("in_layers.2", "conv1") - - new_item = new_item.replace("out_layers.0", "norm2") - new_item = new_item.replace("out_layers.3", "conv2") - - new_item = new_item.replace("emb_layers.1", "time_emb_proj") - new_item = new_item.replace("skip_connection", "conv_shortcut") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_resnet_paths -def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("nin_shortcut", "conv_shortcut") - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_attention_paths -def renew_attention_paths(old_list): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - # new_item = new_item.replace('norm.weight', 'group_norm.weight') - # new_item = new_item.replace('norm.bias', 'group_norm.bias') - - # new_item = new_item.replace('proj_out.weight', 'proj_attn.weight') - # new_item = new_item.replace('proj_out.bias', 'proj_attn.bias') - - # new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_attention_paths -def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("norm.weight", "group_norm.weight") - new_item = new_item.replace("norm.bias", "group_norm.bias") - - new_item = new_item.replace("q.weight", "query.weight") - new_item = new_item.replace("q.bias", "query.bias") - - new_item = new_item.replace("k.weight", "key.weight") - new_item = new_item.replace("k.bias", "key.bias") - - new_item = new_item.replace("v.weight", "value.weight") - new_item = new_item.replace("v.bias", "value.bias") - - new_item = new_item.replace("proj_out.weight", "proj_attn.weight") - new_item = new_item.replace("proj_out.bias", "proj_attn.bias") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.assign_to_checkpoint -def assign_to_checkpoint( - paths, checkpoint, old_checkpoint, attention_paths_to_split=None, additional_replacements=None, config=None 
-): - """ - This does the final conversion step: take locally converted weights and apply a global renaming to them. It splits - attention layers, and takes into account additional replacements that may arise. - - Assigns the weights to the new checkpoint. - """ - assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys." - - # Splits the attention layers into three variables. - if attention_paths_to_split is not None: - for path, path_map in attention_paths_to_split.items(): - old_tensor = old_checkpoint[path] - channels = old_tensor.shape[0] // 3 - - target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) - - num_heads = old_tensor.shape[0] // config["num_head_channels"] // 3 - - old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]) - query, key, value = old_tensor.split(channels // num_heads, dim=1) - - checkpoint[path_map["query"]] = query.reshape(target_shape) - checkpoint[path_map["key"]] = key.reshape(target_shape) - checkpoint[path_map["value"]] = value.reshape(target_shape) - - for path in paths: - new_path = path["new"] - - # These have already been assigned - if attention_paths_to_split is not None and new_path in attention_paths_to_split: - continue - - # Global renaming happens here - new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") - new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") - new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") - - if additional_replacements is not None: - for replacement in additional_replacements: - new_path = new_path.replace(replacement["old"], replacement["new"]) - - # proj_attn.weight has to be converted from conv 1D to linear - if "proj_attn.weight" in new_path: - checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] - else: - checkpoint[new_path] = old_checkpoint[path["old"]] - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.conv_attn_to_linear -def conv_attn_to_linear(checkpoint): - keys = list(checkpoint.keys()) - attn_keys = ["query.weight", "key.weight", "value.weight"] - for key in keys: - if ".".join(key.split(".")[-2:]) in attn_keys: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0, 0] - elif "proj_attn.weight" in key: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0] - - -def create_unet_diffusers_config(original_config, image_size: int): - """ - Creates a UNet config for diffusers based on the config of the original AudioLDM model. 
- """ - unet_params = original_config.model.params.unet_config.params - vae_params = original_config.model.params.first_stage_config.params.ddconfig - - block_out_channels = [unet_params.model_channels * mult for mult in unet_params.channel_mult] - - down_block_types = [] - resolution = 1 - for i in range(len(block_out_channels)): - block_type = "CrossAttnDownBlock2D" if resolution in unet_params.attention_resolutions else "DownBlock2D" - down_block_types.append(block_type) - if i != len(block_out_channels) - 1: - resolution *= 2 - - up_block_types = [] - for i in range(len(block_out_channels)): - block_type = "CrossAttnUpBlock2D" if resolution in unet_params.attention_resolutions else "UpBlock2D" - up_block_types.append(block_type) - resolution //= 2 - - vae_scale_factor = 2 ** (len(vae_params.ch_mult) - 1) - - cross_attention_dim = ( - unet_params.cross_attention_dim if "cross_attention_dim" in unet_params else block_out_channels - ) - - class_embed_type = "simple_projection" if "extra_film_condition_dim" in unet_params else None - projection_class_embeddings_input_dim = ( - unet_params.extra_film_condition_dim if "extra_film_condition_dim" in unet_params else None - ) - class_embeddings_concat = unet_params.extra_film_use_concat if "extra_film_use_concat" in unet_params else None - - config = { - "sample_size": image_size // vae_scale_factor, - "in_channels": unet_params.in_channels, - "out_channels": unet_params.out_channels, - "down_block_types": tuple(down_block_types), - "up_block_types": tuple(up_block_types), - "block_out_channels": tuple(block_out_channels), - "layers_per_block": unet_params.num_res_blocks, - "cross_attention_dim": cross_attention_dim, - "class_embed_type": class_embed_type, - "projection_class_embeddings_input_dim": projection_class_embeddings_input_dim, - "class_embeddings_concat": class_embeddings_concat, - } - - return config - - -# Adapted from diffusers.pipelines.stable_diffusion.convert_from_ckpt.create_vae_diffusers_config -def create_vae_diffusers_config(original_config, checkpoint, image_size: int): - """ - Creates a VAE config for diffusers based on the config of the original AudioLDM model. Compared to the original - Stable Diffusion conversion, this function passes a *learnt* VAE scaling factor to the diffusers VAE. 
- """ - vae_params = original_config.model.params.first_stage_config.params.ddconfig - _ = original_config.model.params.first_stage_config.params.embed_dim - - block_out_channels = [vae_params.ch * mult for mult in vae_params.ch_mult] - down_block_types = ["DownEncoderBlock2D"] * len(block_out_channels) - up_block_types = ["UpDecoderBlock2D"] * len(block_out_channels) - - scaling_factor = checkpoint["scale_factor"] if "scale_by_std" in original_config.model.params else 0.18215 - - config = { - "sample_size": image_size, - "in_channels": vae_params.in_channels, - "out_channels": vae_params.out_ch, - "down_block_types": tuple(down_block_types), - "up_block_types": tuple(up_block_types), - "block_out_channels": tuple(block_out_channels), - "latent_channels": vae_params.z_channels, - "layers_per_block": vae_params.num_res_blocks, - "scaling_factor": float(scaling_factor), - } - return config - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.create_diffusers_schedular -def create_diffusers_schedular(original_config): - schedular = DDIMScheduler( - num_train_timesteps=original_config.model.params.timesteps, - beta_start=original_config.model.params.linear_start, - beta_end=original_config.model.params.linear_end, - beta_schedule="scaled_linear", - ) - return schedular - - -# Adapted from diffusers.pipelines.stable_diffusion.convert_from_ckpt.convert_ldm_unet_checkpoint -def convert_ldm_unet_checkpoint(checkpoint, config, path=None, extract_ema=False): - """ - Takes a state dict and a config, and returns a converted checkpoint. Compared to the original Stable Diffusion - conversion, this function additionally converts the learnt film embedding linear layer. - """ - - # extract state_dict for UNet - unet_state_dict = {} - keys = list(checkpoint.keys()) - - unet_key = "model.diffusion_model." - # at least a 100 parameters have to start with `model_ema` in order for the checkpoint to be EMA - if sum(k.startswith("model_ema") for k in keys) > 100 and extract_ema: - print(f"Checkpoint {path} has both EMA and non-EMA weights.") - print( - "In this conversion only the EMA weights are extracted. If you want to instead extract the non-EMA" - " weights (useful to continue fine-tuning), please make sure to remove the `--extract_ema` flag." - ) - for key in keys: - if key.startswith("model.diffusion_model"): - flat_ema_key = "model_ema." + "".join(key.split(".")[1:]) - unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop(flat_ema_key) - else: - if sum(k.startswith("model_ema") for k in keys) > 100: - print( - "In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA" - " weights (usually better for inference), please make sure to add the `--extract_ema` flag." 
- ) - - for key in keys: - if key.startswith(unet_key): - unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop(key) - - new_checkpoint = {} - - new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"] - new_checkpoint["time_embedding.linear_1.bias"] = unet_state_dict["time_embed.0.bias"] - new_checkpoint["time_embedding.linear_2.weight"] = unet_state_dict["time_embed.2.weight"] - new_checkpoint["time_embedding.linear_2.bias"] = unet_state_dict["time_embed.2.bias"] - - new_checkpoint["class_embedding.weight"] = unet_state_dict["film_emb.weight"] - new_checkpoint["class_embedding.bias"] = unet_state_dict["film_emb.bias"] - - new_checkpoint["conv_in.weight"] = unet_state_dict["input_blocks.0.0.weight"] - new_checkpoint["conv_in.bias"] = unet_state_dict["input_blocks.0.0.bias"] - - new_checkpoint["conv_norm_out.weight"] = unet_state_dict["out.0.weight"] - new_checkpoint["conv_norm_out.bias"] = unet_state_dict["out.0.bias"] - new_checkpoint["conv_out.weight"] = unet_state_dict["out.2.weight"] - new_checkpoint["conv_out.bias"] = unet_state_dict["out.2.bias"] - - # Retrieves the keys for the input blocks only - num_input_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "input_blocks" in layer}) - input_blocks = { - layer_id: [key for key in unet_state_dict if f"input_blocks.{layer_id}" in key] - for layer_id in range(num_input_blocks) - } - - # Retrieves the keys for the middle blocks only - num_middle_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "middle_block" in layer}) - middle_blocks = { - layer_id: [key for key in unet_state_dict if f"middle_block.{layer_id}" in key] - for layer_id in range(num_middle_blocks) - } - - # Retrieves the keys for the output blocks only - num_output_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "output_blocks" in layer}) - output_blocks = { - layer_id: [key for key in unet_state_dict if f"output_blocks.{layer_id}" in key] - for layer_id in range(num_output_blocks) - } - - for i in range(1, num_input_blocks): - block_id = (i - 1) // (config["layers_per_block"] + 1) - layer_in_block_id = (i - 1) % (config["layers_per_block"] + 1) - - resnets = [ - key for key in input_blocks[i] if f"input_blocks.{i}.0" in key and f"input_blocks.{i}.0.op" not in key - ] - attentions = [key for key in input_blocks[i] if f"input_blocks.{i}.1" in key] - - if f"input_blocks.{i}.0.op.weight" in unet_state_dict: - new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.weight"] = unet_state_dict.pop( - f"input_blocks.{i}.0.op.weight" - ) - new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.bias"] = unet_state_dict.pop( - f"input_blocks.{i}.0.op.bias" - ) - - paths = renew_resnet_paths(resnets) - meta_path = {"old": f"input_blocks.{i}.0", "new": f"down_blocks.{block_id}.resnets.{layer_in_block_id}"} - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = {"old": f"input_blocks.{i}.1", "new": f"down_blocks.{block_id}.attentions.{layer_in_block_id}"} - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - resnet_0 = middle_blocks[0] - attentions = middle_blocks[1] - resnet_1 = middle_blocks[2] - - resnet_0_paths = renew_resnet_paths(resnet_0) - assign_to_checkpoint(resnet_0_paths, new_checkpoint, unet_state_dict, config=config) - - 
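- # the three middle_block entries are renamed to mid_block.resnets.0, mid_block.attentions.0
- # and mid_block.resnets.1 by the global replacements inside assign_to_checkpoint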
resnet_1_paths = renew_resnet_paths(resnet_1) - assign_to_checkpoint(resnet_1_paths, new_checkpoint, unet_state_dict, config=config) - - attentions_paths = renew_attention_paths(attentions) - meta_path = {"old": "middle_block.1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - attentions_paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - for i in range(num_output_blocks): - block_id = i // (config["layers_per_block"] + 1) - layer_in_block_id = i % (config["layers_per_block"] + 1) - output_block_layers = [shave_segments(name, 2) for name in output_blocks[i]] - output_block_list = {} - - for layer in output_block_layers: - layer_id, layer_name = layer.split(".")[0], shave_segments(layer, 1) - if layer_id in output_block_list: - output_block_list[layer_id].append(layer_name) - else: - output_block_list[layer_id] = [layer_name] - - if len(output_block_list) > 1: - resnets = [key for key in output_blocks[i] if f"output_blocks.{i}.0" in key] - attentions = [key for key in output_blocks[i] if f"output_blocks.{i}.1" in key] - - resnet_0_paths = renew_resnet_paths(resnets) - paths = renew_resnet_paths(resnets) - - meta_path = {"old": f"output_blocks.{i}.0", "new": f"up_blocks.{block_id}.resnets.{layer_in_block_id}"} - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - - output_block_list = {k: sorted(v) for k, v in output_block_list.items()} - if ["conv.bias", "conv.weight"] in output_block_list.values(): - index = list(output_block_list.values()).index(["conv.bias", "conv.weight"]) - new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.weight"] = unet_state_dict[ - f"output_blocks.{i}.{index}.conv.weight" - ] - new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.bias"] = unet_state_dict[ - f"output_blocks.{i}.{index}.conv.bias" - ] - - # Clear attentions as they have been attributed above. - if len(attentions) == 2: - attentions = [] - - if len(attentions): - paths = renew_attention_paths(attentions) - meta_path = { - "old": f"output_blocks.{i}.1", - "new": f"up_blocks.{block_id}.attentions.{layer_in_block_id}", - } - assign_to_checkpoint( - paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config - ) - else: - resnet_0_paths = renew_resnet_paths(output_block_layers, n_shave_prefix_segments=1) - for path in resnet_0_paths: - old_path = ".".join(["output_blocks", str(i), path["old"]]) - new_path = ".".join(["up_blocks", str(block_id), "resnets", str(layer_in_block_id), path["new"]]) - - new_checkpoint[new_path] = unet_state_dict[old_path] - - return new_checkpoint - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.convert_ldm_vae_checkpoint -def convert_ldm_vae_checkpoint(checkpoint, config): - # extract state dict for VAE - vae_state_dict = {} - vae_key = "first_stage_model." 
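- # keep only the keys under the "first_stage_model." prefix and strip it, so the remaining
- # names line up with the diffusers AutoencoderKL layout expected by the renaming helpers below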
- keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(vae_key): - vae_state_dict[key.replace(vae_key, "")] = checkpoint.get(key) - - new_checkpoint = {} - - new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"] - new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"] - new_checkpoint["encoder.conv_out.weight"] = vae_state_dict["encoder.conv_out.weight"] - new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"] - new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict["encoder.norm_out.weight"] - new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict["encoder.norm_out.bias"] - - new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"] - new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"] - new_checkpoint["decoder.conv_out.weight"] = vae_state_dict["decoder.conv_out.weight"] - new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"] - new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict["decoder.norm_out.weight"] - new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict["decoder.norm_out.bias"] - - new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"] - new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"] - new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"] - new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"] - - # Retrieves the keys for the encoder down blocks only - num_down_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "encoder.down" in layer}) - down_blocks = { - layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] for layer_id in range(num_down_blocks) - } - - # Retrieves the keys for the decoder up blocks only - num_up_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "decoder.up" in layer}) - up_blocks = { - layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] for layer_id in range(num_up_blocks) - } - - for i in range(num_down_blocks): - resnets = [key for key in down_blocks[i] if f"down.{i}" in key and f"down.{i}.downsample" not in key] - - if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict: - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.weight"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.weight" - ) - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.bias"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.bias" - ) - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint(paths, 
new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - conv_attn_to_linear(new_checkpoint) - - for i in range(num_up_blocks): - block_id = num_up_blocks - 1 - i - resnets = [ - key for key in up_blocks[block_id] if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key - ] - - if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict: - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.weight"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.weight" - ] - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.bias"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.bias" - ] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - - mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config) - conv_attn_to_linear(new_checkpoint) - return new_checkpoint - - -CLAP_KEYS_TO_MODIFY_MAPPING = { - "text_branch": "text_model", - "attn": "attention.self", - "self.proj": "output.dense", - "attention.self_mask": "attn_mask", - "mlp.fc1": "intermediate.dense", - "mlp.fc2": "output.dense", - "norm1": "layernorm_before", - "norm2": "layernorm_after", - "bn0": "batch_norm", -} - -CLAP_KEYS_TO_IGNORE = ["text_transform"] - -CLAP_EXPECTED_MISSING_KEYS = ["text_model.embeddings.token_type_ids"] - - -def convert_open_clap_checkpoint(checkpoint): - """ - Takes a state dict and returns a converted CLAP checkpoint. - """ - # extract state dict for CLAP text embedding model, discarding the audio component - model_state_dict = {} - model_key = "cond_stage_model.model.text_" - keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(model_key): - model_state_dict[key.replace(model_key, "text_")] = checkpoint.get(key) - - new_checkpoint = {} - - sequential_layers_pattern = r".*sequential.(\d+).*" - text_projection_pattern = r".*_projection.(\d+).*" - - for key, value in model_state_dict.items(): - # check if key should be ignored in mapping - if key.split(".")[0] in CLAP_KEYS_TO_IGNORE: - continue - - # check if any key needs to be modified - for key_to_modify, new_key in CLAP_KEYS_TO_MODIFY_MAPPING.items(): - if key_to_modify in key: - key = key.replace(key_to_modify, new_key) - - if re.match(sequential_layers_pattern, key): - # replace sequential layers with list - sequential_layer = re.match(sequential_layers_pattern, key).group(1) - - key = key.replace(f"sequential.{sequential_layer}.", f"layers.{int(sequential_layer)//3}.linear.") - elif re.match(text_projection_pattern, key): - projecton_layer = int(re.match(text_projection_pattern, key).group(1)) - - # Because in CLAP they use `nn.Sequential`... 
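- # index 0 of that Sequential becomes linear1 and index 2 becomes linear2 in transformers;
- # the in-between index is the activation and carries no weights to copy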
- transformers_projection_layer = 1 if projecton_layer == 0 else 2 - - key = key.replace(f"_projection.{projecton_layer}.", f"_projection.linear{transformers_projection_layer}.") - - if "audio" and "qkv" in key: - # split qkv into query key and value - mixed_qkv = value - qkv_dim = mixed_qkv.size(0) // 3 - - query_layer = mixed_qkv[:qkv_dim] - key_layer = mixed_qkv[qkv_dim : qkv_dim * 2] - value_layer = mixed_qkv[qkv_dim * 2 :] - - new_checkpoint[key.replace("qkv", "query")] = query_layer - new_checkpoint[key.replace("qkv", "key")] = key_layer - new_checkpoint[key.replace("qkv", "value")] = value_layer - else: - new_checkpoint[key] = value - - return new_checkpoint - - -def create_transformers_vocoder_config(original_config): - """ - Creates a config for transformers SpeechT5HifiGan based on the config of the vocoder model. - """ - vocoder_params = original_config.model.params.vocoder_config.params - - config = { - "model_in_dim": vocoder_params.num_mels, - "sampling_rate": vocoder_params.sampling_rate, - "upsample_initial_channel": vocoder_params.upsample_initial_channel, - "upsample_rates": list(vocoder_params.upsample_rates), - "upsample_kernel_sizes": list(vocoder_params.upsample_kernel_sizes), - "resblock_kernel_sizes": list(vocoder_params.resblock_kernel_sizes), - "resblock_dilation_sizes": [ - list(resblock_dilation) for resblock_dilation in vocoder_params.resblock_dilation_sizes - ], - "normalize_before": False, - } - - return config - - -def convert_hifigan_checkpoint(checkpoint, config): - """ - Takes a state dict and config, and returns a converted HiFiGAN vocoder checkpoint. - """ - # extract state dict for vocoder - vocoder_state_dict = {} - vocoder_key = "first_stage_model.vocoder." - keys = list(checkpoint.keys()) - for key in keys: - if key.startswith(vocoder_key): - vocoder_state_dict[key.replace(vocoder_key, "")] = checkpoint.get(key) - - # fix upsampler keys, everything else is correct already - for i in range(len(config.upsample_rates)): - vocoder_state_dict[f"upsampler.{i}.weight"] = vocoder_state_dict.pop(f"ups.{i}.weight") - vocoder_state_dict[f"upsampler.{i}.bias"] = vocoder_state_dict.pop(f"ups.{i}.bias") - - if not config.normalize_before: - # if we don't set normalize_before then these variables are unused, so we set them to their initialised values - vocoder_state_dict["mean"] = torch.zeros(config.model_in_dim) - vocoder_state_dict["scale"] = torch.ones(config.model_in_dim) - - return vocoder_state_dict - - -# Adapted from https://huggingface.co/spaces/haoheliu/audioldm-text-to-audio-generation/blob/84a0384742a22bd80c44e903e241f0623e874f1d/audioldm/utils.py#L72-L73 -DEFAULT_CONFIG = { - "model": { - "params": { - "linear_start": 0.0015, - "linear_end": 0.0195, - "timesteps": 1000, - "channels": 8, - "scale_by_std": True, - "unet_config": { - "target": "audioldm.latent_diffusion.openaimodel.UNetModel", - "params": { - "extra_film_condition_dim": 512, - "extra_film_use_concat": True, - "in_channels": 8, - "out_channels": 8, - "model_channels": 128, - "attention_resolutions": [8, 4, 2], - "num_res_blocks": 2, - "channel_mult": [1, 2, 3, 5], - "num_head_channels": 32, - }, - }, - "first_stage_config": { - "target": "audioldm.variational_autoencoder.autoencoder.AutoencoderKL", - "params": { - "embed_dim": 8, - "ddconfig": { - "z_channels": 8, - "resolution": 256, - "in_channels": 1, - "out_ch": 1, - "ch": 128, - "ch_mult": [1, 2, 4], - "num_res_blocks": 2, - }, - }, - }, - "vocoder_config": { - "target": "audioldm.first_stage_model.vocoder", - "params": { - 
"upsample_rates": [5, 4, 2, 2, 2], - "upsample_kernel_sizes": [16, 16, 8, 4, 4], - "upsample_initial_channel": 1024, - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "num_mels": 64, - "sampling_rate": 16000, - }, - }, - }, - }, -} - - -def load_pipeline_from_original_audioldm_ckpt( - checkpoint_path: str, - original_config_file: str = None, - image_size: int = 512, - prediction_type: str = None, - extract_ema: bool = False, - scheduler_type: str = "ddim", - num_in_channels: int = None, - model_channels: int = None, - num_head_channels: int = None, - device: str = None, - from_safetensors: bool = False, -) -> AudioLDMPipeline: - """ - Load an AudioLDM pipeline object from a `.ckpt`/`.safetensors` file and (ideally) a `.yaml` config file. - - Although many of the arguments can be automatically inferred, some of these rely on brittle checks against the - global step count, which will likely fail for models that have undergone further fine-tuning. Therefore, it is - recommended that you override the default values and/or supply an `original_config_file` wherever possible. - - Args: - checkpoint_path (`str`): Path to `.ckpt` file. - original_config_file (`str`): - Path to `.yaml` config file corresponding to the original architecture. If `None`, will be automatically - set to the audioldm-s-full-v2 config. - image_size (`int`, *optional*, defaults to 512): - The image size that the model was trained on. - prediction_type (`str`, *optional*): - The prediction type that the model was trained on. If `None`, will be automatically - inferred by looking for a key in the config. For the default config, the prediction type is `'epsilon'`. - num_in_channels (`int`, *optional*, defaults to None): - The number of UNet input channels. If `None`, it will be automatically inferred from the config. - model_channels (`int`, *optional*, defaults to None): - The number of UNet model channels. If `None`, it will be automatically inferred from the config. Override - to 128 for the small checkpoints, 192 for the medium checkpoints and 256 for the large. - num_head_channels (`int`, *optional*, defaults to None): - The number of UNet head channels. If `None`, it will be automatically inferred from the config. Override - to 32 for the small and medium checkpoints, and 64 for the large. - scheduler_type (`str`, *optional*, defaults to 'pndm'): - Type of scheduler to use. Should be one of `["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", - "ddim"]`. - extract_ema (`bool`, *optional*, defaults to `False`): Only relevant for - checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights or not. Defaults to - `False`. Pass `True` to extract the EMA weights. EMA weights usually yield higher quality images for - inference. Non-EMA weights are usually better to continue fine-tuning. - device (`str`, *optional*, defaults to `None`): - The device to use. Pass `None` to determine automatically. - from_safetensors (`str`, *optional*, defaults to `False`): - If `checkpoint_path` is in `safetensors` format, load checkpoint with safetensors instead of PyTorch. - return: An AudioLDMPipeline object representing the passed-in `.ckpt`/`.safetensors` file. 
- """ - - if not is_omegaconf_available(): - raise ValueError(BACKENDS_MAPPING["omegaconf"][1]) - - from omegaconf import OmegaConf - - if from_safetensors: - if not is_safetensors_available(): - raise ValueError(BACKENDS_MAPPING["safetensors"][1]) - - from safetensors import safe_open - - checkpoint = {} - with safe_open(checkpoint_path, framework="pt", device="cpu") as f: - for key in f.keys(): - checkpoint[key] = f.get_tensor(key) - else: - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - checkpoint = torch.load(checkpoint_path, map_location=device) - else: - checkpoint = torch.load(checkpoint_path, map_location=device) - - if "state_dict" in checkpoint: - checkpoint = checkpoint["state_dict"] - - if original_config_file is None: - original_config = DEFAULT_CONFIG - original_config = OmegaConf.create(original_config) - else: - original_config = OmegaConf.load(original_config_file) - - if num_in_channels is not None: - original_config["model"]["params"]["unet_config"]["params"]["in_channels"] = num_in_channels - - if model_channels is not None: - original_config["model"]["params"]["unet_config"]["params"]["model_channels"] = model_channels - - if num_head_channels is not None: - original_config["model"]["params"]["unet_config"]["params"]["num_head_channels"] = num_head_channels - - if ( - "parameterization" in original_config["model"]["params"] - and original_config["model"]["params"]["parameterization"] == "v" - ): - if prediction_type is None: - prediction_type = "v_prediction" - else: - if prediction_type is None: - prediction_type = "epsilon" - - if image_size is None: - image_size = 512 - - num_train_timesteps = original_config.model.params.timesteps - beta_start = original_config.model.params.linear_start - beta_end = original_config.model.params.linear_end - - scheduler = DDIMScheduler( - beta_end=beta_end, - beta_schedule="scaled_linear", - beta_start=beta_start, - num_train_timesteps=num_train_timesteps, - steps_offset=1, - clip_sample=False, - set_alpha_to_one=False, - prediction_type=prediction_type, - ) - # make sure scheduler works correctly with DDIM - scheduler.register_to_config(clip_sample=False) - - if scheduler_type == "pndm": - config = dict(scheduler.config) - config["skip_prk_steps"] = True - scheduler = PNDMScheduler.from_config(config) - elif scheduler_type == "lms": - scheduler = LMSDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "heun": - scheduler = HeunDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "euler": - scheduler = EulerDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "euler-ancestral": - scheduler = EulerAncestralDiscreteScheduler.from_config(scheduler.config) - elif scheduler_type == "dpm": - scheduler = DPMSolverMultistepScheduler.from_config(scheduler.config) - elif scheduler_type == "ddim": - scheduler = scheduler - else: - raise ValueError(f"Scheduler of type {scheduler_type} doesn't exist!") - - # Convert the UNet2DModel - unet_config = create_unet_diffusers_config(original_config, image_size=image_size) - unet = UNet2DConditionModel(**unet_config) - - converted_unet_checkpoint = convert_ldm_unet_checkpoint( - checkpoint, unet_config, path=checkpoint_path, extract_ema=extract_ema - ) - - unet.load_state_dict(converted_unet_checkpoint) - - # Convert the VAE model - vae_config = create_vae_diffusers_config(original_config, checkpoint=checkpoint, image_size=image_size) - converted_vae_checkpoint = convert_ldm_vae_checkpoint(checkpoint, vae_config) 
- - vae = AutoencoderKL(**vae_config) - vae.load_state_dict(converted_vae_checkpoint) - - # Convert the text model - # AudioLDM uses the same configuration and tokenizer as the original CLAP model - config = ClapTextConfig.from_pretrained("laion/clap-htsat-unfused") - tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused") - - converted_text_model = convert_open_clap_checkpoint(checkpoint) - text_model = ClapTextModelWithProjection(config) - - missing_keys, unexpected_keys = text_model.load_state_dict(converted_text_model, strict=False) - # we expect not to have token_type_ids in our original state dict so let's ignore them - missing_keys = list(set(missing_keys) - set(CLAP_EXPECTED_MISSING_KEYS)) - - if len(unexpected_keys) > 0: - raise ValueError(f"Unexpected keys when loading CLAP model: {unexpected_keys}") - - if len(missing_keys) > 0: - raise ValueError(f"Missing keys when loading CLAP model: {missing_keys}") - - # Convert the vocoder model - vocoder_config = create_transformers_vocoder_config(original_config) - vocoder_config = SpeechT5HifiGanConfig(**vocoder_config) - converted_vocoder_checkpoint = convert_hifigan_checkpoint(checkpoint, vocoder_config) - - vocoder = SpeechT5HifiGan(vocoder_config) - vocoder.load_state_dict(converted_vocoder_checkpoint) - - # Instantiate the diffusers pipeline - pipe = AudioLDMPipeline( - vae=vae, - text_encoder=text_model, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - vocoder=vocoder, - ) - - return pipe - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." - ) - parser.add_argument( - "--original_config_file", - default=None, - type=str, - help="The YAML config file corresponding to the original architecture.", - ) - parser.add_argument( - "--num_in_channels", - default=None, - type=int, - help="The number of input channels. If `None` number of input channels will be automatically inferred.", - ) - parser.add_argument( - "--model_channels", - default=None, - type=int, - help="The number of UNet model channels. If `None`, it will be automatically inferred from the config. Override" - " to 128 for the small checkpoints, 192 for the medium checkpoints and 256 for the large.", - ) - parser.add_argument( - "--num_head_channels", - default=None, - type=int, - help="The number of UNet head channels. If `None`, it will be automatically inferred from the config. Override" - " to 32 for the small and medium checkpoints, and 64 for the large.", - ) - parser.add_argument( - "--scheduler_type", - default="ddim", - type=str, - help="Type of scheduler to use. Should be one of ['pndm', 'lms', 'ddim', 'euler', 'euler-ancestral', 'dpm']", - ) - parser.add_argument( - "--image_size", - default=None, - type=int, - help=("The image size that the model was trained on."), - ) - parser.add_argument( - "--prediction_type", - default=None, - type=str, - help=("The prediction type that the model was trained on."), - ) - parser.add_argument( - "--extract_ema", - action="store_true", - help=( - "Only relevant for checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights" - " or not. Defaults to `False`. Add `--extract_ema` to extract the EMA weights. EMA weights usually yield" - " higher quality images for inference. Non-EMA weights are usually better to continue fine-tuning." 
- ), - ) - parser.add_argument( - "--from_safetensors", - action="store_true", - help="If `--checkpoint_path` is in `safetensors` format, load checkpoint with safetensors instead of PyTorch.", - ) - parser.add_argument( - "--to_safetensors", - action="store_true", - help="Whether to store pipeline in safetensors format or not.", - ) - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - parser.add_argument("--device", type=str, help="Device to use (e.g. cpu, cuda:0, cuda:1, etc.)") - args = parser.parse_args() - - pipe = load_pipeline_from_original_audioldm_ckpt( - checkpoint_path=args.checkpoint_path, - original_config_file=args.original_config_file, - image_size=args.image_size, - prediction_type=args.prediction_type, - extract_ema=args.extract_ema, - scheduler_type=args.scheduler_type, - num_in_channels=args.num_in_channels, - model_channels=args.model_channels, - num_head_channels=args.num_head_channels, - from_safetensors=args.from_safetensors, - device=args.device, - ) - pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/transformer_2d.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/transformer_2d.py deleted file mode 100644 index 998535c58a730cafbef53af02d127392d05acdf2..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/transformer_2d.py +++ /dev/null @@ -1,342 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Any, Dict, Optional - -import torch -import torch.nn.functional as F -from torch import nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..models.embeddings import ImagePositionalEmbeddings -from ..utils import BaseOutput, deprecate -from .attention import BasicTransformerBlock -from .embeddings import PatchEmbed -from .lora import LoRACompatibleConv, LoRACompatibleLinear -from .modeling_utils import ModelMixin - - -@dataclass -class Transformer2DModelOutput(BaseOutput): - """ - The output of [`Transformer2DModel`]. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete): - The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability - distributions for the unnoised latent pixels. - """ - - sample: torch.FloatTensor - - -class Transformer2DModel(ModelMixin, ConfigMixin): - """ - A 2D Transformer model for image-like data. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. 
- in_channels (`int`, *optional*): - The number of channels in the input and output (specify if the input is **continuous**). - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use. - sample_size (`int`, *optional*): The width of the latent images (specify if the input is **discrete**). - This is fixed during training since it is used to learn a number of position embeddings. - num_vector_embeds (`int`, *optional*): - The number of classes of the vector embeddings of the latent pixels (specify if the input is **discrete**). - Includes the class for the masked latent pixel. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to use in feed-forward. - num_embeds_ada_norm ( `int`, *optional*): - The number of diffusion steps used during training. Pass if at least one of the norm_layers is - `AdaLayerNorm`. This is fixed during training since it is used to learn a number of embeddings that are - added to the hidden states. - - During inference, you can denoise for up to but not more steps than `num_embeds_ada_norm`. - attention_bias (`bool`, *optional*): - Configure if the `TransformerBlocks` attention should contain a bias parameter. - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - out_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - num_vector_embeds: Optional[int] = None, - patch_size: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - use_linear_projection: bool = False, - only_cross_attention: bool = False, - upcast_attention: bool = False, - norm_type: str = "layer_norm", - norm_elementwise_affine: bool = True, - ): - super().__init__() - self.use_linear_projection = use_linear_projection - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - - # 1. Transformer2DModel can process both standard continuous images of shape `(batch_size, num_channels, width, height)` as well as quantized image embeddings of shape `(batch_size, num_image_vectors)` - # Define whether input is continuous or discrete depending on configuration - self.is_input_continuous = (in_channels is not None) and (patch_size is None) - self.is_input_vectorized = num_vector_embeds is not None - self.is_input_patches = in_channels is not None and patch_size is not None - - if norm_type == "layer_norm" and num_embeds_ada_norm is not None: - deprecation_message = ( - f"The configuration file of this model: {self.__class__} is outdated. `norm_type` is either not set or" - " incorrectly set to `'layer_norm'`.Make sure to set `norm_type` to `'ada_norm'` in the config." - " Please make sure to update the config accordingly as leaving `norm_type` might led to incorrect" - " results in future versions. 
If you have downloaded this checkpoint from the Hugging Face Hub, it" - " would be very nice if you could open a Pull request for the `transformer/config.json` file" - ) - deprecate("norm_type!=num_embeds_ada_norm", "1.0.0", deprecation_message, standard_warn=False) - norm_type = "ada_norm" - - if self.is_input_continuous and self.is_input_vectorized: - raise ValueError( - f"Cannot define both `in_channels`: {in_channels} and `num_vector_embeds`: {num_vector_embeds}. Make" - " sure that either `in_channels` or `num_vector_embeds` is None." - ) - elif self.is_input_vectorized and self.is_input_patches: - raise ValueError( - f"Cannot define both `num_vector_embeds`: {num_vector_embeds} and `patch_size`: {patch_size}. Make" - " sure that either `num_vector_embeds` or `num_patches` is None." - ) - elif not self.is_input_continuous and not self.is_input_vectorized and not self.is_input_patches: - raise ValueError( - f"Has to define `in_channels`: {in_channels}, `num_vector_embeds`: {num_vector_embeds}, or patch_size:" - f" {patch_size}. Make sure that `in_channels`, `num_vector_embeds` or `num_patches` is not None." - ) - - # 2. Define input layers - if self.is_input_continuous: - self.in_channels = in_channels - - self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True) - if use_linear_projection: - self.proj_in = LoRACompatibleLinear(in_channels, inner_dim) - else: - self.proj_in = LoRACompatibleConv(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - assert sample_size is not None, "Transformer2DModel over discrete input must provide sample_size" - assert num_vector_embeds is not None, "Transformer2DModel over discrete input must provide num_embed" - - self.height = sample_size - self.width = sample_size - self.num_vector_embeds = num_vector_embeds - self.num_latent_pixels = self.height * self.width - - self.latent_image_embedding = ImagePositionalEmbeddings( - num_embed=num_vector_embeds, embed_dim=inner_dim, height=self.height, width=self.width - ) - elif self.is_input_patches: - assert sample_size is not None, "Transformer2DModel over patched input must provide sample_size" - - self.height = sample_size - self.width = sample_size - - self.patch_size = patch_size - self.pos_embed = PatchEmbed( - height=sample_size, - width=sample_size, - patch_size=patch_size, - in_channels=in_channels, - embed_dim=inner_dim, - ) - - # 3. Define transformers blocks - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - attention_bias=attention_bias, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - norm_type=norm_type, - norm_elementwise_affine=norm_elementwise_affine, - ) - for d in range(num_layers) - ] - ) - - # 4. 
Define output layers - self.out_channels = in_channels if out_channels is None else out_channels - if self.is_input_continuous: - # TODO: should use out_channels for continuous projections - if use_linear_projection: - self.proj_out = LoRACompatibleLinear(inner_dim, in_channels) - else: - self.proj_out = LoRACompatibleConv(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - self.norm_out = nn.LayerNorm(inner_dim) - self.out = nn.Linear(inner_dim, self.num_vector_embeds - 1) - elif self.is_input_patches: - self.norm_out = nn.LayerNorm(inner_dim, elementwise_affine=False, eps=1e-6) - self.proj_out_1 = nn.Linear(inner_dim, 2 * inner_dim) - self.proj_out_2 = nn.Linear(inner_dim, patch_size * patch_size * self.out_channels) - - def forward( - self, - hidden_states: torch.Tensor, - encoder_hidden_states: Optional[torch.Tensor] = None, - timestep: Optional[torch.LongTensor] = None, - class_labels: Optional[torch.LongTensor] = None, - cross_attention_kwargs: Dict[str, Any] = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - return_dict: bool = True, - ): - """ - The [`Transformer2DModel`] forward method. - - Args: - hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous): - Input `hidden_states`. - encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.LongTensor`, *optional*): - Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`. - class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*): - Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in - `AdaLayerZeroNorm`. - encoder_attention_mask ( `torch.Tensor`, *optional*): - Cross-attention mask applied to `encoder_hidden_states`. Two formats supported: - - * Mask `(batch, sequence_length)` True = keep, False = discard. - * Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard. - - If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format - above. This bias will be added to the cross-attention scores. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain - tuple. - - Returns: - If `return_dict` is True, an [`~models.transformer_2d.Transformer2DModelOutput`] is returned, otherwise a - `tuple` where the first element is the sample tensor. - """ - # ensure attention_mask is a bias, and give it a singleton query_tokens dimension. - # we may have done this conversion already, e.g. if we came here via UNet2DConditionModel#forward. - # we can tell by counting dims; if ndim == 2: it's a mask rather than a bias. - # expects mask of shape: - # [batch, key_tokens] - # adds singleton query_tokens dimension: - # [batch, 1, key_tokens] - # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes: - # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn) - # [batch * heads, query_tokens, key_tokens] (e.g. 
xformers or classic attn) - if attention_mask is not None and attention_mask.ndim == 2: - # assume that mask is expressed as: - # (1 = keep, 0 = discard) - # convert mask into a bias that can be added to attention scores: - # (keep = +0, discard = -10000.0) - attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # convert encoder_attention_mask to a bias the same way we do for attention_mask - if encoder_attention_mask is not None and encoder_attention_mask.ndim == 2: - encoder_attention_mask = (1 - encoder_attention_mask.to(hidden_states.dtype)) * -10000.0 - encoder_attention_mask = encoder_attention_mask.unsqueeze(1) - - # 1. Input - if self.is_input_continuous: - batch, _, height, width = hidden_states.shape - residual = hidden_states - - hidden_states = self.norm(hidden_states) - if not self.use_linear_projection: - hidden_states = self.proj_in(hidden_states) - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim) - else: - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim) - hidden_states = self.proj_in(hidden_states) - elif self.is_input_vectorized: - hidden_states = self.latent_image_embedding(hidden_states) - elif self.is_input_patches: - hidden_states = self.pos_embed(hidden_states) - - # 2. Blocks - for block in self.transformer_blocks: - hidden_states = block( - hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - class_labels=class_labels, - ) - - # 3. Output - if self.is_input_continuous: - if not self.use_linear_projection: - hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous() - hidden_states = self.proj_out(hidden_states) - else: - hidden_states = self.proj_out(hidden_states) - hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous() - - output = hidden_states + residual - elif self.is_input_vectorized: - hidden_states = self.norm_out(hidden_states) - logits = self.out(hidden_states) - # (batch, self.num_vector_embeds - 1, self.num_latent_pixels) - logits = logits.permute(0, 2, 1) - - # log(p(x_0)) - output = F.log_softmax(logits.double(), dim=1).float() - elif self.is_input_patches: - # TODO: cleanup! 
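            # The patched-input head below follows an adaLN-style recipe (as used in DiT): a (shift, scale) pair is
            # predicted from the combined timestep/class embedding, applied to the final LayerNorm output, projected
            # to patch_size * patch_size * out_channels, and the result is un-patchified back into an image grid.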
- conditioning = self.transformer_blocks[0].norm1.emb( - timestep, class_labels, hidden_dtype=hidden_states.dtype - ) - shift, scale = self.proj_out_1(F.silu(conditioning)).chunk(2, dim=1) - hidden_states = self.norm_out(hidden_states) * (1 + scale[:, None]) + shift[:, None] - hidden_states = self.proj_out_2(hidden_states) - - # unpatchify - height = width = int(hidden_states.shape[1] ** 0.5) - hidden_states = hidden_states.reshape( - shape=(-1, height, width, self.patch_size, self.patch_size, self.out_channels) - ) - hidden_states = torch.einsum("nhwpqc->nchpwq", hidden_states) - output = hidden_states.reshape( - shape=(-1, self.out_channels, height * self.patch_size, width * self.patch_size) - ) - - if not return_dict: - return (output,) - - return Transformer2DModelOutput(sample=output) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fsaf/fsaf_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fsaf/fsaf_r50_fpn_1x_coco.py deleted file mode 100644 index 67f3ec1c4c16fb9bd041dbb3a24d269a83145f26..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fsaf/fsaf_r50_fpn_1x_coco.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -# model settings -model = dict( - type='FSAF', - bbox_head=dict( - type='FSAFHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - reg_decoded_bbox=True, - # Only anchor-free branch is implemented. The anchor generator only - # generates 1 anchor at each feature point, as a substitute of the - # grid of features. - anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=1, - scales_per_octave=1, - ratios=[1.0], - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict(_delete_=True, type='TBLRBBoxCoder', normalizer=4.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0, - reduction='none'), - loss_bbox=dict( - _delete_=True, - type='IoULoss', - eps=1e-6, - loss_weight=1.0, - reduction='none')), - # training and testing settings - train_cfg=dict( - assigner=dict( - _delete_=True, - type='CenterRegionAssigner', - pos_scale=0.2, - neg_scale=0.2, - min_pos_iof=0.01), - allowed_border=-1, - pos_weight=-1, - debug=False)) -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=10, norm_type=2)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+head_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+head_1x_coco.py deleted file mode 100644 index b24c8db768423de12d1e8582bb26dd71218f52ee..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+head_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py' -model = dict(bbox_head=dict(transform_method='minmax', use_grid_points=True)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_caffe_fpn_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_caffe_fpn_mstrain_2x_coco.py deleted file mode 100644 index eea9690eb159fe03865825bb9f9ca5fd6ff99d70..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r50_caffe_fpn_mstrain_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ 
= './retinanet_r50_caffe_fpn_mstrain_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 23]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/cascade_roi_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/cascade_roi_head.py deleted file mode 100644 index 45b6f36a386cd37c50cc43666fcc516f2e14d868..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/cascade_roi_head.py +++ /dev/null @@ -1,507 +0,0 @@ -import torch -import torch.nn as nn - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, build_assigner, - build_sampler, merge_aug_bboxes, merge_aug_masks, - multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class CascadeRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1712.00726 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None): - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert shared_head is None, \ - 'Shared head is not supported in Cascade RCNN anymore' - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - super(CascadeRoIHead, self).__init__( - bbox_roi_extractor=bbox_roi_extractor, - bbox_head=bbox_head, - mask_roi_extractor=mask_roi_extractor, - mask_head=mask_head, - shared_head=shared_head, - train_cfg=train_cfg, - test_cfg=test_cfg) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize box head and box roi extractor. - - Args: - bbox_roi_extractor (dict): Config of box roi extractor. - bbox_head (dict): Config of box in box head. - """ - self.bbox_roi_extractor = nn.ModuleList() - self.bbox_head = nn.ModuleList() - if not isinstance(bbox_roi_extractor, list): - bbox_roi_extractor = [ - bbox_roi_extractor for _ in range(self.num_stages) - ] - if not isinstance(bbox_head, list): - bbox_head = [bbox_head for _ in range(self.num_stages)] - assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages - for roi_extractor, head in zip(bbox_roi_extractor, bbox_head): - self.bbox_roi_extractor.append(build_roi_extractor(roi_extractor)) - self.bbox_head.append(build_head(head)) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize mask head and mask roi extractor. - - Args: - mask_roi_extractor (dict): Config of mask roi extractor. - mask_head (dict): Config of mask in mask head. 
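        Note:
            Each argument may be a single config dict (shared by every cascade stage) or a list with one config
            per stage; a single dict is expanded to ``num_stages`` copies below. If ``mask_roi_extractor`` is
            ``None``, the bbox RoI extractors are reused for the mask branch.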
- """ - self.mask_head = nn.ModuleList() - if not isinstance(mask_head, list): - mask_head = [mask_head for _ in range(self.num_stages)] - assert len(mask_head) == self.num_stages - for head in mask_head: - self.mask_head.append(build_head(head)) - if mask_roi_extractor is not None: - self.share_roi_extractor = False - self.mask_roi_extractor = nn.ModuleList() - if not isinstance(mask_roi_extractor, list): - mask_roi_extractor = [ - mask_roi_extractor for _ in range(self.num_stages) - ] - assert len(mask_roi_extractor) == self.num_stages - for roi_extractor in mask_roi_extractor: - self.mask_roi_extractor.append( - build_roi_extractor(roi_extractor)) - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - - def init_assigner_sampler(self): - """Initialize assigner and sampler for each stage.""" - self.bbox_assigner = [] - self.bbox_sampler = [] - if self.train_cfg is not None: - for idx, rcnn_train_cfg in enumerate(self.train_cfg): - self.bbox_assigner.append( - build_assigner(rcnn_train_cfg.assigner)) - self.current_stage = idx - self.bbox_sampler.append( - build_sampler(rcnn_train_cfg.sampler, context=self)) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if self.with_shared_head: - self.shared_head.init_weights(pretrained=pretrained) - for i in range(self.num_stages): - if self.with_bbox: - self.bbox_roi_extractor[i].init_weights() - self.bbox_head[i].init_weights() - if self.with_mask: - if not self.share_roi_extractor: - self.mask_roi_extractor[i].init_weights() - self.mask_head[i].init_weights() - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward(self, stage, x, rois): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, stage, x, sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(stage, x, rois) - bbox_targets = self.bbox_head[stage].get_targets( - sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg) - loss_bbox = self.bbox_head[stage].loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets) - return bbox_results - - def _mask_forward(self, stage, x, rois): - """Mask head forward function used in both training and testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = 
mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_pred = mask_head(mask_feats) - - mask_results = dict(mask_pred=mask_pred) - return mask_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - bbox_feats=None): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(stage, x, pos_rois) - - mask_targets = self.mask_head[stage].get_targets( - sampling_results, gt_masks, rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask) - return mask_results - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - losses = dict() - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - if self.with_bbox or self.with_mask: - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self._bbox_forward_train(i, x, sampling_results, - gt_bboxes, gt_labels, - rcnn_train_cfg) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - bbox_results['bbox_feats']) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes - if i < self.num_stages - 1: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - # bbox_targets is a tuple - roi_labels = bbox_results['bbox_targets'][0] - with torch.no_grad(): - roi_labels = torch.where( - roi_labels == self.bbox_head[i].num_classes, - bbox_results['cls_score'][:, :-1].argmax(1), - roi_labels) - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
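        # Inference over the cascade proceeds in three steps: (1) every bbox stage scores the shared proposals,
        # refining the RoIs between stages; (2) the per-stage classification scores are averaged and decoded/NMS'd
        # with the last stage's bbox head; (3) if a mask branch exists, every mask stage is run on the detected
        # boxes and the per-stage mask predictions are merged before the final segmentation is produced.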
- num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple( - len(proposals) for proposals in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - else: - bbox_pred = self.bbox_head[i].bbox_pred_split( - bbox_pred, num_proposals_per_img) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - bbox_label = [s[:, :-1].argmax(dim=1) for s in cls_score] - rois = torch.cat([ - self.bbox_head[i].regress_by_class(rois[j], bbox_label[j], - bbox_pred[j], - img_metas[j]) - for j in range(num_imgs) - ]) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - - if torch.onnx.is_in_onnx_export(): - return det_bboxes, det_labels - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_results - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_rois = bbox2roi(_bboxes) - num_mask_rois_per_img = tuple( - _bbox.size(0) for _bbox in _bboxes) - aug_masks = [] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_mask_rois_per_img, 0) - aug_masks.append( - [m.sigmoid().cpu().numpy() for m in mask_pred]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_masks = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - 
ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, features, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(features, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'][:, :-1].argmax( - dim=1) - rois = self.bbox_head[i].regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[[] - for _ in range(self.mask_head[-1].num_classes)] - ] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta in zip(features, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 1eeff0b030cf1db8c6ec9740fa65db44b2026d58..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ann_r50-d8_512x1024_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', 
backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 528110dc73c15008869a9ad9851ef487f0c952c7..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_r50-d8_512x1024_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AngoHF/ANGO-Leaderboard/assets/path.py b/spaces/AngoHF/ANGO-Leaderboard/assets/path.py deleted file mode 100644 index fb3acc6d017e053bce0db379410e73fb8e86a663..0000000000000000000000000000000000000000 --- a/spaces/AngoHF/ANGO-Leaderboard/assets/path.py +++ /dev/null @@ -1,4 +0,0 @@ -SEASON = { - "latest": "202309", - "2023-09": "202309" -} \ No newline at end of file diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/script.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/script.py deleted file mode 100644 index b44fc53590567c10603a57a7e4711e2c205259aa..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/script.py +++ /dev/null @@ -1,339 +0,0 @@ -import json -import os -import traceback -from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer -from threading import Thread - -import extensions.openai.completions as OAIcompletions -import extensions.openai.edits as OAIedits -import extensions.openai.embeddings as OAIembeddings -import extensions.openai.images as OAIimages -import extensions.openai.models as OAImodels -import extensions.openai.moderations as OAImoderations -from extensions.openai.defaults import clamp, default, get_default_req_params -from extensions.openai.errors import ( - InvalidRequestError, - OpenAIError, - ServiceUnavailableError -) -from extensions.openai.tokens import token_count, token_decode, token_encode -from extensions.openai.utils import debug_msg -from modules import shared - -import cgi -import speech_recognition as sr -from pydub import AudioSegment - -params = { - # default params - 'port': 5001, - 'embedding_device': 'cpu', - 'embedding_model': 'all-mpnet-base-v2', - - # optional params - 'sd_webui_url': '', - 'debug': 0 -} - -class Handler(BaseHTTPRequestHandler): - def send_access_control_headers(self): - self.send_header("Access-Control-Allow-Origin", "*") - self.send_header("Access-Control-Allow-Credentials", "true") - self.send_header( - "Access-Control-Allow-Methods", - "GET,HEAD,OPTIONS,POST,PUT" - ) - self.send_header( - "Access-Control-Allow-Headers", - "Origin, Accept, X-Requested-With, Content-Type, " - "Access-Control-Request-Method, Access-Control-Request-Headers, " - "Authorization" - ) - - def do_OPTIONS(self): - self.send_response(200) - self.send_access_control_headers() - self.send_header('Content-Type', 'application/json') - self.end_headers() - self.wfile.write("OK".encode('utf-8')) - - def start_sse(self): - self.send_response(200) - self.send_access_control_headers() - self.send_header('Content-Type', 'text/event-stream') - self.send_header('Cache-Control', 'no-cache') - # self.send_header('Connection', 'keep-alive') - self.end_headers() - - def send_sse(self, chunk: dict): - response = 'data: ' + json.dumps(chunk) + '\r\n\r\n' - debug_msg(response[:-4]) - 
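        # Write the framed event to the client. Server-sent events are terminated by a blank line, which is why
        # the payload above ends in '\r\n\r\n'.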
self.wfile.write(response.encode('utf-8')) - - def end_sse(self): - response = 'data: [DONE]\r\n\r\n' - debug_msg(response[:-4]) - self.wfile.write(response.encode('utf-8')) - - def return_json(self, ret: dict, code: int = 200, no_debug=False): - self.send_response(code) - self.send_access_control_headers() - self.send_header('Content-Type', 'application/json') - - response = json.dumps(ret) - r_utf8 = response.encode('utf-8') - - self.send_header('Content-Length', str(len(r_utf8))) - self.end_headers() - - self.wfile.write(r_utf8) - if not no_debug: - debug_msg(r_utf8) - - def openai_error(self, message, code=500, error_type='APIError', param='', internal_message=''): - - error_resp = { - 'error': { - 'message': message, - 'code': code, - 'type': error_type, - 'param': param, - } - } - if internal_message: - print(error_type, message) - print(internal_message) - # error_resp['internal_message'] = internal_message - - self.return_json(error_resp, code) - - def openai_error_handler(func): - def wrapper(self): - try: - func(self) - except InvalidRequestError as e: - self.openai_error(e.message, e.code, e.__class__.__name__, e.param, internal_message=e.internal_message) - except OpenAIError as e: - self.openai_error(e.message, e.code, e.__class__.__name__, internal_message=e.internal_message) - except Exception as e: - self.openai_error(repr(e), 500, 'OpenAIError', internal_message=traceback.format_exc()) - - return wrapper - - @openai_error_handler - def do_GET(self): - debug_msg(self.requestline) - debug_msg(self.headers) - - if self.path.startswith('/v1/engines') or self.path.startswith('/v1/models'): - is_legacy = 'engines' in self.path - is_list = self.path in ['/v1/engines', '/v1/models'] - if is_legacy and not is_list: - model_name = self.path[self.path.find('/v1/engines/') + len('/v1/engines/'):] - resp = OAImodels.load_model(model_name) - elif is_list: - resp = OAImodels.list_models(is_legacy) - else: - model_name = self.path[len('/v1/models/'):] - resp = OAImodels.model_info(model_name) - - self.return_json(resp) - - elif '/billing/usage' in self.path: - # Ex. 
/v1/dashboard/billing/usage?start_date=2023-05-01&end_date=2023-05-31 - self.return_json({"total_usage": 0}, no_debug=True) - - else: - self.send_error(404) - - @openai_error_handler - def do_POST(self): - - if '/v1/audio/transcriptions' in self.path: - r = sr.Recognizer() - - # Parse the form data - form = cgi.FieldStorage( - fp=self.rfile, - headers=self.headers, - environ={'REQUEST_METHOD': 'POST', 'CONTENT_TYPE': self.headers['Content-Type']} - ) - - audio_file = form['file'].file - audio_data = AudioSegment.from_file(audio_file) - - # Convert AudioSegment to raw data - raw_data = audio_data.raw_data - - # Create AudioData object - audio_data = sr.AudioData(raw_data, audio_data.frame_rate, audio_data.sample_width) - whipser_language = form.getvalue('language', None) - whipser_model = form.getvalue('model', 'tiny') # Use the model from the form data if it exists, otherwise default to tiny - - transcription = {"text": ""} - - try: - transcription["text"] = r.recognize_whisper(audio_data, language=whipser_language, model=whipser_model) - except sr.UnknownValueError: - print("Whisper could not understand audio") - transcription["text"] = "Whisper could not understand audio UnknownValueError" - except sr.RequestError as e: - print("Could not request results from Whisper", e) - transcription["text"] = "Whisper could not understand audio RequestError" - - self.return_json(transcription, no_debug=True) - return - - debug_msg(self.requestline) - debug_msg(self.headers) - - content_length = self.headers.get('Content-Length') - transfer_encoding = self.headers.get('Transfer-Encoding') - - if content_length: - body = json.loads(self.rfile.read(int(content_length)).decode('utf-8')) - elif transfer_encoding == 'chunked': - chunks = [] - while True: - chunk_size = int(self.rfile.readline(), 16) # Read the chunk size - if chunk_size == 0: - break # End of chunks - chunks.append(self.rfile.read(chunk_size)) - self.rfile.readline() # Consume the trailing newline after each chunk - body = json.loads(b''.join(chunks).decode('utf-8')) - else: - self.send_response(400, "Bad Request: Either Content-Length or Transfer-Encoding header expected.") - self.end_headers() - return - - debug_msg(body) - - if '/completions' in self.path or '/generate' in self.path: - - if not shared.model: - raise ServiceUnavailableError("No model loaded.") - - is_legacy = '/generate' in self.path - is_streaming = body.get('stream', False) - - if is_streaming: - self.start_sse() - - response = [] - if 'chat' in self.path: - response = OAIcompletions.stream_chat_completions(body, is_legacy=is_legacy) - else: - response = OAIcompletions.stream_completions(body, is_legacy=is_legacy) - - for resp in response: - self.send_sse(resp) - - self.end_sse() - - else: - response = '' - if 'chat' in self.path: - response = OAIcompletions.chat_completions(body, is_legacy=is_legacy) - else: - response = OAIcompletions.completions(body, is_legacy=is_legacy) - - self.return_json(response) - - elif '/edits' in self.path: - # deprecated - - if not shared.model: - raise ServiceUnavailableError("No model loaded.") - - req_params = get_default_req_params() - - instruction = body['instruction'] - input = body.get('input', '') - temperature = clamp(default(body, 'temperature', req_params['temperature']), 0.001, 1.999) # fixup absolute 0.0 - top_p = clamp(default(body, 'top_p', req_params['top_p']), 0.001, 1.0) - - response = OAIedits.edits(instruction, input, temperature, top_p) - - self.return_json(response) - - elif '/images/generations' in self.path: - if 
not os.environ.get('SD_WEBUI_URL', params.get('sd_webui_url', '')): - raise ServiceUnavailableError("Stable Diffusion not available. SD_WEBUI_URL not set.") - - prompt = body['prompt'] - size = default(body, 'size', '1024x1024') - response_format = default(body, 'response_format', 'url') # or b64_json - n = default(body, 'n', 1) # ignore the batch limits of max 10 - - response = OAIimages.generations(prompt=prompt, size=size, response_format=response_format, n=n) - - self.return_json(response, no_debug=True) - - elif '/embeddings' in self.path: - encoding_format = body.get('encoding_format', '') - - input = body.get('input', body.get('text', '')) - if not input: - raise InvalidRequestError("Missing required argument input", params='input') - - if type(input) is str: - input = [input] - - response = OAIembeddings.embeddings(input, encoding_format) - - self.return_json(response, no_debug=True) - - elif '/moderations' in self.path: - input = body['input'] - if not input: - raise InvalidRequestError("Missing required argument input", params='input') - - response = OAImoderations.moderations(input) - - self.return_json(response, no_debug=True) - - elif self.path == '/api/v1/token-count': - # NOT STANDARD. lifted from the api extension, but it's still very useful to calculate tokenized length client side. - response = token_count(body['prompt']) - - self.return_json(response, no_debug=True) - - elif self.path == '/api/v1/token/encode': - # NOT STANDARD. needed to support logit_bias, logprobs and token arrays for native models - encoding_format = body.get('encoding_format', '') - - response = token_encode(body['input'], encoding_format) - - self.return_json(response, no_debug=True) - - elif self.path == '/api/v1/token/decode': - # NOT STANDARD. needed to support logit_bias, logprobs and token arrays for native models - encoding_format = body.get('encoding_format', '') - - response = token_decode(body['input'], encoding_format) - - self.return_json(response, no_debug=True) - - else: - self.send_error(404) - - -def run_server(): - port = int(os.environ.get('OPENEDAI_PORT', params.get('port', 5001))) - server_addr = ('0.0.0.0' if shared.args.listen else '127.0.0.1', port) - server = ThreadingHTTPServer(server_addr, Handler) - if shared.args.share: - try: - from flask_cloudflared import _run_cloudflared - public_url = _run_cloudflared(port, port + 1) - print(f'OpenAI compatible API ready at: OPENAI_API_BASE={public_url}/v1') - except ImportError: - print('You should install flask_cloudflared manually') - else: - print(f'OpenAI compatible API ready at: OPENAI_API_BASE=http://{server_addr[0]}:{server_addr[1]}/v1') - - server.serve_forever() - - -def setup(): - Thread(target=run_server, daemon=True).start() diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/builder.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/builder.py deleted file mode 100644 index 0798b14cd8b39fc58d8f2a4930f1e079b5bf8b55..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/builder.py +++ /dev/null @@ -1,169 +0,0 @@ -import copy -import platform -import random -from functools import partial - -import numpy as np -from annotator.uniformer.mmcv.parallel import collate -from annotator.uniformer.mmcv.runner import get_dist_info -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg -from annotator.uniformer.mmcv.utils.parrots_wrapper import DataLoader, PoolDataLoader -from 
torch.utils.data import DistributedSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - hard_limit = rlimit[1] - soft_limit = min(4096, hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - """Build :obj:`ConcatDataset by.""" - from .dataset_wrappers import ConcatDataset - img_dir = cfg['img_dir'] - ann_dir = cfg.get('ann_dir', None) - split = cfg.get('split', None) - num_img_dir = len(img_dir) if isinstance(img_dir, (list, tuple)) else 1 - if ann_dir is not None: - num_ann_dir = len(ann_dir) if isinstance(ann_dir, (list, tuple)) else 1 - else: - num_ann_dir = 0 - if split is not None: - num_split = len(split) if isinstance(split, (list, tuple)) else 1 - else: - num_split = 0 - if num_img_dir > 1: - assert num_img_dir == num_ann_dir or num_ann_dir == 0 - assert num_img_dir == num_split or num_split == 0 - else: - assert num_split == num_ann_dir or num_ann_dir <= 1 - num_dset = max(num_split, num_img_dir) - - datasets = [] - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - if isinstance(img_dir, (list, tuple)): - data_cfg['img_dir'] = img_dir[i] - if isinstance(ann_dir, (list, tuple)): - data_cfg['ann_dir'] = ann_dir[i] - if isinstance(split, (list, tuple)): - data_cfg['split'] = split[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets) - - -def build_dataset(cfg, default_args=None): - """Build datasets.""" - from .dataset_wrappers import ConcatDataset, RepeatDataset - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif isinstance(cfg.get('img_dir'), (list, tuple)) or isinstance( - cfg.get('split', None), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - drop_last=False, - pin_memory=True, - dataloader_type='PoolDataLoader', - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - seed (int | None): Seed to be used. Default: None. - drop_last (bool): Whether to drop the last incomplete batch in epoch. - Default: False - pin_memory (bool): Whether to use pin_memory in DataLoader. - Default: True - dataloader_type (str): Type of dataloader. Default: 'PoolDataLoader' - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. 
- """ - rank, world_size = get_dist_info() - if dist: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=shuffle) - shuffle = False - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - assert dataloader_type in ( - 'DataLoader', - 'PoolDataLoader'), f'unsupported dataloader {dataloader_type}' - - if dataloader_type == 'PoolDataLoader': - dataloader = PoolDataLoader - elif dataloader_type == 'DataLoader': - dataloader = DataLoader - - data_loader = dataloader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=pin_memory, - shuffle=shuffle, - worker_init_fn=init_fn, - drop_last=drop_last, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - """Worker init func for dataloader. - - The seed of each worker equals to num_worker * rank + worker_id + user_seed - - Args: - worker_id (int): Worker id. - num_workers (int): Number of workers. - rank (int): The rank of current process. - seed (int): The random seed to use. - """ - - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset_test.py b/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset_test.py deleted file mode 100644 index b0c4c355065d15c266a8b4e8c68dfcbe2b246730..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/tutorial_dataset_test.py +++ /dev/null @@ -1,12 +0,0 @@ -from tutorial_dataset import MyDataset - -dataset = MyDataset() -print(len(dataset)) - -item = dataset[1234] -jpg = item['jpg'] -txt = item['txt'] -hint = item['hint'] -print(txt) -print(jpg.shape) -print(hint.shape) diff --git a/spaces/Anonymous-sub/Rerender/src/ddim_v_hacked.py b/spaces/Anonymous-sub/Rerender/src/ddim_v_hacked.py deleted file mode 100644 index 8d9ee6c1df57c93331bcefeee1f985af98b9f2a8..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/src/ddim_v_hacked.py +++ /dev/null @@ -1,589 +0,0 @@ -"""SAMPLING ONLY.""" - -# CrossAttn precision handling -import os - -import einops -import numpy as np -import torch -from tqdm import tqdm - -from ControlNet.ldm.modules.diffusionmodules.util import ( - extract_into_tensor, make_ddim_sampling_parameters, make_ddim_timesteps, - noise_like) - -_ATTN_PRECISION = os.environ.get('ATTN_PRECISION', 'fp32') - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - - -def register_attention_control(model, controller=None): - - def ca_forward(self, place_in_unet): - - def forward(x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - is_cross = context is not None - context = context if is_cross else x - context = controller(context, is_cross, place_in_unet) - - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map( - lambda t: einops.rearrange(t, 'b n (h d) -> (b h) n d', h=h), - (q, k, v)) - - # force cast to fp32 to avoid overflowing - if _ATTN_PRECISION == 'fp32': - with torch.autocast(enabled=False, device_type=device): - q, k = q.float(), k.float() - sim = torch.einsum('b i d, b j d -> b i j', q, - k) * self.scale - else: - sim = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale - - del q, k 
- - if mask is not None: - mask = einops.rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = einops.repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - sim = sim.softmax(dim=-1) - - out = torch.einsum('b i j, b j d -> b i d', sim, v) - out = einops.rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - return forward - - class DummyController: - - def __call__(self, *args): - return args[0] - - def __init__(self): - self.cur_step = 0 - - if controller is None: - controller = DummyController() - - def register_recr(net_, place_in_unet): - if net_.__class__.__name__ == 'CrossAttention': - net_.forward = ca_forward(net_, place_in_unet) - elif hasattr(net_, 'children'): - for net__ in net_.children(): - register_recr(net__, place_in_unet) - - sub_nets = model.named_children() - for net in sub_nets: - if 'input_blocks' in net[0]: - register_recr(net[1], 'down') - elif 'output_blocks' in net[0]: - register_recr(net[1], 'up') - elif 'middle_block' in net[0]: - register_recr(net[1], 'mid') - - -class DDIMVSampler(object): - - def __init__(self, model, schedule='linear', **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device(device): - attr = attr.to(torch.device(device)) - setattr(self, name, attr) - - def make_schedule(self, - ddim_num_steps, - ddim_discretize='uniform', - ddim_eta=0., - verbose=True): - self.ddim_timesteps = make_ddim_timesteps( - ddim_discr_method=ddim_discretize, - num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps, - verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, \ - 'alphas have to be defined for each timestep' - - def to_torch(x): - return x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', - to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', - to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', - to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', - to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', - to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', - to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = \ - make_ddim_sampling_parameters( - alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta, - verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', - np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * - (1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', - sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - xtrg=None, - noise_rescale=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - dynamic_threshold=None, - ucg_schedule=None, - controller=None, - strength=0.0, - **kwargs): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): - ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f'Warning: Got {cbs} conditionings' - f'but batch-size is {batch_size}') - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f'Warning: Got {cbs} conditionings' - f'but batch-size is {batch_size}') - - else: - if conditioning.shape[0] != batch_size: - print(f'Warning: Got {conditioning.shape[0]}' - f'conditionings but batch-size is {batch_size}') - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling( - conditioning, - size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, - x0=x0, - xtrg=xtrg, - noise_rescale=noise_rescale, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule, - controller=controller, - strength=strength, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, - cond, - shape, - x_T=None, - ddim_use_original_steps=False, - callback=None, - timesteps=None, - quantize_denoised=False, - mask=None, - x0=None, - xtrg=None, - noise_rescale=None, - img_callback=None, - log_every_t=100, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - dynamic_threshold=None, - ucg_schedule=None, - controller=None, - strength=0.0): - - if strength == 1 and x0 is not None: - return x0, None - - register_attention_control(self.model.model.diffusion_model, - controller) - - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps \ - else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int( - min(timesteps / self.ddim_timesteps.shape[0], 1) * - self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range( - 0, timesteps)) 
if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps \ - else timesteps.shape[0] - print(f'Running DDIM Sampling with {total_steps} timesteps') - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - if controller is not None: - controller.set_total_step(total_steps) - if mask is None: - mask = [None] * total_steps - - dir_xt = 0 - for i, step in enumerate(iterator): - if controller is not None: - controller.set_step(i) - index = total_steps - i - 1 - ts = torch.full((b, ), step, device=device, dtype=torch.long) - - if strength >= 0 and i == int( - total_steps * strength) and x0 is not None: - img = self.model.q_sample(x0, ts) - if mask is not None and xtrg is not None: - # TODO: deterministic forward pass? - if type(mask) == list: - weight = mask[i] - else: - weight = mask - if weight is not None: - rescale = torch.maximum(1. - weight, (1 - weight**2)**0.5 * - controller.inner_strength) - if noise_rescale is not None: - rescale = (1. - weight) * ( - 1 - noise_rescale) + rescale * noise_rescale - img_ref = self.model.q_sample(xtrg, ts) - img = img_ref * weight + (1. - weight) * ( - img - dir_xt) + rescale * dir_xt - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim( - img, - cond, - ts, - index=index, - use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, - temperature=temperature, - noise_dropout=noise_dropout, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - controller=controller, - return_dir=True) - img, pred_x0, dir_xt = outs - if callback: - callback(i) - if img_callback: - img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, - x, - c, - t, - index, - repeat_noise=False, - use_original_steps=False, - quantize_denoised=False, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - dynamic_threshold=None, - controller=None, - return_dir=False): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or \ - unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - model_t = self.model.apply_model(x, t, c) - model_uncond = self.model.apply_model(x, t, - unconditional_conditioning) - model_output = model_uncond + unconditional_guidance_scale * ( - model_t - model_uncond) - - if self.model.parameterization == 'v': - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == 'eps', 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, - **corrector_kwargs) - - if use_original_steps: - alphas = self.model.alphas_cumprod - alphas_prev = self.model.alphas_cumprod_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod - sigmas = self.model.ddim_sigmas_for_original_num_steps - else: - alphas = self.ddim_alphas - alphas_prev = self.ddim_alphas_prev - sqrt_one_minus_alphas = self.ddim_sqrt_one_minus_alphas - sigmas = 
self.ddim_sigmas - - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), - sqrt_one_minus_alphas[index], - device=device) - - # current prediction for x_0 - if self.model.parameterization != 'v': - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise NotImplementedError() - ''' - if mask is not None and xtrg is not None: - pred_x0 = xtrg * mask + (1. - mask) * pred_x0 - ''' - - if controller is not None: - pred_x0 = controller.update_x0(pred_x0) - - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, - repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - - if return_dir: - return x_prev, pred_x0, dir_xt - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, - x0, - c, - t_enc, - use_original_steps=False, - return_intermediates=None, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - callback=None): - timesteps = np.arange(self.ddpm_num_timesteps - ) if use_original_steps else self.ddim_timesteps - num_reference_steps = timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0], ), - timesteps[i], - device=self.model.device, - dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model( - torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * ( - noise_pred - e_t_uncond) - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % (num_steps // return_intermediates - ) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: - callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - 
sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - if t >= len(sqrt_alphas_cumprod): - return noise - return ( - extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * - noise) - - @torch.no_grad() - def decode(self, - x_latent, - cond, - t_start, - unconditional_guidance_scale=1.0, - unconditional_conditioning=None, - use_original_steps=False, - callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps - ) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f'Running DDIM Sampling with {total_steps} timesteps') - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0], ), - step, - device=x_latent.device, - dtype=torch.long) - x_dec, _ = self.p_sample_ddim( - x_dec, - cond, - ts, - index=index, - use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: - callback(i) - return x_dec - - -def calc_mean_std(feat, eps=1e-5): - # eps is a small value added to the variance to avoid divide-by-zero. - size = feat.size() - assert (len(size) == 4) - N, C = size[:2] - feat_var = feat.view(N, C, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(N, C, 1, 1) - feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1) - return feat_mean, feat_std - - -def adaptive_instance_normalization(content_feat, style_feat): - assert (content_feat.size()[:2] == style_feat.size()[:2]) - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - - normalized_feat = (content_feat - - content_mean.expand(size)) / content_std.expand(size) - return normalized_feat * style_std.expand(size) + style_mean.expand(size) diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/interactive-rebuild-docs.sh b/spaces/AnthonyTruchetPoC/persistent-docker/scripts/interactive-rebuild-docs.sh deleted file mode 100644 index 3302e10e8e0f2e1f8a5487d180e7e1623d61461c..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/interactive-rebuild-docs.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/usr/bin/env sh -poetry run sphinx-autobuild --open-browser doc dist/doc diff --git a/spaces/Antonpy/stable-diffusion-license/license.html b/spaces/Antonpy/stable-diffusion-license/license.html deleted file mode 100644 index 5dacb08ef3076530e5c3f13144d2668b22527d05..0000000000000000000000000000000000000000 --- a/spaces/Antonpy/stable-diffusion-license/license.html +++ /dev/null @@ -1,242 +0,0 @@ - - - - - - - - - - - - - - - - - -
    -
    Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
    CreativeML Open RAIL-M
    dated August 22, 2022
    Section I: PREAMBLE
    Multimodal generative models are being widely adopted and used, and have
    the potential to transform the way artists, among other individuals,
    conceive and benefit from AI or ML technologies as a tool for content
    creation.
    Notwithstanding the current and potential benefits that these artifacts
    can bring to society at large, there are also concerns about potential
    misuses of them, either due to their technical limitations or ethical
    considerations.
    In short, this license strives for both the open and responsible
    downstream use of the accompanying model. When it comes to the open
    character, we took inspiration from open source permissive licenses
    regarding the grant of IP rights. Referring to the downstream responsible
    use, we added use-based restrictions not permitting the use of the Model
    in very specific scenarios, in order for the licensor to be able to
    enforce the license in case potential misuses of the Model may occur. At
    the same time, we strive to promote open and responsible research on
    generative models for art and content generation.
    Even though downstream derivative versions of the model could be released
    under different licensing terms, the latter will always have to include -
    at minimum - the same use-based restrictions as the ones in the original
    license (this license). We believe in the intersection between open and
    responsible AI development; thus, this License aims to strike a balance
    between both in order to enable responsible open-science in the field of
    AI.
    This License governs the use of the model (and its derivatives) and is
    informed by the model card associated with the model.
    NOW THEREFORE, You and Licensor agree as follows:
    1. Definitions
    - "License" means the terms and conditions for use, reproduction, and
    Distribution as defined in this document.
    - "Data" means a collection of information and/or content extracted from
    the dataset used with the Model, including to train, pretrain, or
    otherwise evaluate the Model. The Data is not licensed under this
    License.
    - "Output" means the results of operating a Model as embodied in
    informational content resulting therefrom.
    - "Model" means any accompanying machine-learning based assemblies
    (including checkpoints), consisting of learnt weights, parameters
    (including optimizer states), corresponding to the model architecture as
    -
    embodied in the Complementary Material, that have been trained or tuned,
    in whole or in part on the Data, using the Complementary Material.
    - "Derivatives of the Model" means all modifications to the Model, works
    based on the Model, or any other model which is created or initialized by
    transfer of patterns of the weights, parameters, activations or output of
    the Model, to the other model, in order to cause the other model to
    perform similarly to the Model, including - but not limited to -
    distillation methods entailing the use of intermediate data
    representations or methods based on the generation of synthetic data by
    the Model for training the other model.
    - "Complementary Material" means the accompanying source code and scripts
    used to define, run, load, benchmark or evaluate the Model, and used to
    prepare data for training or evaluation, if any. This includes any
    accompanying documentation, tutorials, examples, etc, if any.
    - "Distribution" means any transmission, reproduction, publication or
    other sharing of the Model or Derivatives of the Model to a third party,
    including providing the Model as a hosted service made available by
    electronic or other remote means - e.g. API-based or web access.
    - "Licensor" means the copyright owner or entity authorized by the
    copyright owner that is granting the License, including the persons or
    entities that may have rights in the Model and/or distributing the Model.
    - "You" (or "Your") means an individual or Legal Entity exercising
    permissions granted by this License and/or making use of the Model for
    whichever purpose and in any field of use, including usage of the Model
    in an end-use application - e.g. chatbot, translator, image generator.
    - "Third Parties" means individuals or legal entities that are not under
    common control with Licensor or You.
    - "Contribution" means any work of authorship, including the original
    version of the Model and any modifications or additions to that Model or
    Derivatives of the Model thereof, that is intentionally submitted to
    Licensor for inclusion in the Model by the copyright owner or by an
    individual or Legal Entity authorized to submit on behalf of the
    copyright owner. For the purposes of this definition, "submitted" means
    any form of electronic, verbal, or written communication sent to the
    Licensor or its representatives, including but not limited to
    communication on electronic mailing lists, source code control systems,
    and issue tracking systems that are managed by, or on behalf of, the
    Licensor for the purpose of discussing and improving the Model, but
    excluding communication that is conspicuously marked or otherwise
    designated in writing by the copyright owner as "Not a Contribution."
    - "Contributor" means Licensor and any individual or Legal Entity on
    behalf of whom a Contribution has been received by Licensor and
    subsequently incorporated within the Model.
    Section II: INTELLECTUAL PROPERTY RIGHTS
    Both copyright and patent grants apply to the Model, Derivatives of the
    Model and Complementary Material. The Model and Derivatives of the Model
    are subject to additional terms as described in Section III.
    2. Grant of Copyright License. Subject to the terms and conditions of
    this License, each Contributor hereby grants to You a perpetual,
    worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright
    license to reproduce, prepare, publicly display, publicly perform,
    -
    sublicense, and distribute the Complementary Material, the Model, and
    Derivatives of the Model.
    3. Grant of Patent License. Subject to the terms and conditions of this
    License and where and as applicable, each Contributor hereby grants to
    You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
    irrevocable (except as stated in this paragraph) patent license to make,
    have made, use, offer to sell, sell, import, and otherwise transfer the
    Model and the Complementary Material, where such license applies only to
    those patent claims licensable by such Contributor that are necessarily
    infringed by their Contribution(s) alone or by combination of their
    Contribution(s) with the Model to which such Contribution(s) was
    submitted. If You institute patent litigation against any entity
    (including a cross-claim or counterclaim in a lawsuit) alleging that the
    Model and/or Complementary Material or a Contribution incorporated within
    the Model and/or Complementary Material constitutes direct or
    contributory patent infringement, then any patent licenses granted to You
    under this License for the Model and/or Work shall terminate as of the
    date such litigation is asserted or filed.
    Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
    4. Distribution and Redistribution. You may host for Third Party remote
    access purposes (e.g. software-as-a-service), reproduce and distribute
    copies of the Model or Derivatives of the Model thereof in any medium,
    with or without modifications, provided that You meet the following
    conditions:
    Use-based restrictions as referenced in paragraph 5 MUST be included as
    an enforceable provision by You in any type of legal agreement (e.g. a
    license) governing the use and/or distribution of the Model or
    Derivatives of the Model, and You shall give notice to subsequent users
    You Distribute to, that the Model or Derivatives of the Model are subject
    to paragraph 5. This provision does not apply to the use of Complementary
    Material.
    You must give any Third Party recipients of the Model or Derivatives of
    the Model a copy of this License;
    You must cause any modified files to carry prominent notices stating that
    You changed the files;
    You must retain all copyright, patent, trademark, and attribution notices
    excluding those notices that do not pertain to any part of the Model,
    Derivatives of the Model.
    You may add Your own copyright statement to Your modifications and may
    provide additional or different license terms and conditions - respecting
    paragraph 4.a. - for use, reproduction, or Distribution of Your
    modifications, or for any such Derivatives of the Model as a whole,
    provided Your use, reproduction, and Distribution of the Model otherwise
    complies with the conditions stated in this License.
    5. Use-based restrictions. The restrictions set forth in Attachment A are
    considered Use-based restrictions. Therefore You cannot use the Model and
    the Derivatives of the Model for the specified restricted uses. You may
    use the Model subject to this License, including only for lawful purposes
    and in accordance with the License. Use may include creating any content
    with, finetuning, updating, running, training, evaluating and/or
    reparametrizing the Model. You shall require all of Your users who use
    -
    the Model or a Derivative of the Model to comply with the terms of this
    paragraph (paragraph 5).
    6. The Output You Generate. Except as set forth herein, Licensor claims
    no rights in the Output You generate using the Model. You are accountable
    for the Output you generate and its subsequent uses. No use of the output
    can contravene any provision as stated in the License.
    Section IV: OTHER PROVISIONS
    7. Updates and Runtime Restrictions. To the maximum extent permitted by
    law, Licensor reserves the right to restrict (remotely or otherwise)
    usage of the Model in violation of this License, update the Model through
    electronic means, or modify the Output of the Model based on updates. You
    shall undertake reasonable efforts to use the latest version of the
    Model.
    8. Trademarks and related. Nothing in this License permits You to make
    use of Licensors’ trademarks, trade names, logos or to otherwise suggest
    endorsement or misrepresent the relationship between the parties; and any
    rights not expressly granted herein are reserved by the Licensors.
    9. Disclaimer of Warranty. Unless required by applicable law or agreed to
    in writing, Licensor provides the Model and the Complementary Material
    (and each Contributor provides its Contributions) on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
    including, without limitation, any warranties or conditions of TITLE,
    NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
    You are solely responsible for determining the appropriateness of using
    or redistributing the Model, Derivatives of the Model, and the
    Complementary Material and assume any risks associated with Your exercise
    of permissions under this License.
    10. Limitation of Liability. In no event and under no legal theory,
    whether in tort (including negligence), contract, or otherwise, unless
    required by applicable law (such as deliberate and grossly negligent
    acts) or agreed to in writing, shall any Contributor be liable to You for
    damages, including any direct, indirect, special, incidental, or
    consequential damages of any character arising as a result of this
    License or out of the use or inability to use the Model and the
    Complementary Material (including but not limited to damages for loss of
    goodwill, work stoppage, computer failure or malfunction, or any and all
    other commercial damages or losses), even if such Contributor has been
    advised of the possibility of such damages.
    11. Accepting Warranty or Additional Liability. While redistributing the
    Model, Derivatives of the Model and the Complementary Material thereof,
    You may choose to offer, and charge a fee for, acceptance of support,
    warranty, indemnity, or other liability obligations and/or rights
    consistent with this License. However, in accepting such obligations, You
    may act only on Your own behalf and on Your sole responsibility, not on
    behalf of any other Contributor, and only if You agree to indemnify,
    defend, and hold each Contributor harmless for any liability incurred by,
    or claims asserted against, such Contributor by reason of your accepting
    any such warranty or additional liability.
    12. If any provision of this License is held to be invalid, illegal or
    unenforceable, the remaining provisions shall be unaffected thereby and
    remain valid as if such provision had not been set forth herein.
    -
    END OF TERMS AND CONDITIONS
    Attachment A
    Use Restrictions
    You agree not to use the Model or Derivatives of the Model:
    - In any way that violates any applicable national, federal, state, local
    or international law or regulation;
    - For the purpose of exploiting, harming or attempting to exploit or harm
    minors in any way;
    - To generate or disseminate verifiably false information and/or content
    with the purpose of harming others;
    - To generate or disseminate personal identifiable information that can
    be used to harm an individual;
    - To defame, disparage or otherwise harass others;
    - For fully automated decision making that adversely impacts an
    individual’s legal rights or otherwise creates or modifies a binding,
    enforceable obligation;
    - For any use intended to or which has the effect of discriminating
    against or harming individuals or groups based on online or offline
    social behavior or known or predicted personal or personality
    characteristics;
    - To exploit any of the vulnerabilities of a specific group of persons
    based on their age, social, physical or mental characteristics, in order
    to materially distort the behavior of a person pertaining to that group
    in a manner that causes or is likely to cause that person or another
    person physical or psychological harm;
    - For any use intended to or which has the effect of discriminating
    against individuals or groups based on legally protected characteristics
    or categories;
    - To provide medical advice and medical results interpretation;
    - To generate or disseminate information for the purpose to be used for
    administration of justice, law enforcement, immigration or asylum
    processes, such as predicting an individual will commit fraud/crime
    commitment (e.g. by text profiling, drawing causal relationships between
    assertions made in documents, indiscriminate and arbitrarily-targeted
    use).
    -
    -
    - -
    - - diff --git a/spaces/Apex-X/Tm/roop/face_analyser.py b/spaces/Apex-X/Tm/roop/face_analyser.py deleted file mode 100644 index 9c0afe458763edb22dc2332f527dfdba48575b1d..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/Tm/roop/face_analyser.py +++ /dev/null @@ -1,34 +0,0 @@ -import threading -from typing import Any -import insightface - -import roop.globals -from roop.typing import Frame - -FACE_ANALYSER = None -THREAD_LOCK = threading.Lock() - - -def get_face_analyser() -> Any: - global FACE_ANALYSER - - with THREAD_LOCK: - if FACE_ANALYSER is None: - FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers) - FACE_ANALYSER.prepare(ctx_id=0, det_size=(640, 640)) - return FACE_ANALYSER - - -def get_one_face(frame: Frame) -> Any: - face = get_face_analyser().get(frame) - try: - return min(face, key=lambda x: x.bbox[0]) - except ValueError: - return None - - -def get_many_faces(frame: Frame) -> Any: - try: - return get_face_analyser().get(frame) - except IndexError: - return None diff --git a/spaces/Artrajz/vits-simple-api/utils/classify_language.py b/spaces/Artrajz/vits-simple-api/utils/classify_language.py deleted file mode 100644 index 608a880442e4e173cb46b4befaeaf94e1896f4f9..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/utils/classify_language.py +++ /dev/null @@ -1,60 +0,0 @@ -from config import LANGUAGE_IDENTIFICATION_LIBRARY - -module = LANGUAGE_IDENTIFICATION_LIBRARY.lower() - -langid_languages = ["af", "am", "an", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "dz", "el", - "en", "eo", "es", "et", "eu", "fa", "fi", "fo", "fr", "ga", "gl", "gu", "he", "hi", "hr", "ht", "hu", "hy", - "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", - "mk", "ml", "mn", "mr", "ms", "mt", "nb", "ne", "nl", "nn", "no", "oc", "or", "pa", "pl", "ps", "pt", "qu", - "ro", "ru", "rw", "se", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", - "ur", "vi", "vo", "wa", "xh", "zh", "zu"] - - -def classify_language(text: str, target_languages: list = None) -> str: - if module == "fastlid" or module == "fasttext": - from fastlid import fastlid, supported_langs - classifier = fastlid - if target_languages != None: - target_languages = [lang for lang in target_languages if lang in supported_langs] - fastlid.set_languages = target_languages - elif module == "langid": - import langid - classifier = langid.classify - if target_languages != None: - target_languages = [lang for lang in target_languages if lang in langid_languages] - langid.set_languages(target_languages) - else: - raise ValueError(f"Wrong LANGUAGE_IDENTIFICATION_LIBRARY in config.py") - - lang = classifier(text)[0] - - return lang - - -def classify_zh_ja(text: str) -> str: - for idx, char in enumerate(text): - unicode_val = ord(char) - - # 检测日语字符 - if 0x3040 <= unicode_val <= 0x309F or 0x30A0 <= unicode_val <= 0x30FF: - return "ja" - - # 检测汉字字符 - if 0x4E00 <= unicode_val <= 0x9FFF: - # 检查周围的字符 - next_char = text[idx + 1] if idx + 1 < len(text) else None - - if next_char and (0x3040 <= ord(next_char) <= 0x309F or 0x30A0 <= ord(next_char) <= 0x30FF): - return "ja" - - return "zh" - - -if __name__ == "__main__": - text = "这是一个测试文本" - print(classify_language(text)) - print(classify_zh_ja(text)) # "zh" - - text = "これはテストテキストです" - print(classify_language(text)) - print(classify_zh_ja(text)) # "ja" diff --git 
a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/package_data.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/package_data.py deleted file mode 100644 index 8501893bd153b7216524084cad23e90aeac0b1f8..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/package_data.py +++ /dev/null @@ -1,2 +0,0 @@ -__version__ = '3.4' - diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/clean.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/clean.py deleted file mode 100644 index b731b60609621ad822aa989ffa1f711ec2932278..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/clean.py +++ /dev/null @@ -1,76 +0,0 @@ -"""distutils.command.clean - -Implements the Distutils 'clean' command.""" - -# contributed by Bastian Kleineidam , added 2000-03-18 - -import os -from distutils.core import Command -from distutils.dir_util import remove_tree -from distutils import log - - -class clean(Command): - - description = "clean up temporary files from 'build' command" - user_options = [ - ('build-base=', 'b', "base build directory (default: 'build.build-base')"), - ( - 'build-lib=', - None, - "build directory for all modules (default: 'build.build-lib')", - ), - ('build-temp=', 't', "temporary build directory (default: 'build.build-temp')"), - ( - 'build-scripts=', - None, - "build directory for scripts (default: 'build.build-scripts')", - ), - ('bdist-base=', None, "temporary directory for built distributions"), - ('all', 'a', "remove all build output, not just temporary by-products"), - ] - - boolean_options = ['all'] - - def initialize_options(self): - self.build_base = None - self.build_lib = None - self.build_temp = None - self.build_scripts = None - self.bdist_base = None - self.all = None - - def finalize_options(self): - self.set_undefined_options( - 'build', - ('build_base', 'build_base'), - ('build_lib', 'build_lib'), - ('build_scripts', 'build_scripts'), - ('build_temp', 'build_temp'), - ) - self.set_undefined_options('bdist', ('bdist_base', 'bdist_base')) - - def run(self): - # remove the build/temp. 
directory (unless it's already - # gone) - if os.path.exists(self.build_temp): - remove_tree(self.build_temp, dry_run=self.dry_run) - else: - log.debug("'%s' does not exist -- can't clean it", self.build_temp) - - if self.all: - # remove build directories - for directory in (self.build_lib, self.bdist_base, self.build_scripts): - if os.path.exists(directory): - remove_tree(directory, dry_run=self.dry_run) - else: - log.warn("'%s' does not exist -- can't clean it", directory) - - # just for the heck of it, try to remove the base build directory: - # we might have emptied it right now, but if not we don't care - if not self.dry_run: - try: - os.rmdir(self.build_base) - log.info("removing '%s'", self.build_base) - except OSError: - pass diff --git a/spaces/BartPoint/VoiceChange_Beta/infer_pack/modules/F0Predictor/__init__.py b/spaces/BartPoint/VoiceChange_Beta/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Benson/text-generation/Examples/Auto Clicker For Clicker Heroes Download.md b/spaces/Benson/text-generation/Examples/Auto Clicker For Clicker Heroes Download.md deleted file mode 100644 index 099192f6c2100a2d006e9af42f79caaee9de9ac5..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Auto Clicker For Clicker Heroes Download.md +++ /dev/null @@ -1,78 +0,0 @@ -
    -

Auto Clicker for Clicker Heroes Download

    -

If you are a fan of idle clicker games, you may have heard of or played Clicker Heroes, a popular game where you kill monsters, upgrade heroes, find treasure, and slay bosses. But did you know that you can improve your gaming experience by using an auto clicker for Clicker Heroes? In this article, we will explain what an auto clicker is, how to use it for Clicker Heroes, and what the benefits of using one are.

    -

    auto clicker for clicker heroes download


    DOWNLOAD ☆☆☆ https://bltlly.com/2v6KI0



    -

    ¿Qué es Auto Clicker?

    -

    Un auto clicker es un programa que le permite configurar y automatizar el click de un mouse en la pantalla de su computadora. Un clicker automático no solo sigue el cursor, pero a menudo tiene soporte para doble y triple clic, teclas de acceso rápido que funcionan incluso en segundo plano, ajustes automáticos ahorra, y más.

    -

    ¿Cómo usar el Auto Clicker?

    -

    Para usar un auto clicker, debes seguir estos pasos:

    -
      -
    1. Visite AutoClickers.org para encontrar las diferentes opciones de dispositivos disponibles y descargar el que se adapte a sus necesidades.
    2. -
    3. Ejecute el instalador y siga las instrucciones para completar la instalación.
    4. -
    5. Abra el auto clicker haciendo clic en el icono o en el acceso directo del escritorio.
    6. -
    7. Elija el atajo de teclado que desea utilizar para iniciar o dejar de hacer clic y haga clic en "Aplicar".
    8. -
    9. Seleccione el área en la pantalla donde desea que haga clic el clicker automático. Puede hacer esto arrastrando el cursor del ratón o usando las coordenadas.
    10. -
    11. Ajuste la velocidad de clic y la duración moviendo los controles deslizantes o introduciendo los valores. También puede elegir el tipo de clic (izquierda, derecha, centro) y el número de clics.
    12. -
    13. Pulse el atajo de teclado para iniciar el auto clic. Puede ver el número de clics y el tiempo transcurrido en la ventana del auto clicker.
    14. - -
    -

    Beneficios de Auto Clicker

    -

    Usar un auto clicker puede tener muchas ventajas, como:

    -
      -
    • Ahorra tiempo y esfuerzo: No tienes que hacer clic manualmente en la pantalla repetidamente, lo que puede ser agotador y aburrido. Puedes dejar que el auto clicker haga el trabajo por ti mientras te enfocas en otras tareas o te relajas.
    • -
    • Reducir errores: No tienes que preocuparte por perder un clic o hacer clic en el lugar equivocado. El clicker automático hará clic de forma precisa y consistente de acuerdo con su configuración.
    • -
    • Mejore la experiencia de juego: Puede disfrutar jugando juegos que requieren mucho clic sin frustrarse o perder interés. También puede mejorar su rendimiento de juego y puntuación mediante el uso de un auto clicker.
    • -
    • Personalizar las opciones de clic: Puede ajustar la velocidad de clic, duración, área, tipo y número de acuerdo a sus preferencias y necesidades. También puede crear diferentes perfiles para diferentes juegos o tareas y cambiar entre ellos fácilmente.
    • -
    -

    ¿Qué es Clicker Heroes?

    -

    Clicker Heroes es uno de los juegos de clickers inactivos más populares en la web. Fue lanzado en 2014 por Playsaurus, un estudio de juegos independiente con sede en California. El juego ha sido jugado por millones de personas en todo el mundo y ha recibido críticas positivas de críticos y jugadores por igual.

    -

    -

    Cómo jugar Clicker Heroes?

    -

    El modo de juego de Clicker Heroes es simple pero adictivo. Aquí están las instrucciones básicas:

    -
      -
    1. Haga clic en monstruos para atacarlos y recoger el oro de ellos.
    2. -
    3. Usa el oro para subir de nivel a tus héroes, que te ayudarán a luchar contra los monstruos automáticamente.
    4. -
    5. Compra mejoras y habilidades para tus héroes para hacerlos más fuertes y desbloquear nuevas habilidades.
    6. -
    7. Progresa a través de zonas y mundos, cada uno con diferentes monstruos y fondos.
    8. - -
    -

    Consejos y trucos para Clicker Heroes

    -

    Para aprovechar al máximo Clicker Heroes, debes seguir estos consejos y trucos:

    -
      -
    • Usa antiguos y extraños, que son personajes especiales que pueden aumentar tu progreso al darte varios bonos y efectos. Puedes comprar antiguos con almas de héroe, que obtienes de ascendente, y extraños con almas antiguas, que obtienes de trascender.
    • -
    • Ascender y trascender regularmente, que son formas de restablecer su juego con beneficios adicionales. Ascender les dará almas de héroes basadas en su zona más alta alcanzada, mientras que trascender les dará almas antiguas basadas en sus almas de héroes totales sacrificadas. Ambas acciones aumentarán tu poder general y acelerarán tu progreso.
    • -
    • Únete a clanes y redadas, que son características multijugador que te permiten cooperar con otros jugadores y obtener más recompensas. Puedes unirte a un clan introduciendo su nombre o creando el tuyo propio, y participar en incursiones luchando contra inmortales con los miembros de tu clan. Puedes obtener almas de héroe, rubíes y monedas de clan de las redadas.
    • -
    • Usa mercenarios y misiones, que son características adicionales que pueden ayudarte a obtener recursos adicionales. Puedes contratar mercenarios con rubíes, que son la moneda premium del juego, y enviarlos en misiones para obtener oro, almas de héroes, rubíes, reliquias o habilidades. Puedes tener hasta cinco mercenarios a la vez.
    • -
    -

    ¿Por qué usar Auto Clicker para Clicker Heroes?

    -

    Como puedes ver, Clicker Heroes es un juego que involucra muchos clics. Si bien esto puede ser divertido al principio, también puede volverse tedioso y aburrido después de un tiempo. Es por eso que usar un clicker automático para héroes clickers puede ser una gran idea. Aquí hay algunas razones por las que:

    -

    Los mejores clickers automáticos para Clicker Heroes

    - -
      -
    • OP Auto Clicker: Este es un clicker automático gratuito y fácil de usar que te permite elegir el intervalo de clic, el tipo y la ubicación. También puede establecer teclas de acceso rápido, aleatorizar clics y grabar y reproducir clics. Puede descargarlo desde here.
    • -
    • GS Auto Clicker: Este es otro clicker automático gratuito y simple que te permite configurar la tasa de clics, el número y la ubicación. También puede usar teclas de acceso rápido, guardar y cargar la configuración y usar la opción de registro para hacer clic en varios lugares. Puede descargarlo desde aquí.
    • -
    • Speed Auto Clicker: Este es un rápido y potente clicker automático que puede alcanzar hasta 50000 clicks por segundo. Puede ajustar la velocidad, el tipo y la ubicación de los clics, así como usar teclas de acceso rápido, aleatorizar los clics y establecer un límite de clics. Puede descargarlo desde aquí.
    • -
    • Murgee Auto Clicker: Este es un clicker automático de pago pero versátil que ofrece muchas características y opciones. Puede personalizar el intervalo de clic, el tipo, la ubicación y la duración, así como usar teclas de acceso rápido, programar clics y crear macros. Puede descargarlo desde aquí.
    • -
    -

    ¿Cómo configurar los clickers automáticos para los héroes del clicker?

    -

    Para configurar los clickers automáticos para los héroes clicker, debe seguir estas directrices:

    -
      -
    1. Arrastre y suelte el icono del clicker automático al área deseada en la pantalla del juego. Puedes colocarlo en el área enemiga, los botones de nivel de héroe, las habilidades o el botón de compra de mejoras disponibles.
    2. -
    3. Elija el número de clickers automáticos que desea utilizar para cada tarea. Puedes tener hasta 99 clickers automáticos en total, pero solo uno por cada botón de nivel de héroe, botón de habilidad o botón de compra de mejoras disponibles.
    4. - -
    5. Retire los clickers automáticos haciendo clic en el botón X en la esquina superior derecha de cada icono. También puede arrastrar y soltar de nuevo a la piscina de auto clickers en el lado derecho de la pantalla.
    6. -
    -

    Conclusión

    -

    En conclusión, auto clicker es una herramienta útil para jugar clicker héroes, ya que puede automatizar el proceso de clic y mejorar su rendimiento de juego. Hay muchos clickers automáticos disponibles para descargar, cada uno con sus propias características y ventajas. Para usar clickers automáticos para los héroes clickers, necesitas configurarlos correctamente y asignarlos a diferentes tareas. Al hacerlo, puedes disfrutar jugando a clicker heroes sin cansarte o aburrirte.

    -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas frecuentes sobre el clicker automático para los héroes de clicker:

    -
      -
    • ¿Cuál es el mejor clicker automático para los héroes clicker? No hay una respuesta definitiva a esta pregunta, ya que diferentes clickers automáticos pueden adaptarse a diferentes preferencias y necesidades. Sin embargo, algunos de los más populares y recomendados son OP Auto Clicker, GS Auto Clicker, Speed Auto Clicker y Murgee Auto Clicker.
    • -
    • ¿Qué tan rápido puede hacer clic un auto clicker? La velocidad de un auto clicker depende de su configuración y características. Algunos clickers automáticos pueden alcanzar hasta 50000 clicks por segundo, mientras que otros solo pueden llegar hasta 100 clicks por segundo. Puede ajustar la velocidad de su auto clicker cambiando su intervalo o tasa.
    • -
    • Está usando un clicker automático de engaño? Esto depende de su perspectiva y opinión. Algunas personas pueden considerar el uso de un auto clicker como trampa, ya que le da una ventaja injusta sobre otros jugadores que no lo utilizan. Otros pueden verlo como una forma legítima de jugar el juego de manera más eficiente y conveniente.
    • - -
    • ¿Cuántos clickers automáticos necesito para héroes clickers? El número de clickers automáticos que necesitas para los clickers depende de tus objetivos y estrategias. En general, usted debe tener al menos un auto clicker en el área enemiga para atacar más rápido, y un auto clicker en el botón comprar mejoras disponibles para subir de nivel héroes y comprar mejoras automáticamente. También puede tener más clickers automáticos en los botones de nivel de héroe o las habilidades para activarlos más a menudo.
    • -
    -

    I hope this article has helped you understand more about auto clicker downloads for Clicker Heroes. If you have any questions or comments, please feel free to leave them below. Thanks for reading, and happy clicking!

    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/adapters.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/adapters.py deleted file mode 100644 index f68f7d467530845447278f6c0ad104b4beca9531..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/adapters.py +++ /dev/null @@ -1,584 +0,0 @@ -""" -requests.adapters -~~~~~~~~~~~~~~~~~ - -This module contains the transport adapters that Requests uses to define -and maintain connections. -""" - -import os.path -import socket # noqa: F401 - -from pip._vendor.urllib3.exceptions import ClosedPoolError, ConnectTimeoutError -from pip._vendor.urllib3.exceptions import HTTPError as _HTTPError -from pip._vendor.urllib3.exceptions import InvalidHeader as _InvalidHeader -from pip._vendor.urllib3.exceptions import ( - LocationValueError, - MaxRetryError, - NewConnectionError, - ProtocolError, -) -from pip._vendor.urllib3.exceptions import ProxyError as _ProxyError -from pip._vendor.urllib3.exceptions import ReadTimeoutError, ResponseError -from pip._vendor.urllib3.exceptions import SSLError as _SSLError -from pip._vendor.urllib3.poolmanager import PoolManager, proxy_from_url -from pip._vendor.urllib3.response import HTTPResponse -from pip._vendor.urllib3.util import Timeout as TimeoutSauce -from pip._vendor.urllib3.util import parse_url -from pip._vendor.urllib3.util.retry import Retry - -from .auth import _basic_auth_str -from .compat import basestring, urlparse -from .cookies import extract_cookies_to_jar -from .exceptions import ( - ConnectionError, - ConnectTimeout, - InvalidHeader, - InvalidProxyURL, - InvalidSchema, - InvalidURL, - ProxyError, - ReadTimeout, - RetryError, - SSLError, -) -from .models import Response -from .structures import CaseInsensitiveDict -from .utils import ( - DEFAULT_CA_BUNDLE_PATH, - extract_zipped_paths, - get_auth_from_url, - get_encoding_from_headers, - prepend_scheme_if_needed, - select_proxy, - urldefragauth, -) - -try: - from pip._vendor.urllib3.contrib.socks import SOCKSProxyManager -except ImportError: - - def SOCKSProxyManager(*args, **kwargs): - raise InvalidSchema("Missing dependencies for SOCKS support.") - - -DEFAULT_POOLBLOCK = False -DEFAULT_POOLSIZE = 10 -DEFAULT_RETRIES = 0 -DEFAULT_POOL_TIMEOUT = None - - -class BaseAdapter: - """The Base Transport Adapter""" - - def __init__(self): - super().__init__() - - def send( - self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None - ): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. - :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - """ - raise NotImplementedError - - def close(self): - """Cleans up adapter specific items.""" - raise NotImplementedError - - -class HTTPAdapter(BaseAdapter): - """The built-in HTTP Adapter for urllib3. 
- - Provides a general-case interface for Requests sessions to contact HTTP and - HTTPS urls by implementing the Transport Adapter interface. This class will - usually be created by the :class:`Session ` class under the - covers. - - :param pool_connections: The number of urllib3 connection pools to cache. - :param pool_maxsize: The maximum number of connections to save in the pool. - :param max_retries: The maximum number of retries each connection - should attempt. Note, this applies only to failed DNS lookups, socket - connections and connection timeouts, never to requests where data has - made it to the server. By default, Requests does not retry failed - connections. If you need granular control over the conditions under - which we retry a request, import urllib3's ``Retry`` class and pass - that instead. - :param pool_block: Whether the connection pool should block for connections. - - Usage:: - - >>> import requests - >>> s = requests.Session() - >>> a = requests.adapters.HTTPAdapter(max_retries=3) - >>> s.mount('http://', a) - """ - - __attrs__ = [ - "max_retries", - "config", - "_pool_connections", - "_pool_maxsize", - "_pool_block", - ] - - def __init__( - self, - pool_connections=DEFAULT_POOLSIZE, - pool_maxsize=DEFAULT_POOLSIZE, - max_retries=DEFAULT_RETRIES, - pool_block=DEFAULT_POOLBLOCK, - ): - if max_retries == DEFAULT_RETRIES: - self.max_retries = Retry(0, read=False) - else: - self.max_retries = Retry.from_int(max_retries) - self.config = {} - self.proxy_manager = {} - - super().__init__() - - self._pool_connections = pool_connections - self._pool_maxsize = pool_maxsize - self._pool_block = pool_block - - self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block) - - def __getstate__(self): - return {attr: getattr(self, attr, None) for attr in self.__attrs__} - - def __setstate__(self, state): - # Can't handle by adding 'proxy_manager' to self.__attrs__ because - # self.poolmanager uses a lambda function, which isn't pickleable. - self.proxy_manager = {} - self.config = {} - - for attr, value in state.items(): - setattr(self, attr, value) - - self.init_poolmanager( - self._pool_connections, self._pool_maxsize, block=self._pool_block - ) - - def init_poolmanager( - self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs - ): - """Initializes a urllib3 PoolManager. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param connections: The number of urllib3 connection pools to cache. - :param maxsize: The maximum number of connections to save in the pool. - :param block: Block when no free connections are available. - :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager. - """ - # save these values for pickling - self._pool_connections = connections - self._pool_maxsize = maxsize - self._pool_block = block - - self.poolmanager = PoolManager( - num_pools=connections, - maxsize=maxsize, - block=block, - strict=True, - **pool_kwargs, - ) - - def proxy_manager_for(self, proxy, **proxy_kwargs): - """Return urllib3 ProxyManager for the given proxy. - - This method should not be called from user code, and is only - exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The proxy to return a urllib3 ProxyManager for. - :param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager. 
- :returns: ProxyManager - :rtype: urllib3.ProxyManager - """ - if proxy in self.proxy_manager: - manager = self.proxy_manager[proxy] - elif proxy.lower().startswith("socks"): - username, password = get_auth_from_url(proxy) - manager = self.proxy_manager[proxy] = SOCKSProxyManager( - proxy, - username=username, - password=password, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs, - ) - else: - proxy_headers = self.proxy_headers(proxy) - manager = self.proxy_manager[proxy] = proxy_from_url( - proxy, - proxy_headers=proxy_headers, - num_pools=self._pool_connections, - maxsize=self._pool_maxsize, - block=self._pool_block, - **proxy_kwargs, - ) - - return manager - - def cert_verify(self, conn, url, verify, cert): - """Verify a SSL certificate. This method should not be called from user - code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param conn: The urllib3 connection object associated with the cert. - :param url: The requested URL. - :param verify: Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use - :param cert: The SSL certificate to verify. - """ - if url.lower().startswith("https") and verify: - - cert_loc = None - - # Allow self-specified cert location. - if verify is not True: - cert_loc = verify - - if not cert_loc: - cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH) - - if not cert_loc or not os.path.exists(cert_loc): - raise OSError( - f"Could not find a suitable TLS CA certificate bundle, " - f"invalid path: {cert_loc}" - ) - - conn.cert_reqs = "CERT_REQUIRED" - - if not os.path.isdir(cert_loc): - conn.ca_certs = cert_loc - else: - conn.ca_cert_dir = cert_loc - else: - conn.cert_reqs = "CERT_NONE" - conn.ca_certs = None - conn.ca_cert_dir = None - - if cert: - if not isinstance(cert, basestring): - conn.cert_file = cert[0] - conn.key_file = cert[1] - else: - conn.cert_file = cert - conn.key_file = None - if conn.cert_file and not os.path.exists(conn.cert_file): - raise OSError( - f"Could not find the TLS certificate file, " - f"invalid path: {conn.cert_file}" - ) - if conn.key_file and not os.path.exists(conn.key_file): - raise OSError( - f"Could not find the TLS key file, invalid path: {conn.key_file}" - ) - - def build_response(self, req, resp): - """Builds a :class:`Response ` object from a urllib3 - response. This should not be called from user code, and is only exposed - for use when subclassing the - :class:`HTTPAdapter ` - - :param req: The :class:`PreparedRequest ` used to generate the response. - :param resp: The urllib3 response object. - :rtype: requests.Response - """ - response = Response() - - # Fallback to None if there's no status_code, for whatever reason. - response.status_code = getattr(resp, "status", None) - - # Make headers case-insensitive. - response.headers = CaseInsensitiveDict(getattr(resp, "headers", {})) - - # Set encoding. - response.encoding = get_encoding_from_headers(response.headers) - response.raw = resp - response.reason = response.raw.reason - - if isinstance(req.url, bytes): - response.url = req.url.decode("utf-8") - else: - response.url = req.url - - # Add new cookies from the server. - extract_cookies_to_jar(response.cookies, req, resp) - - # Give the Response some context. 
- response.request = req - response.connection = self - - return response - - def get_connection(self, url, proxies=None): - """Returns a urllib3 connection for the given URL. This should not be - called from user code, and is only exposed for use when subclassing the - :class:`HTTPAdapter `. - - :param url: The URL to connect to. - :param proxies: (optional) A Requests-style dictionary of proxies used on this request. - :rtype: urllib3.ConnectionPool - """ - proxy = select_proxy(url, proxies) - - if proxy: - proxy = prepend_scheme_if_needed(proxy, "http") - proxy_url = parse_url(proxy) - if not proxy_url.host: - raise InvalidProxyURL( - "Please check proxy URL. It is malformed " - "and could be missing the host." - ) - proxy_manager = self.proxy_manager_for(proxy) - conn = proxy_manager.connection_from_url(url) - else: - # Only scheme should be lower case - parsed = urlparse(url) - url = parsed.geturl() - conn = self.poolmanager.connection_from_url(url) - - return conn - - def close(self): - """Disposes of any internal state. - - Currently, this closes the PoolManager and any active ProxyManager, - which closes any pooled connections. - """ - self.poolmanager.clear() - for proxy in self.proxy_manager.values(): - proxy.clear() - - def request_url(self, request, proxies): - """Obtain the url to use when making the final request. - - If the message is being sent through a HTTP proxy, the full URL has to - be used. Otherwise, we should only use the path portion of the URL. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` being sent. - :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs. - :rtype: str - """ - proxy = select_proxy(request.url, proxies) - scheme = urlparse(request.url).scheme - - is_proxied_http_request = proxy and scheme != "https" - using_socks_proxy = False - if proxy: - proxy_scheme = urlparse(proxy).scheme.lower() - using_socks_proxy = proxy_scheme.startswith("socks") - - url = request.path_url - if is_proxied_http_request and not using_socks_proxy: - url = urldefragauth(request.url) - - return url - - def add_headers(self, request, **kwargs): - """Add any headers needed by the connection. As of v2.0 this does - nothing by default, but is left for overriding by users that subclass - the :class:`HTTPAdapter `. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param request: The :class:`PreparedRequest ` to add headers to. - :param kwargs: The keyword arguments from the call to send(). - """ - pass - - def proxy_headers(self, proxy): - """Returns a dictionary of the headers to add to any request sent - through a proxy. This works with urllib3 magic to ensure that they are - correctly sent to the proxy, rather than in a tunnelled request if - CONNECT is being used. - - This should not be called from user code, and is only exposed for use - when subclassing the - :class:`HTTPAdapter `. - - :param proxy: The url of the proxy being used for this request. - :rtype: dict - """ - headers = {} - username, password = get_auth_from_url(proxy) - - if username: - headers["Proxy-Authorization"] = _basic_auth_str(username, password) - - return headers - - def send( - self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None - ): - """Sends PreparedRequest object. Returns Response object. - - :param request: The :class:`PreparedRequest ` being sent. 
- :param stream: (optional) Whether to stream the request content. - :param timeout: (optional) How long to wait for the server to send - data before giving up, as a float, or a :ref:`(connect timeout, - read timeout) ` tuple. - :type timeout: float or tuple or urllib3 Timeout object - :param verify: (optional) Either a boolean, in which case it controls whether - we verify the server's TLS certificate, or a string, in which case it - must be a path to a CA bundle to use - :param cert: (optional) Any user-provided SSL certificate to be trusted. - :param proxies: (optional) The proxies dictionary to apply to the request. - :rtype: requests.Response - """ - - try: - conn = self.get_connection(request.url, proxies) - except LocationValueError as e: - raise InvalidURL(e, request=request) - - self.cert_verify(conn, request.url, verify, cert) - url = self.request_url(request, proxies) - self.add_headers( - request, - stream=stream, - timeout=timeout, - verify=verify, - cert=cert, - proxies=proxies, - ) - - chunked = not (request.body is None or "Content-Length" in request.headers) - - if isinstance(timeout, tuple): - try: - connect, read = timeout - timeout = TimeoutSauce(connect=connect, read=read) - except ValueError: - raise ValueError( - f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " - f"or a single float to set both timeouts to the same value." - ) - elif isinstance(timeout, TimeoutSauce): - pass - else: - timeout = TimeoutSauce(connect=timeout, read=timeout) - - try: - if not chunked: - resp = conn.urlopen( - method=request.method, - url=url, - body=request.body, - headers=request.headers, - redirect=False, - assert_same_host=False, - preload_content=False, - decode_content=False, - retries=self.max_retries, - timeout=timeout, - ) - - # Send the request. - else: - if hasattr(conn, "proxy_pool"): - conn = conn.proxy_pool - - low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) - - try: - skip_host = "Host" in request.headers - low_conn.putrequest( - request.method, - url, - skip_accept_encoding=True, - skip_host=skip_host, - ) - - for header, value in request.headers.items(): - low_conn.putheader(header, value) - - low_conn.endheaders() - - for i in request.body: - low_conn.send(hex(len(i))[2:].encode("utf-8")) - low_conn.send(b"\r\n") - low_conn.send(i) - low_conn.send(b"\r\n") - low_conn.send(b"0\r\n\r\n") - - # Receive the response from the server - r = low_conn.getresponse() - - resp = HTTPResponse.from_httplib( - r, - pool=conn, - connection=low_conn, - preload_content=False, - decode_content=False, - ) - except Exception: - # If we hit any problems here, clean up the connection. - # Then, raise so that we can handle the actual exception. - low_conn.close() - raise - - except (ProtocolError, OSError) as err: - raise ConnectionError(err, request=request) - - except MaxRetryError as e: - if isinstance(e.reason, ConnectTimeoutError): - # TODO: Remove this in 3.0.0: see #2811 - if not isinstance(e.reason, NewConnectionError): - raise ConnectTimeout(e, request=request) - - if isinstance(e.reason, ResponseError): - raise RetryError(e, request=request) - - if isinstance(e.reason, _ProxyError): - raise ProxyError(e, request=request) - - if isinstance(e.reason, _SSLError): - # This branch is for urllib3 v1.22 and later. 
- raise SSLError(e, request=request) - - raise ConnectionError(e, request=request) - - except ClosedPoolError as e: - raise ConnectionError(e, request=request) - - except _ProxyError as e: - raise ProxyError(e) - - except (_SSLError, _HTTPError) as e: - if isinstance(e, _SSLError): - # This branch is for urllib3 versions earlier than v1.22 - raise SSLError(e, request=request) - elif isinstance(e, ReadTimeoutError): - raise ReadTimeout(e, request=request) - elif isinstance(e, _InvalidHeader): - raise InvalidHeader(e, request=request) - else: - raise - - return self.build_response(request, resp) diff --git a/spaces/Blaise-g/summarize-biomedical-papers-long-summary-or-tldr/summarize.py b/spaces/Blaise-g/summarize-biomedical-papers-long-summary-or-tldr/summarize.py deleted file mode 100644 index 51cd47f24123375a2df9c97143927f9c98c60644..0000000000000000000000000000000000000000 --- a/spaces/Blaise-g/summarize-biomedical-papers-long-summary-or-tldr/summarize.py +++ /dev/null @@ -1,131 +0,0 @@ -import logging - -import torch -from tqdm.auto import tqdm -from transformers import AutoModelForSeq2SeqLM, AutoTokenizer - - -def load_model_and_tokenizer(model_name): - """ - load_model_and_tokenizer - a function that loads a model and tokenizer from huggingface - Args: - model_name (str): the name of the model to load - Returns: - AutoModelForSeq2SeqLM: the model - AutoTokenizer: the tokenizer - """ - - model = AutoModelForSeq2SeqLM.from_pretrained( - model_name, - # low_cpu_mem_usage=True, - # use_cache=False, - ) - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = model.to("cuda") if torch.cuda.is_available() else model - - logging.info(f"Loaded model {model_name}") - return model, tokenizer - - -def summarize(ids, mask, model, tokenizer, **kwargs): - """ - summarize - given a batch of ids and a mask, returns a summary and the token length of the output summary - Args: - ids (): the batch of ids - mask (): the attention mask for the batch - model (): the model to use for summarization - tokenizer (): the tokenizer to use for summarization - Returns: - str: the summary of the batch - """ - - ids = ids[None, :] - mask = mask[None, :] - - input_ids = ids.to("cuda") if torch.cuda.is_available() else ids - attention_mask = mask.to("cuda") if torch.cuda.is_available() else mask - - #global_attention_mask = torch.zeros_like(attention_mask) - # put global attention on token - #global_attention_mask[:, 0] = 1 - - summary_pred_ids = model.generate( - input_ids, - attention_mask=attention_mask, - #global_attention_mask=global_attention_mask, - return_dict_in_generate=True, - **kwargs, - ) - summary = tokenizer.batch_decode( - summary_pred_ids.sequences, - skip_special_tokens=True, - remove_invalid_values=True, - ) - len_res = len(summary_pred_ids.sequences.cpu().numpy()[0]) - return summary, len_res - - -def summarize_via_tokenbatches( - input_text: str, - model, - tokenizer, - batch_length=2048, - batch_stride=16, - **kwargs, -): - """ - summarize_via_tokenbatches - a function that takes a string and returns a summary - Args: - input_text (str): the text to summarize - model (): the model to use for summarization - tokenizer (): the tokenizer to use for summarization - batch_length (int, optional): the length of each batch. Defaults to 2048. - batch_stride (int, optional): the stride of each batch. Defaults to 16. The stride is the number of tokens that overlap between batches. 
- Returns: - str: the summary - """ - # log all input parameters - if batch_length < 512: - batch_length = 512 - print("WARNING: batch_length was set to 512") - print( - f"input parameters: {kwargs}, batch_length={batch_length}, batch_stride={batch_stride}" - ) - encoded_input = tokenizer( - input_text, - padding="max_length", - truncation=True, - max_length=batch_length, - stride=batch_stride, - return_overflowing_tokens=True, - add_special_tokens=False, - return_tensors="pt", - ) - - in_id_arr, att_arr = encoded_input.input_ids, encoded_input.attention_mask - gen_summaries = [] - - pbar = tqdm(total=len(in_id_arr)) - - for _id, _mask in zip(in_id_arr, att_arr): - - result, l = summarize( - ids=_id, - mask=_mask, - model=model, - tokenizer=tokenizer, - **kwargs, - ) - rate = round(float((len(_id)-l)/len(_id)),3) - _sum = { - "input_tokens": _id, - "summary": result, - "compression_rate": rate, - } - gen_summaries.append(_sum) - print(f"\t{result[0]}\nCompression:\t{rate}") - pbar.update() - - pbar.close() - - return gen_summaries \ No newline at end of file diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/compiler.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/compiler.h deleted file mode 100644 index 644db93d4e00c4e81cd32f38eab017a6637ca9dd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/compiler.h +++ /dev/null @@ -1,186 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! 
\file compiler.h - * \brief Compiler-specific configuration - */ - -#pragma once - -// enumerate host compilers we know about -#define THRUST_HOST_COMPILER_UNKNOWN 0 -#define THRUST_HOST_COMPILER_MSVC 1 -#define THRUST_HOST_COMPILER_GCC 2 -#define THRUST_HOST_COMPILER_CLANG 3 - -// enumerate device compilers we know about -#define THRUST_DEVICE_COMPILER_UNKNOWN 0 -#define THRUST_DEVICE_COMPILER_MSVC 1 -#define THRUST_DEVICE_COMPILER_GCC 2 -#define THRUST_DEVICE_COMPILER_NVCC 3 -#define THRUST_DEVICE_COMPILER_CLANG 4 - -// figure out which host compiler we're using -// XXX we should move the definition of THRUST_DEPRECATED out of this logic -#if defined(_MSC_VER) -#define THRUST_HOST_COMPILER THRUST_HOST_COMPILER_MSVC -#define THRUST_MSVC_VERSION _MSC_VER -#define THRUST_MSVC_VERSION_FULL _MSC_FULL_VER -#elif defined(__clang__) -#define THRUST_HOST_COMPILER THRUST_HOST_COMPILER_CLANG -#define THRUST_CLANG_VERSION (__clang_major__ * 10000 + __clang_minor__ * 100 + __clang_patchlevel__) -#elif defined(__GNUC__) -#define THRUST_HOST_COMPILER THRUST_HOST_COMPILER_GCC -#define THRUST_GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) -#if (THRUST_GCC_VERSION >= 50000) -#define THRUST_MODERN_GCC -#else -#define THRUST_LEGACY_GCC -#endif -#else -#define THRUST_HOST_COMPILER THRUST_HOST_COMPILER_UNKNOWN -#endif // THRUST_HOST_COMPILER - -// figure out which device compiler we're using -#if defined(__CUDACC__) -#define THRUST_DEVICE_COMPILER THRUST_DEVICE_COMPILER_NVCC -#elif THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC -#define THRUST_DEVICE_COMPILER THRUST_DEVICE_COMPILER_MSVC -#elif THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC -#define THRUST_DEVICE_COMPILER THRUST_DEVICE_COMPILER_GCC -#elif THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_CLANG -// CUDA-capable clang should behave similar to NVCC. -#if defined(__CUDA__) -#define THRUST_DEVICE_COMPILER THRUST_DEVICE_COMPILER_NVCC -#else -#define THRUST_DEVICE_COMPILER THRUST_DEVICE_COMPILER_CLANG -#endif -#else -#define THRUST_DEVICE_COMPILER THRUST_DEVICE_COMPILER_UNKNOWN -#endif - -// is the device compiler capable of compiling omp? 
-#ifdef _OPENMP -#define THRUST_DEVICE_COMPILER_IS_OMP_CAPABLE THRUST_TRUE -#else -#define THRUST_DEVICE_COMPILER_IS_OMP_CAPABLE THRUST_FALSE -#endif // _OPENMP - - -#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC) && !defined(__CUDA_ARCH__) - #define THRUST_DISABLE_MSVC_WARNING_BEGIN(x) \ - __pragma(warning(push)) \ - __pragma(warning(disable : x)) \ - /**/ - #define THRUST_DISABLE_MSVC_WARNING_END(x) \ - __pragma(warning(pop)) \ - /**/ -#else - #define THRUST_DISABLE_MSVC_WARNING_BEGIN(x) - #define THRUST_DISABLE_MSVC_WARNING_END(x) -#endif - -#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_CLANG) && !defined(__CUDA_ARCH__) - #define THRUST_IGNORE_CLANG_WARNING_IMPL(x) \ - THRUST_PP_STRINGIZE(clang diagnostic ignored x) \ - /**/ - #define THRUST_IGNORE_CLANG_WARNING(x) \ - THRUST_IGNORE_CLANG_WARNING_IMPL(THRUST_PP_STRINGIZE(x)) \ - /**/ - - #define THRUST_DISABLE_CLANG_WARNING_BEGIN(x) \ - _Pragma("clang diagnostic push") \ - _Pragma(THRUST_IGNORE_CLANG_WARNING(x)) \ - /**/ - #define THRUST_DISABLE_CLANG_WARNING_END(x) \ - _Pragma("clang diagnostic pop") \ - /**/ -#else - #define THRUST_DISABLE_CLANG_WARNING_BEGIN(x) - #define THRUST_DISABLE_CLANG_WARNING_END(x) -#endif - -#if (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC) && !defined(__CUDA_ARCH__) - #define THRUST_IGNORE_GCC_WARNING_IMPL(x) \ - THRUST_PP_STRINGIZE(GCC diagnostic ignored x) \ - /**/ - #define THRUST_IGNORE_GCC_WARNING(x) \ - THRUST_IGNORE_GCC_WARNING_IMPL(THRUST_PP_STRINGIZE(x)) \ - /**/ - - #define THRUST_DISABLE_GCC_WARNING_BEGIN(x) \ - _Pragma("GCC diagnostic push") \ - _Pragma(THRUST_IGNORE_GCC_WARNING(x)) \ - /**/ - #define THRUST_DISABLE_GCC_WARNING_END(x) \ - _Pragma("GCC diagnostic pop") \ - /**/ -#else - #define THRUST_DISABLE_GCC_WARNING_BEGIN(x) - #define THRUST_DISABLE_GCC_WARNING_END(x) -#endif - -#define THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING_BEGIN \ - THRUST_DISABLE_MSVC_WARNING_BEGIN(4244 4267) \ - /**/ -#define THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING_END \ - THRUST_DISABLE_MSVC_WARNING_END(4244 4267) \ - /**/ -#define THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING(x) \ - THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING_BEGIN \ - x; \ - THRUST_DISABLE_MSVC_POSSIBLE_LOSS_OF_DATA_WARNING_END \ - /**/ - -#define THRUST_DISABLE_MSVC_FORCING_VALUE_TO_BOOL_WARNING_BEGIN \ - THRUST_DISABLE_MSVC_WARNING_BEGIN(4800) \ - /**/ -#define THRUST_DISABLE_MSVC_FORCING_VALUE_TO_BOOL_WARNING_END \ - THRUST_DISABLE_MSVC_WARNING_END(4800) \ - /**/ -#define THRUST_DISABLE_MSVC_FORCING_VALUE_TO_BOOL_WARNING(x) \ - THRUST_DISABLE_MSVC_FORCING_VALUE_TO_BOOL_WARNING_BEGIN \ - x; \ - THRUST_DISABLE_MSVC_FORCING_VALUE_TO_BOOL_WARNING_END \ - /**/ - -#define THRUST_DISABLE_CLANG_SELF_ASSIGNMENT_WARNING_BEGIN \ - THRUST_DISABLE_CLANG_WARNING_BEGIN(-Wself-assign) \ - /**/ -#define THRUST_DISABLE_CLANG_SELF_ASSIGNMENT_WARNING_END \ - THRUST_DISABLE_CLANG_WARNING_END(-Wself-assign) \ - /**/ -#define THRUST_DISABLE_CLANG_SELF_ASSIGNMENT_WARNING(x) \ - THRUST_DISABLE_CLANG_SELF_ASSIGNMENT_WARNING_BEGIN \ - x; \ - THRUST_DISABLE_CLANG_SELF_ASSIGNMENT_WARNING_END \ - /**/ - -#define THRUST_DISABLE_CLANG_AND_GCC_INITIALIZER_REORDERING_WARNING_BEGIN \ - THRUST_DISABLE_CLANG_WARNING_BEGIN(-Wreorder) \ - THRUST_DISABLE_GCC_WARNING_BEGIN(-Wreorder) \ - /**/ -#define THRUST_DISABLE_CLANG_AND_GCC_INITIALIZER_REORDERING_WARNING_END \ - THRUST_DISABLE_CLANG_WARNING_END(-Wreorder) \ - THRUST_DISABLE_GCC_WARNING_END(-Wreorder) \ - /**/ -#define THRUST_DISABLE_CLANG_AND_GCC_INITIALIZER_REORDERING_WARNING(x) \ - 
THRUST_DISABLE_CLANG_AND_GCC_INITIALIZER_REORDERING_WARNING_BEGIN \ - x; \ - THRUST_DISABLE_CLANG_AND_GCC_INITIALIZER_REORDERING_WARNING_END \ - /**/ - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/type_traits/is_operator_plus_function_object.h b/spaces/CVPR/LIVE/thrust/thrust/type_traits/is_operator_plus_function_object.h deleted file mode 100644 index 0b2ebb107434f4a28f1b1901d6566ed92cb57dd1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/type_traits/is_operator_plus_function_object.h +++ /dev/null @@ -1,77 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file is_operator_plus_function_object.h - * \brief Type traits for determining if a \c BinaryFunction is equivalent to -/// \c operator+. - */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ - -namespace detail -{ - -template -struct is_operator_plus_function_object_impl; - -} // namespace detail - -/// Unary metafunction returns \c true_type if \c FunctionObject is equivalent -/// to \c operator<, and \c false_type otherwise. -template -#if THRUST_CPP_DIALECT >= 2011 -using is_operator_plus_function_object = -#else -struct is_operator_plus_function_object : -#endif - detail::is_operator_plus_function_object_impl -#if THRUST_CPP_DIALECT < 2011 -{} -#endif -; - -#if THRUST_CPP_DIALECT >= 2014 -/// constexpr bool that is \c true if \c FunctionObject is -/// equivalent to \c operator<, and \c false otherwise. 
-template -constexpr bool is_operator_plus_function_object_v - = is_operator_plus_function_object::value; -#endif - -/////////////////////////////////////////////////////////////////////////////// - -namespace detail -{ - -template -struct is_operator_plus_function_object_impl : false_type {}; -template -struct is_operator_plus_function_object_impl > : true_type {}; -template -struct is_operator_plus_function_object_impl > : true_type {}; - -} // namespace detail - -} // end namespace thrust - diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ.py deleted file mode 100644 index 97586b8f5330a9d995a0bffd1f5e7bd5b5656462..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_R_50_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/Cherrycreamco/webui/README.md b/spaces/Cherrycreamco/webui/README.md deleted file mode 100644 index 74607246ea3d716425e4b089e873cebaafe9535f..0000000000000000000000000000000000000000 --- a/spaces/Cherrycreamco/webui/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI -emoji: 🧿 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: camenduru/webui ---- - -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/tests.py b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/tests.py deleted file mode 100644 index 7ce503c2dd97ba78597f6ff6e4393132753573f6..0000000000000000000000000000000000000000 --- a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/tests.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.test import TestCase - -# Create your tests here. 
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/You.py b/spaces/CofAI/chat/g4f/Provider/Providers/You.py deleted file mode 100644 index 02a2774ce62bae33612a73272d584dc2acaf3eb0..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/You.py +++ /dev/null @@ -1,24 +0,0 @@ -import os -import json -import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://you.com' -model = 'gpt-3.5-turbo' -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'messages': messages}, separators=(',', ':')) - - cmd = ['python3', f'{path}/helpers/you.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - yield line.decode('utf-8') #[:-1] \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/color_picker.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/color_picker.py deleted file mode 100644 index 49881866187e229d2abdaf819a7c14c02e44c635..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/color_picker.py +++ /dev/null @@ -1,143 +0,0 @@ -"""gr.ColorPicker() component.""" - -from __future__ import annotations - -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import IOComponent, _Keywords -from gradio.events import ( - Blurrable, - Changeable, - Inputable, - Submittable, -) - -set_documentation_group("component") - - -@document() -class ColorPicker( - Changeable, Inputable, Submittable, Blurrable, IOComponent, StringSerializable -): - """ - Creates a color picker for user to select a color as string input. - Preprocessing: passes selected color value as a {str} into the function. - Postprocessing: expects a {str} returned from function and sets color picker value to it. - Examples-format: a {str} with a hexadecimal representation of a color, e.g. "#ff0000" for red. - Demos: color_picker, color_generator - """ - - def __init__( - self, - value: str | Callable | None = None, - *, - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: default text to provide in color picker. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. 
For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, will be rendered as an editable color picker; if False, editing will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def example_inputs(self) -> dict[str, Any]: - return { - "raw": "#000000", - "serialized": "#000000", - } - - def get_config(self): - return { - "value": self.value, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: str | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - interactive: bool | None = None, - ): - return { - "value": value, - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "interactive": interactive, - "__type__": "update", - } - - def preprocess(self, x: str | None) -> str | None: - """ - Any preprocessing needed to be performed on function input. - Parameters: - x: text - Returns: - text - """ - if x is None: - return None - else: - return str(x) - - def postprocess(self, y: str | None) -> str | None: - """ - Any postprocessing needed to be performed on function output. - Parameters: - y: text - Returns: - text - """ - if y is None: - return None - else: - return str(y) diff --git a/spaces/DataScienceEngineering/2-GradioLiveASR/app.py b/spaces/DataScienceEngineering/2-GradioLiveASR/app.py deleted file mode 100644 index b19b04136d7b2ab879c98b3d38b872a735352641..0000000000000000000000000000000000000000 --- a/spaces/DataScienceEngineering/2-GradioLiveASR/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import gradio as gr -import torch -import time -import librosa -import soundfile -import nemo.collections.asr as nemo_asr -import tempfile -import os -import uuid - -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -import torch - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime - -# --------------------------------------------- -# Dataset and Token links - change awacke1 to your own HF id, and add a HF_TOKEN copy to your repo for write permissions -# This should allow you to save your results to your own Dataset hosted on HF. 
- -DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/ASRLive.csv" -DATASET_REPO_ID = "awacke1/ASRLive.csv" -DATA_FILENAME = "ASRLive.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") - -PersistToDataset = False -#PersistToDataset = True # uncomment to save inference output to ASRLive.csv dataset - -if PersistToDataset: - try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) - except: - print("file not found") - repo = Repository( - local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN - ) - -def store_message(name: str, message: str): - if name and message: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())} - ) - # uncomment line below to begin saving - - commit_url = repo.push_to_hub() - ret = "" - with open(DATA_FILE, "r") as csvfile: - reader = csv.DictReader(csvfile) - - for row in reader: - ret += row - ret += "\r\n" - return ret - -# main ------------------------- -mname = "facebook/blenderbot-400M-distill" -model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -def take_last_tokens(inputs, note_history, history): - filterTokenCount = 128 # filter last 128 tokens - if inputs['input_ids'].shape[1] > filterTokenCount: - inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-filterTokenCount:].tolist()]) - inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-filterTokenCount:].tolist()]) - note_history = [' '.join(note_history[0].split(' ')[2:])] - history = history[1:] - return inputs, note_history, history - -def add_note_to_history(note, note_history): - note_history.append(note) - note_history = ' '.join(note_history) - return [note_history] - - - -SAMPLE_RATE = 16000 -model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge") -model.change_decoding_strategy(None) -model.eval() - -def process_audio_file(file): - data, sr = librosa.load(file) - if sr != SAMPLE_RATE: - data = librosa.resample(data, orig_sr=sr, target_sr=SAMPLE_RATE) - data = librosa.to_mono(data) - return data - - -def transcribe(audio, state = ""): - if state is None: - state = "" - audio_data = process_audio_file(audio) - with tempfile.TemporaryDirectory() as tmpdir: - audio_path = os.path.join(tmpdir, f'audio_{uuid.uuid4()}.wav') - soundfile.write(audio_path, audio_data, SAMPLE_RATE) - transcriptions = model.transcribe([audio_path]) - if type(transcriptions) == tuple and len(transcriptions) == 2: - transcriptions = transcriptions[0] - transcriptions = transcriptions[0] - - if PersistToDataset: - ret = store_message(transcriptions, state) # Save to dataset - uncomment to store into a dataset - hint you will need your HF_TOKEN - state = state + transcriptions + " " + ret - else: - state = state + transcriptions - return state, state - -gr.Interface( - fn=transcribe, - inputs=[ - gr.Audio(source="microphone", type='filepath', streaming=True), - "state", - ], - outputs=[ - "textbox", - "state" - ], - layout="horizontal", - theme="huggingface", - title="🗣️ASR-Gradio-Live🧠💾", - description=f"Live Automatic Speech Recognition (ASR).", - allow_flagging='never', - live=True, - article=f"Result💾 Dataset: [{DATASET_REPO_URL}]({DATASET_REPO_URL})" -).launch(debug=True) diff --git 
a/spaces/DataScienceGuild/AI-DataViz-Graphviz/README.md b/spaces/DataScienceGuild/AI-DataViz-Graphviz/README.md deleted file mode 100644 index 3dd4ddba3a8c437b1ef2cc4b81d0f106d96de57b..0000000000000000000000000000000000000000 --- a/spaces/DataScienceGuild/AI-DataViz-Graphviz/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI DataViz Graphviz -emoji: 🌍 -colorFrom: yellow -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/modnet.py b/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/modnet.py deleted file mode 100644 index 911832906d5b78f333757e231432195948607a1a..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/modnet.py +++ /dev/null @@ -1,481 +0,0 @@ -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import defaultdict - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -import paddleseg -from paddleseg.models import layers, losses -from paddleseg import utils -from paddleseg.cvlibs import manager, param_init -import numpy as np -import scipy - - -@manager.MODELS.add_component -class MODNet(nn.Layer): - """ - The MODNet implementation based on PaddlePaddle. - - The original article refers to - Zhanghan Ke, et, al. "Is a Green Screen Really Necessary for Real-Time Portrait Matting?" - (https://arxiv.org/pdf/2011.11961.pdf). - - Args: - backbone: backbone model. - hr(int, optional): The channels of high resolutions branch. Defautl: None. - pretrained(str, optional): The path of pretrianed model. Defautl: None. - - """ - - def __init__(self, backbone, hr_channels=32, pretrained=None): - super().__init__() - self.backbone = backbone - self.pretrained = pretrained - - self.head = MODNetHead( - hr_channels=hr_channels, backbone_channels=backbone.feat_channels) - self.init_weight() - self.blurer = GaussianBlurLayer(1, 3) - - def forward(self, inputs): - """ - If training, return a dict. - If evaluation, return the final alpha prediction. 
- """ - x = inputs['img'] - feat_list = self.backbone(x) - y = self.head(inputs=inputs, feat_list=feat_list) - - return y - - def loss(self, logit_dict, label_dict, loss_func_dict=None): - if loss_func_dict is None: - loss_func_dict = defaultdict(list) - loss_func_dict['semantic'].append(paddleseg.models.MSELoss()) - loss_func_dict['detail'].append(paddleseg.models.L1Loss()) - loss_func_dict['fusion'].append(paddleseg.models.L1Loss()) - loss_func_dict['fusion'].append(paddleseg.models.L1Loss()) - - loss = {} - # semantic loss - semantic_gt = F.interpolate( - label_dict['alpha'], - scale_factor=1 / 16, - mode='bilinear', - align_corners=False) - semantic_gt = self.blurer(semantic_gt) - # semantic_gt.stop_gradient=True - loss['semantic'] = loss_func_dict['semantic'][0](logit_dict['semantic'], - semantic_gt) - - # detail loss - trimap = label_dict['trimap'] - mask = (trimap == 128).astype('float32') - logit_detail = logit_dict['detail'] * mask - label_detail = label_dict['alpha'] * mask - loss_detail = loss_func_dict['detail'][0](logit_detail, label_detail) - loss_detail = loss_detail / (mask.mean() + 1e-6) - loss['detail'] = 10 * loss_detail - - # fusion loss - matte = logit_dict['matte'] - alpha = label_dict['alpha'] - transition_mask = label_dict['trimap'] == 128 - matte_boundary = paddle.where(transition_mask, matte, alpha) - # l1 loss - loss_fusion_l1 = loss_func_dict['fusion'][0]( - matte, - alpha) + 4 * loss_func_dict['fusion'][0](matte_boundary, alpha) - # composition loss - loss_fusion_comp = loss_func_dict['fusion'][1]( - matte * label_dict['img'], - alpha * label_dict['img']) + 4 * loss_func_dict['fusion'][1]( - matte_boundary * label_dict['img'], alpha * label_dict['img']) - # consisten loss with semantic - transition_mask = F.interpolate( - label_dict['trimap'], - scale_factor=1 / 16, - mode='nearest', - align_corners=False) - transition_mask = transition_mask == 128 - matte_con_sem = F.interpolate( - matte, scale_factor=1 / 16, mode='bilinear', align_corners=False) - matte_con_sem = self.blurer(matte_con_sem) - logit_semantic = logit_dict['semantic'].clone() - logit_semantic.stop_gradient = True - matte_con_sem = paddle.where(transition_mask, logit_semantic, - matte_con_sem) - if False: - import cv2 - matte_con_sem_num = matte_con_sem.numpy() - matte_con_sem_num = matte_con_sem_num[0].squeeze() - matte_con_sem_num = (matte_con_sem_num * 255).astype('uint8') - semantic = logit_dict['semantic'].numpy() - semantic = semantic[0].squeeze() - semantic = (semantic * 255).astype('uint8') - transition_mask = transition_mask.astype('uint8') - transition_mask = transition_mask.numpy() - transition_mask = (transition_mask[0].squeeze()) * 255 - cv2.imwrite('matte_con.png', matte_con_sem_num) - cv2.imwrite('semantic.png', semantic) - cv2.imwrite('transition.png', transition_mask) - mse_loss = paddleseg.models.MSELoss() - loss_fusion_con_sem = mse_loss(matte_con_sem, logit_dict['semantic']) - loss_fusion = loss_fusion_l1 + loss_fusion_comp + loss_fusion_con_sem - loss['fusion'] = loss_fusion - loss['fusion_l1'] = loss_fusion_l1 - loss['fusion_comp'] = loss_fusion_comp - loss['fusion_con_sem'] = loss_fusion_con_sem - - loss['all'] = loss['semantic'] + loss['detail'] + loss['fusion'] - - return loss - - def init_weight(self): - if self.pretrained is not None: - utils.load_entire_model(self, self.pretrained) - - -class MODNetHead(nn.Layer): - def __init__(self, hr_channels, backbone_channels): - super().__init__() - - self.lr_branch = LRBranch(backbone_channels) - self.hr_branch = 
HRBranch(hr_channels, backbone_channels) - self.f_branch = FusionBranch(hr_channels, backbone_channels) - self.init_weight() - - def forward(self, inputs, feat_list): - pred_semantic, lr8x, [enc2x, enc4x] = self.lr_branch(feat_list) - pred_detail, hr2x = self.hr_branch(inputs['img'], enc2x, enc4x, lr8x) - pred_matte = self.f_branch(inputs['img'], lr8x, hr2x) - - if self.training: - logit_dict = { - 'semantic': pred_semantic, - 'detail': pred_detail, - 'matte': pred_matte - } - return logit_dict - else: - return pred_matte - - def init_weight(self): - for layer in self.sublayers(): - if isinstance(layer, nn.Conv2D): - param_init.kaiming_uniform(layer.weight) - - -class FusionBranch(nn.Layer): - def __init__(self, hr_channels, enc_channels): - super().__init__() - self.conv_lr4x = Conv2dIBNormRelu( - enc_channels[2], hr_channels, 5, stride=1, padding=2) - - self.conv_f2x = Conv2dIBNormRelu( - 2 * hr_channels, hr_channels, 3, stride=1, padding=1) - self.conv_f = nn.Sequential( - Conv2dIBNormRelu( - hr_channels + 3, int(hr_channels / 2), 3, stride=1, padding=1), - Conv2dIBNormRelu( - int(hr_channels / 2), - 1, - 1, - stride=1, - padding=0, - with_ibn=False, - with_relu=False)) - - def forward(self, img, lr8x, hr2x): - lr4x = F.interpolate( - lr8x, scale_factor=2, mode='bilinear', align_corners=False) - lr4x = self.conv_lr4x(lr4x) - lr2x = F.interpolate( - lr4x, scale_factor=2, mode='bilinear', align_corners=False) - - f2x = self.conv_f2x(paddle.concat((lr2x, hr2x), axis=1)) - f = F.interpolate( - f2x, scale_factor=2, mode='bilinear', align_corners=False) - f = self.conv_f(paddle.concat((f, img), axis=1)) - pred_matte = F.sigmoid(f) - - return pred_matte - - -class HRBranch(nn.Layer): - """ - High Resolution Branch of MODNet - """ - - def __init__(self, hr_channels, enc_channels): - super().__init__() - - self.tohr_enc2x = Conv2dIBNormRelu( - enc_channels[0], hr_channels, 1, stride=1, padding=0) - self.conv_enc2x = Conv2dIBNormRelu( - hr_channels + 3, hr_channels, 3, stride=2, padding=1) - - self.tohr_enc4x = Conv2dIBNormRelu( - enc_channels[1], hr_channels, 1, stride=1, padding=0) - self.conv_enc4x = Conv2dIBNormRelu( - 2 * hr_channels, 2 * hr_channels, 3, stride=1, padding=1) - - self.conv_hr4x = nn.Sequential( - Conv2dIBNormRelu( - 2 * hr_channels + enc_channels[2] + 3, - 2 * hr_channels, - 3, - stride=1, - padding=1), - Conv2dIBNormRelu( - 2 * hr_channels, 2 * hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - 2 * hr_channels, hr_channels, 3, stride=1, padding=1)) - - self.conv_hr2x = nn.Sequential( - Conv2dIBNormRelu( - 2 * hr_channels, 2 * hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - 2 * hr_channels, hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu(hr_channels, hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu(hr_channels, hr_channels, 3, stride=1, padding=1)) - - self.conv_hr = nn.Sequential( - Conv2dIBNormRelu( - hr_channels + 3, hr_channels, 3, stride=1, padding=1), - Conv2dIBNormRelu( - hr_channels, - 1, - 1, - stride=1, - padding=0, - with_ibn=False, - with_relu=False)) - - def forward(self, img, enc2x, enc4x, lr8x): - img2x = F.interpolate( - img, scale_factor=1 / 2, mode='bilinear', align_corners=False) - img4x = F.interpolate( - img, scale_factor=1 / 4, mode='bilinear', align_corners=False) - - enc2x = self.tohr_enc2x(enc2x) - hr4x = self.conv_enc2x(paddle.concat((img2x, enc2x), axis=1)) - - enc4x = self.tohr_enc4x(enc4x) - hr4x = self.conv_enc4x(paddle.concat((hr4x, enc4x), axis=1)) - - lr4x = F.interpolate( - lr8x, scale_factor=2, 
mode='bilinear', align_corners=False) - hr4x = self.conv_hr4x(paddle.concat((hr4x, lr4x, img4x), axis=1)) - - hr2x = F.interpolate( - hr4x, scale_factor=2, mode='bilinear', align_corners=False) - hr2x = self.conv_hr2x(paddle.concat((hr2x, enc2x), axis=1)) - - pred_detail = None - if self.training: - hr = F.interpolate( - hr2x, scale_factor=2, mode='bilinear', align_corners=False) - hr = self.conv_hr(paddle.concat((hr, img), axis=1)) - pred_detail = F.sigmoid(hr) - - return pred_detail, hr2x - - -class LRBranch(nn.Layer): - def __init__(self, backbone_channels): - super().__init__() - self.se_block = SEBlock(backbone_channels[4], reduction=4) - self.conv_lr16x = Conv2dIBNormRelu( - backbone_channels[4], backbone_channels[3], 5, stride=1, padding=2) - self.conv_lr8x = Conv2dIBNormRelu( - backbone_channels[3], backbone_channels[2], 5, stride=1, padding=2) - self.conv_lr = Conv2dIBNormRelu( - backbone_channels[2], - 1, - 3, - stride=2, - padding=1, - with_ibn=False, - with_relu=False) - - def forward(self, feat_list): - enc2x, enc4x, enc32x = feat_list[0], feat_list[1], feat_list[4] - - enc32x = self.se_block(enc32x) - lr16x = F.interpolate( - enc32x, scale_factor=2, mode='bilinear', align_corners=False) - lr16x = self.conv_lr16x(lr16x) - lr8x = F.interpolate( - lr16x, scale_factor=2, mode='bilinear', align_corners=False) - lr8x = self.conv_lr8x(lr8x) - - pred_semantic = None - if self.training: - lr = self.conv_lr(lr8x) - pred_semantic = F.sigmoid(lr) - - return pred_semantic, lr8x, [enc2x, enc4x] - - -class IBNorm(nn.Layer): - """ - Combine Instance Norm and Batch Norm into One Layer - """ - - def __init__(self, in_channels): - super().__init__() - self.bnorm_channels = in_channels // 2 - self.inorm_channels = in_channels - self.bnorm_channels - - self.bnorm = nn.BatchNorm2D(self.bnorm_channels) - self.inorm = nn.InstanceNorm2D(self.inorm_channels) - - def forward(self, x): - bn_x = self.bnorm(x[:, :self.bnorm_channels, :, :]) - in_x = self.inorm(x[:, self.bnorm_channels:, :, :]) - - return paddle.concat((bn_x, in_x), 1) - - -class Conv2dIBNormRelu(nn.Layer): - """ - Convolution + IBNorm + Relu - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - bias_attr=None, - with_ibn=True, - with_relu=True): - - super().__init__() - - layers = [ - nn.Conv2D( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias_attr=bias_attr) - ] - - if with_ibn: - layers.append(IBNorm(out_channels)) - - if with_relu: - layers.append(nn.ReLU()) - - self.layers = nn.Sequential(*layers) - - def forward(self, x): - return self.layers(x) - - -class SEBlock(nn.Layer): - """ - SE Block Proposed in https://arxiv.org/pdf/1709.01507.pdf - """ - - def __init__(self, num_channels, reduction=1): - super().__init__() - self.pool = nn.AdaptiveAvgPool2D(1) - self.conv = nn.Sequential( - nn.Conv2D( - num_channels, - int(num_channels // reduction), - 1, - bias_attr=False), nn.ReLU(), - nn.Conv2D( - int(num_channels // reduction), - num_channels, - 1, - bias_attr=False), nn.Sigmoid()) - - def forward(self, x): - w = self.pool(x) - w = self.conv(w) - return w * x - - -class GaussianBlurLayer(nn.Layer): - """ Add Gaussian Blur to a 4D tensors - This layer takes a 4D tensor of {N, C, H, W} as input. - The Gaussian blur will be performed in given channel number (C) splitly. 
- """ - - def __init__(self, channels, kernel_size): - """ - Args: - channels (int): Channel for input tensor - kernel_size (int): Size of the kernel used in blurring - """ - - super(GaussianBlurLayer, self).__init__() - self.channels = channels - self.kernel_size = kernel_size - assert self.kernel_size % 2 != 0 - - self.op = nn.Sequential( - nn.Pad2D(int(self.kernel_size / 2), mode='reflect'), - nn.Conv2D( - channels, - channels, - self.kernel_size, - stride=1, - padding=0, - bias_attr=False, - groups=channels)) - - self._init_kernel() - self.op[1].weight.stop_gradient = True - - def forward(self, x): - """ - Args: - x (paddle.Tensor): input 4D tensor - Returns: - paddle.Tensor: Blurred version of the input - """ - - if not len(list(x.shape)) == 4: - print('\'GaussianBlurLayer\' requires a 4D tensor as input\n') - exit() - elif not x.shape[1] == self.channels: - print('In \'GaussianBlurLayer\', the required channel ({0}) is' - 'not the same as input ({1})\n'.format( - self.channels, x.shape[1])) - exit() - - return self.op(x) - - def _init_kernel(self): - sigma = 0.3 * ((self.kernel_size - 1) * 0.5 - 1) + 0.8 - - n = np.zeros((self.kernel_size, self.kernel_size)) - i = int(self.kernel_size / 2) - n[i, i] = 1 - kernel = scipy.ndimage.gaussian_filter(n, sigma) - kernel = kernel.astype('float32') - kernel = kernel[np.newaxis, np.newaxis, :, :] - paddle.assign(kernel, self.op[1].weight) diff --git a/spaces/DragGan/DragGan-Inversion/PTI/utils/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/model_irse.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/model_irse.py deleted file mode 100644 index daa7a98457de533545a16b2e09030d8414c5b00e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/model_irse.py +++ /dev/null @@ -1,91 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from encoder4editing.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [ - 50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def 
forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', - drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', - drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', - drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', - drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', - drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', - drop_ratio=0.4, affine=False) - return model diff --git a/spaces/DragGan/DragGan/stylegan_human/edit.py b/spaces/DragGan/DragGan/stylegan_human/edit.py deleted file mode 100644 index e91e625a21c914e42430b7b8779bd5335d76e037..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/edit.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -import os -import sys -import torch -import numpy as np -sys.path.append(".") -from torch_utils.models import Generator -import click -import cv2 -from typing import List, Optional -import subprocess -import legacy -from edit.edit_helper import conv_warper, decoder, encoder_ifg, encoder_ss, encoder_sefa - - -""" -Edit generated images with different SOTA methods. - Notes: - 1. We provide some latent directions in the folder, you can play around with them. - 2. ''upper_length'' and ''bottom_length'' of ''attr_name'' are available for demo. - 3. Layers to control and editing strength are set in edit/edit_config.py. 
- -Examples: - -\b -# Editing with InterfaceGAN, StyleSpace, and Sefa -python edit.py --network pretrained_models/stylegan_human_v2_1024.pkl --attr_name upper_length \\ - --seeds 61531,61570,61571,61610 --outdir outputs/edit_results - - -# Editing using inverted latent code -python edit.py ---network outputs/pti/checkpoints/model_test.pkl --attr_name upper_length \\ - --outdir outputs/edit_results --real True --real_w_path outputs/pti/embeddings/test/PTI/test/0.pt --real_img_path aligned_image/test.png - -""" - - - -@click.command() -@click.pass_context -@click.option('--network', 'ckpt_path', help='Network pickle filename', required=True) -@click.option('--attr_name', help='choose one of the attr: upper_length or bottom_length', type=str, required=True) -@click.option('--trunc', 'truncation', type=float, help='Truncation psi', default=0.8, show_default=True) -@click.option('--gen_video', type=bool, default=True, help='If want to generate video') -@click.option('--combine', type=bool, default=True, help='If want to combine different editing results in the same frame') -@click.option('--seeds', type=legacy.num_range, help='List of random seeds') -@click.option('--outdir', help='Where to save the output images', type=str, required=True, default='outputs/editing', metavar='DIR') -@click.option('--real', type=bool, help='True for editing real image', default=False) -@click.option('--real_w_path', help='Path of latent code for real image') -@click.option('--real_img_path', help='Path of real image, this just concat real image with inverted and edited results together') - - - -def main( - ctx: click.Context, - ckpt_path: str, - attr_name: str, - truncation: float, - gen_video: bool, - combine: bool, - seeds: Optional[List[int]], - outdir: str, - real: str, - real_w_path: str, - real_img_path: str -): - ## convert pkl to pth - # if not os.path.exists(ckpt_path.replace('.pkl','.pth')): - legacy.convert(ckpt_path, ckpt_path.replace('.pkl','.pth'), G_only=real) - ckpt_path = ckpt_path.replace('.pkl','.pth') - print("start...", flush=True) - config = {"latent" : 512, "n_mlp" : 8, "channel_multiplier": 2} - generator = Generator( - size = 1024, - style_dim=config["latent"], - n_mlp=config["n_mlp"], - channel_multiplier=config["channel_multiplier"] - ) - - generator.load_state_dict(torch.load(ckpt_path)['g_ema']) - generator.eval().cuda() - - with torch.no_grad(): - mean_path = os.path.join('edit','mean_latent.pkl') - if not os.path.exists(mean_path): - mean_n = 3000 - mean_latent = generator.mean_latent(mean_n).detach() - legacy.save_obj(mean_latent, mean_path) - else: - mean_latent = legacy.load_pkl(mean_path).cuda() - finals = [] - - ## -- selected sample seeds -- ## - # seeds = [60948,60965,61174,61210,61511,61598,61610] #bottom -> long - # [60941,61064,61103,61313,61531,61570,61571] # bottom -> short - # [60941,60965,61064,61103,6117461210,61531,61570,61571,61610] # upper --> long - # [60948,61313,61511,61598] # upper --> short - if real: seeds = [0] - - for t in seeds: - if real: # now assume process single real image only - if real_img_path: - real_image = cv2.imread(real_img_path) - real_image = cv2.cvtColor(real_image, cv2.COLOR_BGR2RGB) - import torchvision.transforms as transforms - transform = transforms.Compose( # normalize to (-1, 1) - [transforms.ToTensor(), - transforms.Normalize(mean=(.5,.5,.5), std=(.5,.5,.5))] - ) - real_image = transform(real_image).unsqueeze(0).cuda() - - test_input = torch.load(real_w_path) - output, _ = generator(test_input, False, 
truncation=1,input_is_latent=True, real=True) - - else: # generate image from random seeds - test_input = torch.from_numpy(np.random.RandomState(t).randn(1, 512)).float().cuda() # torch.Size([1, 512]) - output, _ = generator([test_input], False, truncation=truncation, truncation_latent=mean_latent, real=real) - - # interfacegan - style_space, latent, noise = encoder_ifg(generator, test_input, attr_name, truncation, mean_latent,real=real) - image1 = decoder(generator, style_space, latent, noise) - # stylespace - style_space, latent, noise = encoder_ss(generator, test_input, attr_name, truncation, mean_latent,real=real) - image2 = decoder(generator, style_space, latent, noise) - # sefa - latent, noise = encoder_sefa(generator, test_input, attr_name, truncation, mean_latent,real=real) - image3, _ = generator([latent], noise=noise, input_is_latent=True) - if real_img_path: - final = torch.cat((real_image, output, image1, image2, image3), 3) - else: - final = torch.cat((output, image1, image2, image3), 3) - - # legacy.visual(output, f'{outdir}/{attr_name}_{t:05d}_raw.jpg') - # legacy.visual(image1, f'{outdir}/{attr_name}_{t:05d}_ifg.jpg') - # legacy.visual(image2, f'{outdir}/{attr_name}_{t:05d}_ss.jpg') - # legacy.visual(image3, f'{outdir}/{attr_name}_{t:05d}_sefa.jpg') - - if gen_video: - total_step = 90 - if real: - video_ifg_path = f"{outdir}/video/ifg_{attr_name}_{real_w_path.split('/')[-2]}/" - video_ss_path = f"{outdir}/video/ss_{attr_name}_{real_w_path.split('/')[-2]}/" - video_sefa_path = f"{outdir}/video/ss_{attr_name}_{real_w_path.split('/')[-2]}/" - else: - video_ifg_path = f"{outdir}/video/ifg_{attr_name}_{t:05d}/" - video_ss_path = f"{outdir}/video/ss_{attr_name}_{t:05d}/" - video_sefa_path = f"{outdir}/video/ss_{attr_name}_{t:05d}/" - video_comb_path = f"{outdir}/video/tmp" - - if combine: - if not os.path.exists(video_comb_path): - os.makedirs(video_comb_path) - else: - if not os.path.exists(video_ifg_path): - os.makedirs(video_ifg_path) - if not os.path.exists(video_ss_path): - os.makedirs(video_ss_path) - if not os.path.exists(video_sefa_path): - os.makedirs(video_sefa_path) - for i in range(total_step): - style_space, latent, noise = encoder_ifg(generator, test_input, attr_name, truncation, mean_latent, step=i, total=total_step,real=real) - image1 = decoder(generator, style_space, latent, noise) - style_space, latent, noise = encoder_ss(generator, test_input, attr_name, truncation, mean_latent, step=i, total=total_step,real=real) - image2 = decoder(generator, style_space, latent, noise) - latent, noise = encoder_sefa(generator, test_input, attr_name, truncation, mean_latent, step=i, total=total_step,real=real) - image3, _ = generator([latent], noise=noise, input_is_latent=True) - if combine: - if real_img_path: - comb_img = torch.cat((real_image, output, image1, image2, image3), 3) - else: - comb_img = torch.cat((output, image1, image2, image3), 3) - legacy.visual(comb_img, os.path.join(video_comb_path, f'{i:05d}.jpg')) - else: - legacy.visual(image1, os.path.join(video_ifg_path, f'{i:05d}.jpg')) - legacy.visual(image2, os.path.join(video_ss_path, f'{i:05d}.jpg')) - if combine: - cmd=f"ffmpeg -hide_banner -loglevel error -y -r 30 -i {video_comb_path}/%05d.jpg -vcodec libx264 -pix_fmt yuv420p {video_ifg_path.replace('ifg_', '')[:-1] + '.mp4'}" - subprocess.call(cmd, shell=True) - else: - cmd=f"ffmpeg -hide_banner -loglevel error -y -r 30 -i {video_ifg_path}/%05d.jpg -vcodec libx264 -pix_fmt yuv420p {video_ifg_path[:-1] + '.mp4'}" - subprocess.call(cmd, shell=True) - 
cmd=f"ffmpeg -hide_banner -loglevel error -y -r 30 -i {video_ss_path}/%05d.jpg -vcodec libx264 -pix_fmt yuv420p {video_ss_path[:-1] + '.mp4'}" - subprocess.call(cmd, shell=True) - - # interfacegan, stylespace, sefa - finals.append(final) - - final = torch.cat(finals, 2) - legacy.visual(final, os.path.join(outdir,'final.jpg')) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Dragonnnext/charybdis/Dockerfile b/spaces/Dragonnnext/charybdis/Dockerfile deleted file mode 100644 index cee9bcd0c69dbeb6e903c3f64531b2ff70f021f6..0000000000000000000000000000000000000000 --- a/spaces/Dragonnnext/charybdis/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitlab.com/khanon/oai-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/psgformer/psgformer_r50.py b/spaces/ECCV2022/PSG/OpenPSG/configs/psgformer/psgformer_r50.py deleted file mode 100644 index 31f77e61bf46c57f8b064ca94d6a5d35b8008411..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/psgformer/psgformer_r50.py +++ /dev/null @@ -1,96 +0,0 @@ -model = dict( - type='PSGTr', - backbone=dict(type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='pytorch', - init_cfg=dict(type='Pretrained', - checkpoint='torchvision://resnet50')), - bbox_head=dict( - type='PSGFormerHead', - num_classes=80, - num_relations=117, - in_channels=2048, - transformer=dict( - type='DualTransformer', - encoder=dict(type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'ffn', - 'norm'))), - decoder1=dict(type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', - 'cross_attn', 'norm', 'ffn', - 'norm'))), - decoder2=dict(type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', - 'cross_attn', 'norm', 'ffn', - 'norm'))), - ), - positional_encoding=dict(type='SinePositionalEncoding', - num_feats=128, - normalize=True), - rel_loss_cls=dict(type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=2.0, - class_weight=1.0), - sub_id_loss=dict(type='MultilabelCrossEntropy', loss_weight=2.0), - obj_id_loss=dict(type='MultilabelCrossEntropy', loss_weight=2.0), - loss_cls=dict(type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=4.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=3.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - focal_loss=dict(type='BCEFocalLoss', loss_weight=1.0), - dice_loss=dict(type='psgtrDiceLoss', loss_weight=1.0)), - # 
training and testing settings - train_cfg=dict(id_assigner=dict(type='IdMatcher', - sub_id_cost=dict(type='ClassificationCost', - weight=1.), - obj_id_cost=dict(type='ClassificationCost', - weight=1.), - r_cls_cost=dict(type='ClassificationCost', - weight=1.)), - bbox_assigner=dict(type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', - weight=4.0), - reg_cost=dict(type='BBoxL1Cost', - weight=3.0), - iou_cost=dict(type='IoUCost', - iou_mode='giou', - weight=2.0))), - test_cfg=dict(max_per_img=100)) diff --git a/spaces/Eddycrack864/Applio-Inference/infer/modules/uvr5/modules.py b/spaces/Eddycrack864/Applio-Inference/infer/modules/uvr5/modules.py deleted file mode 100644 index f63ac6a794100cc95da21dcba78b23377a1f133d..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/modules/uvr5/modules.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -import traceback -import logging - -logger = logging.getLogger(__name__) - -import ffmpeg -import torch - -from configs.config import Config -from infer.modules.uvr5.mdxnet import MDXNetDereverb -from infer.modules.uvr5.preprocess import AudioPre, AudioPreDeEcho - -config = Config() - - -def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0): - infos = [] - try: - inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - save_root_vocal = ( - save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - save_root_ins = ( - save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - if model_name == "onnx_dereverb_By_FoxJoy": - pre_fun = MDXNetDereverb(15, config.device) - else: - func = AudioPre if "DeEcho" not in model_name else AudioPreDeEcho - pre_fun = func( - agg=int(agg), - model_path=os.path.join( - os.getenv("weight_uvr5_root"), model_name + ".pth" - ), - device=config.device, - is_half=config.is_half, - ) - if inp_root != "": - paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)] - else: - paths = [path.name for path in paths] - for path in paths: - inp_path = os.path.join(inp_root, path) - need_reformat = 1 - done = 0 - try: - info = ffmpeg.probe(inp_path, cmd="ffprobe") - if ( - info["streams"][0]["channels"] == 2 - and info["streams"][0]["sample_rate"] == "44100" - ): - need_reformat = 0 - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - done = 1 - except: - need_reformat = 1 - traceback.print_exc() - if need_reformat == 1: - tmp_path = "%s/%s.reformatted.wav" % ( - os.path.join(os.environ["TEMP"]), - os.path.basename(inp_path), - ) - os.system( - "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y" - % (inp_path, tmp_path) - ) - inp_path = tmp_path - try: - if done == 0: - pre_fun.path_audio( - inp_path, save_root_ins, save_root_vocal, format0 - ) - infos.append("%s->Success" % (os.path.basename(inp_path))) - yield "\n".join(infos) - except: - try: - if done == 0: - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - infos.append("%s->Success" % (os.path.basename(inp_path))) - yield "\n".join(infos) - except: - infos.append( - "%s->%s" % (os.path.basename(inp_path), traceback.format_exc()) - ) - yield "\n".join(infos) - except: - infos.append(traceback.format_exc()) - yield "\n".join(infos) - finally: - try: - if model_name == "onnx_dereverb_By_FoxJoy": - del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - del pre_fun - except: - traceback.print_exc() - if torch.cuda.is_available(): - 
torch.cuda.empty_cache() - logger.info("Executed torch.cuda.empty_cache()") - yield "\n".join(infos) diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/models/encodec.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/models/encodec.py deleted file mode 100644 index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/audiocraft/models/encodec.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -from einops import rearrange -import torch -from torch import nn - -from .. import quantization as qt - - -class CompressionModel(ABC, nn.Module): - - @abstractmethod - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - ... - - @abstractmethod - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """See `EncodecModel.encode`""" - ... - - @abstractmethod - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """See `EncodecModel.decode`""" - ... - - @property - @abstractmethod - def channels(self) -> int: - ... - - @property - @abstractmethod - def frame_rate(self) -> int: - ... - - @property - @abstractmethod - def sample_rate(self) -> int: - ... - - @property - @abstractmethod - def cardinality(self) -> int: - ... - - @property - @abstractmethod - def num_codebooks(self) -> int: - ... - - @property - @abstractmethod - def total_codebooks(self) -> int: - ... - - @abstractmethod - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - ... - - -class EncodecModel(CompressionModel): - """Encodec model operating on the raw waveform. - - Args: - encoder (nn.Module): Encoder network. - decoder (nn.Module): Decoder network. - quantizer (qt.BaseQuantizer): Quantizer network. - frame_rate (int): Frame rate for the latent representation. - sample_rate (int): Audio sample rate. - channels (int): Number of audio channels. - causal (bool): Whether to use a causal version of the model. - renormalize (bool): Whether to renormalize the audio before running the model. - """ - # we need assignement to override the property in the abstract class, - # I couldn't find a better way... - frame_rate: int = 0 - sample_rate: int = 0 - channels: int = 0 - - def __init__(self, - encoder: nn.Module, - decoder: nn.Module, - quantizer: qt.BaseQuantizer, - frame_rate: int, - sample_rate: int, - channels: int, - causal: bool = False, - renormalize: bool = False): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.quantizer = quantizer - self.frame_rate = frame_rate - self.sample_rate = sample_rate - self.channels = channels - self.renormalize = renormalize - self.causal = causal - if self.causal: - # we force disabling here to avoid handling linear overlap of segments - # as supported in original EnCodec codebase. - assert not self.renormalize, 'Causal model does not support renormalize' - - @property - def total_codebooks(self): - """Total number of quantizer codebooks available. - """ - return self.quantizer.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - """ - return self.quantizer.num_codebooks - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. 
- """ - self.quantizer.set_num_codebooks(n) - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - return self.quantizer.bins - - def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - scale: tp.Optional[torch.Tensor] - if self.renormalize: - mono = x.mean(dim=1, keepdim=True) - volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt() - scale = 1e-8 + volume - x = x / scale - scale = scale.view(-1, 1) - else: - scale = None - return x, scale - - def postprocess(self, - x: torch.Tensor, - scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor: - if scale is not None: - assert self.renormalize - x = x * scale.view(-1, 1, 1) - return x - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - assert x.dim() == 3 - length = x.shape[-1] - x, scale = self.preprocess(x) - - emb = self.encoder(x) - q_res = self.quantizer(emb, self.frame_rate) - out = self.decoder(q_res.x) - - # remove extra padding added by the encoder and decoder - assert out.shape[-1] >= length, (out.shape[-1], length) - out = out[..., :length] - - q_res.x = self.postprocess(out, scale) - - return q_res - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Encode the given input tensor to quantized representation along with scale parameter. - - Args: - x (torch.Tensor): Float tensor of shape [B, C, T] - - Returns: - codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of: - codes a float tensor of shape [B, K, T] with K the number of codebooks used and T the timestep. - scale a float tensor containing the scale for audio renormalizealization. - """ - assert x.dim() == 3 - x, scale = self.preprocess(x) - emb = self.encoder(x) - codes = self.quantizer.encode(emb) - return codes, scale - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """Decode the given codes to a reconstructed representation, using the scale to perform - audio denormalization if needed. - - Args: - codes (torch.Tensor): Int tensor of shape [B, K, T] - scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value. - - Returns: - out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio. - """ - emb = self.quantizer.decode(codes) - out = self.decoder(emb) - out = self.postprocess(out, scale) - # out contains extra padding added by the encoder and decoder - return out - - -class FlattenedCompressionModel(CompressionModel): - """Wraps a CompressionModel and flatten its codebooks, e.g. - instead of returning [B, K, T], return [B, S, T * (K // S)] with - S the number of codebooks per step, and `K // S` the number of 'virtual steps' - for each real time step. - - Args: - model (CompressionModel): compression model to wrap. - codebooks_per_step (int): number of codebooks to keep per step, - this must divide the number of codebooks provided by the wrapped model. - extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1, - if each codebook has a cardinality N, then the first codebook will - use the range [0, N - 1], and the second [N, 2 N - 1] etc. - On decoding, this can lead to potentially invalid sequences. - Any invalid entry will be silently remapped to the proper range - with a modulo. 
- """ - def __init__(self, model: CompressionModel, codebooks_per_step: int = 1, - extend_cardinality: bool = True): - super().__init__() - self.model = model - self.codebooks_per_step = codebooks_per_step - self.extend_cardinality = extend_cardinality - - @property - def total_codebooks(self): - return self.model.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - - ..Warning:: this reports the number of codebooks after the flattening - of the codebooks! - """ - assert self.model.num_codebooks % self.codebooks_per_step == 0 - return self.codebooks_per_step - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - - ..Warning:: this sets the number of codebooks **before** the flattening - of the codebooks. - """ - assert n % self.codebooks_per_step == 0 - self.model.set_num_codebooks(n) - - @property - def num_virtual_steps(self) -> int: - """Return the number of virtual steps, e.g. one real step - will be split into that many steps. - """ - return self.model.num_codebooks // self.codebooks_per_step - - @property - def frame_rate(self) -> int: - return self.model.frame_rate * self.num_virtual_steps - - @property - def sample_rate(self) -> int: - return self.model.sample_rate - - @property - def channels(self) -> int: - return self.model.channels - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - if self.extend_cardinality: - return self.model.cardinality * self.num_virtual_steps - else: - return self.model.cardinality - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - raise NotImplementedError("Not supported, use encode and decode.") - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - indices, scales = self.model.encode(x) - B, K, T = indices.shape - indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step) - if self.extend_cardinality: - for virtual_step in range(1, self.num_virtual_steps): - indices[..., virtual_step] += self.model.cardinality * virtual_step - indices = rearrange(indices, 'b k t v -> b k (t v)') - return (indices, scales) - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - B, K, T = codes.shape - assert T % self.num_virtual_steps == 0 - codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps) - # We silently ignore potential errors from the LM when - # using extend_cardinality. - codes = codes % self.model.cardinality - return self.model.decode(codes, scale) diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/models/lm.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/models/lm.py deleted file mode 100644 index 43f82b42340dd9e721a3a76fa58e27f70fe2b4e5..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/audiocraft/models/lm.py +++ /dev/null @@ -1,526 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. - """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). - """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. 
- - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (Optional[float]): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (Optional[str]): Method for weight initialization. - depthwise_init (Optional[str]): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. - """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (Optional[str]): Depwthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. 
If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initalize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." - assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): indices of the codes to model. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks' - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. 
- - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (Dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
- """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - if self.two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - if use_sampling: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: bool = False, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T]. - conditions_tensors (Dict[str, torch.Tensor]): Set of conditions or None. - num_samples (int or None): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - remove_prompts (bool): Whether to remove prompts from generation or not. - Returns: - torch.Tensor: Generated tokens. - """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistents. 
- possible_num_samples = [] - if num_samples is not None: - possible_num_samples.append(num_samples) - elif prompt is not None: - possible_num_samples.append(prompt.shape[0]) - elif conditions: - possible_num_samples.append(len(conditions)) - else: - possible_num_samples.append(1) - assert [x == possible_num_samples[0] for x in possible_num_samples], "Inconsitent inputs shapes" - num_samples = possible_num_samples[0] - - # below we create set of conditions: one conditional and one unconditional - # to do that we merge the regular condition together with the null condition - # we then do 1 forward pass instead of 2. - # the reason for that is two-fold: - # 1. it is about x2 faster than doing 2 forward passes - # 2. avoid the streaming API treating the 2 passes as part of different time steps - # We also support doing two different passes, in particular to ensure that - # the padding structure is exactly the same between train anf test. - # With a batch size of 1, this can be slower though. - cfg_conditions: CFGConditions - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if conditions: - null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions) - if two_step_cfg: - cfg_conditions = ( - self.condition_provider(self.condition_provider.tokenize(conditions)), - self.condition_provider(self.condition_provider.tokenize(null_conditions)), - ) - else: - conditions = conditions + null_conditions - tokenized = self.condition_provider.tokenize(conditions) - cfg_conditions = self.condition_provider(tokenized) - else: - cfg_conditions = {} - - if prompt is None: - assert num_samples > 0 - prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device) - - B, K, T = prompt.shape - start_offset = T - assert start_offset < max_gen_len - - pattern = self.pattern_provider.get_pattern(max_gen_len) - # this token is used as default value for codes that are not generated yet - unknown_token = -1 - - # we generate codes up to the max_gen_len that will be mapped to the pattern sequence - gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device) - # filling the gen_codes with the prompt if needed - gen_codes[..., :start_offset] = prompt - # create the gen_sequence with proper interleaving from the pattern: [B, K, S] - gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id) - # retrieve the start_offset in the sequence: - # it is the first sequence step that contains the `start_offset` timestep - start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset) - assert start_offset_sequence is not None - - with self.streaming(): - unconditional_state = self.get_streaming_state() - prev_offset = 0 - gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S] - for offset in range(start_offset_sequence, gen_sequence_len): - # get current sequence (note that the streaming API is providing the caching over previous offsets) - curr_sequence = gen_sequence[..., prev_offset:offset] - curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1) - if check: - # check coherence between mask and sequence - assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all() - # should never happen as gen_sequence is filled progressively - assert not (curr_sequence == unknown_token).any() - # sample next token from the model, next token shape is [B, K, 1] - next_token = self._sample_next_token( - curr_sequence, cfg_conditions, 
unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/EuroPython2022/OCR-Translate/app.py b/spaces/EuroPython2022/OCR-Translate/app.py deleted file mode 100644 index ca4e074725dd166462f1b9378610e4a623078c5d..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/OCR-Translate/app.py +++ /dev/null @@ -1,180 +0,0 @@ -# OCR Translate v0.2 -# 创建人:曾逸夫 -# 创建时间:2022-07-19 - -import os - -os.system("sudo apt-get install xclip") - -import gradio as gr -import nltk -import pyclip -import pytesseract -from nltk.tokenize import sent_tokenize -from transformers import MarianMTModel, MarianTokenizer - -nltk.download('punkt') - -OCR_TR_DESCRIPTION = '''# OCR Translate v0.2 -
    OCR translation system based on Tesseract
    ''' - -# 图片路径 -img_dir = "./data" - -# 获取tesseract语言列表 -choices = os.popen('tesseract --list-langs').read().split('\n')[1:-1] - - -# 翻译模型选择 -def model_choice(src="en", trg="zh"): - # https://huggingface.co/Helsinki-NLP/opus-mt-zh-en - # https://huggingface.co/Helsinki-NLP/opus-mt-en-zh - model_name = f"Helsinki-NLP/opus-mt-{src}-{trg}" # 模型名称 - - tokenizer = MarianTokenizer.from_pretrained(model_name) # 分词器 - model = MarianMTModel.from_pretrained(model_name) # 模型 - - return tokenizer, model - - -# tesseract语言列表转pytesseract语言 -def ocr_lang(lang_list): - lang_str = "" - lang_len = len(lang_list) - if lang_len == 1: - return lang_list[0] - else: - for i in range(lang_len): - lang_list.insert(lang_len - i, "+") - - lang_str = "".join(lang_list[:-1]) - return lang_str - - -# ocr tesseract -def ocr_tesseract(img, languages): - ocr_str = pytesseract.image_to_string(img, lang=ocr_lang(languages)) - return ocr_str - - -# 清除 -def clear_content(): - return None - - -# 复制到剪贴板 -def cp_text(input_text): - # sudo apt-get install xclip - try: - pyclip.copy(input_text) - except Exception as e: - print("sudo apt-get install xclip") - print(e) - - -# 清除剪贴板 -def cp_clear(): - pyclip.clear() - - -# 翻译 -def translate(input_text, inputs_transStyle): - # 参考:https://huggingface.co/docs/transformers/model_doc/marian - if input_text is None or input_text == "": - return "System prompt: There is no content to translate!" - - # 选择翻译模型 - trans_src, trans_trg = inputs_transStyle.split("-")[0], inputs_transStyle.split("-")[1] - tokenizer, model = model_choice(trans_src, trans_trg) - - translate_text = "" - input_text_list = input_text.split("\n\n") - - translate_text_list_tmp = [] - for i in range(len(input_text_list)): - if input_text_list[i] != "": - translate_text_list_tmp.append(input_text_list[i]) - - for i in range(len(translate_text_list_tmp)): - translated_sub = model.generate( - **tokenizer(sent_tokenize(translate_text_list_tmp[i]), return_tensors="pt", truncation=True, padding=True)) - tgt_text_sub = [tokenizer.decode(t, skip_special_tokens=True) for t in translated_sub] - translate_text_sub = "".join(tgt_text_sub) - translate_text = translate_text + "\n\n" + translate_text_sub - - return translate_text[2:] - - -def main(): - - with gr.Blocks(css='style.css') as ocr_tr: - gr.Markdown(OCR_TR_DESCRIPTION) - - # -------------- OCR 文字提取 -------------- - with gr.Box(): - - with gr.Row(): - gr.Markdown("### Step 01: Text Extraction") - - with gr.Row(): - with gr.Column(): - with gr.Row(): - inputs_img = gr.Image(image_mode="RGB", source="upload", type="pil", label="image") - with gr.Row(): - inputs_lang = gr.CheckboxGroup(choices=["chi_sim", "eng"], - type="value", - value=['eng'], - label='language') - - with gr.Row(): - clear_img_btn = gr.Button('Clear') - ocr_btn = gr.Button(value='OCR Extraction', variant="primary") - - with gr.Column(): - with gr.Row(): - outputs_text = gr.Textbox(label="Extract content", lines=20) - with gr.Row(): - inputs_transStyle = gr.Radio(choices=["zh-en", "en-zh"], - type="value", - value="zh-en", - label='translation mode') - with gr.Row(): - clear_text_btn = gr.Button('Clear') - translate_btn = gr.Button(value='Translate', variant="primary") - - with gr.Row(): - example_list = [["./data/test.png", ["eng"]], ["./data/test02.png", ["eng"]], - ["./data/test03.png", ["chi_sim"]]] - gr.Examples(example_list, [inputs_img, inputs_lang], outputs_text, ocr_tesseract, cache_examples=False) - - # -------------- 翻译 -------------- - with gr.Box(): - - with gr.Row(): - gr.Markdown("### Step 02: 
Translation") - - with gr.Row(): - outputs_tr_text = gr.Textbox(label="Translate Content", lines=20) - - with gr.Row(): - cp_clear_btn = gr.Button(value='Clear Clipboard') - cp_btn = gr.Button(value='Copy to clipboard', variant="primary") - - # ---------------------- OCR Tesseract ---------------------- - ocr_btn.click(fn=ocr_tesseract, inputs=[inputs_img, inputs_lang], outputs=[ - outputs_text,]) - clear_img_btn.click(fn=clear_content, inputs=[], outputs=[inputs_img]) - - # ---------------------- 翻译 ---------------------- - translate_btn.click(fn=translate, inputs=[outputs_text, inputs_transStyle], outputs=[outputs_tr_text]) - clear_text_btn.click(fn=clear_content, inputs=[], outputs=[outputs_text]) - - # ---------------------- 复制到剪贴板 ---------------------- - cp_btn.click(fn=cp_text, inputs=[outputs_tr_text], outputs=[]) - cp_clear_btn.click(fn=cp_clear, inputs=[], outputs=[]) - - ocr_tr.launch(inbrowser=True) - - -if __name__ == '__main__': - main() diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/mel_processing.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/mel_processing.py deleted file mode 100644 index 3614150259809983e776d3fed83021decca06a9c..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, 
fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y.float(), n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/GaenKoki/voicevox/test/test_setting.py b/spaces/GaenKoki/voicevox/test/test_setting.py deleted file mode 100644 index 494e3095e1e26b74bb70436f5ff317bca26b13c7..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/test/test_setting.py +++ /dev/null @@ -1,72 +0,0 @@ -from pathlib import Path -from tempfile import TemporaryDirectory -from unittest import TestCase - -from voicevox_engine.setting import CorsPolicyMode, Setting, SettingLoader - - -class TestSettingLoader(TestCase): - def setUp(self): - self.tmp_dir = TemporaryDirectory() - self.tmp_dir_path = Path(self.tmp_dir.name) - - def test_loading_1(self): - setting_loader = SettingLoader(Path("not_exist.yaml")) - settings = setting_loader.load_setting_file() - - self.assertEqual( - settings.dict(), - {"allow_origin": None, "cors_policy_mode": CorsPolicyMode.localapps}, - ) - - def test_loading_2(self): - setting_loader = SettingLoader( - setting_file_path=Path("test/setting-test-load-1.yaml") - ) - settings = setting_loader.load_setting_file() - - self.assertEqual( - settings.dict(), - {"allow_origin": None, "cors_policy_mode": CorsPolicyMode.localapps}, - ) - - def test_loading_3(self): - setting_loader = SettingLoader( - setting_file_path=Path("test/setting-test-load-2.yaml") - ) - settings = setting_loader.load_setting_file() - - self.assertEqual( - settings.dict(), - {"allow_origin": None, "cors_policy_mode": "all"}, - ) - - def test_loading_4(self): - setting_loader = SettingLoader( - setting_file_path=Path("test/setting-test-load-3.yaml") - ) - settings = setting_loader.load_setting_file() - - self.assertEqual( - settings.dict(), - { - "allow_origin": "192.168.254.255 192.168.255.255", - "cors_policy_mode": CorsPolicyMode.localapps, - }, - ) - - def test_dump(self): - setting_loader = SettingLoader( - setting_file_path=Path(self.tmp_dir_path / "setting-test-dump.yaml") - ) - settings = Setting(cors_policy_mode=CorsPolicyMode.localapps) - setting_loader.dump_setting_file(settings) - - 
self.assertTrue(setting_loader.setting_file_path.is_file()) - self.assertEqual( - setting_loader.load_setting_file().dict(), - {"allow_origin": None, "cors_policy_mode": CorsPolicyMode.localapps}, - ) - - def tearDown(self): - self.tmp_dir.cleanup() diff --git a/spaces/GaenKoki/voicevox/test/test_synthesis_engine.py b/spaces/GaenKoki/voicevox/test/test_synthesis_engine.py deleted file mode 100644 index b1a21741623145d1e9833b9ebf238c16d1e86edc..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/test/test_synthesis_engine.py +++ /dev/null @@ -1,654 +0,0 @@ -import math -from copy import deepcopy -from random import random -from typing import Union -from unittest import TestCase -from unittest.mock import Mock - -import numpy - -from voicevox_engine.acoustic_feature_extractor import OjtPhoneme -from voicevox_engine.model import AccentPhrase, AudioQuery, Mora -from voicevox_engine.synthesis_engine import SynthesisEngine - -# TODO: import from voicevox_engine.synthesis_engine.mora -from voicevox_engine.synthesis_engine.synthesis_engine import ( - mora_phoneme_list, - pre_process, - split_mora, - to_flatten_moras, - to_phoneme_data_list, - unvoiced_mora_phoneme_list, -) - - -def yukarin_s_mock(length: int, phoneme_list: numpy.ndarray, speaker_id: numpy.ndarray): - result = [] - # mockとしての適当な処理、特に意味はない - for i in range(length): - result.append(float(phoneme_list[i] * 0.5 + speaker_id)) - return numpy.array(result) - - -def yukarin_sa_mock( - length: int, - vowel_phoneme_list: numpy.ndarray, - consonant_phoneme_list: numpy.ndarray, - start_accent_list: numpy.ndarray, - end_accent_list: numpy.ndarray, - start_accent_phrase_list: numpy.ndarray, - end_accent_phrase_list: numpy.ndarray, - speaker_id: numpy.ndarray, -): - result = [] - # mockとしての適当な処理、特に意味はない - for i in range(length): - result.append( - float( - ( - vowel_phoneme_list[0][i] - + consonant_phoneme_list[0][i] - + start_accent_list[0][i] - + end_accent_list[0][i] - + start_accent_phrase_list[0][i] - + end_accent_phrase_list[0][i] - ) - * 0.5 - + speaker_id - ) - ) - return numpy.array(result)[numpy.newaxis] - - -def decode_mock( - length: int, - phoneme_size: int, - f0: numpy.ndarray, - phoneme: numpy.ndarray, - speaker_id: Union[numpy.ndarray, int], -): - result = [] - # mockとしての適当な処理、特に意味はない - for i in range(length): - # decode forwardはデータサイズがlengthの256倍になるのでとりあえず256回データをresultに入れる - for _ in range(256): - result.append( - float( - f0[i][0] * (numpy.where(phoneme[i] == 1)[0] / phoneme_size) - + speaker_id - ) - ) - return numpy.array(result) - - -class MockCore: - yukarin_s_forward = Mock(side_effect=yukarin_s_mock) - yukarin_sa_forward = Mock(side_effect=yukarin_sa_mock) - decode_forward = Mock(side_effect=decode_mock) - - def metas(self): - return "" - - def supported_devices(self): - return "" - - def is_model_loaded(self, speaker_id): - return True - - -class TestSynthesisEngine(TestCase): - def setUp(self): - super().setUp() - self.str_list_hello_hiho = ( - "sil k o N n i ch i w a pau h i h o d e s U sil".split() - ) - self.phoneme_data_list_hello_hiho = [ - OjtPhoneme(phoneme=p, start=i, end=i + 1) - for i, p in enumerate( - "pau k o N n i ch i w a pau h i h o d e s U pau".split() - ) - ] - self.accent_phrases_hello_hiho = [ - AccentPhrase( - moras=[ - Mora( - text="コ", - consonant="k", - consonant_length=0.0, - vowel="o", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ン", - consonant=None, - consonant_length=None, - vowel="N", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ニ", - consonant="n", - 
consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="チ", - consonant="ch", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ワ", - consonant="w", - consonant_length=0.0, - vowel="a", - vowel_length=0.0, - pitch=0.0, - ), - ], - accent=5, - pause_mora=Mora( - text="、", - consonant=None, - consonant_length=None, - vowel="pau", - vowel_length=0.0, - pitch=0.0, - ), - ), - AccentPhrase( - moras=[ - Mora( - text="ヒ", - consonant="h", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ホ", - consonant="h", - consonant_length=0.0, - vowel="o", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="デ", - consonant="d", - consonant_length=0.0, - vowel="e", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ス", - consonant="s", - consonant_length=0.0, - vowel="U", - vowel_length=0.0, - pitch=0.0, - ), - ], - accent=1, - pause_mora=None, - ), - ] - core = MockCore() - self.yukarin_s_mock = core.yukarin_s_forward - self.yukarin_sa_mock = core.yukarin_sa_forward - self.decode_mock = core.decode_forward - self.synthesis_engine = SynthesisEngine( - core=core, - ) - - def test_to_flatten_moras(self): - flatten_moras = to_flatten_moras(self.accent_phrases_hello_hiho) - self.assertEqual( - flatten_moras, - self.accent_phrases_hello_hiho[0].moras - + [self.accent_phrases_hello_hiho[0].pause_mora] - + self.accent_phrases_hello_hiho[1].moras, - ) - - def test_to_phoneme_data_list(self): - phoneme_data_list = to_phoneme_data_list(self.str_list_hello_hiho) - self.assertEqual(phoneme_data_list, self.phoneme_data_list_hello_hiho) - - def test_split_mora(self): - consonant_phoneme_list, vowel_phoneme_list, vowel_indexes = split_mora( - self.phoneme_data_list_hello_hiho - ) - - self.assertEqual(vowel_indexes, [0, 2, 3, 5, 7, 9, 10, 12, 14, 16, 18, 19]) - self.assertEqual( - vowel_phoneme_list, - [ - OjtPhoneme(phoneme="pau", start=0, end=1), - OjtPhoneme(phoneme="o", start=2, end=3), - OjtPhoneme(phoneme="N", start=3, end=4), - OjtPhoneme(phoneme="i", start=5, end=6), - OjtPhoneme(phoneme="i", start=7, end=8), - OjtPhoneme(phoneme="a", start=9, end=10), - OjtPhoneme(phoneme="pau", start=10, end=11), - OjtPhoneme(phoneme="i", start=12, end=13), - OjtPhoneme(phoneme="o", start=14, end=15), - OjtPhoneme(phoneme="e", start=16, end=17), - OjtPhoneme(phoneme="U", start=18, end=19), - OjtPhoneme(phoneme="pau", start=19, end=20), - ], - ) - self.assertEqual( - consonant_phoneme_list, - [ - None, - OjtPhoneme(phoneme="k", start=1, end=2), - None, - OjtPhoneme(phoneme="n", start=4, end=5), - OjtPhoneme(phoneme="ch", start=6, end=7), - OjtPhoneme(phoneme="w", start=8, end=9), - None, - OjtPhoneme(phoneme="h", start=11, end=12), - OjtPhoneme(phoneme="h", start=13, end=14), - OjtPhoneme(phoneme="d", start=15, end=16), - OjtPhoneme(phoneme="s", start=17, end=18), - None, - ], - ) - - def test_pre_process(self): - flatten_moras, phoneme_data_list = pre_process( - deepcopy(self.accent_phrases_hello_hiho) - ) - - mora_index = 0 - phoneme_index = 1 - - self.assertEqual(phoneme_data_list[0], OjtPhoneme("pau", 0, 1)) - for accent_phrase in self.accent_phrases_hello_hiho: - moras = accent_phrase.moras - for mora in moras: - self.assertEqual(flatten_moras[mora_index], mora) - mora_index += 1 - if mora.consonant is not None: - self.assertEqual( - phoneme_data_list[phoneme_index], - OjtPhoneme(mora.consonant, phoneme_index, phoneme_index + 1), - ) - phoneme_index += 1 - self.assertEqual( - phoneme_data_list[phoneme_index], 
- OjtPhoneme(mora.vowel, phoneme_index, phoneme_index + 1), - ) - phoneme_index += 1 - if accent_phrase.pause_mora: - self.assertEqual(flatten_moras[mora_index], accent_phrase.pause_mora) - mora_index += 1 - self.assertEqual( - phoneme_data_list[phoneme_index], - OjtPhoneme("pau", phoneme_index, phoneme_index + 1), - ) - phoneme_index += 1 - self.assertEqual( - phoneme_data_list[phoneme_index], - OjtPhoneme("pau", phoneme_index, phoneme_index + 1), - ) - - def test_replace_phoneme_length(self): - result = self.synthesis_engine.replace_phoneme_length( - accent_phrases=deepcopy(self.accent_phrases_hello_hiho), speaker_id=1 - ) - - # yukarin_sに渡される値の検証 - yukarin_s_args = self.yukarin_s_mock.call_args[1] - list_length = yukarin_s_args["length"] - phoneme_list = yukarin_s_args["phoneme_list"] - self.assertEqual(list_length, 20) - self.assertEqual(list_length, len(phoneme_list)) - numpy.testing.assert_array_equal( - phoneme_list, - numpy.array( - [ - 0, - 23, - 30, - 4, - 28, - 21, - 10, - 21, - 42, - 7, - 0, - 19, - 21, - 19, - 30, - 12, - 14, - 35, - 6, - 0, - ], - dtype=numpy.int64, - ), - ) - self.assertEqual(yukarin_s_args["speaker_id"], 1) - - # flatten_morasを使わずに愚直にaccent_phrasesにデータを反映させてみる - true_result = deepcopy(self.accent_phrases_hello_hiho) - index = 1 - - def result_value(i: int): - return float(phoneme_list[i] * 0.5 + 1) - - for accent_phrase in true_result: - moras = accent_phrase.moras - for mora in moras: - if mora.consonant is not None: - mora.consonant_length = result_value(index) - index += 1 - mora.vowel_length = result_value(index) - index += 1 - if accent_phrase.pause_mora is not None: - accent_phrase.pause_mora.vowel_length = result_value(index) - index += 1 - - self.assertEqual(result, true_result) - - def test_replace_mora_pitch(self): - # 空のリストでエラーを吐かないか - empty_accent_phrases = [] - self.assertEqual( - self.synthesis_engine.replace_mora_pitch( - accent_phrases=empty_accent_phrases, speaker_id=1 - ), - [], - ) - - result = self.synthesis_engine.replace_mora_pitch( - accent_phrases=deepcopy(self.accent_phrases_hello_hiho), speaker_id=1 - ) - - # yukarin_saに渡される値の検証 - yukarin_sa_args = self.yukarin_sa_mock.call_args[1] - list_length = yukarin_sa_args["length"] - vowel_phoneme_list = yukarin_sa_args["vowel_phoneme_list"][0] - consonant_phoneme_list = yukarin_sa_args["consonant_phoneme_list"][0] - start_accent_list = yukarin_sa_args["start_accent_list"][0] - end_accent_list = yukarin_sa_args["end_accent_list"][0] - start_accent_phrase_list = yukarin_sa_args["start_accent_phrase_list"][0] - end_accent_phrase_list = yukarin_sa_args["end_accent_phrase_list"][0] - self.assertEqual(list_length, 12) - self.assertEqual(list_length, len(vowel_phoneme_list)) - self.assertEqual(list_length, len(consonant_phoneme_list)) - self.assertEqual(list_length, len(start_accent_list)) - self.assertEqual(list_length, len(end_accent_list)) - self.assertEqual(list_length, len(start_accent_phrase_list)) - self.assertEqual(list_length, len(end_accent_phrase_list)) - self.assertEqual(yukarin_sa_args["speaker_id"], 1) - - numpy.testing.assert_array_equal( - vowel_phoneme_list, - numpy.array( - [ - 0, - 30, - 4, - 21, - 21, - 7, - 0, - 21, - 30, - 14, - 6, - 0, - ] - ), - ) - numpy.testing.assert_array_equal( - consonant_phoneme_list, - numpy.array( - [ - -1, - 23, - -1, - 28, - 10, - 42, - -1, - 19, - 19, - 12, - 35, - -1, - ] - ), - ) - numpy.testing.assert_array_equal( - start_accent_list, numpy.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]) - ) - numpy.testing.assert_array_equal( - end_accent_list, 
numpy.array([0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0]) - ) - numpy.testing.assert_array_equal( - start_accent_phrase_list, numpy.array([0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]) - ) - numpy.testing.assert_array_equal( - end_accent_phrase_list, numpy.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]) - ) - - # flatten_morasを使わずに愚直にaccent_phrasesにデータを反映させてみる - true_result = deepcopy(self.accent_phrases_hello_hiho) - index = 1 - - def result_value(i: int): - # unvoiced_mora_phoneme_listのPhoneme ID版 - unvoiced_mora_phoneme_id_list = [ - OjtPhoneme(p, 0, 0).phoneme_id for p in unvoiced_mora_phoneme_list - ] - if vowel_phoneme_list[i] in unvoiced_mora_phoneme_id_list: - return 0 - return ( - vowel_phoneme_list[i] - + consonant_phoneme_list[i] - + start_accent_list[i] - + end_accent_list[i] - + start_accent_phrase_list[i] - + end_accent_phrase_list[i] - ) * 0.5 + 1 - - for accent_phrase in true_result: - moras = accent_phrase.moras - for mora in moras: - mora.pitch = result_value(index) - index += 1 - if accent_phrase.pause_mora is not None: - accent_phrase.pause_mora.pitch = result_value(index) - index += 1 - - self.assertEqual(result, true_result) - - def synthesis_test_base(self, audio_query: AudioQuery): - accent_phrases = audio_query.accent_phrases - - # decode forwardのために適当にpitchとlengthを設定し、リストで持っておく - phoneme_length_list = [0.0] - phoneme_id_list = [0] - f0_list = [0.0] - for accent_phrase in accent_phrases: - moras = accent_phrase.moras - for mora in moras: - if mora.consonant is not None: - mora.consonant_length = 0.1 - phoneme_length_list.append(0.1) - phoneme_id_list.append(OjtPhoneme(mora.consonant, 0, 0).phoneme_id) - mora.vowel_length = 0.2 - phoneme_length_list.append(0.2) - phoneme_id_list.append(OjtPhoneme(mora.vowel, 0, 0).phoneme_id) - if mora.vowel not in unvoiced_mora_phoneme_list: - mora.pitch = 5.0 + random() - f0_list.append(mora.pitch) - if accent_phrase.pause_mora is not None: - accent_phrase.pause_mora.vowel_length = 0.2 - phoneme_length_list.append(0.2) - phoneme_id_list.append(OjtPhoneme("pau", 0, 0).phoneme_id) - f0_list.append(0.0) - phoneme_length_list.append(0.0) - phoneme_id_list.append(0) - f0_list.append(0.0) - - phoneme_length_list[0] = audio_query.prePhonemeLength - phoneme_length_list[-1] = audio_query.postPhonemeLength - - for i in range(len(phoneme_length_list)): - phoneme_length_list[i] /= audio_query.speedScale - - result = self.synthesis_engine.synthesis(query=audio_query, speaker_id=1) - - # decodeに渡される値の検証 - decode_args = self.decode_mock.call_args[1] - list_length = decode_args["length"] - self.assertEqual( - list_length, - int(sum([round(p * 24000 / 256) for p in phoneme_length_list])), - ) - - num_phoneme = OjtPhoneme.num_phoneme - # mora_phoneme_listのPhoneme ID版 - mora_phoneme_id_list = [ - OjtPhoneme(p, 0, 0).phoneme_id for p in mora_phoneme_list - ] - - # numpy.repeatをfor文でやる - f0 = [] - phoneme = [] - f0_index = 0 - mean_f0 = [] - for i, phoneme_length in enumerate(phoneme_length_list): - f0_single = numpy.array(f0_list[f0_index], dtype=numpy.float32) * ( - 2**audio_query.pitchScale - ) - for _ in range(int(round(phoneme_length * (24000 / 256)))): - f0.append([f0_single]) - phoneme_s = [] - for _ in range(num_phoneme): - phoneme_s.append(0) - # one hot - phoneme_s[phoneme_id_list[i]] = 1 - phoneme.append(phoneme_s) - # consonantとvowelを判別し、vowelであればf0_indexを一つ進める - if phoneme_id_list[i] in mora_phoneme_id_list: - if f0_single > 0: - mean_f0.append(f0_single) - f0_index += 1 - - mean_f0 = numpy.array(mean_f0, dtype=numpy.float32).mean() - f0 = numpy.array(f0, 
dtype=numpy.float32) - for i in range(len(f0)): - if f0[i][0] != 0.0: - f0[i][0] = (f0[i][0] - mean_f0) * audio_query.intonationScale + mean_f0 - - phoneme = numpy.array(phoneme, dtype=numpy.float32) - - # 乱数の影響で数値の位置がずれが生じるので、大半(4/5)があっていればよしとする - # また、上の部分のint(round(phoneme_length * (24000 / 256)))の影響で - # 本来のf0/phonemeとテスト生成したf0/phonemeの長さが変わることがあり、 - # テスト生成したものが若干長くなることがあるので、本来のものの長さを基準にassertする - assert_f0_count = 0 - decode_f0 = decode_args["f0"] - for i in range(len(decode_f0)): - # 乱数の影響等で数値にずれが生じるので、10の-5乗までの近似値であれば許容する - assert_f0_count += math.isclose(f0[i][0], decode_f0[i][0], rel_tol=10e-5) - self.assertTrue(assert_f0_count >= int(len(decode_f0) / 5) * 4) - assert_phoneme_count = 0 - decode_phoneme = decode_args["phoneme"] - for i in range(len(decode_phoneme)): - assert_true_count = 0 - for j in range(len(decode_phoneme[i])): - assert_true_count += bool(phoneme[i][j] == decode_phoneme[i][j]) - assert_phoneme_count += assert_true_count == num_phoneme - self.assertTrue(assert_phoneme_count >= int(len(decode_phoneme) / 5) * 4) - self.assertEqual(decode_args["speaker_id"], 1) - - # decode forwarderのmockを使う - true_result = decode_mock(list_length, num_phoneme, f0, phoneme, 1) - - true_result *= audio_query.volumeScale - - # TODO: resampyの部分は値の検証しようがないので、パスする - if audio_query.outputSamplingRate != 24000: - return - - assert_result_count = 0 - for i in range(len(true_result)): - if audio_query.outputStereo: - assert_result_count += math.isclose( - true_result[i], result[i][0], rel_tol=10e-5 - ) and math.isclose(true_result[i], result[i][1], rel_tol=10e-5) - else: - assert_result_count += math.isclose( - true_result[i], result[i], rel_tol=10e-5 - ) - self.assertTrue(assert_result_count >= int(len(true_result) / 5) * 4) - - def test_synthesis(self): - audio_query = AudioQuery( - accent_phrases=deepcopy(self.accent_phrases_hello_hiho), - speedScale=1.0, - pitchScale=1.0, - intonationScale=1.0, - volumeScale=1.0, - prePhonemeLength=0.1, - postPhonemeLength=0.1, - outputSamplingRate=24000, - outputStereo=False, - # このテスト内では使わないので生成不要 - kana="", - ) - - self.synthesis_test_base(audio_query) - - # speed scaleのテスト - audio_query.speedScale = 1.2 - self.synthesis_test_base(audio_query) - - # pitch scaleのテスト - audio_query.pitchScale = 1.5 - audio_query.speedScale = 1.0 - self.synthesis_test_base(audio_query) - - # intonation scaleのテスト - audio_query.pitchScale = 1.0 - audio_query.intonationScale = 1.4 - self.synthesis_test_base(audio_query) - - # volume scaleのテスト - audio_query.intonationScale = 1.0 - audio_query.volumeScale = 2.0 - self.synthesis_test_base(audio_query) - - # pre/post phoneme lengthのテスト - audio_query.volumeScale = 1.0 - audio_query.prePhonemeLength = 0.5 - audio_query.postPhonemeLength = 0.5 - self.synthesis_test_base(audio_query) - - # output sampling rateのテスト - audio_query.prePhonemeLength = 0.1 - audio_query.postPhonemeLength = 0.1 - audio_query.outputSamplingRate = 48000 - self.synthesis_test_base(audio_query) - - # output stereoのテスト - audio_query.outputSamplingRate = 24000 - audio_query.outputStereo = True - self.synthesis_test_base(audio_query) diff --git a/spaces/Gaofish/AI_bing/Dockerfile b/spaces/Gaofish/AI_bing/Dockerfile deleted file mode 100644 index cb33c49f65d675a52e7591038fbed9230532908f..0000000000000000000000000000000000000000 --- a/spaces/Gaofish/AI_bing/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 
项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="BD7bo9UY63iIut7t5rWPZi98js6rBu487shsH1hjkcH90Sa125" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coded_blocks_on_corner.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coded_blocks_on_corner.py deleted file mode 100644 index 7eac819cade5fb643e6019b67b99a4ef5750432f..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coded_blocks_on_corner.py +++ /dev/null @@ -1,57 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils -import pybullet as p - -class ColorCodedBlocksOnCorner(Task): - """Pick up blocks of different colors and place them in a corner structure in a specific color sequence.""" - - def __init__(self): - super().__init__() - self.max_steps = 10 - self.lang_template = "place the blocks in the corner in the sequence red, blue, green, yellow" - self.task_completed_desc = "done placing blocks in the corner." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add corner structure. - corner_size = (0.15, 0.15, 0.05) - corner_pose = self.get_random_pose(env, corner_size) - corner_urdf = 'corner/corner-template.urdf' - env.add_object(corner_urdf, corner_pose, 'fixed') - - # Block colors. - colors = [ - utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green'], - utils.COLORS['yellow'] - ] - - # Add blocks. - block_size = (0.04, 0.04, 0.04) - block_urdf = 'block/block.urdf' - blocks = [] - for i in range(4): - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=colors[i]) - blocks.append(block_id) - - # Associate placement locations for goals. - place_pos = [(0, -0.05, 0.03), (0, 0, 0.03), - (0, 0.05, 0.03), (0, 0, 0.08)] - targs = [(utils.apply(corner_pose, i), corner_pose[1]) for i in place_pos] - - # Goal: blocks are placed in the corner in the sequence red, blue, green, yellow. 
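# Each block below is handled as its own sub-goal worth 1/4 of the total reward;
# the target poses follow the place_pos offsets defined above (three slots along
# the corner at the same height, with the final slot stacked above the middle one).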
- for i in range(4): - self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[targs[i]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 4, - language_goal=self.lang_template.format(blocks="the red, blue, green, yellow blocks")) \ No newline at end of file diff --git a/spaces/GeorgeOrville/bingo/src/pages/api/healthz.ts b/spaces/GeorgeOrville/bingo/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/Gradio-Blocks/CloudSaveText2Speech/README.md b/spaces/Gradio-Blocks/CloudSaveText2Speech/README.md deleted file mode 100644 index 3213bf8f6e1b9869142445a49fff4d7151eb65fe..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/CloudSaveText2Speech/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CloudSaveText2Speech -emoji: 🧠💬💾 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.0.9 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712_cocofmt.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712_cocofmt.py deleted file mode 100644 index 12eee2c1ecdaa5f9e84a3bd2084b00493f2f76c0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pascal_voc/faster_rcnn_r50_fpn_1x_voc0712_cocofmt.py +++ /dev/null @@ -1,75 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_fpn.py', '../_base_/datasets/voc0712.py', - '../_base_/default_runtime.py' -] -model = dict(roi_head=dict(bbox_head=dict(num_classes=20))) - -CLASSES = ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', - 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', - 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor') - -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/VOCdevkit/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1000, 600), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1000, 600), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=3, - dataset=dict( - type=dataset_type, - ann_file='data/voc0712_trainval.json', - img_prefix='data/VOCdevkit', - pipeline=train_pipeline, - classes=CLASSES)), - val=dict( - type=dataset_type, - 
ann_file='data/voc07_test.json', - img_prefix='data/VOCdevkit', - pipeline=test_pipeline, - classes=CLASSES), - test=dict( - type=dataset_type, - ann_file='data/voc07_test.json', - img_prefix='data/VOCdevkit', - pipeline=test_pipeline, - classes=CLASSES)) -evaluation = dict(interval=1, metric='bbox') - -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -# actual epoch = 3 * 3 = 9 -lr_config = dict(policy='step', step=[3]) -# runtime settings -runner = dict( - type='EpochBasedRunner', max_epochs=4) # actual epoch = 4 * 3 = 12 diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/pascal_context.py deleted file mode 100644 index ff65bad1b86d7e3a5980bb5b9fc55798dc8df5f4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/pascal_context.py +++ /dev/null @@ -1,60 +0,0 @@ -# dataset settings -dataset_type = 'PascalContextDataset' -data_root = 'data/VOCdevkit/VOC2010/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -img_scale = (520, 520) -crop_size = (480, 480) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_video_demo/transforms.py b/spaces/Gradio-Blocks/uniformer_video_demo/transforms.py deleted file mode 100644 index 2483fdf8569e25978b922774e84cc2244315fe61..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_video_demo/transforms.py +++ /dev/null @@ -1,443 +0,0 @@ -import torchvision -import random -from PIL import Image, ImageOps -import numpy as np -import numbers -import math -import torch - - -class GroupRandomCrop(object): - def __init__(self, size): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - - def __call__(self, img_group): - - w, h = img_group[0].size - th, 
tw = self.size - - out_images = list() - - x1 = random.randint(0, w - tw) - y1 = random.randint(0, h - th) - - for img in img_group: - assert(img.size[0] == w and img.size[1] == h) - if w == tw and h == th: - out_images.append(img) - else: - out_images.append(img.crop((x1, y1, x1 + tw, y1 + th))) - - return out_images - - -class MultiGroupRandomCrop(object): - def __init__(self, size, groups=1): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - self.groups = groups - - def __call__(self, img_group): - - w, h = img_group[0].size - th, tw = self.size - - out_images = list() - - for i in range(self.groups): - x1 = random.randint(0, w - tw) - y1 = random.randint(0, h - th) - - for img in img_group: - assert(img.size[0] == w and img.size[1] == h) - if w == tw and h == th: - out_images.append(img) - else: - out_images.append(img.crop((x1, y1, x1 + tw, y1 + th))) - - return out_images - - -class GroupCenterCrop(object): - def __init__(self, size): - self.worker = torchvision.transforms.CenterCrop(size) - - def __call__(self, img_group): - return [self.worker(img) for img in img_group] - - -class GroupRandomHorizontalFlip(object): - """Randomly horizontally flips the given PIL.Image with a probability of 0.5 - """ - - def __init__(self, is_flow=False): - self.is_flow = is_flow - - def __call__(self, img_group, is_flow=False): - v = random.random() - if v < 0.5: - ret = [img.transpose(Image.FLIP_LEFT_RIGHT) for img in img_group] - if self.is_flow: - for i in range(0, len(ret), 2): - # invert flow pixel values when flipping - ret[i] = ImageOps.invert(ret[i]) - return ret - else: - return img_group - - -class GroupNormalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, tensor): - rep_mean = self.mean * (tensor.size()[0] // len(self.mean)) - rep_std = self.std * (tensor.size()[0] // len(self.std)) - - # TODO: make efficient - for t, m, s in zip(tensor, rep_mean, rep_std): - t.sub_(m).div_(s) - - return tensor - - -class GroupScale(object): - """ Rescales the input PIL.Image to the given 'size'. - 'size' will be the size of the smaller edge. 
- For example, if height > width, then image will be - rescaled to (size * height / width, size) - size: size of the smaller edge - interpolation: Default: PIL.Image.BILINEAR - """ - - def __init__(self, size, interpolation=Image.BILINEAR): - self.worker = torchvision.transforms.Resize(size, interpolation) - - def __call__(self, img_group): - return [self.worker(img) for img in img_group] - - -class GroupOverSample(object): - def __init__(self, crop_size, scale_size=None, flip=True): - self.crop_size = crop_size if not isinstance( - crop_size, int) else (crop_size, crop_size) - - if scale_size is not None: - self.scale_worker = GroupScale(scale_size) - else: - self.scale_worker = None - self.flip = flip - - def __call__(self, img_group): - - if self.scale_worker is not None: - img_group = self.scale_worker(img_group) - - image_w, image_h = img_group[0].size - crop_w, crop_h = self.crop_size - - offsets = GroupMultiScaleCrop.fill_fix_offset( - False, image_w, image_h, crop_w, crop_h) - oversample_group = list() - for o_w, o_h in offsets: - normal_group = list() - flip_group = list() - for i, img in enumerate(img_group): - crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h)) - normal_group.append(crop) - flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT) - - if img.mode == 'L' and i % 2 == 0: - flip_group.append(ImageOps.invert(flip_crop)) - else: - flip_group.append(flip_crop) - - oversample_group.extend(normal_group) - if self.flip: - oversample_group.extend(flip_group) - return oversample_group - - -class GroupFullResSample(object): - def __init__(self, crop_size, scale_size=None, flip=True): - self.crop_size = crop_size if not isinstance( - crop_size, int) else (crop_size, crop_size) - - if scale_size is not None: - self.scale_worker = GroupScale(scale_size) - else: - self.scale_worker = None - self.flip = flip - - def __call__(self, img_group): - - if self.scale_worker is not None: - img_group = self.scale_worker(img_group) - - image_w, image_h = img_group[0].size - crop_w, crop_h = self.crop_size - - w_step = (image_w - crop_w) // 4 - h_step = (image_h - crop_h) // 4 - - offsets = list() - offsets.append((0 * w_step, 2 * h_step)) # left - offsets.append((4 * w_step, 2 * h_step)) # right - offsets.append((2 * w_step, 2 * h_step)) # center - - oversample_group = list() - for o_w, o_h in offsets: - normal_group = list() - flip_group = list() - for i, img in enumerate(img_group): - crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h)) - normal_group.append(crop) - if self.flip: - flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT) - - if img.mode == 'L' and i % 2 == 0: - flip_group.append(ImageOps.invert(flip_crop)) - else: - flip_group.append(flip_crop) - - oversample_group.extend(normal_group) - oversample_group.extend(flip_group) - return oversample_group - - -class GroupMultiScaleCrop(object): - - def __init__(self, input_size, scales=None, max_distort=1, - fix_crop=True, more_fix_crop=True): - self.scales = scales if scales is not None else [1, .875, .75, .66] - self.max_distort = max_distort - self.fix_crop = fix_crop - self.more_fix_crop = more_fix_crop - self.input_size = input_size if not isinstance(input_size, int) else [ - input_size, input_size] - self.interpolation = Image.BILINEAR - - def __call__(self, img_group): - - im_size = img_group[0].size - - crop_w, crop_h, offset_w, offset_h = self._sample_crop_size(im_size) - crop_img_group = [ - img.crop( - (offset_w, - offset_h, - offset_w + - crop_w, - offset_h + - crop_h)) for img in img_group] - ret_img_group = 
[img.resize((self.input_size[0], self.input_size[1]), self.interpolation) - for img in crop_img_group] - return ret_img_group - - def _sample_crop_size(self, im_size): - image_w, image_h = im_size[0], im_size[1] - - # find a crop size - base_size = min(image_w, image_h) - crop_sizes = [int(base_size * x) for x in self.scales] - crop_h = [ - self.input_size[1] if abs( - x - self.input_size[1]) < 3 else x for x in crop_sizes] - crop_w = [ - self.input_size[0] if abs( - x - self.input_size[0]) < 3 else x for x in crop_sizes] - - pairs = [] - for i, h in enumerate(crop_h): - for j, w in enumerate(crop_w): - if abs(i - j) <= self.max_distort: - pairs.append((w, h)) - - crop_pair = random.choice(pairs) - if not self.fix_crop: - w_offset = random.randint(0, image_w - crop_pair[0]) - h_offset = random.randint(0, image_h - crop_pair[1]) - else: - w_offset, h_offset = self._sample_fix_offset( - image_w, image_h, crop_pair[0], crop_pair[1]) - - return crop_pair[0], crop_pair[1], w_offset, h_offset - - def _sample_fix_offset(self, image_w, image_h, crop_w, crop_h): - offsets = self.fill_fix_offset( - self.more_fix_crop, image_w, image_h, crop_w, crop_h) - return random.choice(offsets) - - @staticmethod - def fill_fix_offset(more_fix_crop, image_w, image_h, crop_w, crop_h): - w_step = (image_w - crop_w) // 4 - h_step = (image_h - crop_h) // 4 - - ret = list() - ret.append((0, 0)) # upper left - ret.append((4 * w_step, 0)) # upper right - ret.append((0, 4 * h_step)) # lower left - ret.append((4 * w_step, 4 * h_step)) # lower right - ret.append((2 * w_step, 2 * h_step)) # center - - if more_fix_crop: - ret.append((0, 2 * h_step)) # center left - ret.append((4 * w_step, 2 * h_step)) # center right - ret.append((2 * w_step, 4 * h_step)) # lower center - ret.append((2 * w_step, 0 * h_step)) # upper center - - ret.append((1 * w_step, 1 * h_step)) # upper left quarter - ret.append((3 * w_step, 1 * h_step)) # upper right quarter - ret.append((1 * w_step, 3 * h_step)) # lower left quarter - ret.append((3 * w_step, 3 * h_step)) # lower righ quarter - - return ret - - -class GroupRandomSizedCrop(object): - """Random crop the given PIL.Image to a random size of (0.08 to 1.0) of the original size - and and a random aspect ratio of 3/4 to 4/3 of the original aspect ratio - This is popularly used to train the Inception networks - size: size of the smaller edge - interpolation: Default: PIL.Image.BILINEAR - """ - - def __init__(self, size, interpolation=Image.BILINEAR): - self.size = size - self.interpolation = interpolation - - def __call__(self, img_group): - for attempt in range(10): - area = img_group[0].size[0] * img_group[0].size[1] - target_area = random.uniform(0.08, 1.0) * area - aspect_ratio = random.uniform(3. / 4, 4. 
/ 3) - - w = int(round(math.sqrt(target_area * aspect_ratio))) - h = int(round(math.sqrt(target_area / aspect_ratio))) - - if random.random() < 0.5: - w, h = h, w - - if w <= img_group[0].size[0] and h <= img_group[0].size[1]: - x1 = random.randint(0, img_group[0].size[0] - w) - y1 = random.randint(0, img_group[0].size[1] - h) - found = True - break - else: - found = False - x1 = 0 - y1 = 0 - - if found: - out_group = list() - for img in img_group: - img = img.crop((x1, y1, x1 + w, y1 + h)) - assert(img.size == (w, h)) - out_group.append( - img.resize( - (self.size, self.size), self.interpolation)) - return out_group - else: - # Fallback - scale = GroupScale(self.size, interpolation=self.interpolation) - crop = GroupRandomCrop(self.size) - return crop(scale(img_group)) - - -class ConvertDataFormat(object): - def __init__(self, model_type): - self.model_type = model_type - - def __call__(self, images): - if self.model_type == '2D': - return images - tc, h, w = images.size() - t = tc // 3 - images = images.view(t, 3, h, w) - images = images.permute(1, 0, 2, 3) - return images - - -class Stack(object): - - def __init__(self, roll=False): - self.roll = roll - - def __call__(self, img_group): - if img_group[0].mode == 'L': - return np.concatenate([np.expand_dims(x, 2) - for x in img_group], axis=2) - elif img_group[0].mode == 'RGB': - if self.roll: - return np.concatenate([np.array(x)[:, :, ::-1] - for x in img_group], axis=2) - else: - #print(np.concatenate(img_group, axis=2).shape) - # print(img_group[0].shape) - return np.concatenate(img_group, axis=2) - - -class ToTorchFormatTensor(object): - """ Converts a PIL.Image (RGB) or numpy.ndarray (H x W x C) in the range [0, 255] - to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] """ - - def __init__(self, div=True): - self.div = div - - def __call__(self, pic): - if isinstance(pic, np.ndarray): - # handle numpy array - img = torch.from_numpy(pic).permute(2, 0, 1).contiguous() - else: - # handle PIL Image - img = torch.ByteTensor( - torch.ByteStorage.from_buffer( - pic.tobytes())) - img = img.view(pic.size[1], pic.size[0], len(pic.mode)) - # put it from HWC to CHW format - # yikes, this transpose takes 80% of the loading time/CPU - img = img.transpose(0, 1).transpose(0, 2).contiguous() - return img.float().div(255) if self.div else img.float() - - -class IdentityTransform(object): - - def __call__(self, data): - return data - - -if __name__ == "__main__": - trans = torchvision.transforms.Compose([ - GroupScale(256), - GroupRandomCrop(224), - Stack(), - ToTorchFormatTensor(), - GroupNormalize( - mean=[.485, .456, .406], - std=[.229, .224, .225] - )] - ) - - im = Image.open('../tensorflow-model-zoo.torch/lena_299.png') - - color_group = [im] * 3 - rst = trans(color_group) - - gray_group = [im.convert('L')] * 9 - gray_rst = trans(gray_group) - - trans2 = torchvision.transforms.Compose([ - GroupRandomSizedCrop(256), - Stack(), - ToTorchFormatTensor(), - GroupNormalize( - mean=[.485, .456, .406], - std=[.229, .224, .225]) - ]) - print(trans2(color_group)) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/common_utils/temp_utils.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittenly. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/qualitative_evaluation.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/qualitative_evaluation.py deleted file mode 100644 index 17163168aa27a082a2d9eee3929a7a73dda2135e..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/qualitative_evaluation.py +++ /dev/null @@ -1,388 +0,0 @@ -import shutil -import glob -import copy -from pprint import pprint -from typing import Dict, List, Tuple -from PIL import Image -import os -import numpy as np -import cv2 -from py_sod_metrics import Smeasure -import pandas as pd -import wandb -from tqdm import tqdm - -from .visualizer import apply_threshold, apply_vis_to_image, post_processing_depth -from .wandb_manager import wandb_init_sota_benchmark_sm, wandb_delete_artifacts -from .configs.base_config import base_cfg -from .dataset_fn import TestDataset -from .prepare_datasets import unzip_SOTAs, unzip_datasets - -def __select_samples( - df: pd.DataFrame, - sota_names: List[str], - our_model_name: str, - no_samples: int, - is_best: bool, -) -> Dict[str, List[str]]: - best_rs = pd.DataFrame() - best_rs[['image_name', 'dataset_name']] = df[['image_name', 'dataset_name']] - for sota_name in sota_names: - best_rs[sota_name] = df[our_model_name] - df[sota_name] if is_best \ - else df[sota_name] - df[our_model_name] - - best_rs['min'] = best_rs.min(axis=1, numeric_only=True) - best_rs = best_rs[best_rs['min'] > 0] - best_rs = best_rs.sort_values('min', ascending=False) - best_rs = best_rs[['image_name', 'dataset_name', 'min']] - print(best_rs.head(10)) - - selected_samples: Dict[str, List[str]] = dict() - for v in best_rs.iloc[:no_samples].values: - dataset_name = v[1] - image_name = v[0] - if dataset_name not in selected_samples: - selected_samples[dataset_name] = [image_name] - else: - selected_samples[dataset_name].append(image_name) - - return selected_samples - -def qualitative_evaluation( - cfg: base_cfg, - no_samples: int, - dataset_names: List[str], - sota_model_names: List[str], -) -> 
Dict[str, List[str]]: - unzip_SOTAs( - cfg, - [dataset_names for _ in range(len(sota_model_names))], - sota_model_names - ) - clean_qualitative_evaluation_latex(cfg) - - df = pd.read_csv(os.path.join( - cfg.benchmark_csv_dir_path, f'smeasure_set{cfg.datasets_set}.csv' - )) - - columns: List[str] = df.columns.tolist() - our_model_names = [column for column in columns \ - if column.startswith('exp_v')] - - assert len(our_model_names) == 1, 'Should contain only one of our model' - - our_model_name = our_model_names[0] - print(f'Our model name {our_model_name}') - - sota_names = [column for column in columns \ - if column not in ['index', 'image_name', 'dataset_name'] \ - and not column.startswith('exp_v') - ] - print('SOTA names:', sota_names) - - best_samples = __select_samples( - df, sota_names, our_model_name, no_samples, is_best=True - ) - print('Best samples:') - pprint(best_samples) - best_table, best_vis_table, best_s_measure_table = __qualitative_evaluation( - cfg, best_samples, sota_model_names, is_best=True - ) - - worst_samples = __select_samples( - df, sota_names, our_model_name, no_samples, is_best=False - ) - print('Worst samples:') - pprint(worst_samples) - worst_table, worst_vis_table, worst_s_measure_table = __qualitative_evaluation( - cfg, worst_samples, sota_model_names, is_best=False - ) - - wandb_run = wandb_init_sota_benchmark_sm(cfg.datasets_set) - wandb_delete_artifacts(cfg, wandb_run) - wandb_run.log({ - f'benchmark_best_sm': best_table, - f'benchmark_best_vis': best_vis_table, - f'benchmark_best_s_measure': best_s_measure_table, - f'benchmark_worst_sm': worst_table, - f'benchmark_worst_vis': worst_vis_table, - f'benchmark_worst_s_measure': worst_s_measure_table, - }) - wandb_run.finish() - -def __format_number(num: float) -> str: - return "{:.4f}".format(num).lstrip('0') - -def __qualitative_evaluation_latext(cfg: base_cfg, is_best: bool, scale=0.1): - df = pd.read_csv(__qualitative_evaluation_csv_path(cfg, is_best=is_best)) - text = f""" -\\begin{{tabularx}}{{\\textwidth}}{{ - c {"Y " * df.shape[0]} -}} -""" - no_samples = df.shape[0] - range_indices = range(0, no_samples) - for i in range_indices: - text += f" & Sample {i+1}" - text += "\\\\ \\midrule \n" - - text += "Color \n" - for i in range_indices: - text += f" & \includegraphics[scale={scale}]{{Images/QualitativeEvaluationSet{cfg.datasets_set}/{'best' if is_best else 'worst'}_{df['Index'][i]}_RGB.png}}" - text += f"\\\\ \\midrule \n" - - text += "Depth \n" - for i in range_indices: - text += f" & \includegraphics[scale={scale}]{{Images/QualitativeEvaluationSet{cfg.datasets_set}/{'best' if is_best else 'worst'}_{df['Index'][i]}_DEPTH.png}}" - text += f"\\\\ \\midrule \n" - - text += "GT \n" - for i in range_indices: - text += f" & \includegraphics[scale={scale}]{{Images/QualitativeEvaluationSet{cfg.datasets_set}/{'best' if is_best else 'worst'}_{df['Index'][i]}_GT.png}}" - text += f"\\\\ \\midrule \n" - - columns = df.columns - sota_names = columns[6:] - - for sota_name in sota_names: - text += f"{sota_name} \n" - for i in range_indices: - text += f" & \includegraphics[scale={scale}]{{Images/QualitativeEvaluationSet{cfg.datasets_set}/{'best' if is_best else 'worst'}_{df['Index'][i]}_{sota_name}.png}} {__format_number(df[sota_name][i])}" - text += f"\\\\ \\midrule \n" - - text += f"\n \\end{{tabularx}}" - - txt_path = os.path.join( - cfg.source_code_dir, 'latex', - f'{"best" if is_best else "worst"}_qualitative_evaluation_set{cfg.datasets_set}.txt' - ) - with open(txt_path, 'w') as f: - f.write(text) - -def 
qualitative_evaluation_latex(cfg: base_cfg) -> None: - __qualitative_evaluation_latext(cfg, is_best=True) - __qualitative_evaluation_latext(cfg, is_best=False) - -def clean_qualitative_evaluation_latex(cfg: base_cfg) -> None: - """Clean: remove directory cfg.qualitative_evaluation_latex - - Args: - cfg (base_cfg): Config - """ - shutil.rmtree(cfg.qualitative_evaluation_latex_dir_path, ignore_errors=True) - -def save_qualitative_evaluation_latex( - cfg: base_cfg, - i: int, - img: np.ndarray, - label: str, - is_best: bool, -) -> None: - os.makedirs(cfg.qualitative_evaluation_latex_dir_path, exist_ok=True) - cv2.imwrite( - os.path.join( - cfg.qualitative_evaluation_latex_dir_path, - f'{"best" if is_best else "worst"}_{i}_{label}.png' - ), - cv2.cvtColor(cv2.resize(img, (150, 150)), cv2.COLOR_BGR2RGB) - ) - -def __qualitative_evaluation_csv_path(cfg: base_cfg, is_best: bool) -> str: - """Return the csv path to the best/worst qualitative evaluation of our experiment - - Args: - cfg (base_cfg): Config - is_best (bool): best/worst - - Returns: - str: csv path - """ - return os.path.join( - cfg.benchmark_csv_dir_path, - f'qualitative_evaluation_{"best" if is_best else "worst"}_set{cfg.datasets_set}.csv' - ) - -def __mapping_sota_model_names(sota_model_names: List[str]) -> List[str]: - """Map our experiment name "exp_v4.0.19_epoch175" to "Ours" - - Args: - sota_model_names (List[str]): List of SOTA names, must include only one "exp_v*" - - Returns: - List[str]: List of SOTA names after mapping our experiment name to "Ours" - """ - our_experiments = [sota_model_name for sota_model_name in sota_model_names if sota_model_name.startswith("exp_v")] - assert len(our_experiments) == 1, "must include only one 'exp_v*'" - return [ - "Ours" if sota_model_name.startswith("exp_v") else sota_model_name for sota_model_name in sota_model_names - ] - -def __qualitative_evaluation( - cfg: base_cfg, - selected_samples: Dict[str, List[str]], - sota_model_names: List[str], - is_best: bool, -) -> Tuple[wandb.Table, wandb.Table, wandb.Table]: - """ - Note: Make sure unzip_SOTAs before execute this function - - selected_samples = { - 'COME-E': [ - 'COME_Hard_2050', - 'COME_Hard_1961', - ... - ], - .... - } - - dataset_names can be ['COME-E', 'COME-H'] - """ - mapped_sota_model_names = __mapping_sota_model_names(sota_model_names) - columns = ['Index', 'Image name', 'Dataset', 'RGB', 'Depth', 'GT'] + mapped_sota_model_names - results = [] - vis_results = [] - s_measure_results = [] - color = np.array([0., 0.97647059, 0.]) - - for dataset_name, image_names in selected_samples.items(): - print(f'Dataset {dataset_name}') - dataset_dir_path = os.path.join( - cfg.test_datasets_working_dir_path, - dataset_name - ) - dataset = TestDataset(cfg, dataset_dir_path) - - for image_name in tqdm(image_names): - possible_image_names = [ - existed_image_name for existed_image_name in dataset.images \ - if os.path.basename(existed_image_name).startswith(f'{image_name}.') - ] - assert len(possible_image_names) == 1, \ - f'Should have exactly one image {image_name} - No. 
len {len(possible_image_names)}' - - i = dataset.images.index(possible_image_names[0]) - - image, depth, gt, image_name = dataset.get_raw_item(i) - postprocessed_depth = post_processing_depth(np.asarray(depth)) - - row = [ - i, image_name, dataset_name, - wandb.Image(image), - wandb.Image(postprocessed_depth), - wandb.Image(gt), - ] - vis_row = copy.deepcopy(row) - - binary_mask = apply_threshold(np.asarray(gt)) - vis_image = apply_vis_to_image(image, binary_mask, color) - vis_row[5] = wandb.Image(vis_image) - - save_qualitative_evaluation_latex(cfg, i, np.array(image), 'RGB', is_best) - save_qualitative_evaluation_latex(cfg, i, postprocessed_depth, 'DEPTH', is_best) - save_qualitative_evaluation_latex(cfg, i, vis_image, 'GT', is_best) - - s_measure_row = copy.deepcopy(row) - - for sota_model_name, mapped_sota_model_name in zip(sota_model_names, mapped_sota_model_names): - file_path_pattern = os.path.join(cfg.sotas_working_dir, sota_model_name, dataset_name, f'{image_name}.*') - files = glob.glob(file_path_pattern) - assert len(files) == 1, f'Can not find the salient map {image_name} \ - of SOTA {sota_model_name} on dataset {dataset_name}, \ - Length = {len(files)}, File path pattern = {file_path_pattern}' - pred = Image.open(files[0]).convert('L') - pred = pred.resize(image.size) - pred = np.asarray(pred) - - # SM - row.append(wandb.Image(pred)) - - # Visualization - binary_mask = apply_threshold(pred) - vis_image = apply_vis_to_image(image, binary_mask, color) - vis_row.append(wandb.Image(vis_image)) - - save_qualitative_evaluation_latex(cfg, i, vis_image, mapped_sota_model_name, is_best) - - # S-measure - sm = Smeasure() - sm.step(pred, np.asarray(gt)) - s_measure_row.append(sm.get_results()["sm"]) - - results.append(row) - vis_results.append(vis_row) - s_measure_results.append(s_measure_row) - - table = wandb.Table(data=results, columns=columns) - vis_table = wandb.Table(data=vis_results, columns=columns) - s_measure_table = wandb.Table(data=s_measure_results, columns=columns) - - df = pd.DataFrame(s_measure_results, columns=columns) - df.to_csv(__qualitative_evaluation_csv_path(cfg, is_best), index=False) - - return table, vis_table, s_measure_table - -def generate_s_measure_csv( - cfg: base_cfg, - dataset_names: List[str], - sota_model_names: List[str], -) -> None: - """Calculate S-measure for the entire dataset - Each dataset has a seperated csv file. 
- - dataset_names can be ['COME-E', 'COME-H'] - """ - unzip_SOTAs( - cfg, - [dataset_names for _ in range(len(sota_model_names))], - sota_model_names - ) - - s_measure_results = [] - - for dataset_name in dataset_names: - dataset_dir_path = os.path.join( - cfg.test_datasets_working_dir_path, - dataset_name - ) - print(f'Dataset: {dataset_name}') - dataset = TestDataset(cfg, dataset_dir_path) - - length = len(dataset) - - for i in tqdm(range(length)): - image, depth, gt, image_name = dataset.get_raw_item(i) - - s_measure_row = [ - i, image_name, dataset_name, - ] - - for sota_model_name in sota_model_names: - file_path_pattern = os.path.join( - cfg.sotas_working_dir, sota_model_name, - dataset_name, f'{image_name}.*' - ) - files = glob.glob(file_path_pattern) - assert len(files) == 1, f'Can not find the salient map {image_name} \ - of SOTA {sota_model_name} on dataset {dataset_name}, \ - Length = {len(files)}, File path pattern = {file_path_pattern}' - pred = Image.open(files[0]).convert('L') - pred = pred.resize(image.size) - pred = np.asarray(pred) - - # S-measure - sm = Smeasure() - sm.step(pred, np.asarray(gt)) - s_measure_row.append(sm.get_results()["sm"]) - - s_measure_results.append(s_measure_row) - - df = pd.DataFrame( - s_measure_results, - columns=['index', 'image_name', 'dataset_name'] + sota_model_names - ) - csv_file_path = os.path.join( - cfg.benchmark_csv_dir_path, - f'smeasure_set{cfg.datasets_set}.csv' - ) - print(f'Saved S-measure results into {csv_file_path}') - df.to_csv(csv_file_path, index=False) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/workerpool.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/workerpool.py deleted file mode 100644 index fe79124ddc86d0e7251d9e1a5d1012e7165249e3..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/workerpool.py +++ /dev/null @@ -1,158 +0,0 @@ -''' -WorkerPool and WorkerBase for handling the common problems in managing -a multiprocess pool of workers that aren't done by multiprocessing.Pool, -including setup with per-process state, debugging by putting the worker -on the main thread, and correct handling of unexpected errors, and ctrl-C. - -To use it, -1. Put the per-process setup and the per-task work in the - setup() and work() methods of your own WorkerBase subclass. -2. To prepare the process pool, instantiate a WorkerPool, passing your - subclass type as the first (worker) argument, as well as any setup keyword - arguments. The WorkerPool will instantiate one of your workers in each - worker process (passing in the setup arguments in those processes). - If debugging, the pool can have process_count=0 to force all the work - to be done immediately on the main thread; otherwise all the work - will be passed to other processes. -3. Whenever there is a new piece of work to distribute, call pool.add(*args). - The arguments will be queued and passed as worker.work(*args) to the - next available worker. -4. When all the work has been distributed, call pool.join() to wait for all - the work to complete and to finish and terminate all the worker processes. - When pool.join() returns, all the work will have been done. - -No arrangement is made to collect the results of the work: for example, -the return value of work() is ignored. If you need to collect the -results, use your own mechanism (filesystem, shared memory object, queue) -which can be distributed using setup arguments. 
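A minimal usage sketch of the flow above (SquareWorker and its out_path keyword
are illustrative names, not part of this module):

    class SquareWorker(WorkerBase):
        def setup(self, out_path=None):
            # per-process initialization, fed by WorkerPool keyword arguments
            self.out_path = out_path

        def work(self, value):
            # one unit of work per pool.add() call
            print(value * value)

    pool = WorkerPool(worker=SquareWorker, process_count=4, out_path='/tmp/results')
    for v in range(10):
        pool.add(v)   # queued and dispatched to the next available worker
    pool.join()       # wait for all work to finish and shut the workers down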
-''' - -from multiprocessing import Process, Queue, cpu_count -import signal -import atexit -import sys - -class WorkerBase(Process): - ''' - Subclass this class and override its work() method (and optionally, - setup() as well) to define the units of work to be done in a process - worker in a woker pool. - ''' - def __init__(self, i, process_count, queue, initargs): - if process_count > 0: - # Make sure we ignore ctrl-C if we are not on main process. - signal.signal(signal.SIGINT, signal.SIG_IGN) - self.process_id = i - self.process_count = process_count - self.queue = queue - super(WorkerBase, self).__init__() - self.setup(**initargs) - def run(self): - # Do the work until None is dequeued - while True: - try: - work_batch = self.queue.get() - except (KeyboardInterrupt, SystemExit): - print('Exiting...') - break - if work_batch is None: - self.queue.put(None) # for another worker - return - self.work(*work_batch) - def setup(self, **initargs): - ''' - Override this method for any per-process initialization. - Keywoard args are passed from WorkerPool constructor. - ''' - pass - def work(self, *args): - ''' - Override this method for one-time initialization. - Args are passed from WorkerPool.add() arguments. - ''' - raise NotImplementedError('worker subclass needed') - -class WorkerPool(object): - ''' - Instantiate this object (passing a WorkerBase subclass type - as its first argument) to create a worker pool. Then call - pool.add(*args) to queue args to distribute to worker.work(*args), - and call pool.join() to wait for all the workers to complete. - ''' - def __init__(self, worker=WorkerBase, process_count=None, **initargs): - global active_pools - if process_count is None: - process_count = cpu_count() - if process_count == 0: - # zero process_count uses only main process, for debugging. - self.queue = None - self.processes = None - self.worker = worker(None, 0, None, initargs) - return - # Ctrl-C strategy: worker processes should ignore ctrl-C. Set - # this up to be inherited by child processes before forking. - original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN) - active_pools[id(self)] = self - self.queue = Queue(maxsize=(process_count * 3)) - self.processes = None # Initialize before trying to construct workers - self.processes = [worker(i, process_count, self.queue, initargs) - for i in range(process_count)] - for p in self.processes: - p.start() - # The main process should handle ctrl-C. Restore this now. - signal.signal(signal.SIGINT, original_sigint_handler) - def add(self, *work_batch): - if self.queue is None: - if hasattr(self, 'worker'): - self.worker.work(*work_batch) - else: - print('WorkerPool shutting down.', file=sys.stderr) - else: - try: - # The queue can block if the work is so slow it gets full. - self.queue.put(work_batch) - except (KeyboardInterrupt, SystemExit): - # Handle ctrl-C if done while waiting for the queue. - self.early_terminate() - def join(self): - # End the queue, and wait for all worker processes to complete nicely. - if self.queue is not None: - self.queue.put(None) - for p in self.processes: - p.join() - self.queue = None - # Remove myself from the set of pools that need cleanup on shutdown. - try: - del active_pools[id(self)] - except: - pass - def early_terminate(self): - # When shutting down unexpectedly, first end the queue. - if self.queue is not None: - try: - self.queue.put_nowait(None) # Nonblocking put throws if full. - self.queue = None - except: - pass - # But then don't wait: just forcibly terminate workers. 
- if self.processes is not None: - for p in self.processes: - p.terminate() - self.processes = None - try: - del active_pools[id(self)] - except: - pass - def __del__(self): - if self.queue is not None: - print('ERROR: workerpool.join() not called!', file=sys.stderr) - self.join() - -# Error and ctrl-C handling: kill worker processes if the main process ends. -active_pools = {} -def early_terminate_pools(): - for _, pool in list(active_pools.items()): - pool.early_terminate() - -atexit.register(early_terminate_pools) - diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/auto/auto_factory.py b/spaces/HaloMaster/chinesesummary/fengshen/models/auto/auto_factory.py deleted file mode 100644 index 688bbd4853284305d047be0552077f721e2f97de..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/auto/auto_factory.py +++ /dev/null @@ -1,644 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Factory function to build auto-model classes.""" -import importlib -from collections import OrderedDict - -from transformers.configuration_utils import PretrainedConfig -from transformers.file_utils import copy_func -from transformers.utils import logging -from .configuration_auto import AutoConfig, model_type_to_module_name, replace_list_option_in_docstrings -from .dynamic import get_class_from_dynamic_module - - -logger = logging.get_logger(__name__) - - -CLASS_DOCSTRING = """ - This is a generic model class that will be instantiated as one of the model classes of the library when created - with the [`~BaseAutoModelClass.from_pretrained`] class method or the [`~BaseAutoModelClass.from_config`] class - method. - - This class cannot be instantiated directly using `__init__()` (throws an error). -""" - -FROM_CONFIG_DOCSTRING = """ - Instantiates one of the model classes of the library from a configuration. - - Note: - Loading a model from its configuration file does **not** load the model weights. It only affects the - model's configuration. Use [`~BaseAutoModelClass.from_pretrained`] to load the model weights. - - Args: - config ([`PretrainedConfig`]): - The model class to instantiate is selected based on the configuration class: - - List options - - Examples: - - ```python - >>> from transformers import AutoConfig, BaseAutoModelClass - - >>> # Download configuration from huggingface.co and cache. - >>> config = AutoConfig.from_pretrained("checkpoint_placeholder") - >>> model = BaseAutoModelClass.from_config(config) - ``` -""" - -FROM_PRETRAINED_TORCH_DOCSTRING = """ - Instantiate one of the model classes of the library from a pretrained model. 
- - The model class to instantiate is selected based on the `model_type` property of the config object (either - passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by - falling back to using pattern matching on `pretrained_model_name_or_path`: - - List options - - The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are - deactivated). To train the model, you should first set it back in training mode with `model.train()` - - Args: - pretrained_model_name_or_path (`str` or `os.PathLike`): - Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a - user or organization name, like `dbmdz/bert-base-german-cased`. - - A path to a *directory* containing model weights saved using - [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. - - A path or url to a *tensorflow index checkpoint file* (e.g, `./tf_model/model.ckpt.index`). In - this case, `from_tf` should be set to `True` and a configuration object should be provided as - `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a - PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. - model_args (additional positional arguments, *optional*): - Will be passed along to the underlying model `__init__()` method. - config ([`PretrainedConfig`], *optional*): - Configuration for the model to use instead of an automatically loaded configuration. Configuration can - be automatically loaded when: - - - The model is a model provided by the library (loaded with the *model id* string of a pretrained - model). - - The model was saved using [`~PreTrainedModel.save_pretrained`] and is reloaded by supplying the - save directory. - - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a - configuration JSON file named *config.json* is found in the directory. - state_dict (*Dict[str, torch.Tensor]*, *optional*): - A state dictionary to use instead of a state dictionary loaded from saved weights file. - - This option can be used if you want to create a model from a pretrained configuration but load your own - weights. In this case though, you should check if using [`~PreTrainedModel.save_pretrained`] and - [`~PreTrainedModel.from_pretrained`] is not a simpler option. - cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - from_tf (`bool`, *optional*, defaults to `False`): - Load the model weights from a TensorFlow checkpoint save file (see docstring of - `pretrained_model_name_or_path` argument). - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. 
- output_loading_info(`bool`, *optional*, defaults to `False`): - Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (e.g., not try downloading the model). - revision(`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - trust_remote_code (`bool`, *optional*, defaults to `False`): - Whether or not to allow for custom models defined on the Hub in their own modeling files. This option - should only be set to `True` for repositories you trust and in which you have read the code, as it will - execute code present on the Hub on your local machine. - kwargs (additional keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - `output_attentions=True`). Behaves differently depending on whether a `config` is provided or - automatically loaded: - - - If a configuration is provided with `config`, `**kwargs` will be directly passed to the - underlying model's `__init__` method (we assume all relevant updates to the configuration have - already been done) - - If a configuration is not provided, `kwargs` will be first passed to the configuration class - initialization function ([`~PretrainedConfig.from_pretrained`]). Each key of `kwargs` that - corresponds to a configuration attribute will be used to override said attribute with the - supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute - will be passed to the underlying model's `__init__` function. - - Examples: - - ```python - >>> from transformers import AutoConfig, BaseAutoModelClass - - >>> # Download model and configuration from huggingface.co and cache. - >>> model = BaseAutoModelClass.from_pretrained("checkpoint_placeholder") - - >>> # Update configuration during loading - >>> model = BaseAutoModelClass.from_pretrained("checkpoint_placeholder", output_attentions=True) - >>> model.config.output_attentions - True - - >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) - >>> config = AutoConfig.from_pretrained("./tf_model/shortcut_placeholder_tf_model_config.json") - >>> model = BaseAutoModelClass.from_pretrained( - ... "./tf_model/shortcut_placeholder_tf_checkpoint.ckpt.index", from_tf=True, config=config - ... ) - ``` -""" - -FROM_PRETRAINED_TF_DOCSTRING = """ - Instantiate one of the model classes of the library from a pretrained model. - - The model class to instantiate is selected based on the `model_type` property of the config object (either - passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by - falling back to using pattern matching on `pretrained_model_name_or_path`: - - List options - - Args: - pretrained_model_name_or_path (`str` or `os.PathLike`): - Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a - user or organization name, like `dbmdz/bert-base-german-cased`. - - A path to a *directory* containing model weights saved using - [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. 
- - A path or url to a *PyTorch state_dict save file* (e.g, `./pt_model/pytorch_model.bin`). In this - case, `from_pt` should be set to `True` and a configuration object should be provided as `config` - argument. This loading path is slower than converting the PyTorch model in a TensorFlow model - using the provided conversion scripts and loading the TensorFlow model afterwards. - model_args (additional positional arguments, *optional*): - Will be passed along to the underlying model `__init__()` method. - config ([`PretrainedConfig`], *optional*): - Configuration for the model to use instead of an automatically loaded configuration. Configuration can - be automatically loaded when: - - - The model is a model provided by the library (loaded with the *model id* string of a pretrained - model). - - The model was saved using [`~PreTrainedModel.save_pretrained`] and is reloaded by supplying the - save directory. - - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a - configuration JSON file named *config.json* is found in the directory. - cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - from_pt (`bool`, *optional*, defaults to `False`): - Load the model weights from a PyTorch checkpoint save file (see docstring of - `pretrained_model_name_or_path` argument). - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (e.g., not try downloading the model). - revision(`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - trust_remote_code (`bool`, *optional*, defaults to `False`): - Whether or not to allow for custom models defined on the Hub in their own modeling files. This option - should only be set to `True` for repositories you trust and in which you have read the code, as it will - execute code present on the Hub on your local machine. - kwargs (additional keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - `output_attentions=True`). 
Behaves differently depending on whether a `config` is provided or - automatically loaded: - - - If a configuration is provided with `config`, `**kwargs` will be directly passed to the - underlying model's `__init__` method (we assume all relevant updates to the configuration have - already been done) - - If a configuration is not provided, `kwargs` will be first passed to the configuration class - initialization function ([`~PretrainedConfig.from_pretrained`]). Each key of `kwargs` that - corresponds to a configuration attribute will be used to override said attribute with the - supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute - will be passed to the underlying model's `__init__` function. - - Examples: - - ```python - >>> from transformers import AutoConfig, BaseAutoModelClass - - >>> # Download model and configuration from huggingface.co and cache. - >>> model = BaseAutoModelClass.from_pretrained("checkpoint_placeholder") - - >>> # Update configuration during loading - >>> model = BaseAutoModelClass.from_pretrained("checkpoint_placeholder", output_attentions=True) - >>> model.config.output_attentions - True - - >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) - >>> config = AutoConfig.from_pretrained("./pt_model/shortcut_placeholder_pt_model_config.json") - >>> model = BaseAutoModelClass.from_pretrained( - ... "./pt_model/shortcut_placeholder_pytorch_model.bin", from_pt=True, config=config - ... ) - ``` -""" - -FROM_PRETRAINED_FLAX_DOCSTRING = """ - Instantiate one of the model classes of the library from a pretrained model. - - The model class to instantiate is selected based on the `model_type` property of the config object (either - passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by - falling back to using pattern matching on `pretrained_model_name_or_path`: - - List options - - Args: - pretrained_model_name_or_path (`str` or `os.PathLike`): - Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a - user or organization name, like `dbmdz/bert-base-german-cased`. - - A path to a *directory* containing model weights saved using - [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`. - - A path or url to a *PyTorch state_dict save file* (e.g, `./pt_model/pytorch_model.bin`). In this - case, `from_pt` should be set to `True` and a configuration object should be provided as `config` - argument. This loading path is slower than converting the PyTorch model in a TensorFlow model - using the provided conversion scripts and loading the TensorFlow model afterwards. - model_args (additional positional arguments, *optional*): - Will be passed along to the underlying model `__init__()` method. - config ([`PretrainedConfig`], *optional*): - Configuration for the model to use instead of an automatically loaded configuration. Configuration can - be automatically loaded when: - - - The model is a model provided by the library (loaded with the *model id* string of a pretrained - model). - - The model was saved using [`~PreTrainedModel.save_pretrained`] and is reloaded by supplying the - save directory. - - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a - configuration JSON file named *config.json* is found in the directory. 
- cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - from_pt (`bool`, *optional*, defaults to `False`): - Load the model weights from a PyTorch checkpoint save file (see docstring of - `pretrained_model_name_or_path` argument). - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (e.g., not try downloading the model). - revision(`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - trust_remote_code (`bool`, *optional*, defaults to `False`): - Whether or not to allow for custom models defined on the Hub in their own modeling files. This option - should only be set to `True` for repositories you trust and in which you have read the code, as it will - execute code present on the Hub on your local machine. - kwargs (additional keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., - `output_attentions=True`). Behaves differently depending on whether a `config` is provided or - automatically loaded: - - - If a configuration is provided with `config`, `**kwargs` will be directly passed to the - underlying model's `__init__` method (we assume all relevant updates to the configuration have - already been done) - - If a configuration is not provided, `kwargs` will be first passed to the configuration class - initialization function ([`~PretrainedConfig.from_pretrained`]). Each key of `kwargs` that - corresponds to a configuration attribute will be used to override said attribute with the - supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute - will be passed to the underlying model's `__init__` function. - - Examples: - - ```python - >>> from transformers import AutoConfig, BaseAutoModelClass - - >>> # Download model and configuration from huggingface.co and cache. - >>> model = BaseAutoModelClass.from_pretrained("checkpoint_placeholder") - - >>> # Update configuration during loading - >>> model = BaseAutoModelClass.from_pretrained("checkpoint_placeholder", output_attentions=True) - >>> model.config.output_attentions - True - - >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) - >>> config = AutoConfig.from_pretrained("./pt_model/shortcut_placeholder_pt_model_config.json") - >>> model = BaseAutoModelClass.from_pretrained( - ... 
"./pt_model/shortcut_placeholder_pytorch_model.bin", from_pt=True, config=config - ... ) - ``` -""" - - -def _get_model_class(config, model_mapping): - supported_models = model_mapping[type(config)] - if not isinstance(supported_models, (list, tuple)): - return supported_models - - name_to_model = {model.__name__: model for model in supported_models} - architectures = getattr(config, "architectures", []) - for arch in architectures: - if arch in name_to_model: - return name_to_model[arch] - elif f"TF{arch}" in name_to_model: - return name_to_model[f"TF{arch}"] - elif f"Flax{arch}" in name_to_model: - return name_to_model[f"Flax{arch}"] - - # If not architecture is set in the config or match the supported models, the first element of the tuple is the - # defaults. - return supported_models[0] - - -class _BaseAutoModelClass: - # Base class for auto models. - _model_mapping = None - - def __init__(self, *args, **kwargs): - raise EnvironmentError( - f"{self.__class__.__name__} is designed to be instantiated " - f"using the `{self.__class__.__name__}.from_pretrained(pretrained_model_name_or_path)` or " - f"`{self.__class__.__name__}.from_config(config)` methods." - ) - - @classmethod - def from_config(cls, config, **kwargs): - trust_remote_code = kwargs.pop("trust_remote_code", False) - if hasattr(config, "auto_map") and cls.__name__ in config.auto_map: - if not trust_remote_code: - raise ValueError( - "Loading this model requires you to execute the modeling file in that repo " - "on your local machine. Make sure you have read the code there to avoid malicious use, then set " - "the option `trust_remote_code=True` to remove this error." - ) - if kwargs.get("revision", None) is None: - logger.warn( - "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure " - "no malicious code has been contributed in a newer revision." - ) - class_ref = config.auto_map[cls.__name__] - module_file, class_name = class_ref.split(".") - model_class = get_class_from_dynamic_module( - config.name_or_path, module_file + ".py", class_name, **kwargs) - return model_class._from_config(config, **kwargs) - elif type(config) in cls._model_mapping.keys(): - model_class = _get_model_class(config, cls._model_mapping) - return model_class._from_config(config, **kwargs) - - raise ValueError( - f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" - f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." - ) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs): - config = kwargs.pop("config", None) - trust_remote_code = kwargs.pop("trust_remote_code", False) - kwargs["_from_auto"] = True - if not isinstance(config, PretrainedConfig): - config, kwargs = AutoConfig.from_pretrained( - pretrained_model_name_or_path, return_unused_kwargs=True, trust_remote_code=trust_remote_code, **kwargs - ) - if hasattr(config, "auto_map") and cls.__name__ in config.auto_map: - if not trust_remote_code: - raise ValueError( - f"Loading {pretrained_model_name_or_path} requires you to execute the modeling file in that repo " - "on your local machine. Make sure you have read the code there to avoid malicious use, then set " - "the option `trust_remote_code=True` to remove this error." 
- ) - if kwargs.get("revision", None) is None: - logger.warn( - "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure " - "no malicious code has been contributed in a newer revision." - ) - class_ref = config.auto_map[cls.__name__] - module_file, class_name = class_ref.split(".") - model_class = get_class_from_dynamic_module( - pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs - ) - return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) - elif type(config) in cls._model_mapping.keys(): - model_class = _get_model_class(config, cls._model_mapping) - return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) - raise ValueError( - f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" - f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." - ) - - @classmethod - def register(cls, config_class, model_class): - """ - Register a new model for this class. - - Args: - config_class ([`PretrainedConfig`]): - The configuration corresponding to the model to register. - model_class ([`PreTrainedModel`]): - The model to register. - """ - if hasattr(model_class, "config_class") and model_class.config_class != config_class: - raise ValueError( - "The model class you are passing has a `config_class` attribute that is not consistent with the " - f"config class you passed (model has {model_class.config_class} and you passed {config_class}. Fix " - "one of those so they match!" - ) - cls._model_mapping.register(config_class, model_class) - - -def insert_head_doc(docstring, head_doc=""): - if len(head_doc) > 0: - return docstring.replace( - "one of the model classes of the library ", - f"one of the model classes of the library (with a {head_doc} head) ", - ) - return docstring.replace( - "one of the model classes of the library ", "one of the base model classes of the library " - ) - - -def auto_class_update(cls, checkpoint_for_example="bert-base-cased", head_doc=""): - # Create a new class with the right name from the base class - model_mapping = cls._model_mapping - name = cls.__name__ - class_docstring = insert_head_doc(CLASS_DOCSTRING, head_doc=head_doc) - cls.__doc__ = class_docstring.replace("BaseAutoModelClass", name) - - # Now we need to copy and re-register `from_config` and `from_pretrained` as class methods otherwise we can't - # have a specific docstrings for them. 
- from_config = copy_func(_BaseAutoModelClass.from_config) - from_config_docstring = insert_head_doc( - FROM_CONFIG_DOCSTRING, head_doc=head_doc) - from_config_docstring = from_config_docstring.replace( - "BaseAutoModelClass", name) - from_config_docstring = from_config_docstring.replace( - "checkpoint_placeholder", checkpoint_for_example) - from_config.__doc__ = from_config_docstring - from_config = replace_list_option_in_docstrings( - model_mapping._model_mapping, use_model_types=False)(from_config) - cls.from_config = classmethod(from_config) - - if name.startswith("TF"): - from_pretrained_docstring = FROM_PRETRAINED_TF_DOCSTRING - elif name.startswith("Flax"): - from_pretrained_docstring = FROM_PRETRAINED_FLAX_DOCSTRING - else: - from_pretrained_docstring = FROM_PRETRAINED_TORCH_DOCSTRING - from_pretrained = copy_func(_BaseAutoModelClass.from_pretrained) - from_pretrained_docstring = insert_head_doc( - from_pretrained_docstring, head_doc=head_doc) - from_pretrained_docstring = from_pretrained_docstring.replace( - "BaseAutoModelClass", name) - from_pretrained_docstring = from_pretrained_docstring.replace( - "checkpoint_placeholder", checkpoint_for_example) - shortcut = checkpoint_for_example.split("/")[-1].split("-")[0] - from_pretrained_docstring = from_pretrained_docstring.replace( - "shortcut_placeholder", shortcut) - from_pretrained.__doc__ = from_pretrained_docstring - from_pretrained = replace_list_option_in_docstrings( - model_mapping._model_mapping)(from_pretrained) - cls.from_pretrained = classmethod(from_pretrained) - return cls - - -def get_values(model_mapping): - result = [] - for model in model_mapping.values(): - if isinstance(model, (list, tuple)): - result += list(model) - else: - result.append(model) - - return result - - -def getattribute_from_module(module, attr): - if attr is None: - return None - if isinstance(attr, tuple): - return tuple(getattribute_from_module(module, a) for a in attr) - if hasattr(module, attr): - return getattr(module, attr) - # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the - # object at the top level. - transformers_module = importlib.import_module("fengshen") - return getattribute_from_module(transformers_module, attr) - - -class _LazyAutoMapping(OrderedDict): - """ - " A mapping config to object (model or tokenizer for instance) that will load keys and values when it is accessed. 
- - Args: - - - config_mapping: The map model type to config class - - model_mapping: The map model type to model (or tokenizer) class - """ - - def __init__(self, config_mapping, model_mapping): - self._config_mapping = config_mapping - self._reverse_config_mapping = { - v: k for k, v in config_mapping.items()} - self._model_mapping = model_mapping - self._extra_content = {} - self._modules = {} - - def __getitem__(self, key): - if key in self._extra_content: - return self._extra_content[key] - model_type = self._reverse_config_mapping[key.__name__] - if model_type not in self._model_mapping: - raise KeyError(key) - model_name = self._model_mapping[model_type] - return self._load_attr_from_module(model_type, model_name) - - def _load_attr_from_module(self, model_type, attr): - module_name = model_type_to_module_name(model_type) - if module_name not in self._modules: - self._modules[module_name] = importlib.import_module( - f".{module_name}", "fengshen.models") - return getattribute_from_module(self._modules[module_name], attr) - - def keys(self): - mapping_keys = [ - self._load_attr_from_module(key, name) - for key, name in self._config_mapping.items() - if key in self._model_mapping.keys() - ] - return mapping_keys + list(self._extra_content.keys()) - - def get(self, key, default): - try: - return self.__getitem__(key) - except KeyError: - return default - - def __bool__(self): - return bool(self.keys()) - - def values(self): - mapping_values = [ - self._load_attr_from_module(key, name) - for key, name in self._model_mapping.items() - if key in self._config_mapping.keys() - ] - return mapping_values + list(self._extra_content.values()) - - def items(self): - mapping_items = [ - ( - self._load_attr_from_module(key, self._config_mapping[key]), - self._load_attr_from_module(key, self._model_mapping[key]), - ) - for key in self._model_mapping.keys() - if key in self._config_mapping.keys() - ] - return mapping_items + list(self._extra_content.items()) - - def __iter__(self): - return iter(self.keys()) - - def __contains__(self, item): - if item in self._extra_content: - return True - if not hasattr(item, "__name__") or item.__name__ not in self._reverse_config_mapping: - return False - model_type = self._reverse_config_mapping[item.__name__] - return model_type in self._model_mapping - - def register(self, key, value): - """ - Register a new model in this mapping. - """ - if hasattr(key, "__name__") and key.__name__ in self._reverse_config_mapping: - model_type = self._reverse_config_mapping[key.__name__] - if model_type in self._model_mapping.keys(): - raise ValueError( - f"'{key}' is already used by a Transformers model.") - - self._extra_content[key] = value diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/adaptive_loss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/adaptive_loss.py deleted file mode 100644 index 6209ceaedb6d8120ad820c11b55c13596447933c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/adaptive_loss.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass - -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.constants import DDP_BACKEND_CHOICES -from omegaconf import II - - -@dataclass -class AdaptiveLossConfig(FairseqDataclass): - sentence_avg: bool = II("optimization.sentence_avg") - ddp_backend: DDP_BACKEND_CHOICES = II("distributed_training.ddp_backend") - - -@register_criterion("adaptive_loss", dataclass=AdaptiveLossConfig) -class AdaptiveLoss(FairseqCriterion): - """This is an implementation of the loss function accompanying the adaptive softmax approximation for - graphical processing units (GPU), described in the paper "Efficient softmax approximation for GPUs" - (http://arxiv.org/abs/1609.04309).""" - - def __init__(self, task, sentence_avg): - super().__init__(task) - self.sentence_avg = sentence_avg - - @classmethod - def build_criterion(cls, cfg: AdaptiveLossConfig, task): - if cfg.ddp_backend in {"c10d", "pytorch_ddp"}: - raise Exception( - "AdaptiveLoss is not compatible with the PyTorch " - "version of DistributedDataParallel. Please use " - "`--ddp-backend=legacy_ddp` instead." - ) - return cls(task, cfg.sentence_avg) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - - assert ( - hasattr(model.decoder, "adaptive_softmax") - and model.decoder.adaptive_softmax is not None - ) - adaptive_softmax = model.decoder.adaptive_softmax - - net_output = model(**sample["net_input"]) - orig_target = model.get_targets(sample, net_output) - - nsentences = orig_target.size(0) - orig_target = orig_target.view(-1) - - bsz = orig_target.size(0) - - logits, target = adaptive_softmax(net_output[0], orig_target) - assert len(target) == len(logits) - - loss = net_output[0].new(1 if reduce else bsz).zero_() - - for i in range(len(target)): - if target[i] is not None: - assert target[i].min() >= 0 and target[i].max() <= logits[i].size(1) - loss += F.cross_entropy( - logits[i], - target[i], - ignore_index=self.padding_idx, - reduction="sum" if reduce else "none", - ) - - orig = utils.strip_pad(orig_target, self.padding_idx) - ntokens = orig.numel() - sample_size = sample["target"].size(0) if self.sentence_avg else ntokens - logging_output = { - "loss": loss.data, - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - 
""" - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/inference/advanced_infer.sh b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/inference/advanced_infer.sh deleted file mode 100644 index 6bbd53454331f0bd5157aa4e38ae4d329fba05fd..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/scripts/inference/advanced_infer.sh +++ /dev/null @@ -1,22 +0,0 @@ -gender='male' -glowdir='../../checkpoints/glow/'$gender'/' -hifidir='../../checkpoints/hifi/'$gender'/' -device='cpu' -text='Hey mr. I am testing this one. Now on multiple sentences. Just want to see the flow.' -noise_scale='0.667' -length_scale='1.0' -transliteration=1 -number_conversion=1 -split_sentences=1 -lang='en' - - -timestamp=$(date +%s) -wav='../../results/'$gender'/' -wav_file=$wav/$timestamp'.wav' - - -mkdir -p $wav - -python ../../utils/inference/advanced_tts.py -a $glowdir -v $hifidir -d $device -t "$text" -w $wav_file -L $lang -n $noise_scale -l $length_scale -T $transliteration -N $number_conversion -S $split_sentences -echo "File saved at: "$wav_file diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/syllable/syllabifier.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/syllable/syllabifier.py deleted file mode 100644 index 2a0cfb0be6ac9e9c2c9938b4a8b4b84b054d28c8..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/syllable/syllabifier.py +++ /dev/null @@ -1,302 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -import codecs, sys -from indicnlp.script import indic_scripts as si -import re - -chillu_char_map= { - '\u0d7a': '\u0d23', - '\u0d7b': '\u0d28', - '\u0d7c': '\u0d30', - '\u0d7d': '\u0d32', - '\u0d7e': '\u0d33', - '\u0d7f': '\u0d15', - } - -char_chillu_map= {} -for k,v in chillu_char_map.items(): - char_chillu_map[v]=k - -def normalize_malayalam(word): - - word_mask=re.sub(r'[0-9]','0',word) - - # instead of chillu characters, use consonant+halant - for chillu,char in chillu_char_map.items(): - word=word.replace(chillu,'{}\u0d4d'.format(char)) - word_mask=word_mask.replace(chillu,'41') - - word_mask=re.sub(r'[^0-9]','0',word_mask) - - return word, word_mask - -def denormalize_malayalam(word, word_mask): - - word=list(word) - word_mask=list(word_mask) - - ## pattern 4 - idx=0 - while idx>=0: - try: - idx=word_mask.index('4',idx) - word[idx:idx+2]=char_chillu_map[word[idx]] - word_mask[idx:idx+2]='0' - start=idx - except ValueError as e: - break - - return ''.join(word) - -def normalize_punjabi(word): - word_mask=re.sub(r'[0-9]','0',word) - - ## replace tippi with anusvaar - word=word.replace('\u0a70','\u0a02') - word_mask=word_mask.replace('\u0a70','2') - - ## replace addak+consonant with consonat+halant+consonant - word=re.sub(r'\u0a71(.)','\\1\u0a4d\\1',word) - word_mask=re.sub(r'\u0a71(.)','311',word_mask) - - word_mask=re.sub(r'[^0-9]','0',word_mask) - - return word, word_mask - -def denormalize_punjabi(word, word_mask): - - word=list(word) - word_mask=list(word_mask) - - ## pattern 2 - idx=0 - while idx>=0: - try: - idx=word_mask.index('2',idx) - word[idx]='\u0a70' - word_mask[idx]='0' - start=idx - except ValueError as e: - break - - ## pattern 3 - idx=0 - while idx>=0: - try: - idx=word_mask.index('3',idx) - word[idx:idx+3]='\u0a71{}'.format(word[idx]) - word_mask[idx:idx+3]='00' - start=idx - except ValueError as e: - break - - return ''.join(word) - -def char_backoff(syllables_list,vocab): - syllables_final=[] - - if vocab is None: - syllables_final=syllables_list - else: - for s in syllables_list: - if s in vocab: - syllables_final.append(s) - else: - for x in s: - syllables_final.append(x) - - return syllables_final - - -def orthographic_syllabify_improved(word,lang,vocab=None): - - word_mask=['0']*len(word) - - if lang=='ml': - word, word_mask = normalize_malayalam(word) - word=word - elif lang=='pa': - word, word_mask = normalize_punjabi(word) - - p_vectors=[si.get_phonetic_feature_vector(c,lang) for c in word] - - syllables=[] - syllables_mask=[] - - for i in range(len(word)): - v=p_vectors[i] - - syllables.append(word[i]) - syllables_mask.append(word_mask[i]) - - ### simplified syllabification - #if i+1= 0: - print('Warning') - - if lang=='ml': - syllables = denormalize_malayalam(syllables,syllables_mask) - elif lang=='pa': - syllables = denormalize_punjabi(syllables,syllables_mask) - - syllables_list = syllables.strip().split(' ') - return(char_backoff(syllables_list,vocab)) - -def orthographic_syllabify(word,lang,vocab=None): - - p_vectors=[si.get_phonetic_feature_vector(c,lang) for c in word] - - syllables=[] - - for i in range(len(word)): - v=p_vectors[i] - - syllables.append(word[i]) - - ### simplified syllabification - #if i+1 x.shape[1]: - s0 = m - s1 = int(float(m) / float(x.shape[0]) * float(x.shape[1])) - else: - s0 = int(float(m) / float(x.shape[1]) * float(x.shape[0])) - s1 = m - new_max = max(s1, s0) - raw_max = max(x.shape[0], x.shape[1]) - if new_max < raw_max: - interpolation = cv2.INTER_AREA - else: - interpolation = cv2.INTER_LANCZOS4 - y = 
cv2.resize(x, (s1, s0), interpolation=interpolation) - return y - - -def s_enhance(x, k=2.0): - p = cv2.cvtColor(x, cv2.COLOR_RGB2HSV).astype(np.float) - p[:, :, 1] *= k - p = p.clip(0, 255).astype(np.uint8) - return cv2.cvtColor(p, cv2.COLOR_HSV2RGB).clip(0, 255) - - -def sss_enhance(x, k=2.0): - p = cv2.cvtColor(x, cv2.COLOR_RGB2HSV).astype(np.float) - p[:, :, 1] *= k - p[:, :, 2] = 255 - p = p.clip(0, 255).astype(np.uint8) - return cv2.cvtColor(p, cv2.COLOR_HSV2RGB).clip(0, 255) - - -def ini_hint(x): - r = np.zeros(shape=(x.shape[0], x.shape[1], 4), dtype=np.uint8) - return r - - -def opreate_gird_hint(gird, points, type, length): - h = gird.shape[0] - w = gird.shape[1] - for point in points: - x, y, r, g, b, t = point - if t == type: - x = int(x * w) - y = int(y * h) - l_ = max(0, x - length) - b_ = max(0, y - length) - r_ = min(w, x + length + 1) - t_ = min(h, y + length + 1) - gird[b_:t_, l_:r_, 2] = 1 - r / 255.0 - gird[b_:t_, l_:r_, 1] = 1 - g / 255.0 - gird[b_:t_, l_:r_, 0] = 1 - b / 255.0 - gird[b_:t_, l_:r_, 3] = 1 - return gird - - -def opreate_normal_hint(gird, points, length, skip_sp): - h = gird.shape[0] - w = gird.shape[1] - for point in points: - x, y, r, g, b = point - x = int(x * w) - y = int(y * h) - l_ = max(0, x - length) - b_ = max(0, y - length) - r_ = min(w, x + length + 1) - t_ = min(h, y + length + 1) - if skip_sp: - if r == 1 and g == 233 and b == 0: - continue - elif r == 0 and g == 233 and b == 1: - continue - else: - gird[b_:t_, l_:r_, 2] = r - gird[b_:t_, l_:r_, 1] = g - gird[b_:t_, l_:r_, 0] = b - gird[b_:t_, l_:r_, 3] = 255.0 - else: - if r == 1 and g == 233 and b == 0: - gird[b_:t_, l_:r_, 2] = r - gird[b_:t_, l_:r_, 1] = g - gird[b_:t_, l_:r_, 0] = b - gird[b_:t_, l_:r_, 3] = 255.0 - elif r == 0 and g == 233 and b == 1: - gird[b_:t_, l_:r_, 2] = r - gird[b_:t_, l_:r_, 1] = g - gird[b_:t_, l_:r_, 0] = b - gird[b_:t_, l_:r_, 3] = 255.0 - else: - continue - return gird - - -def opreate_non_paramic_hints(gird, points, type): - points_r = [] - colors_r = [] - h = gird.shape[0] - w = gird.shape[1] - for point in points: - x, y, r, g, b, t = point - if t in type: - x = int(x * w) - y = int(y * h) - points_r.append([y, x]) - colors_r.append([b, g, r]) - return points_r, colors_r - - -def go_cvline(img): - x = cv2.Sobel(img, cv2.CV_16S, 1, 0) - y = cv2.Sobel(img, cv2.CV_16S, 0, 1) - absX = cv2.convertScaleAbs(x) - absY = cv2.convertScaleAbs(y) - r = 255 - cv2.addWeighted(absX, 0.5, absY, 0.5, 0) - return np.tile(np.min(r, axis=2, keepdims=True).clip(0, 255).astype(np.uint8), [1, 1, 3]) - - -def go_passline(img): - o = img.astype(np.float32) - b = cv2.GaussianBlur(img, (7, 7), 0).astype(np.float32) - r = np.max(b - o, axis=2, keepdims=True) - r /= np.max(cv2.resize(r.clip(0, 255).astype(np.uint8), (64, 64), cv2.INTER_AREA)) - r = (1 - r).clip(0, 1) - return np.tile((r * 255.0).clip(0, 255).astype(np.uint8), [1, 1, 3]) - - -def min_k_down(x, k): - y = 255 - x.astype(np.float32) - y = block_reduce(y, (k, k), np.max) - y = 255 - y - return y.clip(0, 255).astype(np.uint8) - - -def min_k_down_c(x, k): - y = 255 - x.astype(np.float32) - y = block_reduce(y, (k, k, 1), np.max) - y = 255 - y - return y.clip(0, 255).astype(np.uint8) - - -def mini_norm(x): - y = x.astype(np.float32) - y = 1 - y / 255.0 - y -= np.min(y) - y /= np.max(y) - return (255.0 - y * 80.0).astype(np.uint8) - - -def hard_norm(x): - o = x.astype(np.float32) - b = cv2.GaussianBlur(x, (3, 3), 0).astype(np.float32) - y = (o - b + 255.0).clip(0, 255) - y = 1 - y / 255.0 - y -= np.min(y) - y /= np.max(y) - 
y[y < np.mean(y)] = 0 - y[y > 0] = 1 - return (255.0 - y * 255.0).astype(np.uint8) - - -def sensitive(x, s=15.0): - y = x.astype(np.float32) - y -= s - y /= 255.0 - s * 2.0 - y *= 255.0 - return y.clip(0, 255).astype(np.uint8) - - -def min_black(x): - return np.tile(np.min(x, axis=2, keepdims=True), [1, 1, 3]) - - -def eye_black(x): - return cv2.cvtColor(cv2.cvtColor(x, cv2.COLOR_RGB2GRAY), cv2.COLOR_GRAY2RGB) - - -def cal_std(x): - y = (cv2.resize(x, (128, 128), cv2.INTER_AREA)).astype(np.float32) - return np.mean(np.var(y, axis=2)) - - -def emph_line(x, y, c): - a = x.astype(np.float32) - b = y.astype(np.float32)[:, :, None] / 255.0 - c = np.tile(c[None, None, ::-1], [a.shape[0], a.shape[1], 1]) - return (a * b + c * (1 - b)).clip(0, 255).astype(np.uint8) - - -def de_line(x, y): - a = x.astype(np.float32) - b = y.astype(np.float32)[:, :, None] / 255.0 - c = np.tile(np.array([255, 255, 255])[None, None, ::-1], [a.shape[0], a.shape[1], 1]) - return (a * b + c * (1 - b)).clip(0, 255).astype(np.uint8) - - -def blur_line(x, y): - o = x.astype(np.float32) - b = cv2.GaussianBlur(x, (3, 3), 0).astype(np.float32) - k = y.astype(np.float32)[:, :, None] / 255.0 - return (o * k + b * (1 - k)).clip(0, 255).astype(np.uint8) - - -def clip_15(x, s=15.0): - return ((x - s) / (255.0 - s - s)).clip(0, 1) * 255.0 - - -def cv_denoise(x): - return cv2.fastNlMeansDenoisingColored(x, None, 3, 3, 7, 21) - - -def norm_sketch(x): - tiny_image = cv2.resize(x, (256, 256), interpolation=cv2.INTER_AREA) - min = np.min(tiny_image) - max = np.max(tiny_image) - y = x.astype(np.float) - y -= min - y /= max - min - y *= 255.0 - return y.clip(0, 255).astype(np.uint8) - - -clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16, 16)) - - -def go_cal(x): - r = clahe.apply(x[:, :, 0]) - g = clahe.apply(x[:, :, 1]) - b = clahe.apply(x[:, :, 2]) - img = np.stack([r, g, b], axis=2) - return img - - -def shrink(x): - a = cv2.resize(x, (x.shape[1] // 2, x.shape[0] // 2), cv2.INTER_AREA) - b = a[:, ::-1] - c = a[::-1, :] - d = a[::-1, ::-1] - e = np.concatenate([a, b], axis=1) - f = np.concatenate([c, d], axis=1) - g = np.concatenate([e, f], axis=0) - return g - - -barriersss = np.zeros(shape=(1024, 1024), dtype=np.uint8) -for _x in range(1024): - for _y in range(1024): - if _x % 32 == 0 or _y % 32 == 0 or _x % 32 == 1 or _y % 32 == 1 or _x % 32 == 2 or _y % 32 == 2 or _x % 32 == 3 or _y % 32 == 3 or _x % 32 == 4 or _y % 32 == 4: - barriersss[_x, _y] = 1 - - -def check_filter(x): - kbas = cv2.resize(barriersss, (x.shape[1], x.shape[0]), interpolation=cv2.INTER_NEAREST) - result = np.zeros_like(x) - result[kbas > 0] = x[kbas > 0] - return result - - -def get_hue_direction(source, target): - h1 = cv2.cvtColor(source, cv2.COLOR_RGB2HSV)[:, :, 0].astype(np.float32) - h2 = cv2.cvtColor(target, cv2.COLOR_RGB2HSV)[:, :, 0].astype(np.float32) - h3 = h2 + 256 - h4 = h2 - 256 - r1 = h2 - h1 - r2 = h3 - h1 - r3 = h4 - h1 - rs = r1.copy() - rs[np.abs(r2) < np.abs(rs)] = r2[np.abs(r2) < np.abs(rs)] - rs[np.abs(r3) < np.abs(rs)] = r3[np.abs(r3) < np.abs(rs)] - rs[rs < 0] = 0 - rs[rs > 0] = 255 - return rs.clip(0, 255).astype(np.uint8) - - -def small_norm(x): - x = cv2.resize(x, (256, 256), cv2.INTER_AREA) - x = np_max_pool(x) - x = np_max_pool(x) - x = np_max_pool(x) - x = cv2.GaussianBlur(x, (0, 0), 3.0) - return x - - -def cli_norm(sketch): - tiny_sketch = cv2.resize(sketch, (256, 256), interpolation=cv2.INTER_AREA).astype(np.float32) - tiny_min = np.min(tiny_sketch) - tiny_max = np.max(tiny_sketch) - return ((sketch.astype(np.float32) - 
tiny_min) / (tiny_max - tiny_min) * 255.0).clip(0, 255).astype(np.uint8) - - -def image_colorfulness(image): - R = image[:, :, 0].astype(np.float32) - G = image[:, :, 1].astype(np.float32) - B = image[:, :, 2].astype(np.float32) - - R -= np.mean(R) - G -= np.mean(G) - B -= np.mean(B) - - rg = np.absolute(R - G) - - yb = np.absolute(0.5 * (R + G) - B) - - (rbMean, rbStd) = (np.mean(rg), np.std(rg)) - (ybMean, ybStd) = (np.mean(yb), np.std(yb)) - - stdRoot = np.sqrt((rbStd ** 2) + (ybStd ** 2)) - meanRoot = np.sqrt((rbMean ** 2) + (ybMean ** 2)) - - return stdRoot + (0.3 * meanRoot) - - -def reason_blending(color, sketch): - color = (color.astype(np.float32) / 255.0).clip(0, 1) - sketch = (sketch.astype(np.float32) / 255.0).clip(0, 1) - sketch_r = sketch.copy() - sketch_r = sketch_r ** 5 - color_max = np.max(color, axis=2, keepdims=True) - downs = color ** np.pi - downs = (downs + 1e-10) / (np.max(downs, axis=2, keepdims=True) + 1e-10) * color_max - bleeding = color * sketch_r + downs * (1 - sketch_r) - result_YUV = cv2.cvtColor((bleeding * 255.0).clip(0, 255).astype(np.uint8), cv2.COLOR_RGB2YUV) - sketch_YUV = cv2.cvtColor((sketch * 255.0).clip(0, 255).astype(np.uint8), cv2.COLOR_RGB2YUV) - result_YUV[:, :, 0] = np.minimum(result_YUV[:, :, 0], sketch_YUV[:, :, 0]) - return cv2.cvtColor(result_YUV, cv2.COLOR_YUV2RGB) - - -def absmax(a, axis=None): - amax = a.max(axis) - amin = a.min(axis) - return np.where(-amin > amax, amin, amax) - - diff --git a/spaces/HugoDzz/spaceship_drift/README.md b/spaces/HugoDzz/spaceship_drift/README.md deleted file mode 100644 index cbfcc10b1143352b71da6387859e95a8d59e0430..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Spaceship Drift Game -emoji: 🪐 -colorFrom: purple -colorTo: purple -sdk: static -pinned: true -license: mit -app_file: build/index.html ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ICML2022/OFA/fairseq/examples/fully_sharded_data_parallel/README.md b/spaces/ICML2022/OFA/fairseq/examples/fully_sharded_data_parallel/README.md deleted file mode 100644 index b9e44fef48bee5faeee27b3d1d1b1eb96b6a477f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/fully_sharded_data_parallel/README.md +++ /dev/null @@ -1,177 +0,0 @@ -# Fully Sharded Data Parallel (FSDP) - -## Overview -Recent work by [Microsoft](https://arxiv.org/abs/1910.02054) and -[Google](https://arxiv.org/abs/2004.13336) has shown that data parallel -training can be made significantly more efficient by sharding the model -parameters and optimizer state across data parallel workers. These ideas are -encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper provided -by [fairscale](https://github.com/facebookresearch/fairscale/). 
- -Compared to PyTorch DDP: -* FSDP produces identical results as PyTorch DDP (it's still synchronous data parallel training) -* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs -* FSDP is faster than PyTorch DDP because the optimizer step is sharded, and the communication can be overlapped with the forward pass -* FSDP enables training 13B parameter models on 8 GPUs and 175B parameter models on 128 GPUs - -FSDP is fully supported in fairseq via the following new arguments: -* `--ddp-backend=fully_sharded`: enables full sharding via FSDP -* `--cpu-offload`: offloads the optimizer state and FP32 model copy to CPU (combine with `--optimizer=cpu_adam`) -* `--no-reshard-after-forward`: increases training speed for large models (1B+ params) and is similar to ZeRO stage 2 -* other popular options (`--fp16`, `--update-freq`, `--checkpoint-activations`, `--offload-activations`, etc.) continue to work as normal - -
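-As a rough sketch, these flags combine with an ordinary `fairseq-train` invocation along the following lines (the data path and architecture below are placeholders; the complete, tested commands are given under "Example usage" further down):
-
-```bash
-# Hypothetical minimal invocation (sketch): full sharding with mixed precision,
-# offloading optimizer state and the FP32 model copy to CPU.
-fairseq-train data-bin/my-dataset \
-    --task language_modeling --arch transformer_lm_gpt3_13 \
-    --ddp-backend fully_sharded --fp16 \
-    --cpu-offload --optimizer cpu_adam \
-    --checkpoint-activations
-```
-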
-### Limitations
-
-FSDP currently has several limitations compared to fairseq's default DDP backend (PyTorch DDP):
-* while FSDP is fully compatible with pointwise Optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.), it is not currently compatible with non-pointwise Optimizers (e.g., Adagrad, Adafactor, LAMB, etc.)
-* FSDP depends on flattening the parameters, so models that currently require `--fp16-no-flatten-grads` may not be supported
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of these and other limitations.
-
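-As a concrete illustration of the first limitation, a minimal sketch (the registered optimizer names `adam` and `adafactor` are assumptions here, not taken from this README):
-
-```bash
-# Pointwise optimizer (Adam): expected to work with full sharding.
-fairseq-train ... --ddp-backend fully_sharded --optimizer adam --adam-betas "(0.9,0.98)"
-
-# Non-pointwise optimizer (e.g. Adafactor): NOT currently compatible with FSDP.
-# fairseq-train ... --ddp-backend fully_sharded --optimizer adafactor
-```
-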

-### How it works
-
-*(figure: Fully Sharded Data Parallel)*
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of how FSDP works.
-

    - -## Example usage - -The following examples illustrate how to train a very large language model with -13 billion parameters on 1 GPU by offloading parameters and optimizer states to -CPU, or on 8 GPUs by fully sharding the params and optimizer states across GPUs. - -These examples use the WikiText-103 dataset for demonstration purposes, but -in practice a much larger dataset will be needed to achieve good results. -Follow the [instructions here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.pretraining.md#1-preprocess-the-data) -to preprocess the WikiText-103 dataset using the GPT-2/RoBERTa vocabulary. - -### 13B params on 1 V100 GPU (with CPU offloading) - -The following command trains a 13B parameter GPT-3 model on a single V100 GPU -using the `--cpu-offload` feature to offload parameters and optimizer states to -CPU. In this setting, the optimizer step (Adam) happens on CPU. We also use the -`--checkpoint-activations` feature (sometimes called [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html)), -which further saves memory in exchange for a small increase in computation. - -**Requirements:** -- Install the latest master version of fairscale: `pip install git+https://github.com/facebookresearch/fairscale.git@master` -- You'll need 32GB of GPU memory and ~256GB of system memory to train the 13B param model. -- If you have less system memory, the 6.7B param model can be trained with ~128GB of system memory, just set `--arch transformer_lm_gpt3_6_7` -- We use the CPU Adam optimizer from [DeepSpeed](https://github.com/microsoft/DeepSpeed), so you'll need to `pip install deepspeed` before running the command. - -**Notes:** -- The command will take ~5 minutes to start training, during which time it will appear to be hung, since randomly initializing 13B weights can be slow. -- The `--cpu-offload` feature requires training in mixed precision (`--fp16`). -- Tune the `OMP_NUM_THREADS` env variable for best performance with CPU offloading. -- The example command below stops training after 10 steps (`--max-update 10`) and does not save checkpoints (`--no-save`). - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
#### Example output

    - -``` -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | training on 1 devices (GPUs/TPUs) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 12:31:36 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.475", "ppl": "91120.8", "wps": "0", "ups": "0", "wpb": "16384", "bsz": "8", "num_updates": "1", "lr": "2e-05", "gnorm": "20.751", "loss_scale": "4", "train_wall": "99", "gb_free": "9.3", "wall": "105"} -2021-03-08 12:32:33 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.446", "ppl": "89281.6", "wps": "288.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "2", "lr": "4e-05", "gnorm": "19.777", "loss_scale": "4", "train_wall": "57", "gb_free": "9.3", "wall": "161"} -2021-03-08 12:33:12 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 12:33:51 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 12:34:45 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "25.22", "ppl": "3.90691e+07", "wps": "123.4", "ups": "0.01", "wpb": "16384", "bsz": "8", "num_updates": "3", "lr": "6e-05", "gnorm": "131.281", "loss_scale": "1", "train_wall": "133", "gb_free": "9.3", "wall": "294"} -2021-03-08 12:35:43 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.079", "ppl": "276809", "wps": "285.5", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "4", "lr": "8e-05", "gnorm": "13.776", "loss_scale": "1", "train_wall": "57", "gb_free": "9.3", "wall": "351"} -2021-03-08 12:36:35 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "23.729", "ppl": "1.39088e+07", "wps": "316.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "72.774", "loss_scale": "1", "train_wall": "52", "gb_free": "9.3", "wall": "403"} -2021-03-08 12:37:28 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "20.429", "ppl": "1.41203e+06", "wps": "307.6", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "6", "lr": "8e-05", "gnorm": "60.846", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "456"} -2021-03-08 12:38:27 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.965", "ppl": "511684", "wps": "279.4", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "7", "lr": "6e-05", "gnorm": "22.687", "loss_scale": "1", "train_wall": "59", "gb_free": "9.3", "wall": "515"} -2021-03-08 12:39:18 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.345", "ppl": "332887", "wps": "319.1", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "8", "lr": "4e-05", "gnorm": "8.451", "loss_scale": "1", "train_wall": "51", "gb_free": "9.3", "wall": "566"} -2021-03-08 12:40:11 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "18.262", "ppl": "314336", "wps": "305.9", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "9", "lr": "2e-05", "gnorm": "6.457", "loss_scale": "1", "train_wall": "54", "gb_free": "9.3", "wall": "620"} -2021-03-08 12:41:04 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "17.556", "ppl": "192686", "wps": "311.8", "ups": "0.02", "wpb": "16384", 
"bsz": "8", "num_updates": "10", "lr": "0", "gnorm": "5.796", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "673"} -2021-03-08 12:41:04 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 12:41:04 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 12:43:15 | INFO | valid | {"epoch": 1, "valid_loss": "17.953", "valid_ppl": "253807", "valid_wps": "1868.4", "valid_wpb": "15400.2", "valid_bsz": "7.6", "valid_num_updates": "10"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 12:43:15 | INFO | train | {"epoch": 1, "train_loss": "19.351", "train_ppl": "668509", "train_wps": "210.9", "train_ups": "0.01", "train_wpb": "16384", "train_bsz": "8", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "36.26", "train_loss_scale": "1", "train_train_wall": "667", "train_gb_free": "9.3", "train_wall": "804"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | done training in 798.6 seconds -``` - -

    - -### 13B params on 8 V100 GPUs (with full parameter + optimizer state sharding) - -FSDP can also shard the parameters and optimizer states across multiple GPUs, -reducing memory requirements significantly. On 8 x 32GB GPUs, sharding enables -training the same 13B parameter model *without offloading the parameters to -CPU*. However, without CPU offloading we'd only be able to fit a batch size of -1 per GPU, which would cause training speed to suffer. - -We obtain the best performance on 8 GPUs by combining full sharding and CPU -offloading. The following command trains the same 13B parameter GPT-3 model as -before on 8 x 32GB V100 GPUs; training speed increases superlinearly from ~310 -words per second to ~3200 words per second. - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
#### Example output

    - -``` -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | training on 8 devices (GPUs/TPUs) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 18:05:06 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "16.408", "ppl": "86945.6", "wps": "0", "ups": "0", "wpb": "131072", "bsz": "64", "num_updates": "1", "lr": "2e-05", "gnorm": "18.27", "loss_scale": "4", "train_wall": "47", "gb_free": "9.3", "wall": "56"} -2021-03-08 18:05:45 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "16.352", "ppl": "83644.3", "wps": "3283.4", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "2", "lr": "4e-05", "gnorm": "18.411", "loss_scale": "4", "train_wall": "40", "gb_free": "9.3", "wall": "96"} -2021-03-08 18:06:21 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 18:06:56 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 18:07:37 | INFO | train_inner | {"epoch": 1, "update": 0.006, "loss": "23.682", "ppl": "1.34537e+07", "wps": "1176.6", "ups": "0.01", "wpb": "131072", "bsz": "64", "num_updates": "3", "lr": "6e-05", "gnorm": "119.682", "loss_scale": "1", "train_wall": "111", "gb_free": "9.3", "wall": "208"} -2021-03-08 18:08:18 | INFO | train_inner | {"epoch": 1, "update": 0.007, "loss": "18.988", "ppl": "519921", "wps": "3189.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "4", "lr": "8e-05", "gnorm": "14.934", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "249"} -2021-03-08 18:08:59 | INFO | train_inner | {"epoch": 1, "update": 0.008, "loss": "20.08", "ppl": "1.10798e+06", "wps": "3223.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "5", "lr": "0.0001", "gnorm": "59.92", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "289"} -2021-03-08 18:09:39 | INFO | train_inner | {"epoch": 1, "update": 0.009, "loss": "18.323", "ppl": "327980", "wps": "3256.6", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "6", "lr": "8e-05", "gnorm": "37.425", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "330"} -2021-03-08 18:10:20 | INFO | train_inner | {"epoch": 1, "update": 0.01, "loss": "17.264", "ppl": "157354", "wps": "3188.7", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "7", "lr": "6e-05", "gnorm": "10.824", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "371"} -2021-03-08 18:11:01 | INFO | train_inner | {"epoch": 1, "update": 0.011, "loss": "16.794", "ppl": "113647", "wps": "3230", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "8", "lr": "4e-05", "gnorm": "5.616", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "411"} -2021-03-08 18:11:39 | INFO | train_inner | {"epoch": 1, "update": 0.012, "loss": "16.706", "ppl": "106938", "wps": "3384", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "9", "lr": "2e-05", "gnorm": "5.318", "loss_scale": "1", "train_wall": "39", "gb_free": "9.3", "wall": "450"} -2021-03-08 18:12:19 | INFO | train_inner | {"epoch": 1, "update": 0.013, "loss": "16.548", "ppl": "95796.2", "wps": "3274.4", "ups": "0.02", 
"wpb": "131072", "bsz": "64", "num_updates": "10", "lr": "0", "gnorm": "5.22", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "490"} -2021-03-08 18:12:19 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 18:12:19 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 18:12:45 | INFO | valid | {"epoch": 1, "valid_loss": "16.624", "valid_ppl": "101000", "valid_wps": "10855.9", "valid_wpb": "123202", "valid_bsz": "60.5", "valid_num_updates": "10"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 18:12:45 | INFO | train | {"epoch": 1, "train_loss": "18.114", "train_ppl": "283776", "train_wps": "2567.8", "train_ups": "0.02", "train_wpb": "131072", "train_bsz": "64", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "29.562", "train_loss_scale": "1", "train_train_wall": "480", "train_gb_free": "9.3", "train_wall": "516"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | done training in 509.9 seconds -``` - -

    diff --git a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh b/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh deleted file mode 100644 index ad35d7adf28dc9b23d13a6a3fec0b12cb760e855..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env sh -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# -# Please follow the instructions here http://alt.qcri.org/tools/arabic-normalizer/ -# to install tools needed for Arabic - -echo "Please install Arabic tools: http://alt.qcri.org/tools/arabic-normalizer/" -echo "Then update environment variables in tokenizer_ar.sh" -exit 1 - -SVMTOOL=... -GOMOSESGO=... -QCRI_ARABIC_NORMALIZER=... - -export PERL5LIB="$SVMTOOL/lib":"$GOMOSESGO/bin/MADA-3.2":$PERL5LIB - - -tempfile=$(mktemp) -cat - > $tempfile - -cd $QCRI_ARABIC_NORMALIZER - -bash qcri_normalizer_mada3.2_aramorph1.2.1.sh $tempfile -cat $tempfile.mada_norm-aramorph.europarl_tok diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/legacy/masked_lm_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/legacy/masked_lm_dataset.py deleted file mode 100644 index dd8ea2c60aff306ab3a756223a298a28d41a4991..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/legacy/masked_lm_dataset.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Tuple - -import numpy as np -import torch -from fairseq.data import Dictionary, FairseqDataset, data_utils -from fairseq.data.concat_dataset import ConcatDataset -from fairseq.data.legacy.block_pair_dataset import BlockPairDataset -from fairseq.data.token_block_dataset import TokenBlockDataset - - -class MaskedLMDataset(FairseqDataset): - """ - A wrapper Dataset for masked language modelling. The dataset - wraps around TokenBlockDataset or BlockedPairDataset and creates a batch - where the input blocks are masked according to the specified masking - probability. Additionally the batch can also contain sentence level targets - if this is specified. - - Args: - dataset: Dataset which generates blocks of data. Only BlockPairDataset - and TokenBlockDataset are supported. - sizes: Sentence lengths - vocab: Dictionary with the vocabulary and special tokens. - pad_idx: Id of padding token in dictionary - mask_idx: Id of mask token in dictionary - classif_token_idx: Id of classification token in dictionary. This is the - token associated with the sentence embedding (Eg: CLS for BERT) - sep_token_idx: Id of separator token in dictionary - (Eg: SEP in BERT) - seed: Seed for random number generator for reproducibility. - shuffle: Shuffle the elements before batching. - has_pairs: Specifies whether the underlying dataset - generates a pair of blocks along with a sentence_target or not. - Setting it to True assumes that the underlying dataset generates a - label for the pair of sentences which is surfaced as - sentence_target. The default value assumes a single block with no - sentence target. - segment_id: An optional segment id for filling in the segment labels - when we are in the single block setting (Eg: XLM). Default is 0. 
- masking_ratio: specifies what percentage of the blocks should be masked. - masking_prob: specifies the probability of a given token being - replaced with the "MASK" token. - random_token_prob: specifies the probability of a given token being - replaced by a random token from the vocabulary. - """ - - def __init__( - self, - dataset: FairseqDataset, - sizes: np.ndarray, - vocab: Dictionary, - pad_idx: int, - mask_idx: int, - classif_token_idx: int, - sep_token_idx: int, - seed: int = 1, - shuffle: bool = True, - has_pairs: bool = True, - segment_id: int = 0, - masking_ratio: float = 0.15, - masking_prob: float = 0.8, - random_token_prob: float = 0.1, - ): - # Make sure the input datasets are the ones supported - assert ( - isinstance(dataset, TokenBlockDataset) - or isinstance(dataset, BlockPairDataset) - or isinstance(dataset, ConcatDataset) - ), ( - "MaskedLMDataset only wraps TokenBlockDataset or BlockPairDataset or " - "ConcatDataset" - ) - - self.dataset = dataset - self.sizes = np.array(sizes) - self.vocab = vocab - self.pad_idx = pad_idx - self.mask_idx = mask_idx - self.classif_token_idx = classif_token_idx - self.sep_token_idx = sep_token_idx - self.shuffle = shuffle - self.seed = seed - self.has_pairs = has_pairs - self.segment_id = segment_id - self.masking_ratio = masking_ratio - self.masking_prob = masking_prob - self.random_token_prob = random_token_prob - - # If we have only one block then sizes needs to be updated to include - # the classification token - if not has_pairs: - self.sizes = self.sizes + 1 - - def __getitem__(self, index: int): - # if has_pairs, then expect 2 blocks and a sentence target - if self.has_pairs: - (block_one, block_two, sentence_target) = self.dataset[index] - else: - block_one = self.dataset[index] - - return { - "id": index, - "block_one": block_one, - "block_two": block_two if self.has_pairs else None, - "sentence_target": sentence_target if self.has_pairs else None, - } - - def __len__(self): - return len(self.dataset) - - def _mask_block( - self, - sentence: np.ndarray, - mask_idx: int, - pad_idx: int, - dictionary_token_range: Tuple, - ): - """ - Mask tokens for Masked Language Model training - Samples mask_ratio tokens that will be predicted by LM. - - Note:This function may not be efficient enough since we had multiple - conversions between np and torch, we can replace them with torch - operators later. - - Args: - sentence: 1d tensor to be masked - mask_idx: index to use for masking the sentence - pad_idx: index to use for masking the target for tokens we aren't - predicting - dictionary_token_range: range of indices in dictionary which can - be used for random word replacement - (e.g. 
without special characters) - Return: - masked_sent: masked sentence - target: target with words which we are not predicting replaced - by pad_idx - """ - masked_sent = np.copy(sentence) - sent_length = len(sentence) - mask_num = math.ceil(sent_length * self.masking_ratio) - mask = np.random.choice(sent_length, mask_num, replace=False) - target = np.copy(sentence) - - for i in range(sent_length): - if i in mask: - rand = np.random.random() - - # replace with mask if probability is less than masking_prob - # (Eg: 0.8) - if rand < self.masking_prob: - masked_sent[i] = mask_idx - - # replace with random token if probability is less than - # masking_prob + random_token_prob (Eg: 0.9) - elif rand < (self.masking_prob + self.random_token_prob): - # sample random token from dictionary - masked_sent[i] = np.random.randint( - dictionary_token_range[0], dictionary_token_range[1] - ) - else: - target[i] = pad_idx - - return masked_sent, target - - def _collate(self, samples: List[Dict], pad_idx: int, eos_idx: int): - """ - Does the heavy lifting for creating a batch from the input list of - examples. The logic is as follows: - 1. Mask the input blocks. In case has_pair is True then we have 2 - blocks to mask. - 2. Prepend the first masked block tensor with the special token - used as sentence embedding. Eg: CLS in BERT. This happens - irrespective of the value of has_pair. - 3. If has_pair is True, then append the first masked block with the - special separator token (eg: SEP for BERT) and compute segment - label accordingly. In this case, also append the second masked - block with this special separator token and compute its segment - label. - 4. For the targets tensor, prepend and append with padding index - accordingly. - 5. Concatenate all tensors. - """ - if len(samples) == 0: - return {} - # To ensure determinism, we reset the state of the PRNG after every - # batch based on the seed and the first id of the batch. This ensures - # that across epochs we get the same mask for the same example. This - # is needed for reproducibility and is how BERT does masking - # TODO: Can we add deteminism without this constraint? - with data_utils.numpy_seed(self.seed + samples[0]["id"]): - for s in samples: - - # token range is needed for replacing with random token during - # masking - token_range = (self.vocab.nspecial, len(self.vocab)) - - # mask according to specified probabilities. - masked_blk_one, masked_tgt_one = self._mask_block( - s["block_one"], - self.mask_idx, - self.pad_idx, - token_range, - ) - - tokens = np.concatenate([[self.classif_token_idx], masked_blk_one]) - targets = np.concatenate([[self.pad_idx], masked_tgt_one]) - segments = np.ones(len(tokens)) * self.segment_id - - # if has_pairs is True then we need to add the SEP token to both - # the blocks after masking and re-compute segments based on the new - # lengths. 
- if self.has_pairs: - tokens_one = np.concatenate([tokens, [self.sep_token_idx]]) - targets_one = np.concatenate([targets, [self.pad_idx]]) - - masked_blk_two, masked_tgt_two = self._mask_block( - s["block_two"], self.mask_idx, self.pad_idx, token_range - ) - tokens_two = np.concatenate([masked_blk_two, [self.sep_token_idx]]) - targets_two = np.concatenate([masked_tgt_two, [self.pad_idx]]) - - # block + 1 sep + 1 special (CLS) - segments_one = np.zeros(len(tokens_one)) - # block + 1 sep - segments_two = np.ones(len(tokens_two)) - - tokens = np.concatenate([tokens_one, tokens_two]) - targets = np.concatenate([targets_one, targets_two]) - segments = np.concatenate([segments_one, segments_two]) - - s["source"] = torch.LongTensor(tokens) - s["segment_labels"] = torch.LongTensor(segments) - s["lm_target"] = torch.LongTensor(targets) - - def merge(key): - return data_utils.collate_tokens( - [s[key] for s in samples], pad_idx, eos_idx, left_pad=False - ) - - return { - "id": torch.LongTensor([s["id"] for s in samples]), - "ntokens": sum(len(s["source"]) for s in samples), - "net_input": { - "src_tokens": merge("source"), - "segment_labels": merge("segment_labels"), - }, - "lm_target": merge("lm_target"), - "sentence_target": torch.LongTensor([s["sentence_target"] for s in samples]) - if self.has_pairs - else None, - "nsentences": len(samples), - } - - def collater(self, samples: List[Dict]): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - - Returns: - dict: a mini-batch of data - """ - return self._collate(samples, self.vocab.pad(), self.vocab.eos()) - - def num_tokens(self, index: int): - """ - Return the number of tokens in a sample. This value is used to - enforce max-tokens during batching. - """ - return self.sizes[index] - - def size(self, index: int): - """ - Return an example's size as a float or tuple. This value is used when - filtering a dataset with max-positions. - """ - return self.sizes[index] - - def ordered_indices(self): - """ - Return an ordered list of indices. Batches will be constructed based - on this order. 
- """ - if self.shuffle: - return np.random.permutation(len(self)) - else: - order = [np.arange(len(self))] - order.append(self.sizes) - return np.lexsort(order) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch(indices) diff --git a/spaces/IPN/Demo/app.py b/spaces/IPN/Demo/app.py deleted file mode 100644 index 1c06ad4ad91fa04bd3eaba25d7802d8104c47294..0000000000000000000000000000000000000000 --- a/spaces/IPN/Demo/app.py +++ /dev/null @@ -1,5 +0,0 @@ -import gradio as gr - -examples = [["La verdad es que"], ["La educación en América Latina es clave para"]] - -gr.Interface.load("huggingface/DeepESP/gpt2-spanish", examples=examples).launch(); \ No newline at end of file diff --git a/spaces/Iceclear/StableSR/StableSR/taming/data/coco.py b/spaces/Iceclear/StableSR/StableSR/taming/data/coco.py deleted file mode 100644 index 2b2f7838448cb63dcf96daffe9470d58566d975a..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/data/coco.py +++ /dev/null @@ -1,176 +0,0 @@ -import os -import json -import albumentations -import numpy as np -from PIL import Image -from tqdm import tqdm -from torch.utils.data import Dataset - -from taming.data.sflckr import SegmentationBase # for examples included in repo - - -class Examples(SegmentationBase): - def __init__(self, size=256, random_crop=False, interpolation="bicubic"): - super().__init__(data_csv="data/coco_examples.txt", - data_root="data/coco_images", - segmentation_root="data/coco_segmentations", - size=size, random_crop=random_crop, - interpolation=interpolation, - n_labels=183, shift_segmentation=True) - - -class CocoBase(Dataset): - """needed for (image, caption, segmentation) pairs""" - def __init__(self, size=None, dataroot="", datajson="", onehot_segmentation=False, use_stuffthing=False, - crop_size=None, force_no_crop=False, given_files=None): - self.split = self.get_split() - self.size = size - if crop_size is None: - self.crop_size = size - else: - self.crop_size = crop_size - - self.onehot = onehot_segmentation # return segmentation as rgb or one hot - self.stuffthing = use_stuffthing # include thing in segmentation - if self.onehot and not self.stuffthing: - raise NotImplemented("One hot mode is only supported for the " - "stuffthings version because labels are stored " - "a bit different.") - - data_json = datajson - with open(data_json) as json_file: - self.json_data = json.load(json_file) - self.img_id_to_captions = dict() - self.img_id_to_filepath = dict() - self.img_id_to_segmentation_filepath = dict() - - assert data_json.split("/")[-1] in ["captions_train2017.json", - "captions_val2017.json"] - if self.stuffthing: - self.segmentation_prefix = ( - "data/cocostuffthings/val2017" if - data_json.endswith("captions_val2017.json") else - "data/cocostuffthings/train2017") - else: - self.segmentation_prefix = ( - "data/coco/annotations/stuff_val2017_pixelmaps" if - data_json.endswith("captions_val2017.json") else - "data/coco/annotations/stuff_train2017_pixelmaps") - - imagedirs = self.json_data["images"] - self.labels = {"image_ids": list()} - for imgdir in tqdm(imagedirs, desc="ImgToPath"): - self.img_id_to_filepath[imgdir["id"]] = os.path.join(dataroot, imgdir["file_name"]) - self.img_id_to_captions[imgdir["id"]] = list() - pngfilename = imgdir["file_name"].replace("jpg", "png") - self.img_id_to_segmentation_filepath[imgdir["id"]] = os.path.join( - self.segmentation_prefix, pngfilename) - if 
given_files is not None: - if pngfilename in given_files: - self.labels["image_ids"].append(imgdir["id"]) - else: - self.labels["image_ids"].append(imgdir["id"]) - - capdirs = self.json_data["annotations"] - for capdir in tqdm(capdirs, desc="ImgToCaptions"): - # there are in average 5 captions per image - self.img_id_to_captions[capdir["image_id"]].append(np.array([capdir["caption"]])) - - self.rescaler = albumentations.SmallestMaxSize(max_size=self.size) - if self.split=="validation": - self.cropper = albumentations.CenterCrop(height=self.crop_size, width=self.crop_size) - else: - self.cropper = albumentations.RandomCrop(height=self.crop_size, width=self.crop_size) - self.preprocessor = albumentations.Compose( - [self.rescaler, self.cropper], - additional_targets={"segmentation": "image"}) - if force_no_crop: - self.rescaler = albumentations.Resize(height=self.size, width=self.size) - self.preprocessor = albumentations.Compose( - [self.rescaler], - additional_targets={"segmentation": "image"}) - - def __len__(self): - return len(self.labels["image_ids"]) - - def preprocess_image(self, image_path, segmentation_path): - image = Image.open(image_path) - if not image.mode == "RGB": - image = image.convert("RGB") - image = np.array(image).astype(np.uint8) - - segmentation = Image.open(segmentation_path) - if not self.onehot and not segmentation.mode == "RGB": - segmentation = segmentation.convert("RGB") - segmentation = np.array(segmentation).astype(np.uint8) - if self.onehot: - assert self.stuffthing - # stored in caffe format: unlabeled==255. stuff and thing from - # 0-181. to be compatible with the labels in - # https://github.com/nightrome/cocostuff/blob/master/labels.txt - # we shift stuffthing one to the right and put unlabeled in zero - # as long as segmentation is uint8 shifting to right handles the - # latter too - assert segmentation.dtype == np.uint8 - segmentation = segmentation + 1 - - processed = self.preprocessor(image=image, segmentation=segmentation) - image, segmentation = processed["image"], processed["segmentation"] - image = (image / 127.5 - 1.0).astype(np.float32) - - if self.onehot: - assert segmentation.dtype == np.uint8 - # make it one hot - n_labels = 183 - flatseg = np.ravel(segmentation) - onehot = np.zeros((flatseg.size, n_labels), dtype=np.bool) - onehot[np.arange(flatseg.size), flatseg] = True - onehot = onehot.reshape(segmentation.shape + (n_labels,)).astype(int) - segmentation = onehot - else: - segmentation = (segmentation / 127.5 - 1.0).astype(np.float32) - return image, segmentation - - def __getitem__(self, i): - img_path = self.img_id_to_filepath[self.labels["image_ids"][i]] - seg_path = self.img_id_to_segmentation_filepath[self.labels["image_ids"][i]] - image, segmentation = self.preprocess_image(img_path, seg_path) - captions = self.img_id_to_captions[self.labels["image_ids"][i]] - # randomly draw one of all available captions per image - caption = captions[np.random.randint(0, len(captions))] - example = {"image": image, - "caption": [str(caption[0])], - "segmentation": segmentation, - "img_path": img_path, - "seg_path": seg_path, - "filename_": img_path.split(os.sep)[-1] - } - return example - - -class CocoImagesAndCaptionsTrain(CocoBase): - """returns a pair of (image, caption)""" - def __init__(self, size, onehot_segmentation=False, use_stuffthing=False, crop_size=None, force_no_crop=False): - super().__init__(size=size, - dataroot="data/coco/train2017", - datajson="data/coco/annotations/captions_train2017.json", - 
onehot_segmentation=onehot_segmentation, - use_stuffthing=use_stuffthing, crop_size=crop_size, force_no_crop=force_no_crop) - - def get_split(self): - return "train" - - -class CocoImagesAndCaptionsValidation(CocoBase): - """returns a pair of (image, caption)""" - def __init__(self, size, onehot_segmentation=False, use_stuffthing=False, crop_size=None, force_no_crop=False, - given_files=None): - super().__init__(size=size, - dataroot="data/coco/val2017", - datajson="data/coco/annotations/captions_val2017.json", - onehot_segmentation=onehot_segmentation, - use_stuffthing=use_stuffthing, crop_size=crop_size, force_no_crop=force_no_crop, - given_files=given_files) - - def get_split(self): - return "validation" diff --git a/spaces/Ipkc/text_generator/README.md b/spaces/Ipkc/text_generator/README.md deleted file mode 100644 index 28ab050fd665ebae2d00ce79ace1b027200de1c7..0000000000000000000000000000000000000000 --- a/spaces/Ipkc/text_generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Generator -emoji: 💩 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/__init__.py b/spaces/JUNGU/VToonify/vtoonify/model/stylegan/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/JackBAI/MassageMateNLP/README.md b/spaces/JackBAI/MassageMateNLP/README.md deleted file mode 100644 index 673a539a6dd9900853bf73d5109393c8cf29ca30..0000000000000000000000000000000000000000 --- a/spaces/JackBAI/MassageMateNLP/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MassageMateNLP -emoji: 🌍 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py deleted file mode 100644 index 84e85e51cca21d5bdaead87e77fc184a65d9e9ab..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint_legacy.py +++ /dev/null @@ -1,461 +0,0 @@ -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import torch - -import PIL -from transformers import CLIPFeatureExtractor, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...onnx_utils import OnnxRuntimeModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import deprecate, logging -from . 
import StableDiffusionPipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def preprocess(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask, scale_factor=8): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? - mask = 1 - mask # repaint white, keep black - return mask - - -class OnnxStableDiffusionInpaintPipelineLegacy(DiffusionPipeline): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion. This is a *legacy feature* for Onnx pipelines to - provide compatibility with StableDiffusionInpaintPipelineLegacy and may be removed in the future. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - _optional_components = ["safety_checker", "feature_extractor"] - - vae_encoder: OnnxRuntimeModel - vae_decoder: OnnxRuntimeModel - text_encoder: OnnxRuntimeModel - tokenizer: CLIPTokenizer - unet: OnnxRuntimeModel - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - safety_checker: OnnxRuntimeModel - feature_extractor: CLIPFeatureExtractor - - def __init__( - self, - vae_encoder: OnnxRuntimeModel, - vae_decoder: OnnxRuntimeModel, - text_encoder: OnnxRuntimeModel, - tokenizer: CLIPTokenizer, - unet: OnnxRuntimeModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: OnnxRuntimeModel, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - self.register_modules( - vae_encoder=vae_encoder, - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_onnx_stable_diffusion.OnnxStableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="np").input_ids - - if not np.array_equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0] - text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] * batch_size - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="np", - ) - uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0] - uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[np.ndarray, PIL.Image.Image], - mask_image: Union[np.ndarray, PIL.Image.Image], - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[np.random.RandomState] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, np.ndarray], None]] = None, - callback_steps: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`nd.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. This is the image whose masked region will be inpainted. - mask_image (`nd.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a - PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should - contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.uu - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (?) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. 
- generator (`np.random.RandomState`, *optional*): - A np.random.RandomState to make generation deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - message = "Please use `image` instead of `init_image`." - init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs) - image = init_image or image - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if generator is None: - generator = np.random - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - if isinstance(image, PIL.Image.Image): - image = preprocess(image) - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. 
- do_classifier_free_guidance = guidance_scale > 1.0 - - text_embeddings = self._encode_prompt( - prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - latents_dtype = text_embeddings.dtype - image = image.astype(latents_dtype) - - # encode the init image into latents and scale the latents - init_latents = self.vae_encoder(sample=image)[0] - init_latents = 0.18215 * init_latents - - # Expand init_latents for batch_size and num_images_per_prompt - init_latents = np.concatenate([init_latents] * num_images_per_prompt, axis=0) - init_latents_orig = init_latents - - # preprocess mask - if not isinstance(mask_image, np.ndarray): - mask_image = preprocess_mask(mask_image, 8) - mask_image = mask_image.astype(latents_dtype) - mask = np.concatenate([mask_image] * num_images_per_prompt, axis=0) - - # check sizes - if not mask.shape == init_latents.shape: - raise ValueError("The mask and image should be the same size!") - - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - timesteps = self.scheduler.timesteps.numpy()[-init_timestep] - timesteps = np.array([timesteps] * batch_size * num_images_per_prompt) - - # add noise to latents using the timesteps - noise = generator.randn(*init_latents.shape).astype(latents_dtype) - init_latents = self.scheduler.add_noise( - torch.from_numpy(init_latents), torch.from_numpy(noise), torch.from_numpy(timesteps) - ) - init_latents = init_latents.numpy() - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (?) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to ? 
in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:].numpy() - - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - sample=latent_model_input, timestep=np.array([t]), encoder_hidden_states=text_embeddings - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs - ).prev_sample - - latents = latents.numpy() - - init_latents_proper = self.scheduler.add_noise( - torch.from_numpy(init_latents_orig), torch.from_numpy(noise), torch.from_numpy(np.array([t])) - ) - - init_latents_proper = init_latents_proper.numpy() - - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - latents = 1 / 0.18215 * latents - # image = self.vae_decoder(latent_sample=latents)[0] - # it seems likes there is a strange result for using half-precision vae decoder if batchsize>1 - image = np.concatenate( - [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])] - ) - - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose((0, 2, 3, 1)) - - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor( - self.numpy_to_pil(image), return_tensors="np" - ).pixel_values.astype(image.dtype) - # There will throw an error if use safety_checker batchsize>1 - images, has_nsfw_concept = [], [] - for i in range(image.shape[0]): - image_i, has_nsfw_concept_i = self.safety_checker( - clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1] - ) - images.append(image_i) - has_nsfw_concept.append(has_nsfw_concept_i[0]) - image = np.concatenate(images) - else: - has_nsfw_concept = None - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Jaehan/Translation-Korean2English-2/README.md b/spaces/Jaehan/Translation-Korean2English-2/README.md deleted file mode 100644 index 9cab3b7721cf49fa6492629ae0bdf489b355c35e..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/Translation-Korean2English-2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Translation Kor2eng -emoji: 📚 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jarex/TwitterBot/app.py b/spaces/Jarex/TwitterBot/app.py deleted file mode 100644 index 
c5804a44d16c1a5b3796207cfbfe4610c4584560..0000000000000000000000000000000000000000 --- a/spaces/Jarex/TwitterBot/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio -import openai - -openai.api_key = "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" - -messages = [{"role": "system", "content": "You are a Twitter Tweets experts that specializes in creating viral tweets for startup marketing and update"}] - -def CustomChatGPT(user_input): - messages.append({"role": "user", "content": user_input}) - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = messages - ) - ChatGPT_reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": ChatGPT_reply}) - return ChatGPT_reply - -demo = gradio.Interface(fn=CustomChatGPT, inputs="text", outputs="text", title="Twitter Tweets Pro") - -demo.launch() diff --git a/spaces/Jarkchen/af1tang-personaGPT/app.py b/spaces/Jarkchen/af1tang-personaGPT/app.py deleted file mode 100644 index 0f8819fb8ce06f9a54072d59391c8c474e566eba..0000000000000000000000000000000000000000 --- a/spaces/Jarkchen/af1tang-personaGPT/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/af1tang/personaGPT").launch() \ No newline at end of file diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/train_plugin_input.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/train_plugin_input.py deleted file mode 100644 index 3a0e6d27ed2e58a6e1177e4184a30f78114bf20a..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/train_plugin_input.py +++ /dev/null @@ -1,34 +0,0 @@ -from __future__ import annotations - -from typing import Optional - -from pydantic import Field - -from steamship.base.model import CamelModel - - -class TrainPluginInput(CamelModel): - """ - This is the object passed as input to a trainable operation, stored as the `input` field of a `train` task. - """ - - plugin_instance: str - - # How may epochs of trainable to perform, if relevant and supported - training_epochs: Optional[int] = None - - # How much data to hold out for testing & reporting, if relevant and supported. - testing_holdout_percent: Optional[float] = None - - # An optional seed for the train-test split - test_split_seed: Optional[int] = None - - # Arbitrary key-valued data to provide to the particular `modelName` trainer. - training_params: Optional[dict] = None - - # Arbitrary key-valued data to provide to the inference runner in the TrainPluginOutput object. 
- # The trainable process will have the opportunity to amend this before writing it to the output - inference_params: Optional[dict] = None - - # A pre-signed URL at which the trainable data can be found - training_data_url: Optional[str] = Field(None, alias="trainingDataUrl") diff --git a/spaces/Joyeux/andite-anything-v4.0/README.md b/spaces/Joyeux/andite-anything-v4.0/README.md deleted file mode 100644 index cce389ed011b5b2dd94acedddd455b3480c368b4..0000000000000000000000000000000000000000 --- a/spaces/Joyeux/andite-anything-v4.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Andite Anything V4.0 -emoji: 🏃 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KPCGD/bingo/src/components/ui/badge.tsx b/spaces/KPCGD/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
    - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/Kangarroar/ApplioRVC-Inference/utils/README.md b/spaces/Kangarroar/ApplioRVC-Inference/utils/README.md deleted file mode 100644 index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/utils/README.md +++ /dev/null @@ -1,6 +0,0 @@ -# External Colab Code -Code used to make Google Colab work correctly -- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/ - -Thanks to https://github.com/kalomaze/externalcolabcode - diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoders.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoders.py deleted file mode 100644 index 526140f4fac5b0c4663e435243655ae74a4735fa..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoders.py +++ /dev/null @@ -1,298 +0,0 @@ -import logging -import six - -import numpy as np -import torch -import torch.nn.functional as F -from torch.nn.utils.rnn import pack_padded_sequence -from torch.nn.utils.rnn import pad_packed_sequence - -from .e2e_asr_common import get_vgg2l_odim -from .nets_utils import make_pad_mask, to_device - - -class RNNP(torch.nn.Module): - """RNN with projection layer module - - :param int idim: dimension of inputs - :param int elayers: number of encoder layers - :param int cdim: number of rnn units (resulted in cdim * 2 if bidirectional) - :param int hdim: number of projection units - :param np.ndarray subsample: list of subsampling numbers - :param float dropout: dropout rate - :param str typ: The RNN type - """ - - def __init__(self, idim, elayers, cdim, hdim, subsample, dropout, typ="blstm"): - super(RNNP, self).__init__() - bidir = typ[0] == "b" - for i in six.moves.range(elayers): - if i == 0: - inputdim = idim - else: - inputdim = hdim - rnn = torch.nn.LSTM(inputdim, cdim, dropout=dropout, num_layers=1, bidirectional=bidir, - batch_first=True) if "lstm" in typ \ - else torch.nn.GRU(inputdim, cdim, dropout=dropout, num_layers=1, bidirectional=bidir, batch_first=True) - setattr(self, "%s%d" % ("birnn" if bidir else "rnn", i), rnn) - # bottleneck layer to merge - if bidir: - setattr(self, "bt%d" % i, torch.nn.Linear(2 * cdim, hdim)) - else: - setattr(self, "bt%d" % i, torch.nn.Linear(cdim, hdim)) - - self.elayers = elayers - self.cdim = cdim - self.subsample = subsample - self.typ = typ - self.bidir = bidir - - def forward(self, xs_pad, ilens, prev_state=None): - """RNNP forward - - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, idim) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :param torch.Tensor prev_state: batch of previous RNN states - :return: batch of hidden state sequences (B, Tmax, hdim) - :rtype: torch.Tensor - """ - logging.debug(self.__class__.__name__ + ' input lengths: ' + str(ilens)) - elayer_states = [] - for layer in six.moves.range(self.elayers): - xs_pack = pack_padded_sequence(xs_pad, ilens, batch_first=True, enforce_sorted=False) - rnn = getattr(self, ("birnn" if self.bidir else "rnn") + str(layer)) - rnn.flatten_parameters() - if prev_state is not None and rnn.bidirectional: - prev_state = reset_backward_rnn_state(prev_state) - ys, states = rnn(xs_pack, hx=None if prev_state is None else prev_state[layer]) - elayer_states.append(states) - # ys: utt list of frame x cdim x 2 (2: means bidirectional) - ys_pad, ilens = pad_packed_sequence(ys, batch_first=True) - sub = 
self.subsample[layer + 1] - if sub > 1: - ys_pad = ys_pad[:, ::sub] - ilens = [int(i + 1) // sub for i in ilens] - # (sum _utt frame_utt) x dim - projected = getattr(self, 'bt' + str(layer) - )(ys_pad.contiguous().view(-1, ys_pad.size(2))) - if layer == self.elayers - 1: - xs_pad = projected.view(ys_pad.size(0), ys_pad.size(1), -1) - else: - xs_pad = torch.tanh(projected.view(ys_pad.size(0), ys_pad.size(1), -1)) - - return xs_pad, ilens, elayer_states # x: utt list of frame x dim - - -class RNN(torch.nn.Module): - """RNN module - - :param int idim: dimension of inputs - :param int elayers: number of encoder layers - :param int cdim: number of rnn units (resulted in cdim * 2 if bidirectional) - :param int hdim: number of final projection units - :param float dropout: dropout rate - :param str typ: The RNN type - """ - - def __init__(self, idim, elayers, cdim, hdim, dropout, typ="blstm"): - super(RNN, self).__init__() - bidir = typ[0] == "b" - self.nbrnn = torch.nn.LSTM(idim, cdim, elayers, batch_first=True, - dropout=dropout, bidirectional=bidir) if "lstm" in typ \ - else torch.nn.GRU(idim, cdim, elayers, batch_first=True, dropout=dropout, - bidirectional=bidir) - if bidir: - self.l_last = torch.nn.Linear(cdim * 2, hdim) - else: - self.l_last = torch.nn.Linear(cdim, hdim) - self.typ = typ - - def forward(self, xs_pad, ilens, prev_state=None): - """RNN forward - - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, D) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :param torch.Tensor prev_state: batch of previous RNN states - :return: batch of hidden state sequences (B, Tmax, eprojs) - :rtype: torch.Tensor - """ - logging.debug(self.__class__.__name__ + ' input lengths: ' + str(ilens)) - xs_pack = pack_padded_sequence(xs_pad, ilens, batch_first=True) - self.nbrnn.flatten_parameters() - if prev_state is not None and self.nbrnn.bidirectional: - # We assume that when previous state is passed, it means that we're streaming the input - # and therefore cannot propagate backward BRNN state (otherwise it goes in the wrong direction) - prev_state = reset_backward_rnn_state(prev_state) - ys, states = self.nbrnn(xs_pack, hx=prev_state) - # ys: utt list of frame x cdim x 2 (2: means bidirectional) - ys_pad, ilens = pad_packed_sequence(ys, batch_first=True) - # (sum _utt frame_utt) x dim - projected = torch.tanh(self.l_last( - ys_pad.contiguous().view(-1, ys_pad.size(2)))) - xs_pad = projected.view(ys_pad.size(0), ys_pad.size(1), -1) - return xs_pad, ilens, states # x: utt list of frame x dim - - -def reset_backward_rnn_state(states): - """Sets backward BRNN states to zeroes - useful in processing of sliding windows over the inputs""" - if isinstance(states, (list, tuple)): - for state in states: - state[1::2] = 0. - else: - states[1::2] = 0. 
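    # PyTorch stacks bidirectional RNN states as (num_layers * num_directions, batch, hidden),
    # interleaving forward and backward per layer, so the odd indices selected by [1::2]
    # are the backward-direction states that must be zeroed when streaming.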
- return states - - -class VGG2L(torch.nn.Module): - """VGG-like module - - :param int in_channel: number of input channels - """ - - def __init__(self, in_channel=1, downsample=True): - super(VGG2L, self).__init__() - # CNN layer (VGG motivated) - self.conv1_1 = torch.nn.Conv2d(in_channel, 64, 3, stride=1, padding=1) - self.conv1_2 = torch.nn.Conv2d(64, 64, 3, stride=1, padding=1) - self.conv2_1 = torch.nn.Conv2d(64, 128, 3, stride=1, padding=1) - self.conv2_2 = torch.nn.Conv2d(128, 128, 3, stride=1, padding=1) - - self.in_channel = in_channel - self.downsample = downsample - if downsample: - self.stride = 2 - else: - self.stride = 1 - - def forward(self, xs_pad, ilens, **kwargs): - """VGG2L forward - - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, D) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :return: batch of padded hidden state sequences (B, Tmax // 4, 128 * D // 4) if downsample - :rtype: torch.Tensor - """ - logging.debug(self.__class__.__name__ + ' input lengths: ' + str(ilens)) - - # x: utt x frame x dim - # xs_pad = F.pad_sequence(xs_pad) - - # x: utt x 1 (input channel num) x frame x dim - xs_pad = xs_pad.view(xs_pad.size(0), xs_pad.size(1), self.in_channel, - xs_pad.size(2) // self.in_channel).transpose(1, 2) - - # NOTE: max_pool1d ? - xs_pad = F.relu(self.conv1_1(xs_pad)) - xs_pad = F.relu(self.conv1_2(xs_pad)) - if self.downsample: - xs_pad = F.max_pool2d(xs_pad, 2, stride=self.stride, ceil_mode=True) - - xs_pad = F.relu(self.conv2_1(xs_pad)) - xs_pad = F.relu(self.conv2_2(xs_pad)) - if self.downsample: - xs_pad = F.max_pool2d(xs_pad, 2, stride=self.stride, ceil_mode=True) - if torch.is_tensor(ilens): - ilens = ilens.cpu().numpy() - else: - ilens = np.array(ilens, dtype=np.float32) - if self.downsample: - ilens = np.array(np.ceil(ilens / 2), dtype=np.int64) - ilens = np.array( - np.ceil(np.array(ilens, dtype=np.float32) / 2), dtype=np.int64).tolist() - - # x: utt_list of frame (remove zeropaded frames) x (input channel num x dim) - xs_pad = xs_pad.transpose(1, 2) - xs_pad = xs_pad.contiguous().view( - xs_pad.size(0), xs_pad.size(1), xs_pad.size(2) * xs_pad.size(3)) - return xs_pad, ilens, None # no state in this layer - - -class Encoder(torch.nn.Module): - """Encoder module - - :param str etype: type of encoder network - :param int idim: number of dimensions of encoder network - :param int elayers: number of layers of encoder network - :param int eunits: number of lstm units of encoder network - :param int eprojs: number of projection units of encoder network - :param np.ndarray subsample: list of subsampling numbers - :param float dropout: dropout rate - :param int in_channel: number of input channels - """ - - def __init__(self, etype, idim, elayers, eunits, eprojs, subsample, dropout, in_channel=1): - super(Encoder, self).__init__() - typ = etype.lstrip("vgg").rstrip("p") - if typ not in ['lstm', 'gru', 'blstm', 'bgru']: - logging.error("Error: need to specify an appropriate encoder architecture") - - if etype.startswith("vgg"): - if etype[-1] == "p": - self.enc = torch.nn.ModuleList([VGG2L(in_channel), - RNNP(get_vgg2l_odim(idim, in_channel=in_channel), elayers, eunits, - eprojs, - subsample, dropout, typ=typ)]) - logging.info('Use CNN-VGG + ' + typ.upper() + 'P for encoder') - else: - self.enc = torch.nn.ModuleList([VGG2L(in_channel), - RNN(get_vgg2l_odim(idim, in_channel=in_channel), elayers, eunits, - eprojs, - dropout, typ=typ)]) - logging.info('Use CNN-VGG + ' + typ.upper() + ' for encoder') - else: - if etype[-1] == 
"p": - self.enc = torch.nn.ModuleList( - [RNNP(idim, elayers, eunits, eprojs, subsample, dropout, typ=typ)]) - logging.info(typ.upper() + ' with every-layer projection for encoder') - else: - self.enc = torch.nn.ModuleList([RNN(idim, elayers, eunits, eprojs, dropout, typ=typ)]) - logging.info(typ.upper() + ' without projection for encoder') - - def forward(self, xs_pad, ilens, prev_states=None): - """Encoder forward - - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, D) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :param torch.Tensor prev_state: batch of previous encoder hidden states (?, ...) - :return: batch of hidden state sequences (B, Tmax, eprojs) - :rtype: torch.Tensor - """ - if prev_states is None: - prev_states = [None] * len(self.enc) - assert len(prev_states) == len(self.enc) - - current_states = [] - for module, prev_state in zip(self.enc, prev_states): - xs_pad, ilens, states = module(xs_pad, ilens, prev_state=prev_state) - current_states.append(states) - - # make mask to remove bias value in padded part - mask = to_device(self, make_pad_mask(ilens).unsqueeze(-1)) - - return xs_pad.masked_fill(mask, 0.0), ilens, current_states - - -def encoder_for(args, idim, subsample): - """Instantiates an encoder module given the program arguments - - :param Namespace args: The arguments - :param int or List of integer idim: dimension of input, e.g. 83, or - List of dimensions of inputs, e.g. [83,83] - :param List or List of List subsample: subsample factors, e.g. [1,2,2,1,1], or - List of subsample factors of each encoder. e.g. [[1,2,2,1,1], [1,2,2,1,1]] - :rtype torch.nn.Module - :return: The encoder module - """ - num_encs = getattr(args, "num_encs", 1) # use getattr to keep compatibility - if num_encs == 1: - # compatible with single encoder asr mode - return Encoder(args.etype, idim, args.elayers, args.eunits, args.eprojs, subsample, args.dropout_rate) - elif num_encs >= 1: - enc_list = torch.nn.ModuleList() - for idx in range(num_encs): - enc = Encoder(args.etype[idx], idim[idx], args.elayers[idx], args.eunits[idx], args.eprojs, subsample[idx], - args.dropout_rate[idx]) - enc_list.append(enc) - return enc_list - else: - raise ValueError("Number of encoders needs to be more than one. {}".format(num_encs)) diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/distribution.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/distribution.py deleted file mode 100644 index d3119a5ba1e77bc25a92d2664f83d366f12399c0..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/distribution.py +++ /dev/null @@ -1,132 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F - - -def log_sum_exp(x): - """ numerically stable log_sum_exp implementation that prevents overflow """ - # TF ordering - axis = len(x.size()) - 1 - m, _ = torch.max(x, dim=axis) - m2, _ = torch.max(x, dim=axis, keepdim=True) - return m + torch.log(torch.sum(torch.exp(x - m2), dim=axis)) - - -# It is adapted from https://github.com/r9y9/wavenet_vocoder/blob/master/wavenet_vocoder/mixture.py -def discretized_mix_logistic_loss(y_hat, y, num_classes=65536, - log_scale_min=None, reduce=True): - if log_scale_min is None: - log_scale_min = float(np.log(1e-14)) - y_hat = y_hat.permute(0,2,1) - assert y_hat.dim() == 3 - assert y_hat.size(1) % 3 == 0 - nr_mix = y_hat.size(1) // 3 - - # (B x T x C) - y_hat = y_hat.transpose(1, 2) - - # unpack parameters. 
(B, T, num_mixtures) x 3 - logit_probs = y_hat[:, :, :nr_mix] - means = y_hat[:, :, nr_mix:2 * nr_mix] - log_scales = torch.clamp(y_hat[:, :, 2 * nr_mix:3 * nr_mix], min=log_scale_min) - - # B x T x 1 -> B x T x num_mixtures - y = y.expand_as(means) - - centered_y = y - means - inv_stdv = torch.exp(-log_scales) - plus_in = inv_stdv * (centered_y + 1. / (num_classes - 1)) - cdf_plus = torch.sigmoid(plus_in) - min_in = inv_stdv * (centered_y - 1. / (num_classes - 1)) - cdf_min = torch.sigmoid(min_in) - - # log probability for edge case of 0 (before scaling) - # equivalent: torch.log(F.sigmoid(plus_in)) - log_cdf_plus = plus_in - F.softplus(plus_in) - - # log probability for edge case of 255 (before scaling) - # equivalent: (1 - F.sigmoid(min_in)).log() - log_one_minus_cdf_min = -F.softplus(min_in) - - # probability for all other cases - cdf_delta = cdf_plus - cdf_min - - mid_in = inv_stdv * centered_y - # log probability in the center of the bin, to be used in extreme cases - # (not actually used in our code) - log_pdf_mid = mid_in - log_scales - 2. * F.softplus(mid_in) - - # tf equivalent - """ - log_probs = tf.where(x < -0.999, log_cdf_plus, - tf.where(x > 0.999, log_one_minus_cdf_min, - tf.where(cdf_delta > 1e-5, - tf.log(tf.maximum(cdf_delta, 1e-12)), - log_pdf_mid - np.log(127.5)))) - """ - # TODO: cdf_delta <= 1e-5 actually can happen. How can we choose the value - # for num_classes=65536 case? 1e-7? not sure.. - inner_inner_cond = (cdf_delta > 1e-5).float() - - inner_inner_out = inner_inner_cond * \ - torch.log(torch.clamp(cdf_delta, min=1e-12)) + \ - (1. - inner_inner_cond) * (log_pdf_mid - np.log((num_classes - 1) / 2)) - inner_cond = (y > 0.999).float() - inner_out = inner_cond * log_one_minus_cdf_min + (1. - inner_cond) * inner_inner_out - cond = (y < -0.999).float() - log_probs = cond * log_cdf_plus + (1. - cond) * inner_out - - log_probs = log_probs + F.log_softmax(logit_probs, -1) - - if reduce: - return -torch.mean(log_sum_exp(log_probs)) - else: - return -log_sum_exp(log_probs).unsqueeze(-1) - - -def sample_from_discretized_mix_logistic(y, log_scale_min=None): - """ - Sample from discretized mixture of logistic distributions - Args: - y (Tensor): B x C x T - log_scale_min (float): Log scale minimum value - Returns: - Tensor: sample in range of [-1, 1]. - """ - if log_scale_min is None: - log_scale_min = float(np.log(1e-14)) - assert y.size(1) % 3 == 0 - nr_mix = y.size(1) // 3 - - # B x T x C - y = y.transpose(1, 2) - logit_probs = y[:, :, :nr_mix] - - # sample mixture indicator from softmax - temp = logit_probs.data.new(logit_probs.size()).uniform_(1e-5, 1.0 - 1e-5) - temp = logit_probs.data - torch.log(- torch.log(temp)) - _, argmax = temp.max(dim=-1) - - # (B, T) -> (B, T, nr_mix) - one_hot = to_one_hot(argmax, nr_mix) - # select logistic parameters - means = torch.sum(y[:, :, nr_mix:2 * nr_mix] * one_hot, dim=-1) - log_scales = torch.clamp(torch.sum( - y[:, :, 2 * nr_mix:3 * nr_mix] * one_hot, dim=-1), min=log_scale_min) - # sample from logistic & clip to interval - # we don't actually round to the nearest 8bit value when sampling - u = means.data.new(means.size()).uniform_(1e-5, 1.0 - 1e-5) - x = means + torch.exp(log_scales) * (torch.log(u) - torch.log(1. - u)) - - x = torch.clamp(torch.clamp(x, min=-1.), max=1.) 
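    # Inverse-CDF sampling from the chosen logistic component: for u ~ Uniform(0, 1),
    # mean + scale * (log(u) - log(1 - u)) follows Logistic(mean, scale); the double
    # clamp above then restricts the sample to the audio range [-1, 1].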
- - return x - - -def to_one_hot(tensor, n, fill_with=1.): - # we perform one hot encore with respect to the last axis - one_hot = torch.FloatTensor(tensor.size() + (n,)).zero_() - if tensor.is_cuda: - one_hot = one_hot.cuda() - one_hot.scatter_(len(tensor.size()), tensor.unsqueeze(-1), fill_with) - return one_hot diff --git a/spaces/Kimata/multimodal_deepfake_detection/data/__init__.py b/spaces/Kimata/multimodal_deepfake_detection/data/__init__.py deleted file mode 100644 index a02e758d6ac34ba5d1a5dec73626569661aa8756..0000000000000000000000000000000000000000 --- a/spaces/Kimata/multimodal_deepfake_detection/data/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch.utils.data - -class DataProvider(): - - def __init__(self, cfg, dataset, batch_size=None, shuffle=True): - super().__init__() - self.dataset = dataset - if batch_size is None: - batch_size = cfg.BATCH_SIZE - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=batch_size, - shuffle=shuffle, - num_workers=int(cfg.WORKERS), - drop_last=False) - - def __len__(self): - return len(self.dataset) - - def __iter__(self): - for i, data in enumerate(self.dataloader): - yield data \ No newline at end of file diff --git a/spaces/Kororinpa/Amadeus_Project/utils.py b/spaces/Kororinpa/Amadeus_Project/utils.py deleted file mode 100644 index a311e1c75de8f65f7edb49e0e6d5cdea085b5e5c..0000000000000000000000000000000000000000 --- a/spaces/Kororinpa/Amadeus_Project/utils.py +++ /dev/null @@ -1,258 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, 
dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("../drive/MyDrive", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = 
os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Kuachi/hololive/infer_pack/commons.py b/spaces/Kuachi/hololive/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Kuachi/hololive/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = 
t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/KyanChen/RSPrompter/mmdet/engine/runner/loops.py b/spaces/KyanChen/RSPrompter/mmdet/engine/runner/loops.py deleted file mode 100644 index 
a32996eceee3a5c4ccbed192f92441038b61c220..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/engine/runner/loops.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from mmengine.model import is_model_wrapper -from mmengine.runner import ValLoop - -from mmdet.registry import LOOPS - - -@LOOPS.register_module() -class TeacherStudentValLoop(ValLoop): - """Loop for validation of model teacher and student.""" - - def run(self): - """Launch validation for model teacher and student.""" - self.runner.call_hook('before_val') - self.runner.call_hook('before_val_epoch') - self.runner.model.eval() - - model = self.runner.model - if is_model_wrapper(model): - model = model.module - assert hasattr(model, 'teacher') - assert hasattr(model, 'student') - - predict_on = model.semi_test_cfg.get('predict_on', None) - multi_metrics = dict() - for _predict_on in ['teacher', 'student']: - model.semi_test_cfg['predict_on'] = _predict_on - for idx, data_batch in enumerate(self.dataloader): - self.run_iter(idx, data_batch) - # compute metrics - metrics = self.evaluator.evaluate(len(self.dataloader.dataset)) - multi_metrics.update( - {'/'.join((_predict_on, k)): v - for k, v in metrics.items()}) - model.semi_test_cfg['predict_on'] = predict_on - - self.runner.call_hook('after_val_epoch', metrics=multi_metrics) - self.runner.call_hook('after_val') diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/trident_roi_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/trident_roi_head.py deleted file mode 100644 index 5215327296282a8e7ca502f3321aced8a4f840b7..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/trident_roi_head.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from mmcv.ops import batched_nms -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures import SampleList -from mmdet.utils import InstanceList -from .standard_roi_head import StandardRoIHead - - -@MODELS.register_module() -class TridentRoIHead(StandardRoIHead): - """Trident roi head. - - Args: - num_branch (int): Number of branches in TridentNet. - test_branch_idx (int): In inference, all 3 branches will be used - if `test_branch_idx==-1`, otherwise only branch with index - `test_branch_idx` will be used. - """ - - def __init__(self, num_branch: int, test_branch_idx: int, - **kwargs) -> None: - self.num_branch = num_branch - self.test_branch_idx = test_branch_idx - super().__init__(**kwargs) - - def merge_trident_bboxes(self, - trident_results: InstanceList) -> InstanceData: - """Merge bbox predictions of each branch. - - Args: - trident_results (List[:obj:`InstanceData`]): A list of InstanceData - predicted from every branch. - - Returns: - :obj:`InstanceData`: merged InstanceData. 
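            Boxes, scores and labels from all branches are concatenated, deduplicated
            with class-aware batched NMS, and truncated to ``test_cfg['max_per_img']``
            detections when that limit is positive.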
- """ - bboxes = torch.cat([res.bboxes for res in trident_results]) - scores = torch.cat([res.scores for res in trident_results]) - labels = torch.cat([res.labels for res in trident_results]) - - nms_cfg = self.test_cfg['nms'] - results = InstanceData() - if bboxes.numel() == 0: - results.bboxes = bboxes - results.scores = scores - results.labels = labels - else: - det_bboxes, keep = batched_nms(bboxes, scores, labels, nms_cfg) - results.bboxes = det_bboxes[:, :-1] - results.scores = det_bboxes[:, -1] - results.labels = labels[keep] - - if self.test_cfg['max_per_img'] > 0: - results = results[:self.test_cfg['max_per_img']] - return results - - def predict(self, - x: Tuple[Tensor], - rpn_results_list: InstanceList, - batch_data_samples: SampleList, - rescale: bool = False) -> InstanceList: - """Perform forward propagation of the roi head and predict detection - results on the features of the upstream network. - - - Compute prediction bbox and label per branch. - - Merge predictions of each branch according to scores of - bboxes, i.e., bboxes with higher score are kept to give - top-k prediction. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (N, C, H, W). - rpn_results_list (list[:obj:`InstanceData`]): list of region - proposals. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool): Whether to rescale the results to - the original image. Defaults to True. - - Returns: - list[obj:`InstanceData`]: Detection results of each image. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). 
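            Per-branch predictions (``num_branch`` consecutive entries per image, or a
            single entry at test time when ``test_branch_idx != -1``) are fused by
            ``merge_trident_bboxes`` before being returned.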
- """ - results_list = super().predict( - x=x, - rpn_results_list=rpn_results_list, - batch_data_samples=batch_data_samples, - rescale=rescale) - - num_branch = self.num_branch \ - if self.training or self.test_branch_idx == -1 else 1 - - merged_results_list = [] - for i in range(len(batch_data_samples) // num_branch): - merged_results_list.append( - self.merge_trident_bboxes(results_list[i * num_branch:(i + 1) * - num_branch])) - return merged_results_list diff --git a/spaces/KyanChen/RSPrompter/mmpl/models/pler/seg_sam_anchor_pler.py b/spaces/KyanChen/RSPrompter/mmpl/models/pler/seg_sam_anchor_pler.py deleted file mode 100644 index 9520c79629a6eb9a926ac7b31de3638d9aaa5e8b..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/models/pler/seg_sam_anchor_pler.py +++ /dev/null @@ -1,104 +0,0 @@ -import torch -from mmengine.structures import InstanceData -from typing import List, Any - -from mmpl.registry import MODELS -from mmseg.utils import SampleList -from .base_pler import BasePLer -import torch.nn.functional as F -from modules.sam import sam_model_registry - - -@MODELS.register_module() -class SegSAMAnchorPLer(BasePLer): - def __init__(self, - backbone, - neck=None, - panoptic_head=None, - need_train_names=None, - train_cfg=None, - test_cfg=None, - *args, **kwargs): - super().__init__(*args, **kwargs) - self.save_hyperparameters() - self.need_train_names = need_train_names - - backbone_type = backbone.pop('type') - self.backbone = sam_model_registry[backbone_type](**backbone) - - if neck is not None: - self.neck = MODELS.build(neck) - - self.panoptic_head = MODELS.build(panoptic_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def setup(self, stage: str) -> None: - super().setup(stage) - if self.need_train_names is not None: - self._set_grad(self.need_train_names, noneed_train_names=[]) - - def init_weights(self): - import ipdb; ipdb.set_trace() - pass - - def train(self, mode=True): - if self.need_train_names is not None: - return self._set_train_module(mode, self.need_train_names) - else: - super().train(mode) - return self - - @torch.no_grad() - def extract_feat(self, batch_inputs): - feat, inter_features = self.backbone.image_encoder(batch_inputs) - return feat, inter_features - - def validation_step(self, batch, batch_idx): - data = self.data_preprocessor(batch, False) - batch_inputs = data['inputs'] - batch_data_samples = data['data_samples'] - - x = self.extract_feat(batch_inputs) - # x = ( - # torch.rand(2, 256, 64, 64).to(self.device), [torch.rand(2, 64, 64, 768).to(self.device) for _ in range(12)]) - results = self.panoptic_head.predict( - x, batch_data_samples, self.backbone) - self.val_evaluator.update(batch, results) - - def training_step(self, batch, batch_idx): - data = self.data_preprocessor(batch, True) - batch_inputs = data['inputs'] - batch_data_samples = data['data_samples'] - x = self.extract_feat(batch_inputs) - # x = (torch.rand(2, 256, 64, 64).to(self.device), [torch.rand(2, 64, 64, 768).to(self.device) for _ in range(12)]) - losses = self.panoptic_head.loss(x, batch_data_samples, self.backbone) - - parsed_losses, log_vars = self.parse_losses(losses) - log_vars = {f'train_{k}': v for k, v in log_vars.items()} - log_vars['loss'] = parsed_losses - self.log_dict(log_vars, prog_bar=True) - return log_vars - - def on_before_optimizer_step(self, optimizer) -> None: - self.log_grad(module=self.panoptic_head) - - def predict_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> Any: - data = 
self.data_preprocessor(batch, False) - batch_inputs = data['inputs'] - batch_data_samples = data['data_samples'] - - x = self.extract_feat(batch_inputs) - # x = ( - # torch.rand(2, 256, 64, 64).to(self.device), [torch.rand(2, 64, 64, 768).to(self.device) for _ in range(12)]) - results = self.panoptic_head.predict( - x, batch_data_samples, self.backbone) - return results - - - - - - - diff --git a/spaces/Laihiujin/OneFormer/gradio_app.py b/spaces/Laihiujin/OneFormer/gradio_app.py deleted file mode 100644 index 880899580df378d9d66106a47083ce75b3a1c526..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/gradio_app.py +++ /dev/null @@ -1,219 +0,0 @@ -import torch - -print("Installed the dependencies!") - -import numpy as np -from PIL import Image -import cv2 -import imutils - -from detectron2.config import get_cfg -from detectron2.projects.deeplab import add_deeplab_config -from detectron2.data import MetadataCatalog - -from oneformer import ( - add_oneformer_config, - add_common_config, - add_swin_config, - add_dinat_config, -) - -from demo.defaults import DefaultPredictor -from demo.visualizer import Visualizer, ColorMode - -import gradio as gr -from huggingface_hub import hf_hub_download - -KEY_DICT = {"Cityscapes (19 classes)": "cityscapes", - "COCO (133 classes)": "coco", - "ADE20K (150 classes)": "ade20k",} - -SWIN_CFG_DICT = {"cityscapes": "configs/cityscapes/oneformer_swin_large_IN21k_384_bs16_90k.yaml", - "coco": "configs/coco/oneformer_swin_large_IN21k_384_bs16_100ep.yaml", - "ade20k": "configs/ade20k/oneformer_swin_large_IN21k_384_bs16_160k.yaml",} - -SWIN_MODEL_DICT = {"cityscapes": hf_hub_download(repo_id="shi-labs/oneformer_cityscapes_swin_large", - filename="250_16_swin_l_oneformer_cityscapes_90k.pth"), - "coco": hf_hub_download(repo_id="shi-labs/oneformer_coco_swin_large", - filename="150_16_swin_l_oneformer_coco_100ep.pth"), - "ade20k": hf_hub_download(repo_id="shi-labs/oneformer_ade20k_swin_large", - filename="250_16_swin_l_oneformer_ade20k_160k.pth") - } - -DINAT_CFG_DICT = {"cityscapes": "configs/cityscapes/oneformer_dinat_large_bs16_90k.yaml", - "coco": "configs/coco/oneformer_dinat_large_bs16_100ep.yaml", - "ade20k": "configs/ade20k/oneformer_dinat_large_IN21k_384_bs16_160k.yaml",} - -DINAT_MODEL_DICT = {"cityscapes": hf_hub_download(repo_id="shi-labs/oneformer_cityscapes_dinat_large", - filename="250_16_dinat_l_oneformer_cityscapes_90k.pth"), - "coco": hf_hub_download(repo_id="shi-labs/oneformer_coco_dinat_large", - filename="150_16_dinat_l_oneformer_coco_100ep.pth"), - "ade20k": hf_hub_download(repo_id="shi-labs/oneformer_ade20k_dinat_large", - filename="250_16_dinat_l_oneformer_ade20k_160k.pth") - } - -MODEL_DICT = {"DiNAT-L": DINAT_MODEL_DICT, - "Swin-L": SWIN_MODEL_DICT } - -CFG_DICT = {"DiNAT-L": DINAT_CFG_DICT, - "Swin-L": SWIN_CFG_DICT } - -WIDTH_DICT = {"cityscapes": 512, - "coco": 512, - "ade20k": 640} - -cpu_device = torch.device("cpu") - -PREDICTORS = { - "DiNAT-L": { - "Cityscapes (19 classes)": None, - "COCO (133 classes)": None, - "ADE20K (150 classes)": None - }, - "Swin-L": { - "Cityscapes (19 classes)": None, - "COCO (133 classes)": None, - "ADE20K (150 classes)": None - } -} - -METADATA = { - "DiNAT-L": { - "Cityscapes (19 classes)": None, - "COCO (133 classes)": None, - "ADE20K (150 classes)": None - }, - "Swin-L": { - "Cityscapes (19 classes)": None, - "COCO (133 classes)": None, - "ADE20K (150 classes)": None - } -} - -def setup_modules(): - for dataset in ["Cityscapes (19 classes)", "COCO (133 classes)", "ADE20K (150 
classes)"]: - for backbone in ["DiNAT-L", "Swin-L"]: - cfg = setup_cfg(dataset, backbone) - metadata = MetadataCatalog.get( - cfg.DATASETS.TEST_PANOPTIC[0] if len(cfg.DATASETS.TEST_PANOPTIC) else "__unused" - ) - if 'cityscapes_fine_sem_seg_val' in cfg.DATASETS.TEST_PANOPTIC[0]: - from cityscapesscripts.helpers.labels import labels - stuff_colors = [k.color for k in labels if k.trainId != 255] - metadata = metadata.set(stuff_colors=stuff_colors) - PREDICTORS[backbone][dataset] = DefaultPredictor(cfg) - METADATA[backbone][dataset] = metadata - -def setup_cfg(dataset, backbone): - # load config from file and command-line arguments - cfg = get_cfg() - add_deeplab_config(cfg) - add_common_config(cfg) - add_swin_config(cfg) - add_oneformer_config(cfg) - add_dinat_config(cfg) - dataset = KEY_DICT[dataset] - cfg_path = CFG_DICT[backbone][dataset] - cfg.merge_from_file(cfg_path) - if torch.cuda.is_available(): - cfg.MODEL.DEVICE = 'cuda' - else: - cfg.MODEL.DEVICE = 'cpu' - cfg.MODEL.WEIGHTS = MODEL_DICT[backbone][dataset] - cfg.freeze() - return cfg - -# def setup_modules(dataset, backbone): -# cfg = setup_cfg(dataset, backbone) -# predictor = DefaultPredictor(cfg) -# # predictor = PREDICTORS[backbone][dataset] -# metadata = MetadataCatalog.get( -# cfg.DATASETS.TEST_PANOPTIC[0] if len(cfg.DATASETS.TEST_PANOPTIC) else "__unused" -# ) -# if 'cityscapes_fine_sem_seg_val' in cfg.DATASETS.TEST_PANOPTIC[0]: -# from cityscapesscripts.helpers.labels import labels -# stuff_colors = [k.color for k in labels if k.trainId != 255] -# metadata = metadata.set(stuff_colors=stuff_colors) - -# return predictor, metadata - -def panoptic_run(img, predictor, metadata): - visualizer = Visualizer(img[:, :, ::-1], metadata=metadata, instance_mode=ColorMode.IMAGE) - predictions = predictor(img, "panoptic") - panoptic_seg, segments_info = predictions["panoptic_seg"] - out = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(cpu_device), segments_info, alpha=0.5 - ) - visualizer_map = Visualizer(img[:, :, ::-1], is_img=False, metadata=metadata, instance_mode=ColorMode.IMAGE) - out_map = visualizer_map.draw_panoptic_seg_predictions( - panoptic_seg.to(cpu_device), segments_info, alpha=1, is_text=False - ) - return out, out_map - -def instance_run(img, predictor, metadata): - visualizer = Visualizer(img[:, :, ::-1], metadata=metadata, instance_mode=ColorMode.IMAGE) - predictions = predictor(img, "instance") - instances = predictions["instances"].to(cpu_device) - out = visualizer.draw_instance_predictions(predictions=instances, alpha=0.5) - visualizer_map = Visualizer(img[:, :, ::-1], is_img=False, metadata=metadata, instance_mode=ColorMode.IMAGE) - out_map = visualizer_map.draw_instance_predictions(predictions=instances, alpha=1, is_text=False) - return out, out_map - -def semantic_run(img, predictor, metadata): - visualizer = Visualizer(img[:, :, ::-1], metadata=metadata, instance_mode=ColorMode.IMAGE) - predictions = predictor(img, "semantic") - out = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(cpu_device), alpha=0.5 - ) - visualizer_map = Visualizer(img[:, :, ::-1], is_img=False, metadata=metadata, instance_mode=ColorMode.IMAGE) - out_map = visualizer_map.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(cpu_device), alpha=1, is_text=False - ) - return out, out_map - -TASK_INFER = {"the task is panoptic": panoptic_run, "the task is instance": instance_run, "the task is semantic": semantic_run} - -def segment(path, task, dataset, backbone): - # predictor, metadata = 
setup_modules(dataset, backbone) - predictor = PREDICTORS[backbone][dataset] - metadata = METADATA[backbone][dataset] - img = cv2.imread(path) - width = WIDTH_DICT[KEY_DICT[dataset]] - img = imutils.resize(img, width=width) - out, out_map = TASK_INFER[task](img, predictor, metadata) - out = Image.fromarray(out.get_image()) - out_map = Image.fromarray(out_map.get_image()) - return out, out_map - -title = "
OneFormer: One Transformer to Rule Universal Image Segmentation" - -description = "Jitesh Jain, Jiachen Li*, MangTik Chiu*, Ali Hassani, Nikita Orlov, Humphrey Shi" \ - + "Project Page | ArXiv Paper | Github Repo" \ - + "OneFormer is the first multi-task universal image segmentation framework based on transformers. Our single OneFormer model achieves state-of-the-art performance across all three segmentation tasks with a single task-conditioned joint training process. OneFormer uses a task token to condition the model on the task in focus, making our architecture task-guided for training, and task-dynamic for inference, all with a single model. We believe OneFormer is a significant step towards making image segmentation more universal and accessible." \ - + "[Note: Inference on CPU may take up to 2 minutes. On a single RTX A6000 GPU, OneFormer can run inference at more than 15 FPS.]
    " - -setup_modules() - -gradio_inputs = [gr.Image(source="upload", tool=None, label="Input Image",type="filepath"), - gr.Radio(choices=["the task is panoptic" ,"the task is instance", "the task is semantic"], type="value", value="the task is panoptic", label="Task Token Input"), - gr.Radio(choices=["COCO (133 classes)" ,"Cityscapes (19 classes)", "ADE20K (150 classes)"], type="value", value="COCO (133 classes)", label="Model"), - gr.Radio(choices=["DiNAT-L" ,"Swin-L"], type="value", value="DiNAT-L", label="Backbone"), - ] -gradio_outputs = [gr.Image(type="pil", label="Segmentation Overlay"), gr.Image(type="pil", label="Segmentation Map")] - - -examples = [["examples/coco.jpeg", "the task is panoptic", "COCO (133 classes)", "DiNAT-L"], - ["examples/cityscapes.png", "the task is panoptic", "Cityscapes (19 classes)", "DiNAT-L"], - ["examples/ade20k.jpeg", "the task is panoptic", "ADE20K (150 classes)", "DiNAT-L"]] - - -iface = gr.Interface(fn=segment, inputs=gradio_inputs, - outputs=gradio_outputs, - examples_per_page=5, - allow_flagging="never", - examples=examples, title=title, - description=description) - -iface.launch(enable_queue=True, server_name="0.0.0.0") \ No newline at end of file diff --git a/spaces/LanguageBind/LanguageBind/languagebind/thermal/tokenization_thermal.py b/spaces/LanguageBind/LanguageBind/languagebind/thermal/tokenization_thermal.py deleted file mode 100644 index a4ebb5607bc8f2a24341a7b11f22663e760012dd..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/languagebind/thermal/tokenization_thermal.py +++ /dev/null @@ -1,77 +0,0 @@ -from transformers import CLIPTokenizer -from transformers.utils import logging - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.json", - "merges_file": "merges.txt", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "lb203/LanguageBind-Thermal": "https://huggingface.co/lb203/LanguageBind-Thermal/resolve/main/vocab.json", - }, - "merges_file": { - "lb203/LanguageBind-Thermal": "https://huggingface.co/lb203/LanguageBind-Thermal/resolve/main/merges.txt", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "lb203/LanguageBind-Thermal": 77, -} - - -PRETRAINED_INIT_CONFIGURATION = { - "lb203/LanguageBind-Thermal": {}, -} - -class LanguageBindThermalTokenizer(CLIPTokenizer): - """ - Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding. - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - unk_token (`str`, *optional*, defaults to `<|endoftext|>`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - bos_token (`str`, *optional*, defaults to `<|startoftext|>`): - The beginning of sequence token. - eos_token (`str`, *optional*, defaults to `<|endoftext|>`): - The end of sequence token. 
- """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - merges_file, - errors="replace", - unk_token="<|endoftext|>", - bos_token="<|startoftext|>", - eos_token="<|endoftext|>", - pad_token="<|endoftext|>", # hack to enable padding - **kwargs, - ): - super(LanguageBindThermalTokenizer, self).__init__( - vocab_file, - merges_file, - errors, - unk_token, - bos_token, - eos_token, - pad_token, # hack to enable padding - **kwargs,) \ No newline at end of file diff --git a/spaces/Lizzbitt/pi2/README.md b/spaces/Lizzbitt/pi2/README.md deleted file mode 100644 index 4a1e0796ba559e87de0197004e2333c140b4d543..0000000000000000000000000000000000000000 --- a/spaces/Lizzbitt/pi2/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Pi2 -emoji: 🚀 -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/options/base_options.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/options/base_options.py deleted file mode 100644 index b8ef551eb982a3b551f77090028304f40883a94a..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/options/base_options.py +++ /dev/null @@ -1,373 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import argparse -import os -from util import util -import torch - - -class BaseOptions: - def __init__(self): - self.parser = argparse.ArgumentParser() - self.initialized = False - - def initialize(self): - # experiment specifics - self.parser.add_argument( - "--name", - type=str, - default="label2city", - help="name of the experiment. It decides where to store samples and models", - ) - self.parser.add_argument( - "--gpu_ids", type=str, default="0", help="gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU" - ) - self.parser.add_argument( - "--checkpoints_dir", type=str, default="./checkpoints", help="models are saved here" - ) ## note: to add this param when using philly - # self.parser.add_argument('--project_dir', type=str, default='./', help='the project is saved here') ################### This is necessary for philly - self.parser.add_argument( - "--outputs_dir", type=str, default="./outputs", help="models are saved here" - ) ## note: to add this param when using philly Please end with '/' - self.parser.add_argument("--model", type=str, default="pix2pixHD", help="which model to use") - self.parser.add_argument( - "--norm", type=str, default="instance", help="instance normalization or batch normalization" - ) - self.parser.add_argument("--use_dropout", action="store_true", help="use dropout for the generator") - self.parser.add_argument( - "--data_type", - default=32, - type=int, - choices=[8, 16, 32], - help="Supported data type i.e. 
8, 16, 32 bit", - ) - self.parser.add_argument("--verbose", action="store_true", default=False, help="toggles verbose") - - # input/output sizes - self.parser.add_argument("--batchSize", type=int, default=1, help="input batch size") - self.parser.add_argument("--loadSize", type=int, default=1024, help="scale images to this size") - self.parser.add_argument("--fineSize", type=int, default=512, help="then crop to this size") - self.parser.add_argument("--label_nc", type=int, default=35, help="# of input label channels") - self.parser.add_argument("--input_nc", type=int, default=3, help="# of input image channels") - self.parser.add_argument("--output_nc", type=int, default=3, help="# of output image channels") - - # for setting inputs - self.parser.add_argument("--dataroot", type=str, default="./datasets/cityscapes/") - self.parser.add_argument( - "--resize_or_crop", - type=str, - default="scale_width", - help="scaling and cropping of images at load time [resize_and_crop|crop|scale_width|scale_width_and_crop]", - ) - self.parser.add_argument( - "--serial_batches", - action="store_true", - help="if true, takes images in order to make batches, otherwise takes them randomly", - ) - self.parser.add_argument( - "--no_flip", - action="store_true", - help="if specified, do not flip the images for data argumentation", - ) - self.parser.add_argument("--nThreads", default=2, type=int, help="# threads for loading data") - self.parser.add_argument( - "--max_dataset_size", - type=int, - default=float("inf"), - help="Maximum number of samples allowed per dataset. If the dataset directory contains more than max_dataset_size, only a subset is loaded.", - ) - - # for displays - self.parser.add_argument("--display_winsize", type=int, default=512, help="display window size") - self.parser.add_argument( - "--tf_log", - action="store_true", - help="if specified, use tensorboard logging. 
Requires tensorflow installed", - ) - - # for generator - self.parser.add_argument("--netG", type=str, default="global", help="selects model to use for netG") - self.parser.add_argument("--ngf", type=int, default=64, help="# of gen filters in first conv layer") - self.parser.add_argument("--k_size", type=int, default=3, help="# kernel size conv layer") - self.parser.add_argument("--use_v2", action="store_true", help="use DCDCv2") - self.parser.add_argument("--mc", type=int, default=1024, help="# max channel") - self.parser.add_argument("--start_r", type=int, default=3, help="start layer to use resblock") - self.parser.add_argument( - "--n_downsample_global", type=int, default=4, help="number of downsampling layers in netG" - ) - self.parser.add_argument( - "--n_blocks_global", - type=int, - default=9, - help="number of residual blocks in the global generator network", - ) - self.parser.add_argument( - "--n_blocks_local", - type=int, - default=3, - help="number of residual blocks in the local enhancer network", - ) - self.parser.add_argument( - "--n_local_enhancers", type=int, default=1, help="number of local enhancers to use" - ) - self.parser.add_argument( - "--niter_fix_global", - type=int, - default=0, - help="number of epochs that we only train the outmost local enhancer", - ) - - self.parser.add_argument( - "--load_pretrain", - type=str, - default="", - help="load the pretrained model from the specified location", - ) - - # for instance-wise features - self.parser.add_argument( - "--no_instance", action="store_true", help="if specified, do *not* add instance map as input" - ) - self.parser.add_argument( - "--instance_feat", - action="store_true", - help="if specified, add encoded instance features as input", - ) - self.parser.add_argument( - "--label_feat", action="store_true", help="if specified, add encoded label features as input" - ) - self.parser.add_argument("--feat_num", type=int, default=3, help="vector length for encoded features") - self.parser.add_argument( - "--load_features", action="store_true", help="if specified, load precomputed feature maps" - ) - self.parser.add_argument( - "--n_downsample_E", type=int, default=4, help="# of downsampling layers in encoder" - ) - self.parser.add_argument( - "--nef", type=int, default=16, help="# of encoder filters in the first conv layer" - ) - self.parser.add_argument("--n_clusters", type=int, default=10, help="number of clusters for features") - - # diy - self.parser.add_argument("--self_gen", action="store_true", help="self generate") - self.parser.add_argument( - "--mapping_n_block", type=int, default=3, help="number of resblock in mapping" - ) - self.parser.add_argument("--map_mc", type=int, default=64, help="max channel of mapping") - self.parser.add_argument("--kl", type=float, default=0, help="KL Loss") - self.parser.add_argument( - "--load_pretrainA", - type=str, - default="", - help="load the pretrained model from the specified location", - ) - self.parser.add_argument( - "--load_pretrainB", - type=str, - default="", - help="load the pretrained model from the specified location", - ) - self.parser.add_argument("--feat_gan", action="store_true") - self.parser.add_argument("--no_cgan", action="store_true") - self.parser.add_argument("--map_unet", action="store_true") - self.parser.add_argument("--map_densenet", action="store_true") - self.parser.add_argument("--fcn", action="store_true") - self.parser.add_argument("--is_image", action="store_true", help="train image recon only pair data") - 
self.parser.add_argument("--label_unpair", action="store_true") - self.parser.add_argument("--mapping_unpair", action="store_true") - self.parser.add_argument("--unpair_w", type=float, default=1.0) - self.parser.add_argument("--pair_num", type=int, default=-1) - self.parser.add_argument("--Gan_w", type=float, default=1) - self.parser.add_argument("--feat_dim", type=int, default=-1) - self.parser.add_argument("--abalation_vae_len", type=int, default=-1) - - ######################### useless, just to cooperate with docker - self.parser.add_argument("--gpu", type=str) - self.parser.add_argument("--dataDir", type=str) - self.parser.add_argument("--modelDir", type=str) - self.parser.add_argument("--logDir", type=str) - self.parser.add_argument("--data_dir", type=str) - - self.parser.add_argument("--use_skip_model", action="store_true") - self.parser.add_argument("--use_segmentation_model", action="store_true") - - self.parser.add_argument("--spatio_size", type=int, default=64) - self.parser.add_argument("--test_random_crop", action="store_true") - ########################## - - self.parser.add_argument("--contain_scratch_L", action="store_true") - self.parser.add_argument( - "--mask_dilation", type=int, default=0 - ) ## Don't change the input, only dilation the mask - - self.parser.add_argument( - "--irregular_mask", type=str, default="", help="This is the root of the mask" - ) - self.parser.add_argument( - "--mapping_net_dilation", - type=int, - default=1, - help="This parameter is the dilation size of the translation net", - ) - - self.parser.add_argument( - "--VOC", type=str, default="VOC_RGB_JPEGImages.bigfile", help="The root of VOC dataset" - ) - - self.parser.add_argument("--non_local", type=str, default="", help="which non_local setting") - self.parser.add_argument( - "--NL_fusion_method", - type=str, - default="add", - help="how to fuse the origin feature and nl feature", - ) - self.parser.add_argument( - "--NL_use_mask", action="store_true", help="If use mask while using Non-local mapping model" - ) - self.parser.add_argument( - "--correlation_renormalize", - action="store_true", - help="Since after mask out the correlation matrix(which is softmaxed), the sum is not 1 any more, enable this param to re-weight", - ) - - self.parser.add_argument("--Smooth_L1", action="store_true", help="Use L1 Loss in image level") - - self.parser.add_argument( - "--face_restore_setting", type=int, default=1, help="This is for the aligned face restoration" - ) - self.parser.add_argument("--face_clean_url", type=str, default="") - self.parser.add_argument("--syn_input_url", type=str, default="") - self.parser.add_argument("--syn_gt_url", type=str, default="") - - self.parser.add_argument( - "--test_on_synthetic", - action="store_true", - help="If you want to test on the synthetic data, enable this parameter", - ) - - self.parser.add_argument("--use_SN", action="store_true", help="Add SN to every parametric layer") - - self.parser.add_argument( - "--use_two_stage_mapping", action="store_true", help="choose the model which uses two stage" - ) - - self.parser.add_argument("--L1_weight", type=float, default=10.0) - self.parser.add_argument("--softmax_temperature", type=float, default=1.0) - self.parser.add_argument( - "--patch_similarity", - action="store_true", - help="Enable this denotes using 3*3 patch to calculate similarity", - ) - self.parser.add_argument( - "--use_self", - action="store_true", - help="Enable this denotes that while constructing the new feature maps, using original feature (diagonal == 
1)", - ) - - self.parser.add_argument("--use_own_dataset", action="store_true") - - self.parser.add_argument( - "--test_hole_two_folders", - action="store_true", - help="Enable this parameter means test the restoration with inpainting given twp folders which are mask and old respectively", - ) - - self.parser.add_argument( - "--no_hole", - action="store_true", - help="While test the full_model on non_scratch data, do not add random mask into the real old photos", - ) ## Only for testing - self.parser.add_argument( - "--random_hole", - action="store_true", - help="While training the full model, 50% probability add hole", - ) - - self.parser.add_argument("--NL_res", action="store_true", help="NL+Resdual Block") - - self.parser.add_argument("--image_L1", action="store_true", help="Image level loss: L1") - self.parser.add_argument( - "--hole_image_no_mask", - action="store_true", - help="while testing, give hole image but not give the mask", - ) - - self.parser.add_argument( - "--down_sample_degradation", - action="store_true", - help="down_sample the image only, corresponds to [down_sample_face]", - ) - - self.parser.add_argument( - "--norm_G", type=str, default="spectralinstance", help="The norm type of Generator" - ) - self.parser.add_argument( - "--init_G", - type=str, - default="xavier", - help="normal|xavier|xavier_uniform|kaiming|orthogonal|none", - ) - - self.parser.add_argument("--use_new_G", action="store_true") - self.parser.add_argument("--use_new_D", action="store_true") - - self.parser.add_argument( - "--only_voc", action="store_true", help="test the trianed celebA face model using VOC face" - ) - - self.parser.add_argument( - "--cosin_similarity", - action="store_true", - help="For non-local, using cosin to calculate the similarity", - ) - - self.parser.add_argument( - "--downsample_mode", - type=str, - default="nearest", - help="For partial non-local, choose how to downsample the mask", - ) - - self.parser.add_argument("--mapping_exp",type=int,default=0,help='Default 0: original PNL|1: Multi-Scale Patch Attention') - self.parser.add_argument("--inference_optimize",action='store_true',help='optimize the memory cost') - - - self.initialized = True - - def parse(self, save=True): - if not self.initialized: - self.initialize() - self.opt = self.parser.parse_args() - self.opt.isTrain = self.isTrain # train or test - - str_ids = self.opt.gpu_ids.split(",") - self.opt.gpu_ids = [] - for str_id in str_ids: - int_id = int(str_id) - if int_id >= 0: - self.opt.gpu_ids.append(int_id) - - # set gpu ids - if len(self.opt.gpu_ids) > 0: - # pass - torch.cuda.set_device(self.opt.gpu_ids[0]) - - args = vars(self.opt) - - # print('------------ Options -------------') - # for k, v in sorted(args.items()): - # print('%s: %s' % (str(k), str(v))) - # print('-------------- End ----------------') - - # save to the disk - expr_dir = os.path.join(self.opt.checkpoints_dir, self.opt.name) - util.mkdirs(expr_dir) - if save and not self.opt.continue_train: - file_name = os.path.join(expr_dir, "opt.txt") - with open(file_name, "wt") as opt_file: - opt_file.write("------------ Options -------------\n") - for k, v in sorted(args.items()): - opt_file.write("%s: %s\n" % (str(k), str(v))) - opt_file.write("-------------- End ----------------\n") - return self.opt diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/setup.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/setup.py deleted file mode 100644 index 
30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp deleted file mode 100644 index 551243fdadfd1682b5dc6628623b67a79b3f6c74..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp +++ /dev/null @@ -1,43 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include - -#include -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -} // namespace groundingdino diff --git a/spaces/Makiing/coolb-in-gtest/README.md b/spaces/Makiing/coolb-in-gtest/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A close reproduction of the main features of the New Bing web UI, usable from mainland China, compatible with most Microsoft Bing AI features, and ready for self-hosted deployment.
-
-![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars)
-![Github issues](https://img.shields.io/github/issues/weaigc/bingo)
-[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license)
-
-
-## Demo site
-
-https://bing.github1s.tk
-
-[![img](./docs/images/demo.png)](https://bing.github1s.tk)
-
-## Features
-
-- Fully rewritten on Next.js, closely reproducing the New Bing web UI; the experience is essentially the same as Bing AI.
-- Docker build supported, so deployment and access are quick and easy.
-- Cookies can be configured once and shared globally.
-- Continuous voice conversation supported.
-
-## Roadmap
-
- - [x] WSS forwarding
- - [x] One-click deployment
- - [x] Better mobile layout
- - [x] Image generation
- - [x] Voice input (voice commands supported; currently desktop Edge and Chrome only)
- - [x] Voice output (must be enabled manually)
- - [x] Image input
- - [x] Custom domains
- - [ ] Chat history
- - [ ] Dark mode
- - [ ] Built-in prompts
- - [ ] Offline access
- - [ ] Internationalization
-
-## One-click deployment
-You can also deploy your own New Bing AI to 🤗 HuggingFace with one click.
-
-### Deploy to Huggingface
-1. Click this icon
-[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the default configuration can be left unchanged.
-
-2. Once deployment finishes, open "Settings" > "Space domain", copy the HF domain, and share it with whoever needs it.
-
-> Huggingface does not support binding your own domain, but there are two workarounds:
-> 1. Via Cloudflare Workers: [deploy Cloudflare Workers](#custom-domains-with-cloudflare-workers)
-> 2. Via Github Pages plus an iframe: [how to bind a domain](https://github.com/weaigc/bingo/issues/4)
-
-### Custom domains with Cloudflare Workers
-
-> Core code: [worker.js](./cloudflare/worker.js)
-
-- [Register a Cloudflare account](https://dash.cloudflare.com/sign-up)
-
-- Add a new site; you need your own domain, with its `Name Server` delegated to Cloudflare (search the web for details).
-
-- Open "Workers" from the left menu and click "Create a Worker".
-
-- Create the Worker service, copy the full contents of [worker.js](./cloudflare/worker.js) into it, adjust it according to the comments, then save and deploy.
-
-- Configure the access domain under "Triggers".
-
-### Deploying to other platforms
-
-
-Other platforms are currently being blocked by New Bing and run into many problems, so they are no longer recommended; use them at your own risk if you need to.
-
-#### Deploy to Netlify
-[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paying Vercel user, you can deploy to Vercel with one click via the link below. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-
-
-## Environment and dependencies
-
-- Node.js >= 18
-- Bing AI [credentials](#how-to-get-bing_header)
-
-## Installation and usage
-
-* Run with Node
-
-```bash
-git clone https://github.com/weaigc/bingo.git
-npm i # pnpm i is recommended
-npm run build
-npm run start
-```
-
-* Run with Docker
-```bash
-docker pull weaigc/bingo
-docker run --rm -it -p 7860:7860 weaigc/bingo
-# or
-docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo
-```
-
-## How to get BING_HEADER
-> Setting BING_HEADER means sharing your own account with everyone who uses this service. If you do not need login-free image generation, setting this variable is not recommended.
-
-Open https://www.bing.com and log in, then visit https://www.bing.com/turing/captcha/challenge, pass the human verification, and then follow the step shown below:
-
-![BING HEADER](./docs/images/curl.png)
-
-> The copied content should look like the example below. After confirming the format, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", and then paste the result from the clipboard. (You can also verify it on that page first.)
-
-The reference formats follow; note that the format saved from the web page starts with `curl`, while the `BING_HEADER` configured on the server is `base64`, and the two are not interchangeable. A small conversion sketch is included after the two examples below.
-
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZE
NYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
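
For reference, here is a minimal sketch of how the saved cURL text could be turned into a base64 value of the kind shown above. It is only an illustration of the relationship between the two formats: the file name `bing-header.curl` is hypothetical, and it assumes the whole command, collapsed to a single line, is what gets base64-encoded, which is what the example above suggests. The settings page linked earlier remains the supported way to do the conversion.

```python
import base64
import pathlib

# Read the request that was saved from the browser via "Copy as cURL".
# The file name is only an example; point it at wherever you saved the command.
raw = pathlib.Path("bing-header.curl").read_text(encoding="utf-8")

# Collapse the multi-line command into a single line, mirroring the example above.
one_line = " ".join(line.strip() for line in raw.splitlines())

# Base64-encode it; the result is a candidate value for the BING_HEADER variable.
print(base64.b64encode(one_line.encode("utf-8")).decode("ascii"))
```

Exact whitespace handling may differ from what the app expects, so if a value produced this way is rejected, fall back to the web-based converter described above.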
    - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/chroma.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/chroma.py deleted file mode 100644 index c8ed279bc8b46c1089d0f9012c873d719798a3a9..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/chroma.py +++ /dev/null @@ -1,422 +0,0 @@ -# encoding: utf-8 -# pylint: disable=no-member -# pylint: disable=invalid-name -# pylint: disable=too-many-arguments -""" -This module contains chroma related functionality. - -""" - -from __future__ import absolute_import, division, print_function - -import warnings -import numpy as np - -from madmom.audio.spectrogram import (Spectrogram, FilteredSpectrogram, - SemitoneBandpassSpectrogram) -from madmom.audio.filters import (A4, Filterbank, - PitchClassProfileFilterbank as PCP, - HarmonicPitchClassProfileFilterbank as HPCP) -from madmom.processors import SequentialProcessor, Processor - - -# inherit from FilteredSpectrogram, since this class is closest related -class PitchClassProfile(FilteredSpectrogram): - """ - Simple class for extracting pitch class profiles (PCP), i.e. chroma - vectors from a spectrogram. - - Parameters - ---------- - spectrogram : :class:`.audio.spectrogram.Spectrogram` instance - :class:`.audio.spectrogram.Spectrogram` instance. - filterbank : :class:`.audio.filters.Filterbank` class or instance - :class:`.audio.filters.Filterbank` class or instance. - num_classes : int, optional - Number of pitch classes. - fmin : float, optional - Minimum frequency of the PCP filterbank [Hz]. - fmax : float, optional - Maximum frequency of the PCP filterbank [Hz]. - fref : float, optional - Reference frequency for the first PCP bin [Hz]. - kwargs : dict, optional - If no :class:`.audio.spectrogram.Spectrogram` instance was given, - one is instantiated with these additional keyword arguments. - - Notes - ----- - If `fref` is 'None', the reference frequency is estimated from the given - spectrogram. - - References - ---------- - .. [1] T. Fujishima, - "Realtime chord recognition of musical sound: a system using Common - Lisp Music", - Proceedings of the International Computer Music Conference (ICMC), - 1999. 
- - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, filterbank=PCP, num_classes=PCP.CLASSES, - fmin=PCP.FMIN, fmax=PCP.FMAX, fref=A4, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, filterbank=PCP, num_classes=PCP.CLASSES, - fmin=PCP.FMIN, fmax=PCP.FMAX, fref=A4, **kwargs): - # check spectrogram type - if not isinstance(spectrogram, Spectrogram): - spectrogram = Spectrogram(spectrogram, **kwargs) - # spectrogram should not be filtered - if hasattr(spectrogram, 'filterbank'): - warnings.warn('Spectrogram should not be filtered.', - RuntimeWarning) - # reference frequency for the filterbank - if fref is None: - fref = spectrogram.tuning_frequency() - - # set filterbank - if issubclass(filterbank, Filterbank): - filterbank = filterbank(spectrogram.bin_frequencies, - num_classes=num_classes, fmin=fmin, - fmax=fmax, fref=fref) - if not isinstance(filterbank, Filterbank): - raise ValueError('not a Filterbank type or instance: %s' % - filterbank) - # filter the spectrogram - data = np.dot(spectrogram, filterbank) - # cast as PitchClassProfile - obj = np.asarray(data).view(cls) - # save additional attributes - obj.filterbank = filterbank - obj.spectrogram = spectrogram - # return the object - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here, also needed for views - self.filterbank = getattr(obj, 'filterbank', None) - self.spectrogram = getattr(obj, 'spectrogram', None) - - -class HarmonicPitchClassProfile(PitchClassProfile): - """ - Class for extracting harmonic pitch class profiles (HPCP) from a - spectrogram. - - Parameters - ---------- - spectrogram : :class:`.audio.spectrogram.Spectrogram` instance - :class:`.audio.spectrogram.Spectrogram` instance. - filterbank : :class:`.audio.filters.Filterbank` class or instance - Filterbank class or instance. - num_classes : int, optional - Number of harmonic pitch classes. - fmin : float, optional - Minimum frequency of the HPCP filterbank [Hz]. - fmax : float, optional - Maximum frequency of the HPCP filterbank [Hz]. - fref : float, optional - Reference frequency for the first HPCP bin [Hz]. - window : int, optional - Length of the weighting window [bins]. - kwargs : dict, optional - If no :class:`.audio.spectrogram.Spectrogram` instance was given, - one is instantiated with these additional keyword arguments. - - Notes - ----- - If `fref` is 'None', the reference frequency is estimated from the given - spectrogram. - - References - ---------- - .. [1] Emilia Gómez, - "Tonal Description of Music Audio Signals", - PhD thesis, Universitat Pompeu Fabra, Barcelona, Spain, 2006. 
- - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, filterbank=HPCP, num_classes=HPCP.CLASSES, - fmin=HPCP.FMIN, fmax=HPCP.FMAX, fref=A4, window=HPCP.WINDOW, - **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, filterbank=HPCP, num_classes=HPCP.CLASSES, - fmin=HPCP.FMIN, fmax=HPCP.FMAX, fref=A4, window=HPCP.WINDOW, - **kwargs): - # check spectrogram type - if not isinstance(spectrogram, Spectrogram): - spectrogram = Spectrogram(spectrogram, **kwargs) - # spectrogram should not be filtered - if hasattr(spectrogram, 'filterbank'): - warnings.warn('Spectrogram should not be filtered.', - RuntimeWarning) - # reference frequency for the filterbank - if fref is None: - fref = spectrogram.tuning_frequency() - - # set filterbank - if issubclass(filterbank, Filterbank): - filterbank = filterbank(spectrogram.bin_frequencies, - num_classes=num_classes, fmin=fmin, - fmax=fmax, fref=fref, window=window) - if not isinstance(filterbank, Filterbank): - raise ValueError('not a Filterbank type or instance: %s' % - filterbank) - # filter the spectrogram - data = np.dot(spectrogram, filterbank) - # cast as PitchClassProfile - obj = np.asarray(data).view(cls) - # save additional attributes - obj.filterbank = filterbank - obj.spectrogram = spectrogram - # return the object - return obj - - -def _dcp_flatten(fs): - """Flatten spectrograms for DeepChromaProcessor. Needs to be outside - of the class in order to be picklable for multiprocessing. - """ - return np.concatenate(fs).reshape(len(fs), -1) - - -class DeepChromaProcessor(SequentialProcessor): - """ - Compute chroma vectors from an audio file using a deep neural network - that focuses on harmonically relevant spectral content. - - Parameters - ---------- - fmin : int, optional - Minimum frequency of the filterbank [Hz]. - fmax : float, optional - Maximum frequency of the filterbank [Hz]. - unique_filters : bool, optional - Indicate if the filterbank should contain only unique filters, i.e. - remove duplicate filters resulting from insufficient resolution at - low frequencies. - models : list of filenames, optional - List of model filenames. - - Notes - ----- - Provided model files must be compatible with the processing pipeline and - the values of `fmin`, `fmax`, and `unique_filters`. The - general use case for the `models` parameter is to use a specific - model instead of an ensemble of all models. - - The models shipped with madmom differ slightly from those presented in the - paper (less hidden units, narrower frequency band for spectrogram), but - achieve similar results. - - References - ---------- - .. [1] Filip Korzeniowski and Gerhard Widmer, - "Feature Learning for Chord Recognition: The Deep Chroma Extractor", - Proceedings of the 17th International Society for Music Information - Retrieval Conference (ISMIR), 2016. 
- - Examples - -------- - Extract a chroma vector using the deep chroma extractor: - - >>> dcp = DeepChromaProcessor() - >>> chroma = dcp('tests/data/audio/sample2.wav') - >>> chroma # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - array([[0.01317, 0.00721, ..., 0.00546, 0.00943], - [0.36809, 0.01314, ..., 0.02213, 0.01838], - ..., - [0.1534 , 0.06475, ..., 0.00896, 0.05789], - [0.17513, 0.0729 , ..., 0.00945, 0.06913]], dtype=float32) - >>> chroma.shape - (41, 12) - - """ - - def __init__(self, fmin=65, fmax=2100, unique_filters=True, models=None, - **kwargs): - from ..models import CHROMA_DNN - from ..audio.signal import SignalProcessor, FramedSignalProcessor - from ..audio.stft import ShortTimeFourierTransformProcessor - from ..audio.spectrogram import LogarithmicFilteredSpectrogramProcessor - from madmom.ml.nn import NeuralNetworkEnsemble - # signal pre-processing - sig = SignalProcessor(num_channels=1, sample_rate=44100) - frames = FramedSignalProcessor(frame_size=8192, fps=10) - stft = ShortTimeFourierTransformProcessor() # caching FFT window - spec = LogarithmicFilteredSpectrogramProcessor( - num_bands=24, fmin=fmin, fmax=fmax, unique_filters=unique_filters) - # split the spectrogram into overlapping frames - spec_signal = SignalProcessor(sample_rate=10) - spec_frames = FramedSignalProcessor(frame_size=15, hop_size=1, fps=10) - # predict chroma bins with a DNN - nn = NeuralNetworkEnsemble.load(models or CHROMA_DNN, **kwargs) - # instantiate a SequentialProcessor - super(DeepChromaProcessor, self).__init__([ - sig, frames, stft, spec, spec_signal, spec_frames, _dcp_flatten, nn - ]) - - -# Compressed Log Pitch (CLP) chroma stuff -CLP_FPS = 50 -CLP_FMIN = 27.5 -CLP_FMAX = 4200. -CLP_COMPRESSION_FACTOR = 100 -CLP_NORM = True -CLP_THRESHOLD = 0.001 - - -class CLPChroma(np.ndarray): - """ - Compressed Log Pitch (CLP) chroma as proposed in [1]_ and [2]_. - - Parameters - ---------- - data : str, Signal, or SemitoneBandpassSpectrogram - Input data. - fps : int, optional - Desired frame rate of the signal [Hz]. - fmin : float, optional - Lowest frequency of the spectrogram [Hz]. - fmax : float, optional - Highest frequency of the spectrogram [Hz]. - compression_factor : float, optional - Factor for compression of the energy. - norm : bool, optional - Normalize the energy of each frame to one (divide by the L2 norm). - threshold : float, optional - If the energy of a frame is below a threshold, the energy is equally - distributed among all chroma bins. - - Notes - ----- - The resulting chromagrams differ slightly from those obtained by the - MATLAB chroma toolbox [2]_ because of different resampling and filter - methods. - - References - ---------- - .. [1] Meinard Müller, - "Information retrieval for music and motion", Springer, 2007. - - .. [2] Meinard Müller and Sebastian Ewert, - "Chroma Toolbox: MATLAB Implementations for Extracting Variants of - Chroma-Based Audio Features", - Proceedings of the International Conference on Music Information - Retrieval (ISMIR), 2011. 
- - """ - - def __init__(self, data, fps=CLP_FPS, fmin=CLP_FMIN, fmax=CLP_FMAX, - compression_factor=CLP_COMPRESSION_FACTOR, norm=CLP_NORM, - threshold=CLP_THRESHOLD, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, data, fps=CLP_FPS, fmin=CLP_FMIN, fmax=CLP_FMAX, - compression_factor=CLP_COMPRESSION_FACTOR, norm=CLP_NORM, - threshold=CLP_THRESHOLD, **kwargs): - from madmom.audio.filters import hz2midi - # check input type - if not isinstance(data, SemitoneBandpassSpectrogram): - # compute SemitoneBandpassSpectrogram - data = SemitoneBandpassSpectrogram(data, fps=fps, fmin=fmin, - fmax=fmax) - # apply log compression - log_pitch_energy = np.log10(data * compression_factor + 1) - # compute chroma by adding up bins that correspond to the same - # pitch class - obj = np.zeros((log_pitch_energy.shape[0], 12)).view(cls) - midi_min = int(np.round(hz2midi(data.bin_frequencies[0]))) - for p in range(log_pitch_energy.shape[1]): - # make sure that p maps to the correct bin_label (midi_min=12 - # corresponds to a C and therefore chroma_idx=0) - chroma_idx = np.mod(midi_min + p, 12) - obj[:, chroma_idx] += log_pitch_energy[:, p] - obj.bin_labels = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', - 'G#', 'A', 'A#', 'B'] - obj.fps = fps - if norm: - # normalise the vectors according to the l2 norm - mean_energy = np.sqrt((obj ** 2).sum(axis=1)) - idx_below_threshold = np.where(mean_energy < threshold) - obj /= mean_energy[:, np.newaxis] - obj[idx_below_threshold, :] = np.ones((1, 12)) / np.sqrt(12) - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here - self.fps = getattr(obj, 'fps', None) - self.bin_labels = getattr(obj, 'bin_labels', None) - - -class CLPChromaProcessor(Processor): - """ - Compressed Log Pitch (CLP) Chroma Processor. - - Parameters - ---------- - fps : int, optional - Desired frame rate of the signal [Hz]. - fmin : float, optional - Lowest frequency of the spectrogram [Hz]. - fmax : float, optional - Highest frequency of the spectrogram [Hz]. - compression_factor : float, optional - Factor for compression of the energy. - norm : bool, optional - Normalize the energy of each frame to one (divide by the L2 norm). - threshold : float, optional - If the energy of a frame is below a threshold, the energy is equally - distributed among all chroma bins. - - """ - - def __init__(self, fps=CLP_FPS, fmin=CLP_FMIN, fmax=CLP_FMAX, - compression_factor=CLP_COMPRESSION_FACTOR, norm=CLP_NORM, - threshold=CLP_THRESHOLD, **kwargs): - # pylint: disable=unused-argument - self.fps = fps - self.fmin = fmin - self.fmax = fmax - self.compression_factor = compression_factor - self.norm = norm - self.threshold = threshold - - def process(self, data, **kwargs): - """ - Create a CLPChroma from the given data. - - Parameters - ---------- - data : Signal instance or filename - Data to be processed. - - Returns - ------- - clp : :class:`CLPChroma` instance - CLPChroma. 
- - """ - # update arguments passed to CLPChroma - args = dict(fps=self.fps, fmin=self.fmin, fmax=self.fmax, - compression_factor=self.compression_factor, - norm=self.norm, threshold=self.threshold) - args.update(kwargs) - # instantiate a CLPChroma - return CLPChroma(data, **args) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/focal_loss.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/focal_loss.py deleted file mode 100644 index 763bc93bd2575c49ca8ccf20996bbd92d1e0d1a4..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/focal_loss.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sigmoid_focal_loss_forward', 'sigmoid_focal_loss_backward', - 'softmax_focal_loss_forward', 'softmax_focal_loss_backward' -]) - - -class SigmoidFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSigmoidFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - output = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_forward( - input, target, weight, output, gamma=ctx.gamma, alpha=ctx.alpha) - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input, target, weight) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, target, weight = ctx.saved_tensors - - grad_input = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_backward( - input, - target, - weight, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input.size(0) - return grad_input, None, None, None, None, None - - -sigmoid_focal_loss = SigmoidFocalLossFunction.apply - - -class SigmoidFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SigmoidFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return sigmoid_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s - - -class SoftmaxFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - 
return g.op( - 'mmcv::MMCVSoftmaxFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - channel_stats, _ = torch.max(input, dim=1) - input_softmax = input - channel_stats.unsqueeze(1).expand_as(input) - input_softmax.exp_() - - channel_stats = input_softmax.sum(dim=1) - input_softmax /= channel_stats.unsqueeze(1).expand_as(input) - - output = input.new_zeros(input.size(0)) - ext_module.softmax_focal_loss_forward( - input_softmax, - target, - weight, - output, - gamma=ctx.gamma, - alpha=ctx.alpha) - - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input_softmax, target, weight) - return output - - @staticmethod - def backward(ctx, grad_output): - input_softmax, target, weight = ctx.saved_tensors - buff = input_softmax.new_zeros(input_softmax.size(0)) - grad_input = input_softmax.new_zeros(input_softmax.size()) - - ext_module.softmax_focal_loss_backward( - input_softmax, - target, - weight, - buff, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input_softmax.size(0) - return grad_input, None, None, None, None, None - - -softmax_focal_loss = SoftmaxFocalLossFunction.apply - - -class SoftmaxFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SoftmaxFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return softmax_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s diff --git a/spaces/MichaelWelsch/FreeVC/commons.py b/spaces/MichaelWelsch/FreeVC/commons.py deleted file mode 100644 index fc384912618494475bda9d68fa76530f4fe2a27b..0000000000000000000000000000000000000000 --- a/spaces/MichaelWelsch/FreeVC/commons.py +++ /dev/null @@ -1,171 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 
0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, 
-1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Mileena/anything-v3.0/README.md b/spaces/Mileena/anything-v3.0/README.md deleted file mode 100644 index 15176bed26d36b4f9566c7102a5655e310f76036..0000000000000000000000000000000000000000 --- a/spaces/Mileena/anything-v3.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anything V3.0 -emoji: 🏃 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -duplicated_from: akhaliq/anything-v3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Miuzarte/SUI-svc-4.0/preprocess_hubert_f0.py b/spaces/Miuzarte/SUI-svc-4.0/preprocess_hubert_f0.py deleted file mode 100644 index 29a1c7ee028fefbe7905d235447d98cda34ce840..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/preprocess_hubert_f0.py +++ /dev/null @@ -1,62 +0,0 @@ -import math -import multiprocessing -import os -import argparse -from random import shuffle - -import torch -from glob import glob -from tqdm import tqdm - -import utils -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import librosa -import numpy as np - -hps = utils.get_hparams_from_file("configs/config.json") -sampling_rate = hps.data.sampling_rate -hop_length = hps.data.hop_length - - -def process_one(filename, hmodel): - # print(filename) - wav, sr = librosa.load(filename, sr=sampling_rate) - soft_path = filename + ".soft.pt" - if not os.path.exists(soft_path): - devive = torch.device("cuda" if torch.cuda.is_available() else "cpu") - wav16k = librosa.resample(wav, orig_sr=sampling_rate, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(devive) - c = utils.get_hubert_content(hmodel, wav_16k_tensor=wav16k) - torch.save(c.cpu(), soft_path) - f0_path = filename + ".f0.npy" - if not os.path.exists(f0_path): - f0 = utils.compute_f0_dio(wav, sampling_rate=sampling_rate, hop_length=hop_length) - np.save(f0_path, f0) - - -def process_batch(filenames): - print("Loading hubert for content...") - device = "cuda" if torch.cuda.is_available() else "cpu" - hmodel = utils.get_hubert_model().to(device) - print("Loaded hubert.") - for filename in tqdm(filenames): - process_one(filename, hmodel) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in_dir", type=str, default="dataset/44k", help="path to input dir") - - args = parser.parse_args() - filenames = glob(f'{args.in_dir}/*/*.wav', recursive=True) # [:10] - shuffle(filenames) - multiprocessing.set_start_method('spawn') - - num_processes = 1 - chunk_size = int(math.ceil(len(filenames) / num_processes)) - chunks = [filenames[i:i + chunk_size] for i in 
range(0, len(filenames), chunk_size)] - print([len(c) for c in chunks]) - processes = [multiprocessing.Process(target=process_batch, args=(chunk,)) for chunk in chunks] - for p in processes: - p.start() diff --git a/spaces/Mozira/voice-models/infer_pack/models_onnx.py b/spaces/Mozira/voice-models/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/Mozira/voice-models/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * 
x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = 
self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return 
sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = 
self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, 
- # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - 
fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Nightwing25/AICoverGen/src/webui.py b/spaces/Nightwing25/AICoverGen/src/webui.py deleted file mode 100644 index 2ebea1d6c7ff0880a04ed3d8f928e38a14a0e861..0000000000000000000000000000000000000000 --- a/spaces/Nightwing25/AICoverGen/src/webui.py +++ /dev/null @@ -1,326 +0,0 @@ -import json -import os -import shutil -import urllib.request -import zipfile -from argparse import ArgumentParser - -import gradio as gr - -from main import song_cover_pipeline - -BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - -mdxnet_models_dir = os.path.join(BASE_DIR, 'mdxnet_models') -rvc_models_dir = os.path.join(BASE_DIR, 'rvc_models') -output_dir = os.path.join(BASE_DIR, 'song_output') - - -def get_current_models(models_dir): - models_list = os.listdir(models_dir) - items_to_remove = ['hubert_base.pt', 'MODELS.txt', 'public_models.json', 'rmvpe.pt'] - return [item for item in models_list if item not in items_to_remove] - - -def update_models_list(): - models_l = get_current_models(rvc_models_dir) - return gr.Dropdown.update(choices=models_l) - - -def load_public_models(): - models_table = [] - for model in public_models['voice_models']: - if not model['name'] in voice_models: - model = [model['name'], model['description'], model['credit'], model['url'], ', '.join(model['tags'])] - models_table.append(model) - - tags = list(public_models['tags'].keys()) - return gr.DataFrame.update(value=models_table), gr.CheckboxGroup.update(choices=tags) - - -def extract_zip(extraction_folder, zip_name): - os.makedirs(extraction_folder) - with zipfile.ZipFile(zip_name, 'r') as zip_ref: - zip_ref.extractall(extraction_folder) - os.remove(zip_name) - - index_filepath, model_filepath = None, None - for root, dirs, files in os.walk(extraction_folder): - for name in files: - if name.endswith('.index') and os.stat(os.path.join(root, name)).st_size > 1024 * 100: - index_filepath = os.path.join(root, name) - - if name.endswith('.pth') and os.stat(os.path.join(root, name)).st_size > 1024 * 1024 * 
40: - model_filepath = os.path.join(root, name) - - if not model_filepath: - raise gr.Error(f'No .pth model file was found in the extracted zip. Please check {extraction_folder}.') - - # move model and index file to extraction folder - os.rename(model_filepath, os.path.join(extraction_folder, os.path.basename(model_filepath))) - if index_filepath: - os.rename(index_filepath, os.path.join(extraction_folder, os.path.basename(index_filepath))) - - # remove any unnecessary nested folders - for filepath in os.listdir(extraction_folder): - if os.path.isdir(os.path.join(extraction_folder, filepath)): - shutil.rmtree(os.path.join(extraction_folder, filepath)) - - -def download_online_model(url, dir_name, progress=gr.Progress()): - try: - progress(0, desc=f'[~] Downloading voice model with name {dir_name}...') - zip_name = url.split('/')[-1] - extraction_folder = os.path.join(rvc_models_dir, dir_name) - if os.path.exists(extraction_folder): - raise gr.Error(f'Voice model directory {dir_name} already exists! Choose a different name for your voice model.') - - if 'pixeldrain.com' in url: - url = f'https://pixeldrain.com/api/file/{zip_name}' - - urllib.request.urlretrieve(url, zip_name) - - progress(0.5, desc='[~] Extracting zip...') - extract_zip(extraction_folder, zip_name) - return f'[+] {dir_name} Model successfully downloaded!' - - except Exception as e: - raise gr.Error(str(e)) - - -def upload_local_model(zip_path, dir_name, progress=gr.Progress()): - try: - extraction_folder = os.path.join(rvc_models_dir, dir_name) - if os.path.exists(extraction_folder): - raise gr.Error(f'Voice model directory {dir_name} already exists! Choose a different name for your voice model.') - - zip_name = zip_path.name - progress(0.5, desc='[~] Extracting zip...') - extract_zip(extraction_folder, zip_name) - return f'[+] {dir_name} Model successfully uploaded!' 
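-    # any failure above (bad zip, missing .pth model file, duplicate model name) is re-raised below as gr.Error so it surfaces in the WebUI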
- - except Exception as e: - raise gr.Error(str(e)) - - -def filter_models(tags, query): - models_table = [] - - # no filter - if len(tags) == 0 and len(query) == 0: - for model in public_models['voice_models']: - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - # filter based on tags and query - elif len(tags) > 0 and len(query) > 0: - for model in public_models['voice_models']: - if all(tag in model['tags'] for tag in tags): - model_attributes = f"{model['name']} {model['description']} {model['credit']} {' '.join(model['tags'])}".lower() - if query.lower() in model_attributes: - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - # filter based on only tags - elif len(tags) > 0: - for model in public_models['voice_models']: - if all(tag in model['tags'] for tag in tags): - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - # filter based on only query - else: - for model in public_models['voice_models']: - model_attributes = f"{model['name']} {model['description']} {model['credit']} {' '.join(model['tags'])}".lower() - if query.lower() in model_attributes: - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - return gr.DataFrame.update(value=models_table) - - -def pub_dl_autofill(pub_models, event: gr.SelectData): - return gr.Text.update(value=pub_models.loc[event.index[0], 'URL']), gr.Text.update(value=pub_models.loc[event.index[0], 'Model Name']) - - -def swap_visibility(): - return gr.update(visible=True), gr.update(visible=False), gr.update(value=''), gr.update(value=None) - - -def process_file_upload(file): - return file.name, gr.update(value=file.name) - - -def show_hop_slider(pitch_detection_algo): - if pitch_detection_algo == 'mangio-crepe': - return gr.update(visible=True) - else: - return gr.update(visible=False) - - -if __name__ == '__main__': - parser = ArgumentParser(description='Generate a AI cover song in the song_output/id directory.', add_help=True) - parser.add_argument("--share", action="store_true", dest="share_enabled", default=False, help="Enable sharing") - parser.add_argument("--listen", action="store_true", default=False, help="Make the WebUI reachable from your local network.") - parser.add_argument('--listen-host', type=str, help='The hostname that the server will use.') - parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') - args = parser.parse_args() - - voice_models = get_current_models(rvc_models_dir) - with open(os.path.join(rvc_models_dir, 'public_models.json'), encoding='utf8') as infile: - public_models = json.load(infile) - - with gr.Blocks(title='AICoverGenWebUI') as app: - - gr.Label('AICoverGen WebUI created with ❤️', show_label=False) - - gr.Markdown("AI-Cover-Gen-No-UI [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ardha27/AICoverGen-NoUI-Colab/blob/main/CoverGen_No_UI.ipynb)") - gr.Markdown("Duplicate the space for use in private") - gr.Markdown("[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/r3gm/AICoverGen?duplicate=true)\n\n") - - # main tab - with gr.Tab("Generate"): - - with gr.Accordion('Main Options'): - with gr.Row(): - with gr.Column(): - rvc_model = gr.Dropdown(voice_models, label='Voice 
Models', info='Models folder "AICoverGen --> rvc_models". After new models are added into this folder, click the refresh button') - ref_btn = gr.Button('Refresh Models 🔁', variant='primary') - - with gr.Column() as yt_link_col: - song_input = gr.Text(label='Song input', info='Link to a song on YouTube or full path to a local file. For file upload, click the button below. Example: https://www.youtube.com/watch?v=M-mtdN6R3bQ') - show_file_upload_button = gr.Button('Upload file instead') - - with gr.Column(visible=False) as file_upload_col: - local_file = gr.File(label='Audio file') - song_input_file = gr.UploadButton('Upload 📂', file_types=['audio'], variant='primary') - show_yt_link_button = gr.Button('Paste YouTube link/Path to local file instead') - song_input_file.upload(process_file_upload, inputs=[song_input_file], outputs=[local_file, song_input]) - - with gr.Column(): - pitch = gr.Slider(-3, 3, value=0, step=1, label='Pitch Change (Vocals ONLY)', info='Generally, use 1 for male to female conversions and -1 for vice-versa. (Octaves)') - pitch_all = gr.Slider(-12, 12, value=0, step=1, label='Overall Pitch Change', info='Changes pitch/key of vocals and instrumentals together. Altering this slightly reduces sound quality. (Semitones)') - show_file_upload_button.click(swap_visibility, outputs=[file_upload_col, yt_link_col, song_input, local_file]) - show_yt_link_button.click(swap_visibility, outputs=[yt_link_col, file_upload_col, song_input, local_file]) - - with gr.Accordion('Voice conversion options', open=False): - with gr.Row(): - index_rate = gr.Slider(0, 1, value=0.5, label='Index Rate', info="Controls how much of the AI voice's accent to keep in the vocals") - filter_radius = gr.Slider(0, 7, value=3, step=1, label='Filter radius', info='If >=3: apply median filtering median filtering to the harvested pitch results. Can reduce breathiness') - rms_mix_rate = gr.Slider(0, 1, value=0.25, label='RMS mix rate', info="Control how much to mimic the original vocal's loudness (0) or a fixed loudness (1)") - protect = gr.Slider(0, 0.5, value=0.33, label='Protect rate', info='Protect voiceless consonants and breath sounds. Set to 0.5 to disable.') - with gr.Column(): - f0_method = gr.Dropdown(['rmvpe', 'mangio-crepe'], value='rmvpe', label='Pitch detection algorithm', info='Best option is rmvpe (clarity in vocals), then mangio-crepe (smoother vocals)') - crepe_hop_length = gr.Slider(32, 320, value=128, step=1, visible=False, label='Crepe hop length', info='Lower values leads to longer conversions and higher risk of voice cracks, but better pitch accuracy.') - f0_method.change(show_hop_slider, inputs=f0_method, outputs=crepe_hop_length) - keep_files = gr.Checkbox(label='Keep intermediate files', info='Keep all audio files generated in the song_output/id directory, e.g. Isolated Vocals/Instrumentals. 
Leave unchecked to save space') - - with gr.Accordion('Audio mixing options', open=False): - gr.Markdown('### Volume Change (decibels)') - with gr.Row(): - main_gain = gr.Slider(-20, 20, value=0, step=1, label='Main Vocals') - backup_gain = gr.Slider(-20, 20, value=0, step=1, label='Backup Vocals') - inst_gain = gr.Slider(-20, 20, value=0, step=1, label='Music') - - gr.Markdown('### Reverb Control on AI Vocals') - with gr.Row(): - reverb_rm_size = gr.Slider(0, 1, value=0.15, label='Room size', info='The larger the room, the longer the reverb time') - reverb_wet = gr.Slider(0, 1, value=0.2, label='Wetness level', info='Level of AI vocals with reverb') - reverb_dry = gr.Slider(0, 1, value=0.8, label='Dryness level', info='Level of AI vocals without reverb') - reverb_damping = gr.Slider(0, 1, value=0.7, label='Damping level', info='Absorption of high frequencies in the reverb') - - gr.Markdown('### Audio Output Format') - output_format = gr.Dropdown(['mp3', 'wav'], value='mp3', label='Output file type', info='mp3: small file size, decent quality. wav: Large file size, best quality') - - with gr.Row(): - clear_btn = gr.ClearButton(value='Clear', components=[song_input, rvc_model, keep_files, local_file]) - generate_btn = gr.Button("Generate", variant='primary') - ai_cover = gr.Audio(label='AI Cover', show_share_button=False) - - ref_btn.click(update_models_list, None, outputs=rvc_model) - is_webui = gr.Number(value=1, visible=False) - generate_btn.click(song_cover_pipeline, - inputs=[song_input, rvc_model, pitch, keep_files, is_webui, main_gain, backup_gain, - inst_gain, index_rate, filter_radius, rms_mix_rate, f0_method, crepe_hop_length, - protect, pitch_all, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping, - output_format], - outputs=[ai_cover]) - clear_btn.click(lambda: [0, 0, 0, 0, 0.5, 3, 0.25, 0.33, 'rmvpe', 128, 0, 0.15, 0.2, 0.8, 0.7, 'mp3', None], - outputs=[pitch, main_gain, backup_gain, inst_gain, index_rate, filter_radius, rms_mix_rate, - protect, f0_method, crepe_hop_length, pitch_all, reverb_rm_size, reverb_wet, - reverb_dry, reverb_damping, output_format, ai_cover]) - - # Download tab - with gr.Tab('Download model'): - - with gr.Tab('From HuggingFace/Pixeldrain URL'): - with gr.Row(): - model_zip_link = gr.Text(label='Download link to model', info='Should be a zip file containing a .pth model file and an optional .index file.') - model_name = gr.Text(label='Name your model', info='Give your new model a unique name from your other voice models.') - - with gr.Row(): - download_btn = gr.Button('Download 🌐', variant='primary', scale=19) - dl_output_message = gr.Text(label='Output Message', interactive=False, scale=20) - - download_btn.click(download_online_model, inputs=[model_zip_link, model_name], outputs=dl_output_message) - - gr.Markdown('## Input Examples') - gr.Examples( - [ - ['https://huggingface.co/phant0m4r/LiSA/resolve/main/LiSA.zip', 'Lisa'], - ['https://pixeldrain.com/u/3tJmABXA', 'Gura'], - ['https://huggingface.co/Kit-Lemonfoot/kitlemonfoot_rvc_models/resolve/main/AZKi%20(Hybrid).zip', 'Azki'] - ], - [model_zip_link, model_name], - [], - download_online_model, - ) - - with gr.Tab('From Public Index'): - - gr.Markdown('## How to use') - gr.Markdown('- Click Initialize public models table') - gr.Markdown('- Filter models using tags or search bar') - gr.Markdown('- Select a row to autofill the download link and model name') - gr.Markdown('- Click Download') - - with gr.Row(): - pub_zip_link = gr.Text(label='Download link to model') - pub_model_name = 
gr.Text(label='Model name') - - with gr.Row(): - download_pub_btn = gr.Button('Download 🌐', variant='primary', scale=19) - pub_dl_output_message = gr.Text(label='Output Message', interactive=False, scale=20) - - filter_tags = gr.CheckboxGroup(value=[], label='Show voice models with tags', choices=[]) - search_query = gr.Text(label='Search') - load_public_models_button = gr.Button(value='Initialize public models table', variant='primary') - - public_models_table = gr.DataFrame(value=[], headers=['Model Name', 'Description', 'Credit', 'URL', 'Tags'], label='Available Public Models', interactive=False) - public_models_table.select(pub_dl_autofill, inputs=[public_models_table], outputs=[pub_zip_link, pub_model_name]) - load_public_models_button.click(load_public_models, outputs=[public_models_table, filter_tags]) - search_query.change(filter_models, inputs=[filter_tags, search_query], outputs=public_models_table) - filter_tags.change(filter_models, inputs=[filter_tags, search_query], outputs=public_models_table) - download_pub_btn.click(download_online_model, inputs=[pub_zip_link, pub_model_name], outputs=pub_dl_output_message) - - # Upload tab - with gr.Tab('Upload model'): - gr.Markdown('## Upload locally trained RVC v2 model and index file') - gr.Markdown('- Find model file (weights folder) and optional index file (logs/[name] folder)') - gr.Markdown('- Compress files into zip file') - gr.Markdown('- Upload zip file and give unique name for voice') - gr.Markdown('- Click Upload model') - - with gr.Row(): - with gr.Column(): - zip_file = gr.File(label='Zip file') - - local_model_name = gr.Text(label='Model name') - - with gr.Row(): - model_upload_button = gr.Button('Upload model', variant='primary', scale=19) - local_upload_output_message = gr.Text(label='Output Message', interactive=False, scale=20) - model_upload_button.click(upload_local_model, inputs=[zip_file, local_model_name], outputs=local_upload_output_message) - - app.launch( - share=args.share_enabled, - enable_queue=True, - server_name=None if not args.listen else (args.listen_host or '0.0.0.0'), - server_port=args.listen_port, - ) diff --git a/spaces/OAOA/DifFace/basicsr/utils/dist_util.py b/spaces/OAOA/DifFace/basicsr/utils/dist_util.py deleted file mode 100644 index 0fab887b2cb1ce8533d2e8fdee72ae0c24f68fd0..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/utils/dist_util.py +++ /dev/null @@ -1,82 +0,0 @@ -# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/dist_utils.py # noqa: E501 -import functools -import os -import subprocess -import torch -import torch.distributed as dist -import torch.multiprocessing as mp - - -def init_dist(launcher, backend='nccl', **kwargs): - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend, **kwargs): - rank = int(os.environ['RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend, port=None): - """Initialize slurm distributed training environment. - - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. 
- - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. - """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput(f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - -def get_dist_info(): - if dist.is_available(): - initialized = dist.is_initialized() - else: - initialized = False - if initialized: - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def master_only(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper diff --git a/spaces/OAOA/DifFace/facelib/utils/__init__.py b/spaces/OAOA/DifFace/facelib/utils/__init__.py deleted file mode 100644 index f03b1c2bafcd7759cb7e8722a0c6715f201a46dc..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/facelib/utils/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .face_utils import align_crop_face_landmarks, compute_increased_bbox, get_valid_bboxes, paste_face_back -from .misc import img2tensor, load_file_from_url, download_pretrained_models, scandir - -__all__ = [ - 'align_crop_face_landmarks', 'compute_increased_bbox', 'get_valid_bboxes', 'load_file_from_url', - 'download_pretrained_models', 'paste_face_back', 'img2tensor', 'scandir' -] diff --git a/spaces/OAOA/DifFace/models/solvers.py b/spaces/OAOA/DifFace/models/solvers.py deleted file mode 100644 index 21baa6863f669dc993d4911d5f8cbc0eebb649fd..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/models/solvers.py +++ /dev/null @@ -1,132 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 -*- -# Power by Zongsheng Yue 2022-06-09 14:59:55 - -import torch -import random -import numpy as np -from einops import rearrange - -def batch_inpainging_from_grad(im_in, mask, gradx, grady): - ''' - Recovering from gradient for batch data (torch tensro). 
- Input: - im_in: N x c x h x w, torch tensor, masked image - mask: N x 1 x h x w, torch tensor - gradx, grady: N x c x h x w, torch tensor, image gradient - ''' - im_out = torch.zeros_like(im_in.data) - for ii in range(im_in.shape[0]): - im_current, gradx_current, grady_current = [rearrange(x[ii,].cpu().numpy(), 'c h w -> h w c') - for x in [im_in, gradx, grady]] - mask_current = mask[ii, 0,].cpu().numpy() - out_current = inpainting_from_grad(im_current, mask_current, gradx_current, grady_current) - im_out[ii,] = torch.from_numpy(rearrange(out_current, 'h w c -> c h w')).to( - device=im_in.device, - dtype=im_in.dtype - ) - return im_out - -def inpainting_from_grad(im_in, mask, gradx, grady): - ''' - Input: - im_in: h x w x c, masked image, numpy array - mask: h x w, image mask, 1 represents missing value - gradx: h x w x c, gradient along x-axis, numpy array - grady: h x w x c, gradient along y-axis, numpy array - Output: - im_out: recoverd image - ''' - h, w = im_in.shape[:2] - counts_h = np.sum(1-mask, axis=0, keepdims=False) - counts_w = np.sum(1-mask, axis=1, keepdims=False) - if np.any(counts_h[1:-1,] == h): - idx = find_first_index(counts_h[1:-1,], h) + 1 - im_out = fill_image_from_gradx(im_in, mask, gradx, idx) - elif np.any(counts_w[1:-1,] == w): - idx = find_first_index(counts_w[1:-1,], w) + 1 - im_out = inpainting_from_grad(im_in.T, mask.T, gradx.T, idx) - else: - idx = random.choices(list(range(1,w-1)), k=1, weights=counts_h[1:-1])[0] - line = fill_line(im_in[:, idx, ], mask[:, idx,], grady[:, idx,]) - im_in[:, idx,] = line - im_out = fill_image_from_gradx(im_in, mask, gradx, idx) - if im_in.ndim > mask.ndim: - mask = mask[:, :, None] - im_out = im_in + im_out * mask - return im_out - -def fill_image_from_gradx(im_in, mask, gradx, idx): - init = np.zeros_like(im_in) - init[:, idx,] = im_in[:, idx,] - right = np.cumsum(init[:, idx:-1, ] + gradx[:, idx+1:, ], axis=1) - left = np.cumsum( - init[:, idx:0:-1, ] - gradx[:, idx:0:-1, ], - axis=1 - )[:, ::-1] - center = im_in[:, idx, ][:, None] # h x 1 x 3 - im_out = np.concatenate((left, center, right), axis=1) - return im_out - -def fill_line(xx, mm, grad): - ''' - Fill one line from grad. 
- Input: - xx: n x c array, masked vector - mm: (n,) array, mask, 1 represent missing value - grad: (n,) array - ''' - n = xx.shape[0] - assert mm.sum() < n - if mm.sum() == 0: - return xx - else: - idx1 = find_first_index(mm, 1) - if idx1 == 0: - idx2 = find_first_index(mm, 0) - subx = xx[idx2::-1,].copy() - subgrad = grad[idx2::-1, ].copy() - subx -= subgrad - xx[:idx2,] = np.cumsum(subx, axis=0)[idx2-1::-1,] - mm[idx1:idx2,] = 0 - else: - idx2 = find_first_index(mm[idx1:,], 0) + idx1 - subx = xx[idx1-1:idx2-1,].copy() - subgrad = grad[idx1:idx2,].copy() - subx += subgrad - xx[idx1:idx2,] = np.cumsum(subx, axis=0) - mm[idx1:idx2,] = 0 - return fill_line(xx, mm, grad) - -def find_first_index(mm, value): - ''' - Input: - mm: (n, ) array - value: scalar - ''' - try: - out = next((idx for idx, val in np.ndenumerate(mm) if val == value))[0] - except StopIteration: - out = mm.shape[0] - return out - -if __name__ == '__main__': - import sys - from pathlib import Path - sys.path.append(str(Path(__file__).resolve().parents[1])) - from utils import util_image - from datapipe.masks.train import process_mask - - # mask_file_names = [x for x in Path('../lama/LaMa_test_images').glob('*mask*.png')] - mask_file_names = [x for x in Path('./testdata/inpainting/val/places/').glob('*mask*.png')] - file_names = [x.parents[0]/(x.stem.rsplit('_mask',1)[0]+'.png') for x in mask_file_names] - - for im_path, mask_path in zip(file_names, mask_file_names): - im = util_image.imread(im_path, chn='rgb', dtype='float32') - mask = process_mask(util_image.imread(mask_path, chn='rgb', dtype='float32')[:, :, 0]) - grad_dict = util_image.imgrad(im) - - im_masked = im * (1 - mask[:, :, None]) - im_recover = inpainting_from_grad(im_masked, mask, grad_dict['gradx'], grad_dict['grady']) - error_max = np.abs(im_recover -im).max() - print('Error Max: {:.2e}'.format(error_max)) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/laser/laser_src/laser_lstm.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/laser/laser_src/laser_lstm.py deleted file mode 100644 index 10df90e002d5a7dd74a571dbc3b328c130c57a0a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/laser/laser_src/laser_lstm.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
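-# Model overview: LSTMEncoder embeds the source tokens, runs an (optionally bidirectional) LSTM, masks padded positions with -inf and max-pools over time into a fixed-size sentence embedding (sentemb); LSTMDecoder concatenates that embedding (plus an optional target-language embedding) with the token embedding at every step before projecting to the output vocabulary.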
- -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import options, utils - -from fairseq.models import ( - FairseqEncoder, - FairseqIncrementalDecoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) - - -@register_model("laser_lstm") -class LSTMModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens=None, - tgt_tokens=None, - tgt_lengths=None, - target_language_id=None, - dataset_name="", - ): - assert target_language_id is not None - - src_encoder_out = self.encoder(src_tokens, src_lengths, dataset_name) - return self.decoder( - prev_output_tokens, src_encoder_out, lang_id=target_language_id - ) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-hidden-size", type=int, metavar="N", help="encoder hidden size" - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="number of encoder layers" - ) - parser.add_argument( - "--encoder-bidirectional", - action="store_true", - help="make all layers of encoder bidirectional", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-hidden-size", type=int, metavar="N", help="decoder hidden size" - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="number of decoder layers" - ) - parser.add_argument( - "--decoder-out-embed-dim", - type=int, - metavar="N", - help="decoder output embedding dimension", - ) - parser.add_argument( - "--decoder-zero-init", - type=str, - metavar="BOOL", - help="initialize the decoder hidden/cell state to zero", - ) - parser.add_argument( - "--decoder-lang-embed-dim", - type=int, - metavar="N", - help="decoder language embedding dimension", - ) - parser.add_argument( - "--fixed-embeddings", - action="store_true", - help="keep embeddings fixed (ENCODER ONLY)", - ) # TODO Also apply to decoder embeddings? 
- - # Granular dropout settings (if not specified these default to --dropout) - parser.add_argument( - "--encoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for encoder input embedding", - ) - parser.add_argument( - "--encoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for encoder output", - ) - parser.add_argument( - "--decoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for decoder input embedding", - ) - parser.add_argument( - "--decoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for decoder output", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted (in case there are any new ones) - base_architecture(args) - - def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(embed_path) - utils.print_embed_overlap(embed_dict, dictionary) - return utils.load_embedding(embed_dict, dictionary, embed_tokens) - - pretrained_encoder_embed = None - if args.encoder_embed_path: - pretrained_encoder_embed = load_pretrained_embedding_from_file( - args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim - ) - pretrained_decoder_embed = None - if args.decoder_embed_path: - pretrained_decoder_embed = load_pretrained_embedding_from_file( - args.decoder_embed_path, task.target_dictionary, args.decoder_embed_dim - ) - - num_langs = task.num_tasks if hasattr(task, "num_tasks") else 0 - - encoder = LSTMEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_size=args.encoder_hidden_size, - num_layers=args.encoder_layers, - dropout_in=args.encoder_dropout_in, - dropout_out=args.encoder_dropout_out, - bidirectional=args.encoder_bidirectional, - pretrained_embed=pretrained_encoder_embed, - fixed_embeddings=args.fixed_embeddings, - ) - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - zero_init=options.eval_bool(args.decoder_zero_init), - encoder_embed_dim=args.encoder_embed_dim, - encoder_output_units=encoder.output_units, - pretrained_embed=pretrained_decoder_embed, - num_langs=num_langs, - lang_embed_dim=args.decoder_lang_embed_dim, - ) - return cls(encoder, decoder) - - -class LSTMEncoder(FairseqEncoder): - """LSTM encoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - bidirectional=False, - left_pad=True, - pretrained_embed=None, - padding_value=0.0, - fixed_embeddings=False, - ): - super().__init__(dictionary) - self.num_layers = num_layers - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.bidirectional = bidirectional - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - else: - self.embed_tokens = pretrained_embed - if fixed_embeddings: - self.embed_tokens.weight.requires_grad = False - - self.lstm = LSTM( - input_size=embed_dim, - hidden_size=hidden_size, - 
num_layers=num_layers, - dropout=self.dropout_out if num_layers > 1 else 0.0, - bidirectional=bidirectional, - ) - self.left_pad = left_pad - self.padding_value = padding_value - - self.output_units = hidden_size - if bidirectional: - self.output_units *= 2 - - def forward(self, src_tokens, src_lengths, dataset_name): - if self.left_pad: - # convert left-padding to right-padding - src_tokens = utils.convert_padding_direction( - src_tokens, - self.padding_idx, - left_to_right=True, - ) - - bsz, seqlen = src_tokens.size() - - # embed tokens - x = self.embed_tokens(src_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # pack embedded source tokens into a PackedSequence - try: - packed_x = nn.utils.rnn.pack_padded_sequence(x, src_lengths.data.tolist()) - except BaseException: - raise Exception(f"Packing failed in dataset {dataset_name}") - - # apply LSTM - if self.bidirectional: - state_size = 2 * self.num_layers, bsz, self.hidden_size - else: - state_size = self.num_layers, bsz, self.hidden_size - h0 = x.data.new(*state_size).zero_() - c0 = x.data.new(*state_size).zero_() - packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, _ = nn.utils.rnn.pad_packed_sequence( - packed_outs, padding_value=self.padding_value - ) - x = F.dropout(x, p=self.dropout_out, training=self.training) - assert list(x.size()) == [seqlen, bsz, self.output_units] - - if self.bidirectional: - - def combine_bidir(outs): - return torch.cat( - [ - torch.cat([outs[2 * i], outs[2 * i + 1]], dim=0).view( - 1, bsz, self.output_units - ) - for i in range(self.num_layers) - ], - dim=0, - ) - - final_hiddens = combine_bidir(final_hiddens) - final_cells = combine_bidir(final_cells) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() - - # Set padded outputs to -inf so they are not selected by max-pooling - padding_mask = src_tokens.eq(self.padding_idx).t().unsqueeze(-1) - if padding_mask.any(): - x = x.float().masked_fill_(padding_mask, float("-inf")).type_as(x) - - # Build the sentence embedding by max-pooling over the encoder outputs - sentemb = x.max(dim=0)[0] - - return { - "sentemb": sentemb, - "encoder_out": (x, final_hiddens, final_cells), - "encoder_padding_mask": encoder_padding_mask - if encoder_padding_mask.any() - else None, - } - - def reorder_encoder_out(self, encoder_out_dict, new_order): - encoder_out_dict["sentemb"] = encoder_out_dict["sentemb"].index_select( - 0, new_order - ) - encoder_out_dict["encoder_out"] = tuple( - eo.index_select(1, new_order) for eo in encoder_out_dict["encoder_out"] - ) - if encoder_out_dict["encoder_padding_mask"] is not None: - encoder_out_dict["encoder_padding_mask"] = encoder_out_dict[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out_dict - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return int(1e5) # an arbitrary large number - - -class LSTMDecoder(FairseqIncrementalDecoder): - """LSTM decoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - out_embed_dim=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - zero_init=False, - encoder_embed_dim=512, - encoder_output_units=512, - pretrained_embed=None, - num_langs=1, - lang_embed_dim=0, - ): - super().__init__(dictionary) - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - padding_idx = 
dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.layers = nn.ModuleList( - [ - LSTMCell( - input_size=encoder_output_units + embed_dim + lang_embed_dim - if layer == 0 - else hidden_size, - hidden_size=hidden_size, - ) - for layer in range(num_layers) - ] - ) - if hidden_size != out_embed_dim: - self.additional_fc = Linear(hidden_size, out_embed_dim) - self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out) - - if zero_init: - self.sentemb2init = None - else: - self.sentemb2init = Linear( - encoder_output_units, 2 * num_layers * hidden_size - ) - - if lang_embed_dim == 0: - self.embed_lang = None - else: - self.embed_lang = nn.Embedding(num_langs, lang_embed_dim) - nn.init.uniform_(self.embed_lang.weight, -0.1, 0.1) - - def forward( - self, prev_output_tokens, encoder_out_dict, incremental_state=None, lang_id=0 - ): - sentemb = encoder_out_dict["sentemb"] - encoder_out = encoder_out_dict["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - # get outputs from encoder - encoder_outs, _, _ = encoder_out[:3] - srclen = encoder_outs.size(0) - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # embed language identifier - if self.embed_lang is not None: - lang_ids = prev_output_tokens.data.new_full((bsz,), lang_id) - langemb = self.embed_lang(lang_ids) - # TODO Should we dropout here??? - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells, input_feed = cached_state - else: - num_layers = len(self.layers) - if self.sentemb2init is None: - prev_hiddens = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - prev_cells = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - else: - init = self.sentemb2init(sentemb) - prev_hiddens = [ - init[:, (2 * i) * self.hidden_size : (2 * i + 1) * self.hidden_size] - for i in range(num_layers) - ] - prev_cells = [ - init[ - :, - (2 * i + 1) * self.hidden_size : (2 * i + 2) * self.hidden_size, - ] - for i in range(num_layers) - ] - input_feed = x.data.new(bsz, self.hidden_size).zero_() - - attn_scores = x.data.new(srclen, seqlen, bsz).zero_() - outs = [] - for j in range(seqlen): - if self.embed_lang is None: - input = torch.cat((x[j, :, :], sentemb), dim=1) - else: - input = torch.cat((x[j, :, :], sentemb, langemb), dim=1) - - for i, rnn in enumerate(self.layers): - # recurrent cell - hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i])) - - # hidden state becomes the input to the next layer - input = F.dropout(hidden, p=self.dropout_out, training=self.training) - - # save state for next time step - prev_hiddens[i] = hidden - prev_cells[i] = cell - - out = hidden - out = F.dropout(out, p=self.dropout_out, training=self.training) - - # input feeding - input_feed = out - - # save final output - outs.append(out) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, - incremental_state, - "cached_state", - (prev_hiddens, prev_cells, input_feed), - ) - - # collect outputs across time steps - x = torch.cat(outs, 
dim=0).view(seqlen, bsz, self.hidden_size) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # srclen x tgtlen x bsz -> bsz x tgtlen x srclen - attn_scores = attn_scores.transpose(0, 2) - - # project back to size of vocabulary - if hasattr(self, "additional_fc"): - x = self.additional_fc(x) - x = F.dropout(x, p=self.dropout_out, training=self.training) - x = self.fc_out(x) - - return x, attn_scores - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return int(1e5) # an arbitrary large number - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.uniform_(m.weight, -0.1, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def LSTM(input_size, hidden_size, **kwargs): - m = nn.LSTM(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def LSTMCell(input_size, hidden_size, **kwargs): - m = nn.LSTMCell(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0): - """Weight-normalized Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - m.weight.data.uniform_(-0.1, 0.1) - if bias: - m.bias.data.uniform_(-0.1, 0.1) - return m - - -@register_model_architecture("laser_lstm", "laser_lstm") -def base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_hidden_size = getattr( - args, "encoder_hidden_size", args.encoder_embed_dim - ) - args.encoder_layers = getattr(args, "encoder_layers", 1) - args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_hidden_size = getattr( - args, "decoder_hidden_size", args.decoder_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 1) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - args.decoder_zero_init = getattr(args, "decoder_zero_init", "0") - args.decoder_lang_embed_dim = getattr(args, "decoder_lang_embed_dim", 0) - args.fixed_embeddings = getattr(args, "fixed_embeddings", False) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py 
b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py deleted file mode 100644 index 9db779396f492e3f71b08d7b895beb81d8e46bc9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import itertools -import logging -import re -import time - -from g2p_en import G2p - -logger = logging.getLogger(__name__) - -FAIL_SENT = "FAILED_SENTENCE" - - -def parse(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-path", type=str, required=True) - parser.add_argument("--out-path", type=str, required=True) - parser.add_argument("--lower-case", action="store_true") - parser.add_argument("--do-filter", action="store_true") - parser.add_argument("--use-word-start", action="store_true") - parser.add_argument("--dup-vowel", default=1, type=int) - parser.add_argument("--dup-consonant", default=1, type=int) - parser.add_argument("--no-punc", action="store_true") - parser.add_argument("--reserve-word", type=str, default="") - parser.add_argument( - "--reserve-first-column", - action="store_true", - help="first column is sentence id", - ) - ### - parser.add_argument("--parallel-process-num", default=1, type=int) - parser.add_argument("--logdir", default="") - args = parser.parse_args() - return args - - -def process_sent(sent, g2p, res_wrds, args): - sents = pre_process_sent(sent, args.do_filter, args.lower_case, res_wrds) - pho_seqs = [do_g2p(g2p, s, res_wrds, i == 0) for i, s in enumerate(sents)] - pho_seq = ( - [FAIL_SENT] - if [FAIL_SENT] in pho_seqs - else list(itertools.chain.from_iterable(pho_seqs)) - ) - if args.no_punc: - pho_seq = remove_punc(pho_seq) - if args.dup_vowel > 1 or args.dup_consonant > 1: - pho_seq = dup_pho(pho_seq, args.dup_vowel, args.dup_consonant) - if args.use_word_start: - pho_seq = add_word_start(pho_seq) - return " ".join(pho_seq) - - -def remove_punc(sent): - ns = [] - regex = re.compile("[^a-zA-Z0-9 ]") - for p in sent: - if (not regex.search(p)) or p == FAIL_SENT: - if p == " " and (len(ns) == 0 or ns[-1] == " "): - continue - ns.append(p) - return ns - - -def do_g2p(g2p, sent, res_wrds, is_first_sent): - if sent in res_wrds: - pho_seq = [res_wrds[sent]] - else: - pho_seq = g2p(sent) - if not is_first_sent: - pho_seq = [" "] + pho_seq # add space to separate - return pho_seq - - -def pre_process_sent(sent, do_filter, lower_case, res_wrds): - if do_filter: - sent = re.sub("-", " ", sent) - sent = re.sub("—", " ", sent) - if len(res_wrds) > 0: - wrds = sent.split() - wrds = ["SPLIT_ME " + w + " SPLIT_ME" if w in res_wrds else w for w in wrds] - sents = [x.strip() for x in " ".join(wrds).split("SPLIT_ME") if x.strip() != ""] - else: - sents = [sent] - if lower_case: - sents = [s.lower() if s not in res_wrds else s for s in sents] - return sents - - -def dup_pho(sent, dup_v_num, dup_c_num): - """ - duplicate phoneme defined as cmudict - http://www.speech.cs.cmu.edu/cgi-bin/cmudict - """ - if dup_v_num == 1 and dup_c_num == 1: - return sent - ns = [] - for p in sent: - ns.append(p) - if re.search(r"\d$", p): - for i in range(1, dup_v_num): - ns.append(f"{p}-{i}P") - elif re.search(r"\w", p): - for i in range(1, dup_c_num): - ns.append(f"{p}-{i}P") - return ns - - -def add_word_start(sent): - ns = [] - 
do_add = True - ws = "▁" - for p in sent: - if do_add: - p = ws + p - do_add = False - if p == " ": - do_add = True - else: - ns.append(p) - return ns - - -def load_reserve_word(reserve_word): - if reserve_word == "": - return [] - with open(reserve_word, "r") as fp: - res_wrds = [x.strip().split() for x in fp.readlines() if x.strip() != ""] - assert sum([0 if len(x) == 2 else 1 for x in res_wrds]) == 0 - res_wrds = dict(res_wrds) - return res_wrds - - -def process_sents(sents, args): - g2p = G2p() - out_sents = [] - res_wrds = load_reserve_word(args.reserve_word) - for sent in sents: - col1 = "" - if args.reserve_first_column: - col1, sent = sent.split(None, 1) - sent = process_sent(sent, g2p, res_wrds, args) - if args.reserve_first_column and col1 != "": - sent = f"{col1} {sent}" - out_sents.append(sent) - return out_sents - - -def main(): - args = parse() - out_sents = [] - with open(args.data_path, "r") as fp: - sent_list = [x.strip() for x in fp.readlines()] - if args.parallel_process_num > 1: - try: - import submitit - except ImportError: - logger.warn( - "submitit is not found and only one job is used to process the data" - ) - submitit = None - - if args.parallel_process_num == 1 or submitit is None: - out_sents = process_sents(sent_list, args) - else: - # process sentences with parallel computation - lsize = len(sent_list) // args.parallel_process_num + 1 - executor = submitit.AutoExecutor(folder=args.logdir) - executor.update_parameters(timeout_min=1000, cpus_per_task=4) - jobs = [] - for i in range(args.parallel_process_num): - job = executor.submit( - process_sents, sent_list[lsize * i : lsize * (i + 1)], args - ) - jobs.append(job) - is_running = True - while is_running: - time.sleep(5) - is_running = sum([job.done() for job in jobs]) < len(jobs) - out_sents = list(itertools.chain.from_iterable([job.result() for job in jobs])) - with open(args.out_path, "w") as fp: - fp.write("\n".join(out_sents) + "\n") - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/fastspeech2_loss.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/fastspeech2_loss.py deleted file mode 100644 index 085d5628d4c4c242edee4aa3bc4a01aa4582eb21..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/fastspeech2_loss.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
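For reference, the deleted g2p_encode.py above ultimately reduces to one G2p call per sentence. The snippet below is a minimal sketch of that call, assuming the g2p_en package is installed and using an illustrative input sentence; it does not reproduce the script's filtering, reserve-word, or parallel-processing logic.

# Minimal sketch of the G2p call wrapped by the deleted g2p_encode.py script.
# Assumes g2p_en is installed; the input sentence is illustrative only.
from g2p_en import G2p

g2p = G2p()
phonemes = g2p("The quick brown fox")  # CMUdict-style phoneme symbols with " " word separators
print(" ".join(phonemes))              # joined with spaces, matching process_sent's output format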
- -from typing import List, Dict, Any -from dataclasses import dataclass, field - -import torch -import torch.nn.functional as F - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import lengths_to_mask -from fairseq.models.fairseq_model import FairseqEncoderModel - - -@dataclass -class FastSpeech2CriterionConfig(FairseqDataclass): - ctc_weight: float = field( - default=0.0, metadata={"help": "weight for CTC loss"} - ) - - -@register_criterion("fastspeech2", dataclass=FastSpeech2CriterionConfig) -class FastSpeech2Loss(FairseqCriterion): - def __init__(self, task, ctc_weight): - super().__init__(task) - self.ctc_weight = ctc_weight - - def forward(self, model: FairseqEncoderModel, sample, reduction="mean"): - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - tgt_lens = sample["target_lengths"] - _feat_out, _, log_dur_out, pitch_out, energy_out = model( - src_tokens=src_tokens, - src_lengths=src_lens, - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"], - durations=sample["durations"], - pitches=sample["pitches"], - energies=sample["energies"] - ) - - src_mask = lengths_to_mask(sample["net_input"]["src_lengths"]) - tgt_mask = lengths_to_mask(sample["target_lengths"]) - - pitches, energies = sample["pitches"], sample["energies"] - pitch_out, pitches = pitch_out[src_mask], pitches[src_mask] - energy_out, energies = energy_out[src_mask], energies[src_mask] - - feat_out, feat = _feat_out[tgt_mask], sample["target"][tgt_mask] - l1_loss = F.l1_loss(feat_out, feat, reduction=reduction) - - pitch_loss = F.mse_loss(pitch_out, pitches, reduction=reduction) - energy_loss = F.mse_loss(energy_out, energies, reduction=reduction) - - log_dur_out = log_dur_out[src_mask] - dur = sample["durations"].float() - dur = dur.half() if log_dur_out.type().endswith(".HalfTensor") else dur - log_dur = torch.log(dur + 1)[src_mask] - dur_loss = F.mse_loss(log_dur_out, log_dur, reduction=reduction) - - ctc_loss = torch.tensor(0.).type_as(l1_loss) - if self.ctc_weight > 0.: - lprobs = model.get_normalized_probs((_feat_out,), log_probs=True) - lprobs = lprobs.transpose(0, 1) # T x B x C - src_mask = lengths_to_mask(src_lens) - src_tokens_flat = src_tokens.masked_select(src_mask) - ctc_loss = F.ctc_loss( - lprobs, src_tokens_flat, tgt_lens, src_lens, - reduction=reduction, zero_infinity=True - ) * self.ctc_weight - - loss = l1_loss + dur_loss + pitch_loss + energy_loss + ctc_loss - - sample_size = sample["nsentences"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "dur_loss": utils.item(dur_loss.data), - "pitch_loss": utils.item(pitch_loss.data), - "energy_loss": utils.item(energy_loss.data), - "ctc_loss": utils.item(ctc_loss.data), - } - return loss, sample_size, logging_output - - @classmethod - def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None: - ns = [log.get("sample_size", 0) for log in logging_outputs] - ntot = sum(ns) - ws = [n / (ntot + 1e-8) for n in ns] - for key in [ - "loss", "l1_loss", "dur_loss", "pitch_loss", "energy_loss", - "ctc_loss" - ]: - vals = [log.get(key, 0) for log in logging_outputs] - val = sum(val * w for val, w in zip(vals, ws)) - metrics.log_scalar(key, val, 
ntot, round=3) - metrics.log_scalar("sample_size", ntot, len(logging_outputs)) - - # inference metrics - if "targ_frames" not in logging_outputs[0]: - return - n = sum(log.get("targ_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - return False diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/audio_finetuning.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/audio_finetuning.py deleted file mode 100644 index 4ef87c604f00581f03075e9ebe10a43dd51d6e45..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/audio_finetuning.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import os -import torch -import json - -from argparse import Namespace -from dataclasses import dataclass, field -from typing import Optional, Any - -from fairseq.data import AddTargetDataset, Dictionary, encoders -from fairseq.tasks.audio_pretraining import AudioPretrainingTask, AudioPretrainingConfig -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import GenerationConfig -from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel - -from . import register_task -from .. import utils -from ..logging import metrics - - -logger = logging.getLogger(__name__) - - -class LabelEncoder(object): - def __init__(self, dictionary): - self.dictionary = dictionary - - def __call__(self, label): - return self.dictionary.encode_line( - label, append_eos=False, add_if_not_exist=False - ) - - -def label_len_fn(label): - return len(label.split(" ")) - - -@dataclass -class AudioFinetuningConfig(AudioPretrainingConfig): - # Options for reporting WER metrics during validation. 
Only applicable to - # Seq2Seq models during fine-tuning - eval_wer: bool = field( - default=False, metadata={"help": "compute WER for Seq2Seq models"} - ) - eval_wer_config: GenerationConfig = field( - default_factory=lambda: GenerationConfig(), - metadata={"help": "beam search config for evaluating wer during training"}, - ) - eval_wer_tokenizer: Any = field( - default=None, - metadata={"help": "tokenizer config for evaluating wer during training"}, - ) - eval_wer_post_process: str = field( - default="letter", - metadata={ - "help": "remove BPE tokens before scoring (can be sentencepiece, letter, and more)" - }, - ) - eval_bleu: bool = field( - default=False, metadata={"help": "evaluation with BLEU scores"} - ) - eval_bleu_detok: Optional[str] = field( - default=None, metadata={ - "help": "detokenize before computing BLEU (e.g., 'moses'); " - "required if using --eval-bleu; use 'space' to disable " - "detokenization; see fairseq.data.encoders for other options" - } - ) - eval_bleu_detok_args: str = field( - default="{}", - metadata={"help": "args for building the tokenizer, if needed"} - ) - eval_tokenized_bleu: bool = field( - default=False, - metadata={"help": "compute tokenized BLEU instead of sacrebleu"} - ) - eval_bleu_remove_bpe: Optional[str] = field( - default=None, metadata={"help": "remove BPE before computing BLEU"} - ) - eval_bleu_args: str = field( - default="{}", - metadata={"help": "generation args for BLUE scoring, e.g., " - "'{\"beam\": 4, \"lenpen\": 0.6}'"} - ) - eval_bleu_print_samples: bool = field( - default=False, - metadata={"help": "print sample generations during validation"} - ) - autoregressive: bool = field( - default=False, - metadata={ - "help": "required for autoregressive decoders (like seq2seq models); " - "adds 'prev_output_tokens' to input and appends eos to target" - }, - ) - - -@register_task("audio_finetuning", dataclass=AudioFinetuningConfig) -class AudioFinetuningTask(AudioPretrainingTask): - """ """ - - cfg: AudioFinetuningConfig - - def __init__( - self, - cfg: AudioFinetuningConfig, - ): - super().__init__(cfg) - self.blank_symbol = "" - - self.state.add_factory("target_dictionary", self.load_target_dictionary) - - def load_target_dictionary(self): - if self.cfg.labels: - dict_path = os.path.join(self.cfg.data, f"dict.{self.cfg.labels}.txt") - return Dictionary.load(dict_path) - return None - - def load_dataset(self, split: str, task_cfg: AudioFinetuningConfig = None, **kwargs): - super().load_dataset(split, task_cfg, **kwargs) - - task_cfg = task_cfg or self.cfg - assert task_cfg.labels is not None - text_compression_level = getattr( - TextCompressionLevel, str(self.cfg.text_compression_level) - ) - data_path = self.cfg.data - label_path = os.path.join(data_path, f"{split}.{task_cfg.labels}") - skipped_indices = getattr(self.datasets[split], "skipped_indices", set()) - text_compressor = TextCompressor(level=text_compression_level) - with open(label_path, "r") as f: - labels = [ - text_compressor.compress(l) - for i, l in enumerate(f) if i not in skipped_indices - ] - - assert len(labels) == len(self.datasets[split]), ( - f"labels length ({len(labels)}) and dataset length " - f"({len(self.datasets[split])}) do not match" - ) - - process_label = LabelEncoder(self.target_dictionary) - - self.datasets[split] = AddTargetDataset( - self.datasets[split], - labels, - pad=self.target_dictionary.pad(), - eos=self.target_dictionary.eos(), - batch_targets=True, - process_label=process_label, - label_len_fn=label_len_fn, - 
add_to_input=task_cfg.get("autoregressive", False), - text_compression_level=text_compression_level - ) - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.state.target_dictionary - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - if self.cfg.eval_wer and self.cfg.autoregressive: - metrics = self._inference_with_wer(self.sequence_generator, sample, model) - logging_output["_num_char_errors"] = metrics["num_char_errors"] - logging_output["_num_chars"] = metrics["num_chars"] - logging_output["_num_word_errors"] = metrics["num_word_errors"] - logging_output["_num_words"] = metrics["num_words"] - if self.cfg.eval_bleu and self.cfg.autoregressive: - metrics = self._inference_with_bleu(self.sequence_generator, sample, model) - logging_output['_bleu_sys_len'] = metrics.sys_len - logging_output['_bleu_ref_len'] = metrics.ref_len - # we split counts into separate entries so that they can be - # summed efficiently across workers using fast-stat-sync - assert len(metrics.counts) == 4 - for i in range(4): - logging_output[f"_bleu_counts_{i}"] = metrics.counts[i] - logging_output[f"_bleu_totals_{i}"] = metrics.totals[i] - return loss, sample_size, logging_output - - def build_model(self, model_cfg: FairseqDataclass): - model = super().build_model(model_cfg) - - if self.cfg.eval_wer and self.cfg.autoregressive: - self.sequence_generator = self.build_generator( - [model], - self.cfg.eval_wer_config, - ) - if self.cfg.eval_wer_tokenizer: - self.tokenizer = encoders.build_tokenizer(self.cfg.eval_wer_tokenizer) - else: - self.tokenizer = None - if self.cfg.eval_bleu and self.cfg.autoregressive: - assert self.cfg.eval_bleu_detok is not None, ( - '--eval-bleu-detok is required if using --eval-bleu; ' - 'try --eval-bleu-detok=moses (or --eval-bleu-detok=space ' - 'to disable detokenization, e.g., when using sentencepiece)' - ) - detok_args = json.loads(self.cfg.eval_bleu_detok_args) - self.tokenizer = encoders.build_tokenizer( - Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args) - ) - gen_args = json.loads(self.cfg.eval_bleu_args) - gen_args = Namespace(**gen_args) - self.sequence_generator = self.build_generator([model], gen_args) - - return model - - def _inference_with_wer(self, generator, sample, model): - import editdistance - - def decode(toks): - s = self.target_dictionary.string( - toks.int().cpu(), - self.cfg.eval_wer_post_process, - escape_unk=True, - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - num_word_errors, num_char_errors = 0, 0 - num_chars, num_words = 0, 0 - gen_out = self.inference_step(generator, [model], sample, None) - for i in range(len(gen_out)): - hyp = decode(gen_out[i][0]["tokens"]) - ref = decode( - utils.strip_pad(sample["target"][i], self.target_dictionary.pad()), - ) - num_char_errors += editdistance.eval(hyp, ref) - num_chars += len(ref) - hyp_words = hyp.split() - ref_words = ref.split() - num_word_errors += editdistance.eval(hyp_words, ref_words) - num_words += len(ref_words) - - return { - "num_char_errors": num_char_errors, - "num_chars": num_chars, - "num_word_errors": num_word_errors, - "num_words": num_words, - } - - def _inference_with_bleu(self, generator, sample, model): - import sacrebleu - - def decode(toks, is_ref): - s = self.target_dictionary.string( - toks.int().cpu(), - self.cfg.eval_bleu_remove_bpe, - # The default unknown string in fairseq is ``, but - # this 
is tokenized by sacrebleu as `< unk >`, inflating - # BLEU scores. Instead, we use a somewhat more verbose - # alternative that is unlikely to appear in the real - # reference, but doesn't get split into multiple tokens. - unk_string=( - "UNKNOWNTOKENINREF" if is_ref else "UNKNOWNTOKENINHYP" - ), - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - gen_out = self.inference_step(generator, [model], sample) - hyps, refs = [], [] - for i in range(len(gen_out)): - hyps.append(decode(gen_out[i][0]['tokens'], is_ref=False)) - refs.append( - decode( - utils.strip_pad( - sample['target'][i], - self.target_dictionary.pad() - ), - is_ref=True, # don't count as matches to the hypo - ) - ) - if self.cfg.eval_bleu_print_samples: - logger.info('H-{} {}'.format(sample["id"][0], hyps[0])) - logger.info('T-{} {}'.format(sample["id"][0], refs[0])) - - eval_tokenization = 'none' if self.cfg.eval_tokenized_bleu else '13a' - return sacrebleu.corpus_bleu(hyps, [refs], tokenize=eval_tokenization) - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - if self.cfg.eval_wer: - zero = torch.scalar_tensor(0.0) - num_char_errors = sum( - log.get("_num_char_errors", zero) for log in logging_outputs - ) - num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs) - num_word_errors = sum( - log.get("_num_word_errors", zero) for log in logging_outputs - ) - num_words = sum(log.get("_num_words", zero) for log in logging_outputs) - metrics.log_scalar("_num_char_errors", num_char_errors) - metrics.log_scalar("_num_chars", num_chars) - metrics.log_scalar("_num_word_errors", num_word_errors) - metrics.log_scalar("_num_words", num_words) - if num_chars > 0: - metrics.log_derived( - "uer", - lambda meters: meters["_num_char_errors"].sum - * 100.0 - / meters["_num_chars"].sum - if meters["_num_chars"].sum > 0 - else float("nan"), - ) - if num_words > 0: - metrics.log_derived( - "wer", - lambda meters: meters["_num_word_errors"].sum - * 100.0 - / meters["_num_words"].sum - if meters["_num_words"].sum > 0 - else float("nan"), - ) - if self.cfg.eval_bleu: - len_keys = ["_bleu_sys_len", "_bleu_ref_len"] - count_keys = [f"_bleu_counts_{i}" for i in range(4)] - total_keys = [f"_bleu_totals_{i}" for i in range(4)] - for k in len_keys + count_keys + total_keys: - metrics.log_scalar( - k, sum(log.get(k, 0) for log in logging_outputs) - ) - - import sacrebleu - metrics.log_derived( - 'bleu', - lambda meters: sacrebleu.compute_bleu( - correct=[meters[k].sum for k in count_keys], - total=[meters[k].sum for k in total_keys], - sys_len=meters['_bleu_sys_len'].sum, - ref_len=meters['_bleu_ref_len'].sum, - smooth_method="exp" - ).score - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_ema.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_ema.py deleted file mode 100644 index 88ea65a434e49775d40f2b08ce6df0f8d9929c18..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_ema.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
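The EMA tests that follow repeatedly compare fairseq's EMA shadow parameters against the update rule ema = decay * ema_prev + (1 - decay) * param. Below is a minimal stand-alone sketch of that rule in plain PyTorch with hypothetical tensor shapes; it is not the fairseq EMA class itself.

# Minimal sketch of the EMA update rule exercised by the tests below.
import torch

decay = 0.99
param = torch.randn(2, 32)        # current model parameter (illustrative shape)
ema_param = param.clone()         # shadow copy tracked by the EMA

with torch.no_grad():
    param.add_(0.1 * torch.randn_like(param))           # stand-in for an optimizer step
    ema_param.mul_(decay).add_(param, alpha=1 - decay)  # ema = decay * ema + (1 - decay) * param

When ema_fp32 is enabled, the tests below expect this same update to be computed on float32 copies and cast back to half precision, which is what the fp32/fp16 norm comparisons assert.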
- -import unittest -from copy import deepcopy -from dataclasses import dataclass -from typing import Optional - -import torch -from fairseq.models.ema import EMA - - -class DummyModule(torch.nn.Module): - def __init__(self) -> None: - """LightningModule for testing purposes - - Args: - epoch_min_loss_override (int, optional): Pass in an epoch that will be set to the minimum - validation loss for testing purposes (zero based). If None this is ignored. Defaults to None. - """ - super().__init__() - self.layer = torch.nn.Linear(in_features=32, out_features=2) - self.another_layer = torch.nn.Linear(in_features=2, out_features=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.layer(x) - return self.another_layer(x) - - -@dataclass -class EMAConfig(object): - ema_decay: float = 0.99 - ema_start_update: int = 0 - ema_fp32: bool = False - ema_seed_model: Optional[str] = None - - -class TestEMAGPU(unittest.TestCase): - def assertTorchAllClose(self, x, y, atol=1e-8, rtol=1e-5, msg=None): - diff = x.float() - y.float() - diff_norm = torch.norm(diff) - other_norm = torch.norm(y.float()) - - if msg is None: - msg = "|input - other| > {} + {} * |other|".format( - atol, rtol - ) - - self.assertLessEqual( - diff_norm, - atol + rtol * other_norm, - msg=msg, - ) - - def test_ema(self): - model = DummyModule() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig() - ema = EMA(model, config) - - # set decay - ema._set_decay(config.ema_decay) - self.assertEqual(ema.get_decay(), config.ema_decay) - - # get model - self.assertEqual(ema.get_model(), ema.model) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - # EMA step - x = torch.randn(32) - y = model(x) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - ema_state_dict = ema.get_model().state_dict() - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema_state_dict[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - self.assertTorchAllClose( - ema_param, - config.ema_decay * prev_param + (1 - config.ema_decay) * param, - ) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - # Load EMA into model - model2 = DummyModule() - ema.reverse(model2) - - for key, param in model2.state_dict().items(): - ema_param = ema_state_dict[key] - self.assertTrue( - torch.allclose(ema_param, param) - ) - - def test_ema_fp32(self): - model = DummyModule().half() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig(ema_fp32=True) - ema = EMA(model, config) - - x = torch.randn(32) - y = model(x.half()) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema.get_model().state_dict()[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - self.assertIn(key, ema.fp32_params) - - # EMA update is done in fp32, and hence the EMA param must be - # closer to the EMA update done in fp32 than in fp16. 
- self.assertLessEqual( - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float() - ), - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float() - ), - ) - self.assertTorchAllClose( - ema_param, - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half(), - ) - - def test_ema_fp16(self): - model = DummyModule().half() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig(ema_fp32=False) - ema = EMA(model, config) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - x = torch.randn(32) - y = model(x.half()) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema.get_model().state_dict()[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - - # EMA update is done in fp16, and hence the EMA param must be - # closer to the EMA update done in fp16 than in fp32. - self.assertLessEqual( - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float() - ), - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float() - ), - ) - self.assertTorchAllClose( - ema_param, - config.ema_decay * prev_param + (1 - config.ema_decay) * param, - ) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/predictor.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/predictor.py deleted file mode 100644 index 7b7ebd3f846850172c1f560f8492d51e5667f76d..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/predictor.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import atexit -import bisect -import multiprocessing as mp -from collections import deque -import cv2 -import torch - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.video_visualizer import VideoVisualizer -from detectron2.utils.visualizer import ColorMode, Visualizer - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. - Useful since the visualization logic can be slow. - """ - self.metadata = MetadataCatalog.get( - cfg.DATASETS.TEST[0] if len(cfg.DATASETS.TEST) else "__unused" - ) - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - num_gpu = torch.cuda.device_count() - self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu) - else: - self.predictor = DefaultPredictor(cfg) - - def run_on_image(self, image): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. 
- """ - vis_output = None - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_output = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(self.cpu_device), segments_info - ) - else: - if "sem_seg" in predictions: - vis_output = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - if "instances" in predictions: - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output - - def _frame_from_video(self, video): - while video.isOpened(): - success, frame = video.read() - if success: - yield frame - else: - break - - def run_on_video(self, video): - """ - Visualizes predictions on frames of the input video. - - Args: - video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be - either a webcam or a video file. - - Yields: - ndarray: BGR visualizations of each video frame. - """ - video_visualizer = VideoVisualizer(self.metadata, self.instance_mode) - - def process_predictions(frame, predictions): - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_frame = video_visualizer.draw_panoptic_seg_predictions( - frame, panoptic_seg.to(self.cpu_device), segments_info - ) - elif "instances" in predictions: - predictions = predictions["instances"].to(self.cpu_device) - vis_frame = video_visualizer.draw_instance_predictions(frame, predictions) - elif "sem_seg" in predictions: - vis_frame = video_visualizer.draw_sem_seg( - frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - - # Converts Matplotlib RGB format to OpenCV BGR format - vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR) - return vis_frame - - frame_gen = self._frame_from_video(video) - if self.parallel: - buffer_size = self.predictor.default_buffer_size - - frame_data = deque() - - for cnt, frame in enumerate(frame_gen): - frame_data.append(frame) - self.predictor.put(frame) - - if cnt >= buffer_size: - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - - while len(frame_data): - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - else: - for frame in frame_gen: - yield process_predictions(frame, self.predictor(frame)) - - -class AsyncPredictor: - """ - A predictor that runs the model asynchronously, possibly on >1 GPUs. - Because rendering the visualization takes considerably amount of time, - this helps improve throughput a little bit when rendering videos. 
- """ - - class _StopToken: - pass - - class _PredictWorker(mp.Process): - def __init__(self, cfg, task_queue, result_queue): - self.cfg = cfg - self.task_queue = task_queue - self.result_queue = result_queue - super().__init__() - - def run(self): - predictor = DefaultPredictor(self.cfg) - - while True: - task = self.task_queue.get() - if isinstance(task, AsyncPredictor._StopToken): - break - idx, data = task - result = predictor(data) - self.result_queue.put((idx, result)) - - def __init__(self, cfg, num_gpus: int = 1): - """ - Args: - cfg (CfgNode): - num_gpus (int): if 0, will run on CPU - """ - num_workers = max(num_gpus, 1) - self.task_queue = mp.Queue(maxsize=num_workers * 3) - self.result_queue = mp.Queue(maxsize=num_workers * 3) - self.procs = [] - for gpuid in range(max(num_gpus, 1)): - cfg = cfg.clone() - cfg.defrost() - cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu" - self.procs.append( - AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue) - ) - - self.put_idx = 0 - self.get_idx = 0 - self.result_rank = [] - self.result_data = [] - - for p in self.procs: - p.start() - atexit.register(self.shutdown) - - def put(self, image): - self.put_idx += 1 - self.task_queue.put((self.put_idx, image)) - - def get(self): - self.get_idx += 1 # the index needed for this request - if len(self.result_rank) and self.result_rank[0] == self.get_idx: - res = self.result_data[0] - del self.result_data[0], self.result_rank[0] - return res - - while True: - # make sure the results are returned in the correct order - idx, res = self.result_queue.get() - if idx == self.get_idx: - return res - insert = bisect.bisect(self.result_rank, idx) - self.result_rank.insert(insert, idx) - self.result_data.insert(insert, res) - - def __len__(self): - return self.put_idx - self.get_idx - - def __call__(self, image): - self.put(image) - return self.get() - - def shutdown(self): - for _ in self.procs: - self.task_queue.put(AsyncPredictor._StopToken()) - - @property - def default_buffer_size(self): - return len(self.procs) * 5 diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/custom_build_augmentation.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/custom_build_augmentation.py deleted file mode 100644 index 7d91f21edb082c079c5a1e85bdf669c7b55bad9a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/custom_build_augmentation.py +++ /dev/null @@ -1,59 +0,0 @@ -import logging -import numpy as np -import pycocotools.mask as mask_util -import torch -from fvcore.common.file_io import PathManager -from PIL import Image - -from detectron2.structures import ( - BitMasks, - Boxes, - BoxMode, - Instances, - Keypoints, - PolygonMasks, - RotatedBoxes, - polygons_to_bitmask, -) - -from detectron2.data import transforms as T -from .transforms.custom_augmentation_impl import EfficientDetResizeCrop - -def build_custom_augmentation(cfg, is_train): - """ - Create a list of default :class:`Augmentation` from config. - Now it includes resizing and flipping. 
- - Returns: - list[Augmentation] - """ - if cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge': - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)] - elif cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop': - if is_train: - scale = cfg.INPUT.SCALE_RANGE - size = cfg.INPUT.TRAIN_SIZE - else: - scale = (1, 1) - size = cfg.INPUT.TEST_SIZE - augmentation = [EfficientDetResizeCrop(size, scale)] - else: - assert 0, cfg.INPUT.CUSTOM_AUG - - if is_train: - augmentation.append(T.RandomFlip()) - return augmentation - - -build_custom_transform_gen = build_custom_augmentation -""" -Alias for backward-compatibility. -""" \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps.go deleted file mode 100644 index d8a800f5e72ecdf60229252c60c4589cd522bdea..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/iter_based_runner.py deleted file mode 100644 index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/iter_based_runner.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch -from torch.optim import Optimizer - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .hooks import IterTimerHook -from .utils import get_host_info - - -class IterLoader: - - def __init__(self, dataloader): - self._dataloader = dataloader - self.iter_loader = iter(self._dataloader) - self._epoch = 0 - - @property - def epoch(self): - return self._epoch - - def __next__(self): - try: - data = next(self.iter_loader) - except StopIteration: - self._epoch += 1 - if hasattr(self._dataloader.sampler, 'set_epoch'): - self._dataloader.sampler.set_epoch(self._epoch) - time.sleep(2) # Prevent possible deadlock during epoch transition - self.iter_loader = iter(self._dataloader) - data = next(self.iter_loader) - - return data - - def __len__(self): - return len(self._dataloader) - - -@RUNNERS.register_module() -class IterBasedRunner(BaseRunner): - """Iteration-based Runner. - - This runner train models iteration by iteration. 
- """ - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._epoch = data_loader.epoch - data_batch = next(data_loader) - self.call_hook('before_train_iter') - outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.train_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_train_iter') - self._inner_iter += 1 - self._iter += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - data_batch = next(data_loader) - self.call_hook('before_val_iter') - outputs = self.model.val_step(data_batch, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('model.val_step() must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - self.call_hook('after_val_iter') - self._inner_iter += 1 - - def run(self, data_loaders, workflow, max_iters=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, iters) to specify the - running order and iterations. E.g, [('train', 10000), - ('val', 1000)] means running 10000 iterations for training and - 1000 iterations for validation, iteratively. - """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_iters is not None: - warnings.warn( - 'setting max_iters in run is deprecated, ' - 'please set max_iters in runner_config', DeprecationWarning) - self._max_iters = max_iters - assert self._max_iters is not None, ( - 'max_iters must be specified during instantiation') - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d iters', workflow, - self._max_iters) - self.call_hook('before_run') - - iter_loaders = [IterLoader(x) for x in data_loaders] - - self.call_hook('before_epoch') - - while self.iter < self._max_iters: - for i, flow in enumerate(workflow): - self._inner_iter = 0 - mode, iters = flow - if not isinstance(mode, str) or not hasattr(self, mode): - raise ValueError( - 'runner has no method named "{}" to run a workflow'. - format(mode)) - iter_runner = getattr(self, mode) - for _ in range(iters): - if mode == 'train' and self.iter >= self._max_iters: - break - iter_runner(iter_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_epoch') - self.call_hook('after_run') - - def resume(self, - checkpoint, - resume_optimizer=True, - map_location='default'): - """Resume model from checkpoint. - - Args: - checkpoint (str): Checkpoint to resume from. - resume_optimizer (bool, optional): Whether resume the optimizer(s) - if the checkpoint file includes optimizer(s). Default to True. - map_location (str, optional): Same as :func:`torch.load`. - Default to 'default'. 
- """ - if map_location == 'default': - device_id = torch.cuda.current_device() - checkpoint = self.load_checkpoint( - checkpoint, - map_location=lambda storage, loc: storage.cuda(device_id)) - else: - checkpoint = self.load_checkpoint( - checkpoint, map_location=map_location) - - self._epoch = checkpoint['meta']['epoch'] - self._iter = checkpoint['meta']['iter'] - self._inner_iter = checkpoint['meta']['iter'] - if 'optimizer' in checkpoint and resume_optimizer: - if isinstance(self.optimizer, Optimizer): - self.optimizer.load_state_dict(checkpoint['optimizer']) - elif isinstance(self.optimizer, dict): - for k in self.optimizer.keys(): - self.optimizer[k].load_state_dict( - checkpoint['optimizer'][k]) - else: - raise TypeError( - 'Optimizer should be dict or torch.optim.Optimizer ' - f'but got {type(self.optimizer)}') - - self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}') - - def save_checkpoint(self, - out_dir, - filename_tmpl='iter_{}.pth', - meta=None, - save_optimizer=True, - create_symlink=True): - """Save checkpoint to file. - - Args: - out_dir (str): Directory to save checkpoint files. - filename_tmpl (str, optional): Checkpoint file template. - Defaults to 'iter_{}.pth'. - meta (dict, optional): Metadata to be saved in checkpoint. - Defaults to None. - save_optimizer (bool, optional): Whether save optimizer. - Defaults to True. - create_symlink (bool, optional): Whether create symlink to the - latest checkpoint file. Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. - # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.iter + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - def register_training_hooks(self, - lr_config, - optimizer_config=None, - checkpoint_config=None, - log_config=None, - momentum_config=None, - custom_hooks_config=None): - """Register default hooks for iter-based training. - - Checkpoint hook, optimizer stepper hook and logger hooks will be set to - `by_epoch=False` by default. 
- - Default hooks include: - - +----------------------+-------------------------+ - | Hooks | Priority | - +======================+=========================+ - | LrUpdaterHook | VERY_HIGH (10) | - +----------------------+-------------------------+ - | MomentumUpdaterHook | HIGH (30) | - +----------------------+-------------------------+ - | OptimizerStepperHook | ABOVE_NORMAL (40) | - +----------------------+-------------------------+ - | CheckpointSaverHook | NORMAL (50) | - +----------------------+-------------------------+ - | IterTimerHook | LOW (70) | - +----------------------+-------------------------+ - | LoggerHook(s) | VERY_LOW (90) | - +----------------------+-------------------------+ - | CustomHook(s) | defaults to NORMAL (50) | - +----------------------+-------------------------+ - - If custom hooks have same priority with default hooks, custom hooks - will be triggered after default hooks. - """ - if checkpoint_config is not None: - checkpoint_config.setdefault('by_epoch', False) - if lr_config is not None: - lr_config.setdefault('by_epoch', False) - if log_config is not None: - for info in log_config['hooks']: - info.setdefault('by_epoch', False) - super(IterBasedRunner, self).register_training_hooks( - lr_config=lr_config, - momentum_config=momentum_config, - optimizer_config=optimizer_config, - checkpoint_config=checkpoint_config, - log_config=log_config, - timer_config=IterTimerHook(), - custom_hooks_config=custom_hooks_config) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/__init__.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/__init__.py deleted file mode 100644 index be6bfe4b787a132aeaabaed1c3437c9ecd5c656c..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Models for EnCodec, AudioGen, MusicGen, as well as the generic LMModel. -""" -# flake8: noqa -from . 
import builders, loaders -from .encodec import ( - CompressionModel, EncodecModel, DAC, - HFEncodecModel, HFEncodecCompressionModel) -from .audiogen import AudioGen -from .lm import LMModel -from .multibanddiffusion import MultiBandDiffusion -from .musicgen import MusicGen -from .unet import DiffusionUnet diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/modules/misc/coord.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/modules/misc/coord.py deleted file mode 100644 index ee69b0c897b6b382ae673622e420f55e494f5b09..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/modules/misc/coord.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch - -class CoordStage(object): - def __init__(self, n_embed, down_factor): - self.n_embed = n_embed - self.down_factor = down_factor - - def eval(self): - return self - - def encode(self, c): - """fake vqmodel interface""" - assert 0.0 <= c.min() and c.max() <= 1.0 - b,ch,h,w = c.shape - assert ch == 1 - - c = torch.nn.functional.interpolate(c, scale_factor=1/self.down_factor, - mode="area") - c = c.clamp(0.0, 1.0) - c = self.n_embed*c - c_quant = c.round() - c_ind = c_quant.to(dtype=torch.long) - - info = None, None, c_ind - return c_quant, None, info - - def decode(self, c): - c = c/self.n_embed - c = torch.nn.functional.interpolate(c, scale_factor=self.down_factor, - mode="nearest") - return c diff --git a/spaces/Ramse/TTS_Hindi/transformer/SubLayers.py b/spaces/Ramse/TTS_Hindi/transformer/SubLayers.py deleted file mode 100644 index 5ae968c281111bfc0a27204e838239d4f52960a6..0000000000000000000000000000000000000000 --- a/spaces/Ramse/TTS_Hindi/transformer/SubLayers.py +++ /dev/null @@ -1,93 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -import numpy as np - -from .Modules import ScaledDotProductAttention - - -class MultiHeadAttention(nn.Module): - """ Multi-Head Attention module """ - - def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1): - super().__init__() - - self.n_head = n_head - self.d_k = d_k - self.d_v = d_v - - self.w_qs = nn.Linear(d_model, n_head * d_k) - self.w_ks = nn.Linear(d_model, n_head * d_k) - self.w_vs = nn.Linear(d_model, n_head * d_v) - - self.attention = ScaledDotProductAttention(temperature=np.power(d_k, 0.5)) - self.layer_norm = nn.LayerNorm(d_model) - - self.fc = nn.Linear(n_head * d_v, d_model) - - self.dropout = nn.Dropout(dropout) - - def forward(self, q, k, v, mask=None): - - d_k, d_v, n_head = self.d_k, self.d_v, self.n_head - - sz_b, len_q, _ = q.size() - sz_b, len_k, _ = k.size() - sz_b, len_v, _ = v.size() - - residual = q - - q = self.w_qs(q).view(sz_b, len_q, n_head, d_k) - k = self.w_ks(k).view(sz_b, len_k, n_head, d_k) - v = self.w_vs(v).view(sz_b, len_v, n_head, d_v) - q = q.permute(2, 0, 1, 3).contiguous().view(-1, len_q, d_k) # (n*b) x lq x dk - k = k.permute(2, 0, 1, 3).contiguous().view(-1, len_k, d_k) # (n*b) x lk x dk - v = v.permute(2, 0, 1, 3).contiguous().view(-1, len_v, d_v) # (n*b) x lv x dv - - mask = mask.repeat(n_head, 1, 1) # (n*b) x .. x .. 
- output, attn = self.attention(q, k, v, mask=mask) - - output = output.view(n_head, sz_b, len_q, d_v) - output = ( - output.permute(1, 2, 0, 3).contiguous().view(sz_b, len_q, -1) - ) # b x lq x (n*dv) - - output = self.dropout(self.fc(output)) - output = self.layer_norm(output + residual) - - return output, attn - - -class PositionwiseFeedForward(nn.Module): - """ A two-feed-forward-layer module """ - - def __init__(self, d_in, d_hid, kernel_size, dropout=0.1): - super().__init__() - - # Use Conv1D - # position-wise - self.w_1 = nn.Conv1d( - d_in, - d_hid, - kernel_size=kernel_size[0], - padding=(kernel_size[0] - 1) // 2, - ) - # position-wise - self.w_2 = nn.Conv1d( - d_hid, - d_in, - kernel_size=kernel_size[1], - padding=(kernel_size[1] - 1) // 2, - ) - - self.layer_norm = nn.LayerNorm(d_in) - self.dropout = nn.Dropout(dropout) - - def forward(self, x): - residual = x - output = x.transpose(1, 2) - output = self.w_2(F.relu(self.w_1(output))) - output = output.transpose(1, 2) - output = self.dropout(output) - output = self.layer_norm(output + residual) - - return output diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/colorlog.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/colorlog.py deleted file mode 100644 index 66310a79a997be2f4e859218ce4d4f70e212ac9f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/colorlog.py +++ /dev/null @@ -1,113 +0,0 @@ -"""Nicer log formatting with colours. - -Code copied from Tornado, Apache licensed. -""" -# Copyright 2012 Facebook -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import logging -import sys - -try: - import curses -except ImportError: - curses = None - - -def _stderr_supports_color(): - color = False - if curses and hasattr(sys.stderr, 'isatty') and sys.stderr.isatty(): - try: - curses.setupterm() - if curses.tigetnum("colors") > 0: - color = True - except Exception: - pass - return color - - -class LogFormatter(logging.Formatter): - """Log formatter with colour support - """ - DEFAULT_COLORS = { - logging.INFO: 2, # Green - logging.WARNING: 3, # Yellow - logging.ERROR: 1, # Red - logging.CRITICAL: 1, - } - - def __init__(self, color=True, datefmt=None): - r""" - :arg bool color: Enables color support. - :arg string fmt: Log message format. - It will be applied to the attributes dict of log records. The - text between ``%(color)s`` and ``%(end_color)s`` will be colored - depending on the level if color support is on. - :arg dict colors: color mappings from logging level to terminal color - code - :arg string datefmt: Datetime format. - Used for formatting ``(asctime)`` placeholder in ``prefix_fmt``. - .. versionchanged:: 3.2 - Added ``fmt`` and ``datefmt`` arguments. - """ - logging.Formatter.__init__(self, datefmt=datefmt) - self._colors = {} - if color and _stderr_supports_color(): - # The curses module has some str/bytes confusion in - # python3. 
Until version 3.2.3, most methods return - # bytes, but only accept strings. In addition, we want to - # output these strings with the logging module, which - # works with unicode strings. The explicit calls to - # unicode() below are harmless in python2 but will do the - # right conversion in python 3. - fg_color = (curses.tigetstr("setaf") or - curses.tigetstr("setf") or "") - - for levelno, code in self.DEFAULT_COLORS.items(): - self._colors[levelno] = str( - curses.tparm(fg_color, code), "ascii") - self._normal = str(curses.tigetstr("sgr0"), "ascii") - - scr = curses.initscr() - self.termwidth = scr.getmaxyx()[1] - curses.endwin() - else: - self._normal = '' - # Default width is usually 80, but too wide is - # worse than too narrow - self.termwidth = 70 - - def formatMessage(self, record): - mlen = len(record.message) - right_text = '{initial}-{name}'.format(initial=record.levelname[0], - name=record.name) - if mlen + len(right_text) < self.termwidth: - space = ' ' * (self.termwidth - (mlen + len(right_text))) - else: - space = ' ' - - if record.levelno in self._colors: - start_color = self._colors[record.levelno] - end_color = self._normal - else: - start_color = end_color = '' - - return record.message + space + start_color + right_text + end_color - - -def enable_colourful_output(level=logging.INFO): - handler = logging.StreamHandler() - handler.setFormatter(LogFormatter()) - logging.root.addHandler(handler) - logging.root.setLevel(level) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/__init__.py deleted file mode 100644 index 43c4c89aacf0c771de138e1f58decf9fb592f62f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/__init__.py +++ /dev/null @@ -1,143 +0,0 @@ -""" - pygments.formatters - ~~~~~~~~~~~~~~~~~~~ - - Pygments formatters. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import sys -import types -from fnmatch import fnmatch -from os.path import basename - -from pip._vendor.pygments.formatters._mapping import FORMATTERS -from pip._vendor.pygments.plugin import find_plugin_formatters -from pip._vendor.pygments.util import ClassNotFound - -__all__ = ['get_formatter_by_name', 'get_formatter_for_filename', - 'get_all_formatters', 'load_formatter_from_file'] + list(FORMATTERS) - -_formatter_cache = {} # classes by name - -def _load_formatters(module_name): - """Load a formatter (and all others in the module too).""" - mod = __import__(module_name, None, None, ['__all__']) - for formatter_name in mod.__all__: - cls = getattr(mod, formatter_name) - _formatter_cache[cls.name] = cls - - -def get_all_formatters(): - """Return a generator for all formatter classes.""" - # NB: this returns formatter classes, not info like get_all_lexers(). - for info in FORMATTERS.values(): - if info[1] not in _formatter_cache: - _load_formatters(info[0]) - yield _formatter_cache[info[1]] - for _, formatter in find_plugin_formatters(): - yield formatter - - -def find_formatter_class(alias): - """Lookup a formatter by alias. - - Returns None if not found. 
- """ - for module_name, name, aliases, _, _ in FORMATTERS.values(): - if alias in aliases: - if name not in _formatter_cache: - _load_formatters(module_name) - return _formatter_cache[name] - for _, cls in find_plugin_formatters(): - if alias in cls.aliases: - return cls - - -def get_formatter_by_name(_alias, **options): - """Lookup and instantiate a formatter by alias. - - Raises ClassNotFound if not found. - """ - cls = find_formatter_class(_alias) - if cls is None: - raise ClassNotFound("no formatter found for name %r" % _alias) - return cls(**options) - - -def load_formatter_from_file(filename, formattername="CustomFormatter", - **options): - """Load a formatter from a file. - - This method expects a file located relative to the current working - directory, which contains a class named CustomFormatter. By default, - it expects the Formatter to be named CustomFormatter; you can specify - your own class name as the second argument to this function. - - Users should be very careful with the input, because this method - is equivalent to running eval on the input file. - - Raises ClassNotFound if there are any problems importing the Formatter. - - .. versionadded:: 2.2 - """ - try: - # This empty dict will contain the namespace for the exec'd file - custom_namespace = {} - with open(filename, 'rb') as f: - exec(f.read(), custom_namespace) - # Retrieve the class `formattername` from that namespace - if formattername not in custom_namespace: - raise ClassNotFound('no valid %s class found in %s' % - (formattername, filename)) - formatter_class = custom_namespace[formattername] - # And finally instantiate it with the options - return formatter_class(**options) - except OSError as err: - raise ClassNotFound('cannot read %s: %s' % (filename, err)) - except ClassNotFound: - raise - except Exception as err: - raise ClassNotFound('error when loading custom formatter: %s' % err) - - -def get_formatter_for_filename(fn, **options): - """Lookup and instantiate a formatter by filename pattern. - - Raises ClassNotFound if not found. 
- """ - fn = basename(fn) - for modname, name, _, filenames, _ in FORMATTERS.values(): - for filename in filenames: - if fnmatch(fn, filename): - if name not in _formatter_cache: - _load_formatters(modname) - return _formatter_cache[name](**options) - for cls in find_plugin_formatters(): - for filename in cls.filenames: - if fnmatch(fn, filename): - return cls(**options) - raise ClassNotFound("no formatter found for file name %r" % fn) - - -class _automodule(types.ModuleType): - """Automatically import formatters.""" - - def __getattr__(self, name): - info = FORMATTERS.get(name) - if info: - _load_formatters(info[0]) - cls = _formatter_cache[info[1]] - setattr(self, name, cls) - return cls - raise AttributeError(name) - - -oldmod = sys.modules[__name__] -newmod = _automodule(__name__) -newmod.__dict__.update(oldmod.__dict__) -sys.modules[__name__] = newmod -del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/jaraco/text/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/jaraco/text/__init__.py deleted file mode 100644 index a0306d5ff5cc4a2eb76458c127c462efe59a566d..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/jaraco/text/__init__.py +++ /dev/null @@ -1,599 +0,0 @@ -import re -import itertools -import textwrap -import functools - -try: - from importlib.resources import files # type: ignore -except ImportError: # pragma: nocover - from setuptools.extern.importlib_resources import files # type: ignore - -from setuptools.extern.jaraco.functools import compose, method_cache -from setuptools.extern.jaraco.context import ExceptionTrap - - -def substitution(old, new): - """ - Return a function that will perform a substitution on a string - """ - return lambda s: s.replace(old, new) - - -def multi_substitution(*substitutions): - """ - Take a sequence of pairs specifying substitutions, and create - a function that performs those substitutions. - - >>> multi_substitution(('foo', 'bar'), ('bar', 'baz'))('foo') - 'baz' - """ - substitutions = itertools.starmap(substitution, substitutions) - # compose function applies last function first, so reverse the - # substitutions to get the expected order. - substitutions = reversed(tuple(substitutions)) - return compose(*substitutions) - - -class FoldedCase(str): - """ - A case insensitive string class; behaves just like str - except compares equal when the only variation is case. - - >>> s = FoldedCase('hello world') - - >>> s == 'Hello World' - True - - >>> 'Hello World' == s - True - - >>> s != 'Hello World' - False - - >>> s.index('O') - 4 - - >>> s.split('O') - ['hell', ' w', 'rld'] - - >>> sorted(map(FoldedCase, ['GAMMA', 'alpha', 'Beta'])) - ['alpha', 'Beta', 'GAMMA'] - - Sequence membership is straightforward. - - >>> "Hello World" in [s] - True - >>> s in ["Hello World"] - True - - You may test for set inclusion, but candidate and elements - must both be folded. - - >>> FoldedCase("Hello World") in {s} - True - >>> s in {FoldedCase("Hello World")} - True - - String inclusion works as long as the FoldedCase object - is on the right. 
- - >>> "hello" in FoldedCase("Hello World") - True - - But not if the FoldedCase object is on the left: - - >>> FoldedCase('hello') in 'Hello World' - False - - In that case, use ``in_``: - - >>> FoldedCase('hello').in_('Hello World') - True - - >>> FoldedCase('hello') > FoldedCase('Hello') - False - """ - - def __lt__(self, other): - return self.lower() < other.lower() - - def __gt__(self, other): - return self.lower() > other.lower() - - def __eq__(self, other): - return self.lower() == other.lower() - - def __ne__(self, other): - return self.lower() != other.lower() - - def __hash__(self): - return hash(self.lower()) - - def __contains__(self, other): - return super().lower().__contains__(other.lower()) - - def in_(self, other): - "Does self appear in other?" - return self in FoldedCase(other) - - # cache lower since it's likely to be called frequently. - @method_cache - def lower(self): - return super().lower() - - def index(self, sub): - return self.lower().index(sub.lower()) - - def split(self, splitter=' ', maxsplit=0): - pattern = re.compile(re.escape(splitter), re.I) - return pattern.split(self, maxsplit) - - -# Python 3.8 compatibility -_unicode_trap = ExceptionTrap(UnicodeDecodeError) - - -@_unicode_trap.passes -def is_decodable(value): - r""" - Return True if the supplied value is decodable (using the default - encoding). - - >>> is_decodable(b'\xff') - False - >>> is_decodable(b'\x32') - True - """ - value.decode() - - -def is_binary(value): - r""" - Return True if the value appears to be binary (that is, it's a byte - string and isn't decodable). - - >>> is_binary(b'\xff') - True - >>> is_binary('\xff') - False - """ - return isinstance(value, bytes) and not is_decodable(value) - - -def trim(s): - r""" - Trim something like a docstring to remove the whitespace that - is common due to indentation and formatting. - - >>> trim("\n\tfoo = bar\n\t\tbar = baz\n") - 'foo = bar\n\tbar = baz' - """ - return textwrap.dedent(s).strip() - - -def wrap(s): - """ - Wrap lines of text, retaining existing newlines as - paragraph markers. - - >>> print(wrap(lorem_ipsum)) - Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do - eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad - minim veniam, quis nostrud exercitation ullamco laboris nisi ut - aliquip ex ea commodo consequat. Duis aute irure dolor in - reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla - pariatur. Excepteur sint occaecat cupidatat non proident, sunt in - culpa qui officia deserunt mollit anim id est laborum. - - Curabitur pretium tincidunt lacus. Nulla gravida orci a odio. Nullam - varius, turpis et commodo pharetra, est eros bibendum elit, nec luctus - magna felis sollicitudin mauris. Integer in mauris eu nibh euismod - gravida. Duis ac tellus et risus vulputate vehicula. Donec lobortis - risus a elit. Etiam tempor. Ut ullamcorper, ligula eu tempor congue, - eros est euismod turpis, id tincidunt sapien risus a quam. Maecenas - fermentum consequat mi. Donec fermentum. Pellentesque malesuada nulla - a mi. Duis sapien sem, aliquet nec, commodo eget, consequat quis, - neque. Aliquam faucibus, elit ut dictum aliquet, felis nisl adipiscing - sapien, sed malesuada diam lacus eget erat. Cras mollis scelerisque - nunc. Nullam arcu. Aliquam consequat. Curabitur augue lorem, dapibus - quis, laoreet et, pretium ac, nisi. Aenean magna nisl, mollis quis, - molestie eu, feugiat in, orci. In hac habitasse platea dictumst. 
- """ - paragraphs = s.splitlines() - wrapped = ('\n'.join(textwrap.wrap(para)) for para in paragraphs) - return '\n\n'.join(wrapped) - - -def unwrap(s): - r""" - Given a multi-line string, return an unwrapped version. - - >>> wrapped = wrap(lorem_ipsum) - >>> wrapped.count('\n') - 20 - >>> unwrapped = unwrap(wrapped) - >>> unwrapped.count('\n') - 1 - >>> print(unwrapped) - Lorem ipsum dolor sit amet, consectetur adipiscing ... - Curabitur pretium tincidunt lacus. Nulla gravida orci ... - - """ - paragraphs = re.split(r'\n\n+', s) - cleaned = (para.replace('\n', ' ') for para in paragraphs) - return '\n'.join(cleaned) - - - - -class Splitter(object): - """object that will split a string with the given arguments for each call - - >>> s = Splitter(',') - >>> s('hello, world, this is your, master calling') - ['hello', ' world', ' this is your', ' master calling'] - """ - - def __init__(self, *args): - self.args = args - - def __call__(self, s): - return s.split(*self.args) - - -def indent(string, prefix=' ' * 4): - """ - >>> indent('foo') - ' foo' - """ - return prefix + string - - -class WordSet(tuple): - """ - Given an identifier, return the words that identifier represents, - whether in camel case, underscore-separated, etc. - - >>> WordSet.parse("camelCase") - ('camel', 'Case') - - >>> WordSet.parse("under_sep") - ('under', 'sep') - - Acronyms should be retained - - >>> WordSet.parse("firstSNL") - ('first', 'SNL') - - >>> WordSet.parse("you_and_I") - ('you', 'and', 'I') - - >>> WordSet.parse("A simple test") - ('A', 'simple', 'test') - - Multiple caps should not interfere with the first cap of another word. - - >>> WordSet.parse("myABCClass") - ('my', 'ABC', 'Class') - - The result is a WordSet, so you can get the form you need. - - >>> WordSet.parse("myABCClass").underscore_separated() - 'my_ABC_Class' - - >>> WordSet.parse('a-command').camel_case() - 'ACommand' - - >>> WordSet.parse('someIdentifier').lowered().space_separated() - 'some identifier' - - Slices of the result should return another WordSet. - - >>> WordSet.parse('taken-out-of-context')[1:].underscore_separated() - 'out_of_context' - - >>> WordSet.from_class_name(WordSet()).lowered().space_separated() - 'word set' - - >>> example = WordSet.parse('figured it out') - >>> example.headless_camel_case() - 'figuredItOut' - >>> example.dash_separated() - 'figured-it-out' - - """ - - _pattern = re.compile('([A-Z]?[a-z]+)|([A-Z]+(?![a-z]))') - - def capitalized(self): - return WordSet(word.capitalize() for word in self) - - def lowered(self): - return WordSet(word.lower() for word in self) - - def camel_case(self): - return ''.join(self.capitalized()) - - def headless_camel_case(self): - words = iter(self) - first = next(words).lower() - new_words = itertools.chain((first,), WordSet(words).camel_case()) - return ''.join(new_words) - - def underscore_separated(self): - return '_'.join(self) - - def dash_separated(self): - return '-'.join(self) - - def space_separated(self): - return ' '.join(self) - - def trim_right(self, item): - """ - Remove the item from the end of the set. - - >>> WordSet.parse('foo bar').trim_right('foo') - ('foo', 'bar') - >>> WordSet.parse('foo bar').trim_right('bar') - ('foo',) - >>> WordSet.parse('').trim_right('bar') - () - """ - return self[:-1] if self and self[-1] == item else self - - def trim_left(self, item): - """ - Remove the item from the beginning of the set. 
- - >>> WordSet.parse('foo bar').trim_left('foo') - ('bar',) - >>> WordSet.parse('foo bar').trim_left('bar') - ('foo', 'bar') - >>> WordSet.parse('').trim_left('bar') - () - """ - return self[1:] if self and self[0] == item else self - - def trim(self, item): - """ - >>> WordSet.parse('foo bar').trim('foo') - ('bar',) - """ - return self.trim_left(item).trim_right(item) - - def __getitem__(self, item): - result = super(WordSet, self).__getitem__(item) - if isinstance(item, slice): - result = WordSet(result) - return result - - @classmethod - def parse(cls, identifier): - matches = cls._pattern.finditer(identifier) - return WordSet(match.group(0) for match in matches) - - @classmethod - def from_class_name(cls, subject): - return cls.parse(subject.__class__.__name__) - - -# for backward compatibility -words = WordSet.parse - - -def simple_html_strip(s): - r""" - Remove HTML from the string `s`. - - >>> str(simple_html_strip('')) - '' - - >>> print(simple_html_strip('A stormy day in paradise')) - A stormy day in paradise - - >>> print(simple_html_strip('Somebody tell the truth.')) - Somebody tell the truth. - - >>> print(simple_html_strip('What about
    \nmultiple lines?')) - What about - multiple lines? - """ - html_stripper = re.compile('()|(<[^>]*>)|([^<]+)', re.DOTALL) - texts = (match.group(3) or '' for match in html_stripper.finditer(s)) - return ''.join(texts) - - -class SeparatedValues(str): - """ - A string separated by a separator. Overrides __iter__ for getting - the values. - - >>> list(SeparatedValues('a,b,c')) - ['a', 'b', 'c'] - - Whitespace is stripped and empty values are discarded. - - >>> list(SeparatedValues(' a, b , c, ')) - ['a', 'b', 'c'] - """ - - separator = ',' - - def __iter__(self): - parts = self.split(self.separator) - return filter(None, (part.strip() for part in parts)) - - -class Stripper: - r""" - Given a series of lines, find the common prefix and strip it from them. - - >>> lines = [ - ... 'abcdefg\n', - ... 'abc\n', - ... 'abcde\n', - ... ] - >>> res = Stripper.strip_prefix(lines) - >>> res.prefix - 'abc' - >>> list(res.lines) - ['defg\n', '\n', 'de\n'] - - If no prefix is common, nothing should be stripped. - - >>> lines = [ - ... 'abcd\n', - ... '1234\n', - ... ] - >>> res = Stripper.strip_prefix(lines) - >>> res.prefix = '' - >>> list(res.lines) - ['abcd\n', '1234\n'] - """ - - def __init__(self, prefix, lines): - self.prefix = prefix - self.lines = map(self, lines) - - @classmethod - def strip_prefix(cls, lines): - prefix_lines, lines = itertools.tee(lines) - prefix = functools.reduce(cls.common_prefix, prefix_lines) - return cls(prefix, lines) - - def __call__(self, line): - if not self.prefix: - return line - null, prefix, rest = line.partition(self.prefix) - return rest - - @staticmethod - def common_prefix(s1, s2): - """ - Return the common prefix of two lines. - """ - index = min(len(s1), len(s2)) - while s1[:index] != s2[:index]: - index -= 1 - return s1[:index] - - -def remove_prefix(text, prefix): - """ - Remove the prefix from the text if it exists. - - >>> remove_prefix('underwhelming performance', 'underwhelming ') - 'performance' - - >>> remove_prefix('something special', 'sample') - 'something special' - """ - null, prefix, rest = text.rpartition(prefix) - return rest - - -def remove_suffix(text, suffix): - """ - Remove the suffix from the text if it exists. - - >>> remove_suffix('name.git', '.git') - 'name' - - >>> remove_suffix('something special', 'sample') - 'something special' - """ - rest, suffix, null = text.partition(suffix) - return rest - - -def normalize_newlines(text): - r""" - Replace alternate newlines with the canonical newline. - - >>> normalize_newlines('Lorem Ipsum\u2029') - 'Lorem Ipsum\n' - >>> normalize_newlines('Lorem Ipsum\r\n') - 'Lorem Ipsum\n' - >>> normalize_newlines('Lorem Ipsum\x85') - 'Lorem Ipsum\n' - """ - newlines = ['\r\n', '\r', '\n', '\u0085', '\u2028', '\u2029'] - pattern = '|'.join(newlines) - return re.sub(pattern, '\n', text) - - -def _nonblank(str): - return str and not str.startswith('#') - - -@functools.singledispatch -def yield_lines(iterable): - r""" - Yield valid lines of a string or iterable. - - >>> list(yield_lines('')) - [] - >>> list(yield_lines(['foo', 'bar'])) - ['foo', 'bar'] - >>> list(yield_lines('foo\nbar')) - ['foo', 'bar'] - >>> list(yield_lines('\nfoo\n#bar\nbaz #comment')) - ['foo', 'baz #comment'] - >>> list(yield_lines(['foo\nbar', 'baz', 'bing\n\n\n'])) - ['foo', 'bar', 'baz', 'bing'] - """ - return itertools.chain.from_iterable(map(yield_lines, iterable)) - - -@yield_lines.register(str) -def _(text): - return filter(_nonblank, map(str.strip, text.splitlines())) - - -def drop_comment(line): - """ - Drop comments. 
- - >>> drop_comment('foo # bar') - 'foo' - - A hash without a space may be in a URL. - - >>> drop_comment('http://example.com/foo#bar') - 'http://example.com/foo#bar' - """ - return line.partition(' #')[0] - - -def join_continuation(lines): - r""" - Join lines continued by a trailing backslash. - - >>> list(join_continuation(['foo \\', 'bar', 'baz'])) - ['foobar', 'baz'] - >>> list(join_continuation(['foo \\', 'bar', 'baz'])) - ['foobar', 'baz'] - >>> list(join_continuation(['foo \\', 'bar \\', 'baz'])) - ['foobarbaz'] - - Not sure why, but... - The character preceeding the backslash is also elided. - - >>> list(join_continuation(['goo\\', 'dly'])) - ['godly'] - - A terrible idea, but... - If no line is available to continue, suppress the lines. - - >>> list(join_continuation(['foo', 'bar\\', 'baz\\'])) - ['foo'] - """ - lines = iter(lines) - for item in lines: - while item.endswith('\\'): - try: - item = item[:-2].strip() + next(lines) - except StopIteration: - return - yield item diff --git a/spaces/Riksarkivet/htr_demo/helper/text/text_overview.py b/spaces/Riksarkivet/htr_demo/helper/text/text_overview.py deleted file mode 100644 index 989c2341129c3ddeb26e41beee5cc3f2b1b42d19..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/helper/text/text_overview.py +++ /dev/null @@ -1,37 +0,0 @@ -from helper.text.markdown_reader import read_markdown - - -class TextOverview: - # HTRFLOW - htrflow_col1 = read_markdown("helper/text/overview/htrflow/htrflow_col1.md") - htrflow_col2 = read_markdown("helper/text/overview/htrflow/htrflow_col2.md") - htrflow_row1 = read_markdown("helper/text/overview/htrflow/htrflow_row1.md") - htrflow_tab1 = read_markdown("helper/text/overview/htrflow/htrflow_tab1.md") - htrflow_tab2 = read_markdown("helper/text/overview/htrflow/htrflow_tab2.md") - htrflow_tab3 = read_markdown("helper/text/overview/htrflow/htrflow_tab3.md") - htrflow_tab4 = read_markdown("helper/text/overview/htrflow/htrflow_tab4.md") - - # faq & discussion - text_faq = read_markdown("helper/text/overview/faq_discussion/faq.md") - text_discussion = read_markdown("helper/text/overview/faq_discussion/discussion.md") - - # Contributions - contributions = read_markdown("helper/text/overview/contributions/contributions.md") - huminfra_image = read_markdown("helper/text/overview/contributions/huminfra_image.md") - - # Changelog & Roadmap - changelog = read_markdown("helper/text/overview/changelog_roadmap/changelog.md") - old_changelog = read_markdown("helper/text/overview/changelog_roadmap/old_changelog.md") - - roadmap = read_markdown("helper/text/overview/changelog_roadmap/roadmap.md") - - # duplicate & api - duplicate = read_markdown("helper/text/overview/duplicate_api/duplicate.md") - api1 = read_markdown("helper/text/overview/duplicate_api/api1.md") - api_code1 = read_markdown("helper/text/overview/duplicate_api/api_code1.md") - api2 = read_markdown("helper/text/overview/duplicate_api/api2.md") - api_code2 = read_markdown("helper/text/overview/duplicate_api/api_code2.md") - - -if __name__ == "__main__": - pass diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/image/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/image/__init__.py deleted file mode 100644 index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
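Editor's note: the deleted __init__ below only re-exports mmcv's colorspace, geometric, I/O, and photometric helpers. A minimal usage sketch of that re-exported API follows, assuming the upstream mmcv package is installed; the file path 'test.jpg' and the chosen sizes are illustrative placeholders, not values taken from this repository.

# Sketch only (assumptions noted above): typical calls to the helpers listed in __all__ below.
import mmcv

img = mmcv.imread('test.jpg')            # load an image as an HxWx3 BGR ndarray
small = mmcv.imresize(img, (320, 240))   # resize to (width, height)
flipped = mmcv.imflip(small)             # horizontal flip by default
mmcv.imwrite(flipped, 'out.jpg')         # write the result back to disk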
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/necks/fpg.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/necks/fpg.py deleted file mode 100644 index c8e0d163ccf8cef6211530ba6c1b4d558ff6403f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/necks/fpg.py +++ /dev/null @@ -1,398 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init, constant_init, is_norm - -from ..builder import NECKS - - -class Transition(nn.Module): - """Base class for transition. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - - def forward(x): - pass - - -class UpInterpolationConv(Transition): - """A transition used for up-sampling. - - Up-sample the input by interpolation then refines the feature by - a convolution layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Up-sampling factor. Default: 2. - mode (int): Interpolation mode. Default: nearest. - align_corners (bool): Whether align corners when interpolation. - Default: None. - kernel_size (int): Kernel size for the conv. Default: 3. 
- """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=2, - mode='nearest', - align_corners=None, - kernel_size=3, - **kwargs): - super().__init__(in_channels, out_channels) - self.mode = mode - self.scale_factor = scale_factor - self.align_corners = align_corners - self.conv = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, x): - x = F.interpolate( - x, - scale_factor=self.scale_factor, - mode=self.mode, - align_corners=self.align_corners) - x = self.conv(x) - return x - - -class LastConv(Transition): - """A transition used for refining the output of the last stage. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_inputs (int): Number of inputs of the FPN features. - kernel_size (int): Kernel size for the conv. Default: 3. - """ - - def __init__(self, - in_channels, - out_channels, - num_inputs, - kernel_size=3, - **kwargs): - super().__init__(in_channels, out_channels) - self.num_inputs = num_inputs - self.conv_out = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, inputs): - assert len(inputs) == self.num_inputs - return self.conv_out(inputs[-1]) - - -@NECKS.register_module() -class FPG(nn.Module): - """FPG. - - Implementation of `Feature Pyramid Grids (FPG) - `_. - This implementation only gives the basic structure stated in the paper. - But users can implement different type of transitions to fully explore the - the potential power of the structure of FPG. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - paths (list[str]): Specify the path order of each stack level. - Each element in the list should be either 'bu' (bottom-up) or - 'td' (top-down). - inter_channels (int): Number of inter channels. - same_up_trans (dict): Transition that goes down at the same stage. - same_down_trans (dict): Transition that goes up at the same stage. - across_lateral_trans (dict): Across-pathway same-stage - across_down_trans (dict): Across-pathway bottom-up connection. - across_up_trans (dict): Across-pathway top-down connection. - across_skip_trans (dict): Across-pathway skip connection. - output_trans (dict): Transition that trans the output of the - last stage. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - norm_cfg (dict): Config dict for normalization layer. Default: None. 
- """ - - transition_types = { - 'conv': ConvModule, - 'interpolation_conv': UpInterpolationConv, - 'last_conv': LastConv, - } - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - paths, - inter_channels=None, - same_down_trans=None, - same_up_trans=dict( - type='conv', kernel_size=3, stride=2, padding=1), - across_lateral_trans=dict(type='conv', kernel_size=1), - across_down_trans=dict(type='conv', kernel_size=3), - across_up_trans=None, - across_skip_trans=dict(type='identity'), - output_trans=dict(type='last_conv', kernel_size=3), - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None, - skip_inds=None): - super(FPG, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - if inter_channels is None: - self.inter_channels = [out_channels for _ in range(num_outs)] - elif isinstance(inter_channels, int): - self.inter_channels = [inter_channels for _ in range(num_outs)] - else: - assert isinstance(inter_channels, list) - assert len(inter_channels) == num_outs - self.inter_channels = inter_channels - self.stack_times = stack_times - self.paths = paths - assert isinstance(paths, list) and len(paths) == stack_times - for d in paths: - assert d in ('bu', 'td') - - self.same_down_trans = same_down_trans - self.same_up_trans = same_up_trans - self.across_lateral_trans = across_lateral_trans - self.across_down_trans = across_down_trans - self.across_up_trans = across_up_trans - self.output_trans = output_trans - self.across_skip_trans = across_skip_trans - - self.with_bias = norm_cfg is None - # skip inds must be specified if across skip trans is not None - if self.across_skip_trans is not None: - skip_inds is not None - self.skip_inds = skip_inds - assert len(self.skip_inds[0]) <= self.stack_times - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # build lateral 1x1 convs to reduce channels - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = nn.Conv2d(self.in_channels[i], - self.inter_channels[i - self.start_level], 1) - self.lateral_convs.append(l_conv) - - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - if self.add_extra_convs: - fpn_idx = self.backbone_end_level - self.start_level + i - extra_conv = nn.Conv2d( - self.inter_channels[fpn_idx - 1], - self.inter_channels[fpn_idx], - 3, - stride=2, - padding=1) - self.extra_downsamples.append(extra_conv) - else: - self.extra_downsamples.append(nn.MaxPool2d(1, stride=2)) - - self.fpn_transitions = nn.ModuleList() # stack times - for s in range(self.stack_times): - stage_trans = nn.ModuleList() # num of feature levels - for i in range(self.num_outs): - # same, across_lateral, across_down, across_up - trans = nn.ModuleDict() - if s in self.skip_inds[i]: - stage_trans.append(trans) - continue - # build same-stage down trans (used in bottom-up paths) - if i == 0 or self.same_up_trans is None: - same_up_trans = None - else: - same_up_trans = self.build_trans( - 
self.same_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['same_up'] = same_up_trans - # build same-stage up trans (used in top-down paths) - if i == self.num_outs - 1 or self.same_down_trans is None: - same_down_trans = None - else: - same_down_trans = self.build_trans( - self.same_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['same_down'] = same_down_trans - # build across lateral trans - across_lateral_trans = self.build_trans( - self.across_lateral_trans, self.inter_channels[i], - self.inter_channels[i]) - trans['across_lateral'] = across_lateral_trans - # build across down trans - if i == self.num_outs - 1 or self.across_down_trans is None: - across_down_trans = None - else: - across_down_trans = self.build_trans( - self.across_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['across_down'] = across_down_trans - # build across up trans - if i == 0 or self.across_up_trans is None: - across_up_trans = None - else: - across_up_trans = self.build_trans( - self.across_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_up'] = across_up_trans - if self.across_skip_trans is None: - across_skip_trans = None - else: - across_skip_trans = self.build_trans( - self.across_skip_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_skip'] = across_skip_trans - # build across_skip trans - stage_trans.append(trans) - self.fpn_transitions.append(stage_trans) - - self.output_transition = nn.ModuleList() # output levels - for i in range(self.num_outs): - trans = self.build_trans( - self.output_trans, - self.inter_channels[i], - self.out_channels, - num_inputs=self.stack_times + 1) - self.output_transition.append(trans) - - self.relu = nn.ReLU(inplace=True) - - def build_trans(self, cfg, in_channels, out_channels, **extra_args): - cfg_ = cfg.copy() - trans_type = cfg_.pop('type') - trans_cls = self.transition_types[trans_type] - return trans_cls(in_channels, out_channels, **cfg_, **extra_args) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - caffe2_xavier_init(m) - elif is_norm(m): - constant_init(m, 1.0) - - def fuse(self, fuse_dict): - out = None - for item in fuse_dict.values(): - if item is not None: - if out is None: - out = item - else: - out = out + item - return out - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build all levels from original feature maps - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - outs = [feats] - - for i in range(self.stack_times): - current_outs = outs[-1] - next_outs = [] - direction = self.paths[i] - for j in range(self.num_outs): - if i in self.skip_inds[j]: - next_outs.append(outs[-1][j]) - continue - # feature level - if direction == 'td': - lvl = self.num_outs - j - 1 - else: - lvl = j - # get transitions - if direction == 'td': - same_trans = self.fpn_transitions[i][lvl]['same_down'] - else: - same_trans = self.fpn_transitions[i][lvl]['same_up'] - across_lateral_trans = self.fpn_transitions[i][lvl][ - 'across_lateral'] - across_down_trans = self.fpn_transitions[i][lvl]['across_down'] - across_up_trans = self.fpn_transitions[i][lvl]['across_up'] - across_skip_trans = self.fpn_transitions[i][lvl]['across_skip'] - # init output - to_fuse = dict( - same=None, lateral=None, across_up=None, across_down=None) - # same 
downsample/upsample - if same_trans is not None: - to_fuse['same'] = same_trans(next_outs[-1]) - # across lateral - if across_lateral_trans is not None: - to_fuse['lateral'] = across_lateral_trans( - current_outs[lvl]) - # across downsample - if lvl > 0 and across_up_trans is not None: - to_fuse['across_up'] = across_up_trans(current_outs[lvl - - 1]) - # across upsample - if (lvl < self.num_outs - 1 and across_down_trans is not None): - to_fuse['across_down'] = across_down_trans( - current_outs[lvl + 1]) - if across_skip_trans is not None: - to_fuse['across_skip'] = across_skip_trans(outs[0][lvl]) - x = self.fuse(to_fuse) - next_outs.append(x) - - if direction == 'td': - outs.append(next_outs[::-1]) - else: - outs.append(next_outs) - - # output trans - final_outs = [] - for i in range(self.num_outs): - lvl_out_list = [] - for s in range(len(outs)): - lvl_out_list.append(outs[s][i]) - lvl_out = self.output_transition[i](lvl_out_list) - final_outs.append(lvl_out) - - return final_outs diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/fast_rcnn.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/fast_rcnn.py deleted file mode 100644 index 3d6e242767b927ed37198b6bc7862abecef99a33..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/fast_rcnn.py +++ /dev/null @@ -1,52 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FastRCNN(TwoStageDetector): - """Implementation of `Fast R-CNN `_""" - - def __init__(self, - backbone, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(FastRCNN, self).__init__( - backbone=backbone, - neck=neck, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def forward_test(self, imgs, img_metas, proposals, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - proposals (List[List[Tensor]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. The Tensor should have a shape Px4, where - P is the number of proposals. 
- """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], proposals[0], - **kwargs) - else: - # TODO: support test-time augmentation - assert NotImplementedError diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/hrf.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/hrf.py deleted file mode 100644 index 242d790eb1b83e75cf6b7eaa7a35c674099311ad..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/hrf.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'HRFDataset' -data_root = 'data/HRF' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (2336, 3504) -crop_size = (256, 256) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/Rongjiehuang/GenerSpeech/egs/datasets/audio/lj/pre_align.py b/spaces/Rongjiehuang/GenerSpeech/egs/datasets/audio/lj/pre_align.py deleted file mode 100644 index 847b9f87b4e74cd634dd5bb2313f78afd5602ad7..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/egs/datasets/audio/lj/pre_align.py +++ /dev/null @@ -1,13 +0,0 @@ -from data_gen.tts.base_preprocess import BasePreprocessor - - -class LJPreAlign(BasePreprocessor): - def meta_data(self): - for l in open(f'{self.raw_data_dir}/metadata.csv').readlines(): - item_name, _, txt = l.strip().split("|") - wav_fn = f"{self.raw_data_dir}/wavs/{item_name}.wav" - yield item_name, wav_fn, txt, 'SPK1' - - -if __name__ == "__main__": - LJPreAlign().process() diff --git a/spaces/SShaik/SS-05-GR-NLP-Image2Text-Multilingual-OCR/app.py b/spaces/SShaik/SS-05-GR-NLP-Image2Text-Multilingual-OCR/app.py deleted file mode 100644 index 
83ab99d0715b5c0033e0f452087543187147eaa6..0000000000000000000000000000000000000000 --- a/spaces/SShaik/SS-05-GR-NLP-Image2Text-Multilingual-OCR/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import pandas as pd -import PIL -from PIL import Image -from PIL import ImageDraw -import gradio as gr -import torch -import easyocr - -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/english.png', 'english.png') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/chinese.jpg', 'chinese.jpg') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/japanese.jpg', 'japanese.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/mwQFd7G.jpeg', 'Hindi.jpeg') - -def draw_boxes(image, bounds, color='yellow', width=2): - draw = ImageDraw.Draw(image) - for bound in bounds: - p0, p1, p2, p3 = bound[0] - draw.line([*p0, *p1, *p2, *p3, *p0], fill=color, width=width) - return image - -def inference(img, lang): - reader = easyocr.Reader(lang) - bounds = reader.readtext(img.name) - im = PIL.Image.open(img.name) - draw_boxes(im, bounds) - im.save('result.jpg') - return ['result.jpg', pd.DataFrame(bounds).iloc[: , 1:]] - -title = 'Image To Optical Character Recognition' -description = 'Multilingual OCR which works conveniently on all devices in multiple languages.' -article = "

    " -examples = [['english.png',['en']],['chinese.jpg',['ch_sim', 'en']],['japanese.jpg',['ja', 'en']],['Hindi.jpeg',['hi', 'en']]] -css = ".output_image, .input_image {height: 40rem !important; width: 100% !important;}" -choices = [ - "ch_sim", - "ch_tra", - "de", - "en", - "es", - "ja", - "hi", - "ru" -] -gr.Interface( - inference, - [gr.inputs.Image(type='file', label='Input'),gr.inputs.CheckboxGroup(choices, type="value", default=['en'], label='language')], - [gr.outputs.Image(type='file', label='Output'), gr.outputs.Dataframe(headers=['text', 'confidence'])], - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/Sagar48/claudfuen-photorealistic-fuen-v1/app.py b/spaces/Sagar48/claudfuen-photorealistic-fuen-v1/app.py deleted file mode 100644 index 728c907083bddbf1502175934d7ed0bdebec37b4..0000000000000000000000000000000000000000 --- a/spaces/Sagar48/claudfuen-photorealistic-fuen-v1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/claudfuen/photorealistic-fuen-v1").launch("share=True") \ No newline at end of file diff --git a/spaces/Saturdays/spanish-quechua-detector/README.md b/spaces/Saturdays/spanish-quechua-detector/README.md deleted file mode 100644 index f816208ea66b7b14fe4657902c3f13fd4db6b723..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/spanish-quechua-detector/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Spanish Quechua Detector -emoji: 📉 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference \ No newline at end of file diff --git a/spaces/SharkGaming/VisualAI/app.py b/spaces/SharkGaming/VisualAI/app.py deleted file mode 100644 index e1e1025c8f06010197c50917ac9dd1ddeaf7e5aa..0000000000000000000000000000000000000000 --- a/spaces/SharkGaming/VisualAI/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CompVis/stable-diffusion-v1-4").launch() \ No newline at end of file diff --git a/spaces/Singularity666/VisionGPT-Automation2/app.py b/spaces/Singularity666/VisionGPT-Automation2/app.py deleted file mode 100644 index a87344dd9933396c119c92032f82afeb2d6c0355..0000000000000000000000000000000000000000 --- a/spaces/Singularity666/VisionGPT-Automation2/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import streamlit as st -import requests -from PIL import Image -from io import BytesIO - -API_KEY = "1143a102dbe21628248d4bb992b391a49dc058c584181ea72e17c2ccd49be9ca69ccf4a2b97fc82c89ff1029578abbea" -API_URL = "https://clipdrop-api.co/text-to-image/v1" - -def generate_image(prompt): - headers = {"x-api-key": API_KEY} - files = {"prompt": (None, prompt, "text/plain")} - - try: - response = requests.post(API_URL, files=files, headers=headers) - response.raise_for_status() - - # Get the generated image - image = Image.open(BytesIO(response.content)) - - return image - except requests.exceptions.RequestException as e: - st.error(f"Error occurred during image generation: {str(e)}") - return None - -def main(): - st.title("Text-to-Image Generator") - - # Text prompt input - prompt = st.text_input("Enter a text prompt") - - if prompt: - # Generate image when the "Generate Image" button is clicked - if st.button("Generate Image"): - st.write("Generating image...") - image = generate_image(prompt) - - if image: - # Display the 
generated image - st.image(image, caption="Generated Image", use_column_width=True) - -if __name__ == "__main__": - main() diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/best_state.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/best_state.py deleted file mode 100644 index f5ad551432ad5cb0f83278b5d2100f9aa287958b..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/best_state.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import defaultdict -import logging -import typing as tp - -import flashy -import torch - -from ..optim import ModuleDictEMA -from .utils import copy_state - - -logger = logging.getLogger(__name__) - - -class BestStateDictManager(flashy.state.StateDictSource): - """BestStateDictManager maintains a copy of best state_dict() for registered sources. - - BestStateDictManager has two main attributes: - states (dict): State dict of the registered StateDictSource. - param_ids (dict): Dict of parameter ids for registered states from ModuleDictEMA and other sources. - - When registering new sources, the BestStateDictManager will ensure two conflicting sources between - ModuleDictEMA and original modules are not both registered as it would otherwise create ambiguity about - what to consider for best state. - - Args: - device (torch.device or str): Device on which we keep the copy. - dtype (torch.dtype): Data type for the state parameters. - """ - def __init__(self, device: tp.Union[torch.device, str] = 'cpu', - dtype: tp.Optional[torch.dtype] = None): - self.device = device - self.states: dict = {} - self.param_ids: dict = defaultdict(dict) - self.dtype = dtype - - def _get_parameter_ids(self, state_dict): - return {id(p): name for name, p in state_dict.items() if isinstance(p, torch.Tensor)} - - def _validate_no_parameter_ids_overlap(self, name: str, param_ids: dict): - for registered_name, registered_param_ids in self.param_ids.items(): - if registered_name != name: - overlap = set.intersection(registered_param_ids.keys(), param_ids.keys()) - assert len(overlap) == 0, f"Found {len(overlap)} / {len(param_ids.keys())} overlapping parameters" - f" in {name} and already registered {registered_name}: {' '.join(overlap)}" - - def update(self, name: str, source: flashy.state.StateDictSource): - if name not in self.states: - raise ValueError(f"{name} missing from registered states.") - self.states[name] = copy_state(source.state_dict(), device=self.device, dtype=self.dtype) - - def register(self, name: str, source: flashy.state.StateDictSource): - if name in self.states: - raise ValueError(f"{name} already present in states.") - # Registering parameter ids for EMA and non-EMA states allows us to check that - # there is no overlap that would create ambiguity about how to handle the best state - param_ids = self._get_parameter_ids(source.state_dict()) - if isinstance(source, ModuleDictEMA): - logger.debug(f"Registering to best state: ModuleDictEMA '{name}' with {len(param_ids)} params") - self._validate_no_parameter_ids_overlap(name, param_ids) - self.param_ids[name] = param_ids - else: - logger.debug(f"Registering to best state: StateDictSource '{name}' with {len(param_ids)} params") - self._validate_no_parameter_ids_overlap('base', param_ids) - self.param_ids['base'].update(param_ids) - # Register state - 
self.states[name] = copy_state(source.state_dict(), device=self.device, dtype=self.dtype) - - def state_dict(self) -> flashy.state.StateDict: - return self.states - - def load_state_dict(self, state: flashy.state.StateDict): - for name, sub_state in state.items(): - for k, v in sub_state.items(): - self.states[name][k].copy_(v) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/external.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/external.py deleted file mode 100644 index 2d34f71ba8d290509329dd5fd008c56dc5d6a0d4..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/external.py +++ /dev/null @@ -1,127 +0,0 @@ -import logging -from typing import Optional, Sequence, Dict, Union -from pathlib import Path - -from clickhouse_connect.driver.exceptions import ProgrammingError - -logger = logging.getLogger(__name__) - - -class ExternalFile: - # pylint: disable=too-many-branches - def __init__(self, - file_path: Optional[str] = None, - file_name: Optional[str] = None, - data: Optional[bytes] = None, - fmt: Optional[str] = None, - types: Optional[Union[str, Sequence[str]]] = None, - structure: Optional[Union[str, Sequence[str]]] = None, - mime_type: Optional[str] = None): - if file_path: - if data: - raise ProgrammingError('Only data or file_path should be specified for external data, not both') - try: - with open(file_path, 'rb') as file: - self.data = file.read() - except OSError as ex: - raise ProgrammingError(f'Failed to open file {file_path} for external data') from ex - path_name = Path(file_path).name - path_base = path_name.rsplit('.', maxsplit=1)[0] - if not file_name: - self.name = path_base - self.file_name = path_name - else: - self.name = file_name.rsplit('.', maxsplit=1)[0] - self.file_name = file_name - if file_name != path_name and path_base != self.name: - logger.warning('External data name %s and file_path %s use different names', file_name, path_name) - elif data: - if not file_name: - raise ProgrammingError('Name is required for query external data') - self.data = data - self.name = file_name.rsplit('.', maxsplit=1)[0] - self.file_name = file_name - else: - raise ProgrammingError('Either data or file_path must be specified for external data') - if types: - if structure: - raise ProgrammingError('Only types or structure should be specified for external data, not both') - self.structure = None - if isinstance(types, str): - self.types = types - else: - self.types = ','.join(types) - elif structure: - self.types = None - if isinstance(structure, str): - self.structure = structure - else: - self.structure = ','.join(structure) - self.fmt = fmt - self.mime_type = mime_type or 'application/octet-stream' - - @property - def form_data(self) -> tuple: - return self.file_name, self.data, self.mime_type - - @property - def query_params(self) -> Dict[str, str]: - params = {} - for name, value in (('format', self.fmt), - ('structure', self.structure), - ('types', self.types)): - if value: - params[f'{self.name}_{name}'] = value - return params - - -class ExternalData: - def __init__(self, - file_path: Optional[str] = None, - file_name: Optional[str] = None, - data: Optional[bytes] = None, - fmt: Optional[str] = None, - types: Optional[Union[str, Sequence[str]]] = None, - structure: Optional[Union[str, Sequence[str]]] = None, - mime_type: Optional[str] = None): - self.files: list[ExternalFile] = [] - if file_path or data: - first_file 
= ExternalFile(file_path=file_path, - file_name=file_name, - data=data, - fmt=fmt, - types=types, - structure=structure, - mime_type=mime_type) - self.files.append(first_file) - - def add_file(self, - file_path: Optional[str] = None, - file_name: Optional[str] = None, - data: Optional[bytes] = None, - fmt: Optional[str] = None, - types: Optional[Union[str, Sequence[str]]] = None, - structure: Optional[Union[str, Sequence[str]]] = None, - mime_type: Optional[str] = None): - self.files.append(ExternalFile(file_path=file_path, - file_name=file_name, - data=data, - fmt=fmt, - types=types, - structure=structure, - mime_type=mime_type)) - - @property - def form_data(self) -> Dict[str, tuple]: - if not self.files: - raise ProgrammingError('No external files set for external data') - return {file.name: file.form_data for file in self.files} - - @property - def query_params(self) -> Dict[str, str]: - if not self.files: - raise ProgrammingError('No external files set for external data') - params = {} - for file in self.files: - params.update(file.query_params) - return params diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_additional_thread_info.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_additional_thread_info.py deleted file mode 100644 index ac866735b65a627b503174dc16cb577028a99637..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_additional_thread_info.py +++ /dev/null @@ -1,19 +0,0 @@ -# Defines which version of the PyDBAdditionalThreadInfo we'll use. -from _pydevd_bundle.pydevd_constants import ENV_FALSE_LOWER_VALUES, USE_CYTHON_FLAG, \ - ENV_TRUE_LOWER_VALUES - -if USE_CYTHON_FLAG in ENV_TRUE_LOWER_VALUES: - # We must import the cython version if forcing cython - from _pydevd_bundle.pydevd_cython_wrapper import PyDBAdditionalThreadInfo, set_additional_thread_info, _set_additional_thread_info_lock # @UnusedImport - -elif USE_CYTHON_FLAG in ENV_FALSE_LOWER_VALUES: - # Use the regular version if not forcing cython - from _pydevd_bundle.pydevd_additional_thread_info_regular import PyDBAdditionalThreadInfo, set_additional_thread_info, _set_additional_thread_info_lock # @UnusedImport @Reimport - -else: - # Regular: use fallback if not found (message is already given elsewhere). - try: - from _pydevd_bundle.pydevd_cython_wrapper import PyDBAdditionalThreadInfo, set_additional_thread_info, _set_additional_thread_info_lock - except ImportError: - from _pydevd_bundle.pydevd_additional_thread_info_regular import PyDBAdditionalThreadInfo, set_additional_thread_info, _set_additional_thread_info_lock # @UnusedImport - diff --git a/spaces/Superlang/ImageProcessor/annotator/openpose/hand.py b/spaces/Superlang/ImageProcessor/annotator/openpose/hand.py deleted file mode 100644 index 74767def506c72612954fe3b79056d17a83b1e16..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/openpose/hand.py +++ /dev/null @@ -1,94 +0,0 @@ -import cv2 -import json -import numpy as np -import math -import time -from scipy.ndimage.filters import gaussian_filter -import matplotlib.pyplot as plt -import matplotlib -import torch -from skimage.measure import label - -from .model import handpose_model -from . 
import util - -class Hand(object): - def __init__(self, model_path): - self.model = handpose_model() - # if torch.cuda.is_available(): - # self.model = self.model.cuda() - # print('cuda') - model_dict = util.transfer(self.model, torch.load(model_path)) - self.model.load_state_dict(model_dict) - self.model.eval() - - def __call__(self, oriImgRaw): - scale_search = [0.5, 1.0, 1.5, 2.0] - # scale_search = [0.5] - boxsize = 368 - stride = 8 - padValue = 128 - thre = 0.05 - multiplier = [x * boxsize for x in scale_search] - - wsize = 128 - heatmap_avg = np.zeros((wsize, wsize, 22)) - - Hr, Wr, Cr = oriImgRaw.shape - - oriImg = cv2.GaussianBlur(oriImgRaw, (0, 0), 0.8) - - for m in range(len(multiplier)): - scale = multiplier[m] - imageToTest = util.smart_resize(oriImg, (scale, scale)) - - imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue) - im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5 - im = np.ascontiguousarray(im) - - data = torch.from_numpy(im).float() - if torch.cuda.is_available(): - data = data.cuda() - - with torch.no_grad(): - data = data.to(self.cn_device) - output = self.model(data).cpu().numpy() - - # extract outputs, resize, and remove padding - heatmap = np.transpose(np.squeeze(output), (1, 2, 0)) # output 1 is heatmaps - heatmap = util.smart_resize_k(heatmap, fx=stride, fy=stride) - heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - heatmap = util.smart_resize(heatmap, (wsize, wsize)) - - heatmap_avg += heatmap / len(multiplier) - - all_peaks = [] - for part in range(21): - map_ori = heatmap_avg[:, :, part] - one_heatmap = gaussian_filter(map_ori, sigma=3) - binary = np.ascontiguousarray(one_heatmap > thre, dtype=np.uint8) - - if np.sum(binary) == 0: - all_peaks.append([0, 0]) - continue - label_img, label_numbers = label(binary, return_num=True, connectivity=binary.ndim) - max_index = np.argmax([np.sum(map_ori[label_img == i]) for i in range(1, label_numbers + 1)]) + 1 - label_img[label_img != max_index] = 0 - map_ori[label_img == 0] = 0 - - y, x = util.npmax(map_ori) - y = int(float(y) * float(Hr) / float(wsize)) - x = int(float(x) * float(Wr) / float(wsize)) - all_peaks.append([x, y]) - return np.array(all_peaks) - -if __name__ == "__main__": - hand_estimation = Hand('../model/hand_pose_model.pth') - - # test_image = '../images/hand.jpg' - test_image = '../images/hand.jpg' - oriImg = cv2.imread(test_image) # B,G,R order - peaks = hand_estimation(oriImg) - canvas = util.draw_handpose(oriImg, peaks, True) - cv2.imshow('', canvas) - cv2.waitKey(0) \ No newline at end of file diff --git a/spaces/Supsies/CodingandMore/info.md b/spaces/Supsies/CodingandMore/info.md deleted file mode 100644 index 1ef8d3495bd866e964dd6f9fb70bda6c20e63526..0000000000000000000000000000000000000000 --- a/spaces/Supsies/CodingandMore/info.md +++ /dev/null @@ -1,16 +0,0 @@ -# 😌 [Club Recommender] - -### 🧐 Problem Statement and Research Summary -[add info about your problem statement and your research here!] - -### 🎣 Data Collection Plan -[Edit info.md - add info about what data you collected and why here!] - -### 💥 Ethical Considerations (Data Privacy and Bias) -* Data privacy: [Edit info.md - add info about you considered users' privacy here!] -* Bias: [Edit info.md - add info about you considered bias here!] - -### 👻 Our Team -[Edit info.md - add info about your team members here!] 
- -![aiEDU logo](https://images.squarespace-cdn.com/content/v1/5e4efdef6d10420691f02bc1/5db5a8a3-1761-4fce-a096-bd5f2515162f/aiEDU+_black+logo+stacked.png?format=100w) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/_utils.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/_utils.py deleted file mode 100644 index f14ff32096eab20a8cc1a5d3c2b8c8cfe1fcb2d2..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tenacity/_utils.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import sys -import typing -from datetime import timedelta - - -# sys.maxsize: -# An integer giving the maximum value a variable of type Py_ssize_t can take. -MAX_WAIT = sys.maxsize / 2 - - -def find_ordinal(pos_num: int) -> str: - # See: https://en.wikipedia.org/wiki/English_numerals#Ordinal_numbers - if pos_num == 0: - return "th" - elif pos_num == 1: - return "st" - elif pos_num == 2: - return "nd" - elif pos_num == 3: - return "rd" - elif 4 <= pos_num <= 20: - return "th" - else: - return find_ordinal(pos_num % 10) - - -def to_ordinal(pos_num: int) -> str: - return f"{pos_num}{find_ordinal(pos_num)}" - - -def get_callback_name(cb: typing.Callable[..., typing.Any]) -> str: - """Get a callback fully-qualified name. - - If no name can be produced ``repr(cb)`` is called and returned. - """ - segments = [] - try: - segments.append(cb.__qualname__) - except AttributeError: - try: - segments.append(cb.__name__) - except AttributeError: - pass - if not segments: - return repr(cb) - else: - try: - # When running under sphinx it appears this can be none? 
- if cb.__module__: - segments.insert(0, cb.__module__) - except AttributeError: - pass - return ".".join(segments) - - -time_unit_type = typing.Union[int, float, timedelta] - - -def to_seconds(time_unit: time_unit_type) -> float: - return float(time_unit.total_seconds() if isinstance(time_unit, timedelta) else time_unit) diff --git a/spaces/Tej3/DepthEstimation/README.md b/spaces/Tej3/DepthEstimation/README.md deleted file mode 100644 index db3f3c62cc94bfc8e133a097ee9424efac1a0802..0000000000000000000000000000000000000000 --- a/spaces/Tej3/DepthEstimation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DepthEstimation -emoji: 📈 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TencentARC/T2I-Adapter-SDXL-Sketch/style.css b/spaces/TencentARC/T2I-Adapter-SDXL-Sketch/style.css deleted file mode 100644 index 4669d4937bf9613ac0853dc9c6df76819ed16f33..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/T2I-Adapter-SDXL-Sketch/style.css +++ /dev/null @@ -1,16 +0,0 @@ - -#component-0{ - max-width: 900px; - margin: 0 auto; -} - -#description, h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: #fff; - background: #1565c0; - border-radius: 100vh; -} diff --git a/spaces/TheStinger/Ilaria_RVC/vc_infer_pipeline.py b/spaces/TheStinger/Ilaria_RVC/vc_infer_pipeline.py deleted file mode 100644 index a0b50d4c703b7638d7c951c9d820a1e59c275fc3..0000000000000000000000000000000000000000 --- a/spaces/TheStinger/Ilaria_RVC/vc_infer_pipeline.py +++ /dev/null @@ -1,646 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import torchcrepe # Fork feature. Use the crepe f0 algorithm. 
New dependency (pip install torchcrepe) -from torch import Tensor -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device) - def get_optimal_torch_device(self, index: int = 0) -> torch.device: - # Get cuda device - if torch.cuda.is_available(): - return torch.device( - f"cuda:{index % torch.cuda.device_count()}" - ) # Very fast - elif torch.backends.mps.is_available(): - return torch.device("mps") - # Insert an else here to grab "xla" devices if available. TO DO later. Requires the torch_xla.core.xla_model library - # Else wise return the "cpu" as a torch device, - return torch.device("cpu") - - # Fork Feature: Compute f0 with the crepe method - def get_f0_crepe_computation( - self, - x, - f0_min, - f0_max, - p_len, - hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time. - model="full", # Either use crepe-tiny "tiny" or crepe "full". Default is full - ): - x = x.astype( - np.float32 - ) # fixes the F.conv2D exception. We needed to convert double to float. 
- x /= np.quantile(np.abs(x), 0.999) - torch_device = self.get_optimal_torch_device() - audio = torch.from_numpy(x).to(torch_device, copy=True) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - print("Initiating prediction with a crepe_hop_length of: " + str(hop_length)) - pitch: Tensor = torchcrepe.predict( - audio, - self.sr, - hop_length, - f0_min, - f0_max, - model, - batch_size=hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // hop_length - # Resize the pitch for final f0 - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - return f0 # Resized f0 - - def get_f0_official_crepe_computation( - self, - x, - f0_min, - f0_max, - model="full", - ): - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - # Fork Feature: Compute pYIN f0 method - def get_f0_pyin_computation(self, x, f0_min, f0_max): - y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True) - f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max) - f0 = f0[1:] # Get rid of extra first frame - return f0 - - # Fork Feature: Acquire median hybrid f0 estimation calculation - def get_f0_hybrid_computation( - self, - methods_str, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - print("Calculating f0 pitch estimations for methods: %s" % str(methods)) - x = x.astype(np.float32) - x /= np.quantile(np.abs(x), 0.999) - # Get f0 calculations for all methods specified - for method in methods: - f0 = None - if method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - f0 = f0[1:] # Get rid of extra first frame - elif method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - f0 = f0[1:] # Get rid of extra first frame - elif method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif method == "harvest": - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] # Get rid of 
first frame. - elif method == "dio": # Potentially buggy? - f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] - # elif method == "pyin": Not Working just yet - # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max) - # Push method to the stack - f0_computation_stack.append(f0) - - for fc in f0_computation_stack: - print(len(fc)) - - print("Calculating hybrid median f0 from the stack of: %s" % str(methods)) - f0_median_hybrid = None - if len(f0_computation_stack) == 1: - f0_median_hybrid = f0_computation_stack[0] - else: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) - return f0_median_hybrid - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "dio": # Potentially Buggy? 
- f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - elif f0_method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - elif f0_method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif f0_method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - elif "hybrid" in f0_method: - # Perform hybrid median pitch estimation - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = self.get_f0_hybrid_computation( - f0_method, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ) - - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - 
torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in 
opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/commons.py b/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, 
idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/WZT/DigiProj/model.py 
b/spaces/WZT/DigiProj/model.py deleted file mode 100644 index 0e0fdea92fefc34bdc70f33bd2bfd464338e2365..0000000000000000000000000000000000000000 --- a/spaces/WZT/DigiProj/model.py +++ /dev/null @@ -1,757 +0,0 @@ -import torchvision -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d -n_latent = 11 - - -channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256, - 128: 128, - 256: 64, - 512: 32, - 1024: 16, -} - -class LambdaLR(): - def __init__(self, n_epochs, offset, decay_start_epoch): - assert ((n_epochs - decay_start_epoch) > 0), "Decay must start before the training session ends!" - self.n_epochs = n_epochs - self.offset = offset - self.decay_start_epoch = decay_start_epoch - - def step(self, epoch): - return 1.0 - max(0, epoch + self.offset - self.decay_start_epoch)/(self.n_epochs - self.decay_start_epoch) - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - 
self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - bias = self.bias*self.lr_mul if self.bias is not None else None - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, bias) - - else: - out = F.linear( - input, self.weight * self.scale, bias=bias - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - use_style=True, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - self.use_style = use_style - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - if use_style: - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - else: - self.modulation = nn.Parameter(torch.Tensor(1, 1, in_channel, 1, 1).fill_(1)) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - if self.use_style: - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - else: - weight = self.scale * self.weight.expand(batch,-1,-1,-1,-1) * self.modulation - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, 
width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, style_dim): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, style_dim)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, n_latent) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - use_style=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - self.use_style = use_style - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - use_style=use_style, - upsample=upsample, - downsample=downsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - #if use_style: - # self.noise = NoiseInjection() - #else: - # self.noise = None - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style=None, noise=None): - out = self.conv(input, style) - #if self.use_style: - # out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class StyledResBlock(nn.Module): - def __init__(self, in_channel, style_dim, blur_kernel=[1, 3, 3, 1], demodulate=True): - super().__init__() - - self.conv1 = StyledConv(in_channel, in_channel, 3, style_dim, upsample=False, blur_kernel=blur_kernel, demodulate=demodulate) - self.conv2 = StyledConv(in_channel, in_channel, 3, style_dim, upsample=False, blur_kernel=blur_kernel, demodulate=demodulate) - - def forward(self, input, style): - out = self.conv1(input, style) - out = self.conv2(out, style) - out = (out + input) / math.sqrt(2) - - return out - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - num_down, - latent_dim, - n_mlp, - n_res, - channel_multiplier=1, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - self.size = size - - style_dim = 512 - - mapping = [EqualLinear(latent_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu')] - for i in range(n_mlp-1): - mapping.append(EqualLinear(style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu')) - - self.mapping = 
nn.Sequential(*mapping) - - self.encoder = Encoder(size, latent_dim, num_down, n_res, channel_multiplier) - - self.log_size = int(math.log(size, 2)) #7 - in_log_size = self.log_size - num_down #7-2 or 7-3 - in_size = 2 ** in_log_size - - in_channel = channels[in_size] - self.adain_bottleneck = nn.ModuleList() - for i in range(n_res): - self.adain_bottleneck.append(StyledResBlock(in_channel, style_dim)) - - self.conv1 = StyledConv(in_channel, in_channel, 3, style_dim, blur_kernel=blur_kernel) - self.to_rgb1 = ToRGB(in_channel, style_dim, upsample=False) - - self.num_layers = (self.log_size - in_log_size) * 2 + 1 #7 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - #self.noises = nn.Module() - - - #for layer_idx in range(self.num_layers): - # res = (layer_idx + (in_log_size*2+1)) // 2 #2,3,3,5 ... -> 4,5,5,6 ... - # shape = [1, 1, 2 ** res, 2 ** res] - # self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(in_log_size+1, self.log_size + 1): - out_channel = channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - def style_encode(self, input): - return self.encoder(input)[1] - - def encode(self, input): - return self.encoder(input) - - def forward(self, input, z=None): - content, style = self.encode(input) - if z is None: - out = self.decode(content, style) - else: - out = self.decode(content, z) - - return out, content, style - - def decode(self, input, styles, use_mapping=True): - if use_mapping: - styles = self.mapping(styles) - #styles = styles.repeat(1, n_latent).view(styles.size(0), n_latent, -1) - out = input - i = 0 - for conv in self.adain_bottleneck: - out = conv(out, styles) - i += 1 - - out = self.conv1(out, styles, noise=None) - skip = self.to_rgb1(out, styles) - i += 2 - - for conv1, conv2, to_rgb in zip( - self.convs[::2], self.convs[1::2], self.to_rgbs - ): - out = conv1(out, styles, noise=None) - out = conv2(out, styles, noise=None) - skip = to_rgb(out, styles, skip) - - i += 3 - - image = skip - return image - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - -class InResBlock(nn.Module): - def __init__(self, in_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = StyledConv(in_channel, in_channel, 3, None, blur_kernel=blur_kernel, demodulate=True, use_style=False) - self.conv2 = StyledConv(in_channel, in_channel, 3, None, blur_kernel=blur_kernel, demodulate=True, use_style=False) - - def forward(self, input): - out = self.conv1(input, None) - out = 
self.conv2(out, None) - out = (out + input) / math.sqrt(2) - - return out - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1], downsample=True): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=downsample) - - if downsample or in_channel != out_channel: - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=downsample, activate=False, bias=False - ) - else: - self.skip = None - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - if self.skip is None: - skip = input - else: - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - self.size = size - l_branch = self.make_net_(32) - l_branch += [ConvLayer(channels[32], 1, 1, activate=False)] - self.l_branch = nn.Sequential(*l_branch) - - - g_branch = self.make_net_(8) - self.g_branch = nn.Sequential(*g_branch) - self.g_adv = ConvLayer(channels[8], 1, 1, activate=False) - - self.g_std = nn.Sequential(ConvLayer(channels[8], channels[4], 3, downsample=True), - nn.Flatten(), - EqualLinear(channels[4] * 4 * 4, 128, activation='fused_lrelu'), - ) - self.g_final = EqualLinear(128, 1, activation=False) - - - def make_net_(self, out_size): - size = self.size - convs = [ConvLayer(3, channels[size], 1)] - log_size = int(math.log(size, 2)) - out_log_size = int(math.log(out_size, 2)) - in_channel = channels[size] - - for i in range(log_size, out_log_size, -1): - out_channel = channels[2 ** (i - 1)] - convs.append(ResBlock(in_channel, out_channel)) - in_channel = out_channel - - return convs - - def forward(self, x): - l_adv = self.l_branch(x) - - g_act = self.g_branch(x) - g_adv = self.g_adv(g_act) - - output = self.g_std(g_act) - g_stddev = torch.sqrt(output.var(0, keepdim=True, unbiased=False) + 1e-8).repeat(x.size(0),1) - g_std = self.g_final(g_stddev) - return [l_adv, g_adv, g_std] - - - -class Encoder(nn.Module): - def __init__(self, size, latent_dim, num_down, n_res, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - stem = [ConvLayer(3, channels[size], 1)] - log_size = int(math.log(size, 2)) - in_channel = channels[size] - - for i in range(log_size, log_size-num_down, -1): - out_channel = channels[2 ** (i - 1)] - stem.append(ResBlock(in_channel, out_channel, downsample=True)) - in_channel = out_channel - stem += [ResBlock(in_channel, in_channel, downsample=False) for i in range(n_res)] - self.stem = nn.Sequential(*stem) - - self.content = nn.Sequential( - ConvLayer(in_channel, in_channel, 1), - ConvLayer(in_channel, in_channel, 1) - ) - style = [] - for i in range(log_size-num_down, 2, -1): - out_channel = channels[2 ** (i - 1)] - style.append(ConvLayer(in_channel, out_channel, 3, downsample=True)) - in_channel = out_channel - style += [ - nn.Flatten(), - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], latent_dim), - ] - self.style = nn.Sequential(*style) - - - def forward(self, input): - act = self.stem(input) - content = self.content(act) - style = self.style(act) - return content, style - -class StyleEncoder(nn.Module): - def __init__(self, size, style_dim, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - 
num_down = 6 - - for i in range(log_size, log_size-num_down, -1): - w = 2 ** (i - 1) - out_channel = channels[w] - convs.append(ConvLayer(in_channel, out_channel, 3, downsample=True)) - in_channel = out_channel - - convs += [ - nn.Flatten(), - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), EqualLinear(channels[4], style_dim), - ] - self.convs = nn.Sequential(*convs) - - def forward(self, input): - style = self.convs(input) - return style.view(input.size(0), -1) - -class LatDiscriminator(nn.Module): - def __init__(self, style_dim): - super().__init__() - - fc = [EqualLinear(style_dim, 256, activation='fused_lrelu')] - for i in range(3): - fc += [EqualLinear(256, 256, activation='fused_lrelu')] - fc += [FCMinibatchStd(256, 256)] - fc += [EqualLinear(256, 1)] - self.fc = nn.Sequential(*fc) - - def forward(self, input): - return [self.fc(input), ] - -class FCMinibatchStd(nn.Module): - def __init__(self, in_channel, out_channel): - super().__init__() - self.fc = EqualLinear(in_channel+1, out_channel, activation='fused_lrelu') - - def forward(self, out): - stddev = torch.sqrt(out.var(0, unbiased=False) + 1e-8).mean().view(1,1).repeat(out.size(0), 1) - out = torch.cat([out, stddev], 1) - out = self.fc(out) - return out diff --git a/spaces/Widium/Style-Recreation/functions/init.py b/spaces/Widium/Style-Recreation/functions/init.py deleted file mode 100644 index b1b8e72358baea516a33fe3e61930c6b14a3199a..0000000000000000000000000000000000000000 --- a/spaces/Widium/Style-Recreation/functions/init.py +++ /dev/null @@ -1,69 +0,0 @@ -# *************************************************************************** # -# # -# init.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2023/05/05 16:08:50 by Widium # -# Updated: 2023/05/05 16:08:50 by Widium # -# # -# **************************************************************************** # - -from typing import List -from keras import Model -from tensorflow import Tensor -from tensorflow import Variable - -from .processing import create_batch_image -from .image import clip_pixel -from .image import create_noisy_imag -from .extract import get_features_map -from .extract import extract_style - -# ===================================================== # - -def init_generated_img(style_img : Tensor)->Variable: - """ - Initialize the generated image with noise, clipped pixel values, and a batch dimension. - - 1. Add noise to the style image. - 2. Clip the pixel values of the noisy image. - 3. Add a batch dimension to the clipped image. - 4. Convert the batch image to a TensorFlow Variable. - - Args: - style_img (Tensor): The input style image as a tensor. - - Returns: - Variable: The initialized generated image as a Tensorflow Variable. - """ - generated_img = create_noisy_imag(style_img) - generated_img = clip_pixel(generated_img) - generated_img = create_batch_image(generated_img) - generated_img = Variable(generated_img) - - return (generated_img) - -# ===================================================== # - -def init_style_target(model : Model, style_img : Tensor)->List[Tensor]: - """ - Initialize the style target by extracting the style features from the given input image. - - 1. Add a batch dimension to the style image. - 2. Obtain the features map from the pre-trained model for the batch image. - 3. Extract the style features from the features map. - - Args: - model (Model): The pre-trained VGG19 model. - style_img (Tensor): The input style image as a tensor. 
- - Returns: - List[Tensor]: A list of tensors representing the style target. - """ - style_img = create_batch_image(style_img) - features_map = get_features_map(model, style_img) - style_target = extract_style(features_map) - - return (style_target) \ No newline at end of file diff --git a/spaces/Wrathless/pyannote-voice-activity-detection/Dockerfile b/spaces/Wrathless/pyannote-voice-activity-detection/Dockerfile deleted file mode 100644 index 94ee76a4f45af463ab7f945633c9258172f9cc80..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/pyannote-voice-activity-detection/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/autotrain-advanced:latest -CMD autotrain app --port 7860 diff --git a/spaces/XzJosh/JM-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/JM-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. - -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. 
-- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/text/__init__.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/Img_to_H5.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/Img_to_H5.py deleted file mode 100644 index d7c565599f7178f8ac0d3e26da7cabee8444e1ed..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/Img_to_H5.py +++ /dev/null @@ -1,50 +0,0 @@ -import glob - -import h5py -from PIL import Image -from torchvision.transforms import RandomCrop -from torchvision.transforms.functional import to_tensor -from tqdm import tqdm - -from Dataloader import ImageAugment - -patch_size = 128 -shrink_size = 2 -noise_level = 1 -patches_per_img = 20 -images = glob.glob("dataset/train/*") - -database = h5py.File("train_images.hdf5", 'w') - -dat_group = database.create_group("shrink_2_noise_level_1_downsample_random_rgb") -# del database['shrink_2_noise_level_1_downsample_random'] -storage_lr = dat_group.create_dataset("train_lr", shape=(patches_per_img * len(images), 3, - patch_size // shrink_size, - patch_size // shrink_size), - dtype='float32', - # compression='lzf', - ) -storage_hr = dat_group.create_dataset("train_hr", shape=(patches_per_img * len(images), 3, - patch_size, patch_size), - # compression='lzf', - dtype='float32') - -random_cropper = 
RandomCrop(size=patch_size) -img_augmenter = ImageAugment(shrink_size, noise_level, down_sample_method=None) - - -def get_img_patches(img_pil): - img_patch = random_cropper(img_pil) - lr_hr_patches = img_augmenter.process(img_patch) - return lr_hr_patches - - -counter = 0 -for img in tqdm(images): - img_pil = Image.open(img).convert("RGB") - for i in range(patches_per_img): - patch = get_img_patches(img_pil) - storage_lr[counter] = to_tensor(patch[0].convert("RGB")).numpy() - storage_hr[counter] = to_tensor(patch[1].convert("RGB")).numpy() - counter += 1 -database.close() diff --git a/spaces/YeYeYes/QQsign/README.md b/spaces/YeYeYes/QQsign/README.md deleted file mode 100644 index bd56881a2a7709591343e2f15af9a6a8133e115b..0000000000000000000000000000000000000000 --- a/spaces/YeYeYes/QQsign/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: QQsign -emoji: 🦀 -colorFrom: blue -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Yehor/wav2vec2-uk-demo/README.md b/spaces/Yehor/wav2vec2-uk-demo/README.md deleted file mode 100644 index bf93b246a071927e12494f59e683c3fbfa78b850..0000000000000000000000000000000000000000 --- a/spaces/Yehor/wav2vec2-uk-demo/README.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: Wav2vec2 Ukrainian with Timestamps -emoji: 🇺🇦 -colorFrom: blue -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - - -# Demo of Ukrainian wav2vec2 model - -- The base model is hosted here: https://huggingface.co/Yehor/wav2vec2-xls-r-1b-uk-with-lm -- The model with better News LM: https://huggingface.co/Yehor/wav2vec2-xls-r-1b-uk-with-news-lm - -Follow our community in Telegram: https://t.me/speech_recognition_uk - ---- - -Create a virtualenv: - -```bash -pipenv install -pipenv shell -``` - -Install deps: - -```bash -pip install https://github.com/huggingface/transformers/archive/refs/tags/v4.16.2.zip -pip install https://github.com/kpu/kenlm/archive/master.zip - -pip install torch==1.9.1 torchaudio==0.9.1 pyctcdecode==0.3.0 -``` - -Run inference: - -```bash -python inference.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --path_files short_1.wav - -# with chunking -python inference.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --path_files short_1.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 -python inference.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --path_files long_1.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 - -# with chunking on GPU -python inference_gpu.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --path_files short_1.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 -python inference_gpu.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --path_files long_1.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 - -python inference.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-news-lm --path_files mer_lviv_interview.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 -python inference_gpu.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-news-lm --path_files mer_lviv_interview.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 - -python inference.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --path_files tsn.wav,tsn_2.wav --chunk_length_s 10 --stride_length_s_l 4 --stride_length_s_r 2 -``` - -NOTE: Do the inference process for long files with chunking. 
- ---- - -short_1.wav: - -``` -пана сполучені штати над важливий стратегічний партнер однак є різницяштати мають спеціальний закон який передбачає якщо китай напади на тайвань американський військові мають його захищати у гри -``` - -short_1.wav (with better News LM): - -``` -аня сполучені штати над важливий стратегічний партнер однак є різниця штати мають спеціальний закон який передбачає якщо китай нападе на тайвань американський військові мають його захищати угери -``` - -long_1.wav: - -``` -серце чи дивовижни порятунок мільйони людей фактично в прямому ефірі вже три доби спостерігають за спробамиамероканських рятувальникив дісттисколодя за пятирічне хлопя досі не зрозуміло чи вдастядістати його з тридцяти метрового провал живим про надзвичайно складну операцію що триває в цю мить я на есарчуккулояз який провалився пятирічнийраян ледь помітна діра в землі менше тридцяти сантиметріву діаметрі але в глиб вона тягнеться на тридцять два метро батьки шукали сина кілька один перед тим як зрозуміле він під землею коли він зник я молилися богупросила аби алагзбиріг мосина і його дістали з колодязь живим господихай йому та менше болить в тій ділі я так сподіваючиь що у рятувальники все вийде його неможливо витягти просто так розуміють рятувальники занадто вуськоа розширяти діру не можна вона просто завалитья тому вони три до бою розкопують амундалік і поки працює техніки -``` - -long_1.wav (with News LM): - -``` -серце чи дивовижних порятунок мільйони людей фактично в прямому ефірі вже три доби спостерігають за спробами мероканських рятувальників дісттисколодя за пятирічне хлопя досі незрозуміло чи вдастся дістати його з тридцятиметрового провалля живим про надзвичайно складну операцію що триває в цю мить я не слісарчукодязв який провалився пятирічний раян ледь помітна діра в землі менше тридцяти сантиметрів у діаметрі але в глиб вона тягнеться на тридцять два метри батьки шукали сина кілька годин перед тим як зрозуміли він під землею а коли він зник я молилася богу просила аби алагбирігмо сина і його дістали сколотізяживим господи хай йому там менше болить в тій дірі і так сподіваючись що у рятувальники все вийде його неможливо витягти просто так розуміють рятувальників занадто вузько а розширяти діру не можна вона просто завалиться тому вони три добою розкопують амну далікі поки працює технік -``` - -long_1.wav (with better News LM): - -``` -серце чи дивовижний порятунок мільйони людей фактично в прямому ефірі вже три доби спостерігають за спробами мароканських рятувальників дісттисколодя за пятирічне хлопя досі незрозуміло чи вдастся дістати його з тридцятиметрового провалля живим про надзвичайно складну операцію що триває в цю мить я не слесарчукодязв який провалився пятирічний раян ледь помітна діра в землі менше тридцяти сантиметрів у діаметрі але в глиб вона тягнеться на тридцять два метри батьки шукали сина кілька годин перед тим як зрозуміли він під землею а коли він зник я молилася богу просила аби алаксбирігмо сина і його дістали сководізяживим господи хай йому там менше болить в тій тіріятак сподіваючись що у рятувальники все вийде його неможливо витягти просто так розуміють рятувальників занадто вузько а розширяти діру не можна вона просто завалиться тому вони три добою розкопують амнокдалік і поки працює технік -``` - -tsn.wav (with better News LM): - -``` -інформаційний вечір на один плюс один в студії та сем лідія таран а платонова королева з чого починалося і чим ще зовсім не закінчиться правління єлизавети другої сімдесят років на троні окмовівіполовні рішення щодо спадкоємців 
корони більше зараз усе -``` - -tsn_2.wav (with better News LM): - -``` -до осло зіли під час стрілянини на південмаші в жану влучили три кулі одна з них розірвала кишківник та хребет животі катастрофу розуміти три літри крові яку неможливо повернути тому що вона забруднена кишківник кинувши доз розірваний освітунавіть не котичекапо ужилася дуже активна проти дістала кулі просто на варті контрактниця відповідала за оповіщення частини про надзвичайні ситуації він був в двох кроках від вельми мовчки розстріляли що на посту була радіостанція і я запраз та могла доповісти частину військові ну я не змогла доповісти зробив постріли я вже не змогла підвестися стрілянина на південмаші сталася вночі двадцять сьомого січня пятеро людей загинули серед них одна цивільна ще пятро поранені підозрюваний двадцятирічний рокових артемірявчух у сізо спочатку провину визнавав тепер і знову м адвокатами від показів відмовився ані конфліктних ситуації ані бо чого підозрілого жанна каже до стрілянини із підозрюваним навіть не розмовляла а тільки за памятала і оце а так я навіть його імені й фамілії не знав я дуже рідко заступала ці кару там де срочного служба я не можу знати чому він так зробив навпроти жани реанімації контрактник ігор самененко під час стрлянини відпочивав за графіком повороже забігаю тикрчишотамячу ксюш сумною наше -``` - -mer_lviv_interview.wav: - -``` -ми отримали нового керівника територіальної оборони області а це є фахова людина і з дурне міста ми переформотуваи наш управлінні звичайних ситуаії сьогодні воно має функцію назвичйнх ситуацівільного захист територіальної оборони і в нас достатньо якісна співпрая якісна координації тому що заначаміста допомогти державному фактично кірунку щоб він якісно за працював стосовно ремонти стосовно обладнання і стосовно рекрутенгу людей які зараз голошулися що би бути по контрактах в територіальної обороні на території міста любова на стврюються двбатаьони і наразі все йде чітко за тимланам які визначено державою натомість місто взяла на себе додатково функцію ми очали великий навчальний процес для працівників комунальних підприємсособливо мов для про стратегічне підприємства працівники виконавчий хоронні влади банально відновити навики володіння зброю в наших працівників люсважливи чиникце уміння надавати першу до медичну допомогу і за така велика хвиля яка після закінчення карантивно де включати і старшокласників ми на при великий жаль багато в остнніх років таке враження що перебували лютриному сні розуміючи що росія поруч і за ви буде центр можливо агресії нам треба вишколювати себе і бути готовими до того щоб захищати рідну державу якщо кожен українець буде добре володіти навиками стрилецької зброї буде вміти надавати мдичну допомогу в бимене це потужна сила яку неможливо здолати тому в координації з державу і рухаємося вперед ще таке питання оту нас багато кажуть що поруч і бригада ми сил тертральо оборони можуть стврюватися і формуватися доровольчі обєднання чи у вслвові це передбачено і як ваше ставлення до такого віце ми працюємо в чуткійкоординації з державними інституціями і на сьогоднішній день я не бачу потреби в інших формування х тому що принципі має бути жорстка вертикально має бути один центр правління сла обогощо є новий командувач територіальної оборони рамках держави керівник штабоновий принципі пан у то який в нас у чули територіальну орону області на це дуже фахова компетентна людина з ним просто рємно сів працювати в не розуміє все співслов законом передбачена можливість прикидання в разі такої нагальної потреби перекидання 
бригада сил територіальної оборони в зонобоєвихді вічі області яке ваше ставленндоцього і можливо вам відомо яке можливо ставлення у самих сів територіальної оборони віці на сьогоднішній день ми розуміємо що він форційному плані треба зробити титанічниобєм праці людям треба пояснити що ти не можеш сховатисяу власному помешкані коли прийде ворог ворога треба не пустити на території нашої країни і я думаю що всі громадяни повинні мати готовнізахи щати свої рідну крїну на сьогоднішній день йде нормальний процес люди записується дуже багато відомелюдей записуся в територіальну оборону я вважаю це правильно тому що треба мати таки висів а стосовно всіх інших речей ще раз наголошую коли мова йде про безпеку про оборону держави має бути субординація і є чітка державна вертикальтреба захищати будемо сі разом захищати нема наторадиа ще одне питання багато хто впеонийщо законіне достатньо влади про писано для місцеве адміністрації влдимається на увазі повноважень і можливості при формуванні розгортанні сил тоеитрально оборони ви на практиці зіткнулися вже цим і як ви оцінюють ся це ну по перше добре що закон є звичайно що од жоден закон не є та обтальний чи ідеальний він мав би процесі реально життя за знати певних крягувань у наприклад керівником териріальной оборони в часі на звичайної ситуації є голова обласної адміністрації так а по місто любово наприклад керівником є керівник районної адміністрації це є не великий офісі є декілька працівників і вони мали би стояти над громади міста льоа я не думаю що це буде мати достатню якісний ефект тому що принцип є крівникобласті я керівник мста але ці речі напевно коли писали закон не до кінця прраховували але ми маємо час щоб це поправити але на сьогодні ми чітко викониозакни який є інших варіантів не може бути пане андріюбуквально остання запитнняваша бригада тлвівськану два батальйон як викрити як вони зараз озброєні повністю чи ніч це тільки стрилецьке збрїчице й більш важке озброєнн і як в ставитиь до того щоб міськдністрація мала можливістькажімо щось купувати для забзпечення саме си тртріальоорон у на приклад ті ж самі виспілотники для розвідки або засоби звязку а або ось щось таке ну що ви розуміли ми достатньо багато помагаємо всім нашим військовим частинам батальоам це робили вчорастобупі татом у кожного року якщо говорити про сили територіальну оборони області ми допоможе мо усім чим буде потрібно мова про гроші не йде ми не профінансуємо ремонту чи іншої вулиці а нашим хлопцям до поможемо і це має робити кона громада і кожен лідер гмади в нашій країні -``` - -mer_lviv_interview.wav (with better News LM): - -``` -ми отримали нового керівника територіальної оборони області а це є фахова людина і з турне міста ми переформатували наше управління звичайних ситуаії сьогодні воно має функцію надзвичйних ситуацї цівільного захист територіальної оборони і в нас достатньо якісна співпраця якісна координація тому що задача міста допомогти державному фактично керунку щоб він якісно запрацював стосовно ремонті стосовно обладнання і стосовно рекрутингу людей які зараз зголошувалися щоби бути по контрактах в територіальній обороні на території міста любова на стврюються батальони і наразі все йде чітко за тим ланом які визначеної державою натомість місто взяло на себе додаткову функцію ми очали великий навчальний процес для працівників комунальних підприємстособливо мов для про стратегічні підприємства працівники виконавчий хорані влади банально відновити навики володіння зброєю в наших працівників плюс важливий чинник це уміння надавати першу домедичну 
допомогу і за така велика хвиля яка після закінчення карантивно буде включати і старшокласників ми напривеликий жаль багато в останніх років таке враження що перебували літориномусні розуміючи що росія поруч і за вибуде центр можливо агресії нам треба вишколювати себе і бути готовими до того щоб захищати рідну державу якщо кожен українець буде добре володіти навиками стрілецької зброї буде вміти надавати м дичну допомогу в имени це потужна сила яку неможливо здолати тому в координації з державу і рухаємося вперед ще таке питання от у нас багато кажуть що поруч із бригадами сил тертральооборони можуть стврюватися і формуватися добровольчі обєднання чи уволвові це передбачено як ваше ставлення до такого віце ми працюємо в чіткийкординації з державними інституціями і на сьогоднішній день я не бачу потреби в інших формуваннях тому що в принципі має бути жорстка вертикаль має бути один центр управління сла обо що є новий командувач територіальної оборони в рамках держави керівник шабанови принципі пану то які в нас учули територіальну орон області на це дуже фахова компетентна людина з ним просто приємно співпрацювати не розуміє все співслова законом передбачена можливість перекидання в разі такої нагальної потреби перекидання бригад сил територіальної оборони в зону бойових дій віншіобласті яке ваше ставленн до цього і можливо вам відомо яке можливо ставлення рсу самих бійців територіальної оборони від це на сьогоднішній день ми розуміємо що вінформаційному плані треба зробити титанічний обєм праці людям треба пояснити що ти не можеш сховатися у власному помешканні коли прийде ворог ворога треба не пустити на територію нашої країни і я думаю що всі громадяни повинні мати готовні захищати свою рідну крїну на сьогоднішній день йде нормальний процес люди записується дуже багато відоме людей записується в територіальну оборону я вважаю це правильно тому що треба мати такі вишкіл а стосовно всіх інших речей ще раз наголошую коли мова йде про безпеку про оборону держави має бути субординація і є чітка державна вертикаль треба захищати будемо сі разом захищати нема на то ради а ще одне питання багато хто впевнений що в законі недостатньо влади прописано для місцевих д міністрації влди мається на увазі повноважень і можлвостпри формуванні розгортанні сил ториторіально оборони ви на практиці зіткнулися вже цим і як ви оцінюютьсяцену поперше добре що закон є звичайно що жододенззаон не є та обтальний чи ідеальний він мав би процесі реально життя зазнати певних коригувань ну наприклад керівником териріальной оборони в часі на звичайної ситуації є голова обласної адміністрації так а по місту любовонаприклад керівником є керівник районної адміністрації це є невеликий офіс нехедекілька працівників і вони мали би стояти над громадою міста львова я не думаю що це буде мати достатню якісний ефект тому що принцип є керівникобласті керівни мста але ці речі напевно коли писали закон не до кінця раховували але ми маємо час щоб це поправити але на сьогодні ми чітко викониозакни який є іншхварянтів не може буде пане андрію буквально остання запитння ваша бригада львівська ну два батальйони як викрита як вони зараз озброєні повністю чи ніч це тільки стрілецьке збря чи це й більш важке озброєння і як в ставитись до того щоб міськдістрація мала можливістькажімо щось купувати для забзпечення саме си тртіальоорони ну наприклад ті ж самі беспілотники для розвідки або засоби звязку а або ось щось таке ну щоб ви розуміли ми достатньо багато помагаємо всім нашим військовим частинам батальонам це робили читерастобу питатиму 
кожного року якщо говорити про сили територіально оборони області ми допоможемо усім чим буде потрібно мова про гроші не йде ми не профінансуємо ремонтуючи іншої вулиці а нашим хлопцям допоможемо і це має робити кона громада і кожен лідер гомади в нашій країні -``` - -### Inference of `mer_lviv_interview.wav` (time is 06:38) - -#### CPU - -- Memory peaks to 60GB -- Memory peaks to 65GB (on News LM) - -Inference duration: - -``` -real 7m39.461s -user 59m19.065s -sys 24m1.254s -``` - -Inference duration (on News LM): - -``` -real 12m36.888s -user 63m19.396s -sys 24m24.823s -``` - -Duration tracked with loading the LM. - -## Using timestamps - -The `inference_timestamps.py` script can be used to do inference with timestamps for chars and words. - -### `output_char_offsets=True` - -``` -Wav2Vec2CTCTokenizerOutput(text='паня сполучені штати надважливий стратегічний партнер однак є різниця штати мають спеціальни закон який передбачає якщо китай нападе на тайвань американський військові мають його захищати евуйвгере', char_offsets=[{'char': 'п', 'start_offset': 0, 'end_offset': 1}, {'char': 'а', 'start_offset': 1, 'end_offset': 2}, {'char': 'н', 'start_offset': 9, 'end_offset': 10}, {'char': 'я', 'start_offset': 11, 'end_offset': 12}, {'char': ' ', 'start_offset': 14, 'end_offset': 15}, {'char': 'с', 'start_offset': 16, 'end_offset': 17}, {'char': 'п', 'start_offset': 19, 'end_offset': 20}, {'char': 'о', 'start_offset': 21, 'end_offset': 22}, {'char': 'л', 'start_offset': 23, 'end_offset': 24}, {'char': 'у', 'start_offset': 25, 'end_offset': 26}, {'char': 'ч', 'start_offset': 30, 'end_offset': 31}, {'char': 'е', 'start_offset': 32, 'end_offset': 33}, {'char': 'н', 'start_offset': 37, 'end_offset': 38}, {'char': 'і', 'start_offset': 38, 'end_offset': 39}, {'char': ' ', 'start_offset': 40, 'end_offset': 42}, {'char': 'ш', 'start_offset': 43, 'end_offset': 44}, {'char': 'т', 'start_offset': 46, 'end_offset': 47}, {'char': 'а', 'start_offset': 48, 'end_offset': 49}, {'char': 'т', 'start_offset': 57, 'end_offset': 58}, {'char': 'и', 'start_offset': 58, 'end_offset': 59}, {'char': ' ', 'start_offset': 76, 'end_offset': 79}, {'char': 'н', 'start_offset': 85, 'end_offset': 86}, {'char': 'а', 'start_offset': 87, 'end_offset': 88}, {'char': 'д', 'start_offset': 93, 'end_offset': 94}, {'char': 'в', 'start_offset': 97, 'end_offset': 98}, {'char': 'а', 'start_offset': 99, 'end_offset': 100}, {'char': 'ж', 'start_offset': 105, 'end_offset': 106}, {'char': 'л', 'start_offset': 113, 'end_offset': 114}, {'char': 'и', 'start_offset': 114, 'end_offset': 115}, {'char': 'в', 'start_offset': 121, 'end_offset': 122}, {'char': 'и', 'start_offset': 123, 'end_offset': 124}, {'char': 'й', 'start_offset': 125, 'end_offset': 126}, {'char': ' ', 'start_offset': 127, 'end_offset': 129}, {'char': 'с', 'start_offset': 130, 'end_offset': 131}, {'char': 'т', 'start_offset': 134, 'end_offset': 136}, {'char': 'р', 'start_offset': 138, 'end_offset': 139}, {'char': 'а', 'start_offset': 139, 'end_offset': 140}, {'char': 'т', 'start_offset': 145, 'end_offset': 146}, {'char': 'е', 'start_offset': 146, 'end_offset': 147}, {'char': 'г', 'start_offset': 152, 'end_offset': 153}, {'char': 'і', 'start_offset': 153, 'end_offset': 154}, {'char': 'ч', 'start_offset': 160, 'end_offset': 161}, {'char': 'н', 'start_offset': 167, 'end_offset': 168}, {'char': 'и', 'start_offset': 168, 'end_offset': 169}, {'char': 'й', 'start_offset': 170, 'end_offset': 171}, {'char': ' ', 'start_offset': 171, 'end_offset': 173}, {'char': 'п', 'start_offset': 174, 
'end_offset': 175}, {'char': 'а', 'start_offset': 176, 'end_offset': 177}, {'char': 'р', 'start_offset': 179, 'end_offset': 180}, {'char': 'т', 'start_offset': 183, 'end_offset': 184}, {'char': 'н', 'start_offset': 188, 'end_offset': 189}, {'char': 'е', 'start_offset': 189, 'end_offset': 190}, {'char': 'р', 'start_offset': 193, 'end_offset': 194}, {'char': ' ', 'start_offset': 201, 'end_offset': 203}, {'char': 'о', 'start_offset': 204, 'end_offset': 205}, {'char': 'д', 'start_offset': 208, 'end_offset': 209}, {'char': 'н', 'start_offset': 214, 'end_offset': 216}, {'char': 'а', 'start_offset': 216, 'end_offset': 217}, {'char': 'к', 'start_offset': 224, 'end_offset': 225}, {'char': ' ', 'start_offset': 227, 'end_offset': 229}, {'char': 'є', 'start_offset': 233, 'end_offset': 234}, {'char': ' ', 'start_offset': 237, 'end_offset': 239}, {'char': 'р', 'start_offset': 240, 'end_offset': 241}, {'char': 'і', 'start_offset': 241, 'end_offset': 242}, {'char': 'з', 'start_offset': 247, 'end_offset': 248}, {'char': 'н', 'start_offset': 253, 'end_offset': 254}, {'char': 'и', 'start_offset': 254, 'end_offset': 255}, {'char': 'ц', 'start_offset': 261, 'end_offset': 262}, {'char': 'я', 'start_offset': 262, 'end_offset': 263}, {'char': ' ', 'start_offset': 281, 'end_offset': 283}, {'char': 'ш', 'start_offset': 283, 'end_offset': 284}, {'char': 'т', 'start_offset': 286, 'end_offset': 287}, {'char': 'а', 'start_offset': 288, 'end_offset': 289}, {'char': 'т', 'start_offset': 294, 'end_offset': 295}, {'char': 'и', 'start_offset': 296, 'end_offset': 297}, {'char': ' ', 'start_offset': 297, 'end_offset': 299}, {'char': 'м', 'start_offset': 300, 'end_offset': 301}, {'char': 'а', 'start_offset': 301, 'end_offset': 302}, {'char': 'ю', 'start_offset': 306, 'end_offset': 307}, {'char': 'т', 'start_offset': 308, 'end_offset': 309}, {'char': 'ь', 'start_offset': 309, 'end_offset': 311}, {'char': ' ', 'start_offset': 311, 'end_offset': 313}, {'char': 'с', 'start_offset': 313, 'end_offset': 314}, {'char': 'п', 'start_offset': 316, 'end_offset': 317}, {'char': 'е', 'start_offset': 318, 'end_offset': 319}, {'char': 'ц', 'start_offset': 324, 'end_offset': 325}, {'char': 'і', 'start_offset': 325, 'end_offset': 326}, {'char': 'а', 'start_offset': 328, 'end_offset': 329}, {'char': 'л', 'start_offset': 333, 'end_offset': 334}, {'char': 'ь', 'start_offset': 334, 'end_offset': 336}, {'char': 'н', 'start_offset': 339, 'end_offset': 340}, {'char': 'и', 'start_offset': 341, 'end_offset': 342}, {'char': ' ', 'start_offset': 345, 'end_offset': 348}, {'char': 'з', 'start_offset': 351, 'end_offset': 352}, {'char': 'а', 'start_offset': 354, 'end_offset': 355}, {'char': 'к', 'start_offset': 361, 'end_offset': 362}, {'char': 'о', 'start_offset': 365, 'end_offset': 366}, {'char': 'н', 'start_offset': 373, 'end_offset': 374}, {'char': ' ', 'start_offset': 382, 'end_offset': 384}, {'char': 'я', 'start_offset': 386, 'end_offset': 387}, {'char': 'к', 'start_offset': 390, 'end_offset': 391}, {'char': 'и', 'start_offset': 392, 'end_offset': 393}, {'char': 'й', 'start_offset': 394, 'end_offset': 395}, {'char': ' ', 'start_offset': 396, 'end_offset': 398}, {'char': 'п', 'start_offset': 399, 'end_offset': 401}, {'char': 'е', 'start_offset': 402, 'end_offset': 403}, {'char': 'р', 'start_offset': 406, 'end_offset': 407}, {'char': 'е', 'start_offset': 407, 'end_offset': 408}, {'char': 'д', 'start_offset': 411, 'end_offset': 412}, {'char': 'б', 'start_offset': 415, 'end_offset': 416}, {'char': 'а', 'start_offset': 416, 'end_offset': 417}, {'char': 'ч', 
'start_offset': 424, 'end_offset': 425}, {'char': 'а', 'start_offset': 428, 'end_offset': 429}, {'char': 'є', 'start_offset': 437, 'end_offset': 438}, {'char': ' ', 'start_offset': 445, 'end_offset': 447}, {'char': 'я', 'start_offset': 448, 'end_offset': 449}, {'char': 'к', 'start_offset': 452, 'end_offset': 453}, {'char': 'щ', 'start_offset': 455, 'end_offset': 456}, {'char': 'о', 'start_offset': 457, 'end_offset': 458}, {'char': ' ', 'start_offset': 460, 'end_offset': 463}, {'char': 'к', 'start_offset': 463, 'end_offset': 464}, {'char': 'и', 'start_offset': 465, 'end_offset': 466}, {'char': 'т', 'start_offset': 470, 'end_offset': 471}, {'char': 'а', 'start_offset': 472, 'end_offset': 473}, {'char': 'й', 'start_offset': 478, 'end_offset': 480}, {'char': ' ', 'start_offset': 484, 'end_offset': 486}, {'char': 'н', 'start_offset': 487, 'end_offset': 488}, {'char': 'а', 'start_offset': 488, 'end_offset': 489}, {'char': 'п', 'start_offset': 493, 'end_offset': 494}, {'char': 'а', 'start_offset': 496, 'end_offset': 497}, {'char': 'д', 'start_offset': 502, 'end_offset': 503}, {'char': 'е', 'start_offset': 504, 'end_offset': 505}, {'char': ' ', 'start_offset': 509, 'end_offset': 511}, {'char': 'н', 'start_offset': 511, 'end_offset': 512}, {'char': 'а', 'start_offset': 513, 'end_offset': 514}, {'char': ' ', 'start_offset': 515, 'end_offset': 517}, {'char': 'т', 'start_offset': 518, 'end_offset': 519}, {'char': 'а', 'start_offset': 519, 'end_offset': 520}, {'char': 'й', 'start_offset': 524, 'end_offset': 525}, {'char': 'в', 'start_offset': 527, 'end_offset': 528}, {'char': 'а', 'start_offset': 529, 'end_offset': 530}, {'char': 'н', 'start_offset': 535, 'end_offset': 536}, {'char': 'ь', 'start_offset': 536, 'end_offset': 537}, {'char': ' ', 'start_offset': 552, 'end_offset': 555}, {'char': 'а', 'start_offset': 555, 'end_offset': 556}, {'char': 'м', 'start_offset': 561, 'end_offset': 562}, {'char': 'е', 'start_offset': 562, 'end_offset': 563}, {'char': 'р', 'start_offset': 566, 'end_offset': 567}, {'char': 'и', 'start_offset': 567, 'end_offset': 568}, {'char': 'к', 'start_offset': 572, 'end_offset': 573}, {'char': 'а', 'start_offset': 574, 'end_offset': 575}, {'char': 'н', 'start_offset': 579, 'end_offset': 580}, {'char': 'с', 'start_offset': 582, 'end_offset': 583}, {'char': 'ь', 'start_offset': 583, 'end_offset': 585}, {'char': 'к', 'start_offset': 586, 'end_offset': 587}, {'char': 'и', 'start_offset': 588, 'end_offset': 589}, {'char': 'й', 'start_offset': 589, 'end_offset': 590}, {'char': ' ', 'start_offset': 591, 'end_offset': 593}, {'char': 'в', 'start_offset': 594, 'end_offset': 595}, {'char': 'і', 'start_offset': 595, 'end_offset': 596}, {'char': 'й', 'start_offset': 600, 'end_offset': 601}, {'char': 'с', 'start_offset': 604, 'end_offset': 605}, {'char': 'ь', 'start_offset': 605, 'end_offset': 607}, {'char': 'к', 'start_offset': 609, 'end_offset': 611}, {'char': 'о', 'start_offset': 612, 'end_offset': 613}, {'char': 'в', 'start_offset': 620, 'end_offset': 621}, {'char': 'і', 'start_offset': 622, 'end_offset': 623}, {'char': ' ', 'start_offset': 637, 'end_offset': 639}, {'char': 'м', 'start_offset': 641, 'end_offset': 642}, {'char': 'а', 'start_offset': 643, 'end_offset': 644}, {'char': 'ю', 'start_offset': 651, 'end_offset': 652}, {'char': 'т', 'start_offset': 654, 'end_offset': 655}, {'char': 'ь', 'start_offset': 655, 'end_offset': 656}, {'char': ' ', 'start_offset': 657, 'end_offset': 659}, {'char': 'й', 'start_offset': 659, 'end_offset': 660}, {'char': 'о', 'start_offset': 660, 'end_offset': 
662}, {'char': 'г', 'start_offset': 664, 'end_offset': 665}, {'char': 'о', 'start_offset': 666, 'end_offset': 667}, {'char': ' ', 'start_offset': 677, 'end_offset': 679}, {'char': 'з', 'start_offset': 681, 'end_offset': 682}, {'char': 'а', 'start_offset': 683, 'end_offset': 684}, {'char': 'х', 'start_offset': 686, 'end_offset': 687}, {'char': 'и', 'start_offset': 689, 'end_offset': 690}, {'char': 'щ', 'start_offset': 696, 'end_offset': 697}, {'char': 'а', 'start_offset': 698, 'end_offset': 699}, {'char': 'т', 'start_offset': 707, 'end_offset': 708}, {'char': 'и', 'start_offset': 709, 'end_offset': 710}, {'char': ' ', 'start_offset': 733, 'end_offset': 734}, {'char': 'е', 'start_offset': 740, 'end_offset': 741}, {'char': 'в', 'start_offset': 747, 'end_offset': 748}, {'char': 'у', 'start_offset': 748, 'end_offset': 749}, {'char': 'й', 'start_offset': 752, 'end_offset': 753}, {'char': 'в', 'start_offset': 754, 'end_offset': 755}, {'char': 'г', 'start_offset': 757, 'end_offset': 758}, {'char': 'е', 'start_offset': 759, 'end_offset': 760}, {'char': 'р', 'start_offset': 767, 'end_offset': 768}, {'char': 'е', 'start_offset': 768, 'end_offset': 769}], word_offsets=None) -``` - -### `output_word_offsets=True` - -``` -Wav2Vec2CTCTokenizerOutput(text='паня сполучені штати надважливий стратегічний партнер однак є різниця штати мають спеціальни закон який передбачає якщо китай нападе на тайвань американський військові мають його захищати евуйвгере', char_offsets=[{'char': 'п', 'start_offset': 0, 'end_offset': 1}, {'char': 'а', 'start_offset': 1, 'end_offset': 2}, {'char': 'н', 'start_offset': 9, 'end_offset': 10}, {'char': 'я', 'start_offset': 11, 'end_offset': 12}, {'char': ' ', 'start_offset': 14, 'end_offset': 15}, {'char': 'с', 'start_offset': 16, 'end_offset': 17}, {'char': 'п', 'start_offset': 19, 'end_offset': 20}, {'char': 'о', 'start_offset': 21, 'end_offset': 22}, {'char': 'л', 'start_offset': 23, 'end_offset': 24}, {'char': 'у', 'start_offset': 25, 'end_offset': 26}, {'char': 'ч', 'start_offset': 30, 'end_offset': 31}, {'char': 'е', 'start_offset': 32, 'end_offset': 33}, {'char': 'н', 'start_offset': 37, 'end_offset': 38}, {'char': 'і', 'start_offset': 38, 'end_offset': 39}, {'char': ' ', 'start_offset': 40, 'end_offset': 42}, {'char': 'ш', 'start_offset': 43, 'end_offset': 44}, {'char': 'т', 'start_offset': 46, 'end_offset': 47}, {'char': 'а', 'start_offset': 48, 'end_offset': 49}, {'char': 'т', 'start_offset': 57, 'end_offset': 58}, {'char': 'и', 'start_offset': 58, 'end_offset': 59}, {'char': ' ', 'start_offset': 76, 'end_offset': 79}, {'char': 'н', 'start_offset': 85, 'end_offset': 86}, {'char': 'а', 'start_offset': 87, 'end_offset': 88}, {'char': 'д', 'start_offset': 93, 'end_offset': 94}, {'char': 'в', 'start_offset': 97, 'end_offset': 98}, {'char': 'а', 'start_offset': 99, 'end_offset': 100}, {'char': 'ж', 'start_offset': 105, 'end_offset': 106}, {'char': 'л', 'start_offset': 113, 'end_offset': 114}, {'char': 'и', 'start_offset': 114, 'end_offset': 115}, {'char': 'в', 'start_offset': 121, 'end_offset': 122}, {'char': 'и', 'start_offset': 123, 'end_offset': 124}, {'char': 'й', 'start_offset': 125, 'end_offset': 126}, {'char': ' ', 'start_offset': 127, 'end_offset': 129}, {'char': 'с', 'start_offset': 130, 'end_offset': 131}, {'char': 'т', 'start_offset': 134, 'end_offset': 136}, {'char': 'р', 'start_offset': 138, 'end_offset': 139}, {'char': 'а', 'start_offset': 139, 'end_offset': 140}, {'char': 'т', 'start_offset': 145, 'end_offset': 146}, {'char': 'е', 'start_offset': 146, 
'end_offset': 147}, {'char': 'г', 'start_offset': 152, 'end_offset': 153}, {'char': 'і', 'start_offset': 153, 'end_offset': 154}, {'char': 'ч', 'start_offset': 160, 'end_offset': 161}, {'char': 'н', 'start_offset': 167, 'end_offset': 168}, {'char': 'и', 'start_offset': 168, 'end_offset': 169}, {'char': 'й', 'start_offset': 170, 'end_offset': 171}, {'char': ' ', 'start_offset': 171, 'end_offset': 173}, {'char': 'п', 'start_offset': 174, 'end_offset': 175}, {'char': 'а', 'start_offset': 176, 'end_offset': 177}, {'char': 'р', 'start_offset': 179, 'end_offset': 180}, {'char': 'т', 'start_offset': 183, 'end_offset': 184}, {'char': 'н', 'start_offset': 188, 'end_offset': 189}, {'char': 'е', 'start_offset': 189, 'end_offset': 190}, {'char': 'р', 'start_offset': 193, 'end_offset': 194}, {'char': ' ', 'start_offset': 201, 'end_offset': 203}, {'char': 'о', 'start_offset': 204, 'end_offset': 205}, {'char': 'д', 'start_offset': 208, 'end_offset': 209}, {'char': 'н', 'start_offset': 214, 'end_offset': 216}, {'char': 'а', 'start_offset': 216, 'end_offset': 217}, {'char': 'к', 'start_offset': 224, 'end_offset': 225}, {'char': ' ', 'start_offset': 227, 'end_offset': 229}, {'char': 'є', 'start_offset': 233, 'end_offset': 234}, {'char': ' ', 'start_offset': 237, 'end_offset': 239}, {'char': 'р', 'start_offset': 240, 'end_offset': 241}, {'char': 'і', 'start_offset': 241, 'end_offset': 242}, {'char': 'з', 'start_offset': 247, 'end_offset': 248}, {'char': 'н', 'start_offset': 253, 'end_offset': 254}, {'char': 'и', 'start_offset': 254, 'end_offset': 255}, {'char': 'ц', 'start_offset': 261, 'end_offset': 262}, {'char': 'я', 'start_offset': 262, 'end_offset': 263}, {'char': ' ', 'start_offset': 281, 'end_offset': 283}, {'char': 'ш', 'start_offset': 283, 'end_offset': 284}, {'char': 'т', 'start_offset': 286, 'end_offset': 287}, {'char': 'а', 'start_offset': 288, 'end_offset': 289}, {'char': 'т', 'start_offset': 294, 'end_offset': 295}, {'char': 'и', 'start_offset': 296, 'end_offset': 297}, {'char': ' ', 'start_offset': 297, 'end_offset': 299}, {'char': 'м', 'start_offset': 300, 'end_offset': 301}, {'char': 'а', 'start_offset': 301, 'end_offset': 302}, {'char': 'ю', 'start_offset': 306, 'end_offset': 307}, {'char': 'т', 'start_offset': 308, 'end_offset': 309}, {'char': 'ь', 'start_offset': 309, 'end_offset': 311}, {'char': ' ', 'start_offset': 311, 'end_offset': 313}, {'char': 'с', 'start_offset': 313, 'end_offset': 314}, {'char': 'п', 'start_offset': 316, 'end_offset': 317}, {'char': 'е', 'start_offset': 318, 'end_offset': 319}, {'char': 'ц', 'start_offset': 324, 'end_offset': 325}, {'char': 'і', 'start_offset': 325, 'end_offset': 326}, {'char': 'а', 'start_offset': 328, 'end_offset': 329}, {'char': 'л', 'start_offset': 333, 'end_offset': 334}, {'char': 'ь', 'start_offset': 334, 'end_offset': 336}, {'char': 'н', 'start_offset': 339, 'end_offset': 340}, {'char': 'и', 'start_offset': 341, 'end_offset': 342}, {'char': ' ', 'start_offset': 345, 'end_offset': 348}, {'char': 'з', 'start_offset': 351, 'end_offset': 352}, {'char': 'а', 'start_offset': 354, 'end_offset': 355}, {'char': 'к', 'start_offset': 361, 'end_offset': 362}, {'char': 'о', 'start_offset': 365, 'end_offset': 366}, {'char': 'н', 'start_offset': 373, 'end_offset': 374}, {'char': ' ', 'start_offset': 382, 'end_offset': 384}, {'char': 'я', 'start_offset': 386, 'end_offset': 387}, {'char': 'к', 'start_offset': 390, 'end_offset': 391}, {'char': 'и', 'start_offset': 392, 'end_offset': 393}, {'char': 'й', 'start_offset': 394, 'end_offset': 395}, {'char': ' ', 
'start_offset': 396, 'end_offset': 398}, {'char': 'п', 'start_offset': 399, 'end_offset': 401}, {'char': 'е', 'start_offset': 402, 'end_offset': 403}, {'char': 'р', 'start_offset': 406, 'end_offset': 407}, {'char': 'е', 'start_offset': 407, 'end_offset': 408}, {'char': 'д', 'start_offset': 411, 'end_offset': 412}, {'char': 'б', 'start_offset': 415, 'end_offset': 416}, {'char': 'а', 'start_offset': 416, 'end_offset': 417}, {'char': 'ч', 'start_offset': 424, 'end_offset': 425}, {'char': 'а', 'start_offset': 428, 'end_offset': 429}, {'char': 'є', 'start_offset': 437, 'end_offset': 438}, {'char': ' ', 'start_offset': 445, 'end_offset': 447}, {'char': 'я', 'start_offset': 448, 'end_offset': 449}, {'char': 'к', 'start_offset': 452, 'end_offset': 453}, {'char': 'щ', 'start_offset': 455, 'end_offset': 456}, {'char': 'о', 'start_offset': 457, 'end_offset': 458}, {'char': ' ', 'start_offset': 460, 'end_offset': 463}, {'char': 'к', 'start_offset': 463, 'end_offset': 464}, {'char': 'и', 'start_offset': 465, 'end_offset': 466}, {'char': 'т', 'start_offset': 470, 'end_offset': 471}, {'char': 'а', 'start_offset': 472, 'end_offset': 473}, {'char': 'й', 'start_offset': 478, 'end_offset': 480}, {'char': ' ', 'start_offset': 484, 'end_offset': 486}, {'char': 'н', 'start_offset': 487, 'end_offset': 488}, {'char': 'а', 'start_offset': 488, 'end_offset': 489}, {'char': 'п', 'start_offset': 493, 'end_offset': 494}, {'char': 'а', 'start_offset': 496, 'end_offset': 497}, {'char': 'д', 'start_offset': 502, 'end_offset': 503}, {'char': 'е', 'start_offset': 504, 'end_offset': 505}, {'char': ' ', 'start_offset': 509, 'end_offset': 511}, {'char': 'н', 'start_offset': 511, 'end_offset': 512}, {'char': 'а', 'start_offset': 513, 'end_offset': 514}, {'char': ' ', 'start_offset': 515, 'end_offset': 517}, {'char': 'т', 'start_offset': 518, 'end_offset': 519}, {'char': 'а', 'start_offset': 519, 'end_offset': 520}, {'char': 'й', 'start_offset': 524, 'end_offset': 525}, {'char': 'в', 'start_offset': 527, 'end_offset': 528}, {'char': 'а', 'start_offset': 529, 'end_offset': 530}, {'char': 'н', 'start_offset': 535, 'end_offset': 536}, {'char': 'ь', 'start_offset': 536, 'end_offset': 537}, {'char': ' ', 'start_offset': 552, 'end_offset': 555}, {'char': 'а', 'start_offset': 555, 'end_offset': 556}, {'char': 'м', 'start_offset': 561, 'end_offset': 562}, {'char': 'е', 'start_offset': 562, 'end_offset': 563}, {'char': 'р', 'start_offset': 566, 'end_offset': 567}, {'char': 'и', 'start_offset': 567, 'end_offset': 568}, {'char': 'к', 'start_offset': 572, 'end_offset': 573}, {'char': 'а', 'start_offset': 574, 'end_offset': 575}, {'char': 'н', 'start_offset': 579, 'end_offset': 580}, {'char': 'с', 'start_offset': 582, 'end_offset': 583}, {'char': 'ь', 'start_offset': 583, 'end_offset': 585}, {'char': 'к', 'start_offset': 586, 'end_offset': 587}, {'char': 'и', 'start_offset': 588, 'end_offset': 589}, {'char': 'й', 'start_offset': 589, 'end_offset': 590}, {'char': ' ', 'start_offset': 591, 'end_offset': 593}, {'char': 'в', 'start_offset': 594, 'end_offset': 595}, {'char': 'і', 'start_offset': 595, 'end_offset': 596}, {'char': 'й', 'start_offset': 600, 'end_offset': 601}, {'char': 'с', 'start_offset': 604, 'end_offset': 605}, {'char': 'ь', 'start_offset': 605, 'end_offset': 607}, {'char': 'к', 'start_offset': 609, 'end_offset': 611}, {'char': 'о', 'start_offset': 612, 'end_offset': 613}, {'char': 'в', 'start_offset': 620, 'end_offset': 621}, {'char': 'і', 'start_offset': 622, 'end_offset': 623}, {'char': ' ', 'start_offset': 637, 'end_offset': 
639}, {'char': 'м', 'start_offset': 641, 'end_offset': 642}, {'char': 'а', 'start_offset': 643, 'end_offset': 644}, {'char': 'ю', 'start_offset': 651, 'end_offset': 652}, {'char': 'т', 'start_offset': 654, 'end_offset': 655}, {'char': 'ь', 'start_offset': 655, 'end_offset': 656}, {'char': ' ', 'start_offset': 657, 'end_offset': 659}, {'char': 'й', 'start_offset': 659, 'end_offset': 660}, {'char': 'о', 'start_offset': 660, 'end_offset': 662}, {'char': 'г', 'start_offset': 664, 'end_offset': 665}, {'char': 'о', 'start_offset': 666, 'end_offset': 667}, {'char': ' ', 'start_offset': 677, 'end_offset': 679}, {'char': 'з', 'start_offset': 681, 'end_offset': 682}, {'char': 'а', 'start_offset': 683, 'end_offset': 684}, {'char': 'х', 'start_offset': 686, 'end_offset': 687}, {'char': 'и', 'start_offset': 689, 'end_offset': 690}, {'char': 'щ', 'start_offset': 696, 'end_offset': 697}, {'char': 'а', 'start_offset': 698, 'end_offset': 699}, {'char': 'т', 'start_offset': 707, 'end_offset': 708}, {'char': 'и', 'start_offset': 709, 'end_offset': 710}, {'char': ' ', 'start_offset': 733, 'end_offset': 734}, {'char': 'е', 'start_offset': 740, 'end_offset': 741}, {'char': 'в', 'start_offset': 747, 'end_offset': 748}, {'char': 'у', 'start_offset': 748, 'end_offset': 749}, {'char': 'й', 'start_offset': 752, 'end_offset': 753}, {'char': 'в', 'start_offset': 754, 'end_offset': 755}, {'char': 'г', 'start_offset': 757, 'end_offset': 758}, {'char': 'е', 'start_offset': 759, 'end_offset': 760}, {'char': 'р', 'start_offset': 767, 'end_offset': 768}, {'char': 'е', 'start_offset': 768, 'end_offset': 769}], word_offsets=[{'word': 'паня', 'start_offset': 0, 'end_offset': 12}, {'word': 'сполучені', 'start_offset': 16, 'end_offset': 39}, {'word': 'штати', 'start_offset': 43, 'end_offset': 59}, {'word': 'надважливий', 'start_offset': 85, 'end_offset': 126}, {'word': 'стратегічний', 'start_offset': 130, 'end_offset': 171}, {'word': 'партнер', 'start_offset': 174, 'end_offset': 194}, {'word': 'однак', 'start_offset': 204, 'end_offset': 225}, {'word': 'є', 'start_offset': 233, 'end_offset': 234}, {'word': 'різниця', 'start_offset': 240, 'end_offset': 263}, {'word': 'штати', 'start_offset': 283, 'end_offset': 297}, {'word': 'мають', 'start_offset': 300, 'end_offset': 311}, {'word': 'спеціальни', 'start_offset': 313, 'end_offset': 342}, {'word': 'закон', 'start_offset': 351, 'end_offset': 374}, {'word': 'який', 'start_offset': 386, 'end_offset': 395}, {'word': 'передбачає', 'start_offset': 399, 'end_offset': 438}, {'word': 'якщо', 'start_offset': 448, 'end_offset': 458}, {'word': 'китай', 'start_offset': 463, 'end_offset': 480}, {'word': 'нападе', 'start_offset': 487, 'end_offset': 505}, {'word': 'на', 'start_offset': 511, 'end_offset': 514}, {'word': 'тайвань', 'start_offset': 518, 'end_offset': 537}, {'word': 'американський', 'start_offset': 555, 'end_offset': 590}, {'word': 'військові', 'start_offset': 594, 'end_offset': 623}, {'word': 'мають', 'start_offset': 641, 'end_offset': 656}, {'word': 'його', 'start_offset': 659, 'end_offset': 667}, {'word': 'захищати', 'start_offset': 681, 'end_offset': 710}, {'word': 'евуйвгере', 'start_offset': 740, 'end_offset': 769}]) -``` - -### Split by seconds - -``` -0.0 - 0.24: паня -0.32 - 0.78: сполучені -0.86 - 1.18: штати -1.7 - 2.52: надважливий -2.6 - 3.42: стратегічний -3.48 - 3.88: партнер -4.08 - 4.5: однак -4.66 - 4.68: є -4.8 - 5.26: різниця -5.66 - 5.94: штати -6.0 - 6.22: мають -6.26 - 6.84: спеціальни -7.02 - 7.48: закон -7.72 - 7.9: який -7.98 - 8.76: передбачає -8.96 - 9.16: 
якщо -9.26 - 9.6: китай -9.74 - 10.1: нападе -10.22 - 10.28: на -10.36 - 10.74: тайвань -11.1 - 11.8: американський -11.88 - 12.46: військові -12.82 - 13.12: мають -13.18 - 13.34: його -13.62 - 14.2: захищати -14.8 - 15.38: евуйвгере -``` diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/__init__.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/__init__.py deleted file mode 100644 index 259f669b78bd05815cb8d3351fd6c5fc9a1b85a1..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import transforms # isort:skip - -from .build import ( - build_batch_data_loader, - build_detection_test_loader, - build_detection_train_loader, - get_detection_dataset_dicts, - load_proposals_into_dataset, - print_instances_class_histogram, -) -from .catalog import DatasetCatalog, MetadataCatalog, Metadata -from .common import DatasetFromList, MapDataset, ToIterableDataset -from .dataset_mapper import DatasetMapper - -# ensure the builtin datasets are registered -from . import datasets, samplers # isort:skip - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/YuanMio/vits-uma-genshin-honkai/app.py b/spaces/YuanMio/vits-uma-genshin-honkai/app.py deleted file mode 100644 index e716f57cc6ec2dda804e40d0590595154108f3cb..0000000000000000000000000000000000000000 --- a/spaces/YuanMio/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,123 +0,0 @@ -# coding=utf-8 -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 300: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if 
language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() diff --git a/spaces/YueMafighting/FollowYourPose/FollowYourPose/followyourpose/data/hdvila.py b/spaces/YueMafighting/FollowYourPose/FollowYourPose/followyourpose/data/hdvila.py deleted file mode 100644 index 797511fb5a5f0f1756bebfb36aad1aeec8a50584..0000000000000000000000000000000000000000 --- a/spaces/YueMafighting/FollowYourPose/FollowYourPose/followyourpose/data/hdvila.py +++ /dev/null @@ -1,245 +0,0 @@ -# import os -# import random -# from abc import abstractmethod -# import math -# import pandas as pd -# import av -# import cv2 -# import decord -# import numpy as np -# import torch -# from PIL import Image -# from torch.utils.data import Dataset -# from torchvision import transforms -# from decord import VideoReader, cpu -# import torchvision.transforms._transforms_video as transforms_video -# from torchvision.transforms.functional import to_tensor -# from collections import OrderedDict -# import time -# import csv - -# class HDVilaDataset(Dataset): -# """ -# HDVila Dataset. -# Assumes webvid data is structured as follows. -# HDVila/ -# part_N/ # 0-10 -# video_clips/ ($page_dir) -# 1.mp4 (videoid.mp4) -# ... -# 5000.mp4 -# ... 
-# """ -# def __init__(self, -# video_path, -# width=512, -# height=512, -# n_sample_frames=8, -# dataset_set="train", -# prompt=None, -# sample_frame_rate=2, -# sample_start_idx=0, -# accelerator=None, -# ): - -# try: -# host_gpu_num = accelerator.num_processes -# host_num = 1 -# all_rank = host_gpu_num * host_num -# global_rank = accelerator.local_process_index -# except: -# pass -# print('dataset rank:', global_rank, ' / ',all_rank, ' ') - -# self.data_dir = '/apdcephfs_cq3/share_1290939/0_public_datasets/hd-vila-100m' -# if dataset_set=='train': -# self.text_name = 'caption_rm2048_train.csv' -# else: -# self.text_name = 'caption_2048_val_new.csv' -# self.meta_path = os.path.join(self.data_dir, self.text_name) -# # text_name = 'caption_2048_val_new.csv' - -# spatial_transform = 'resize_center_crop' -# resolution=width -# load_raw_resolution=True -# # frame_stride=2 -# video_length= n_sample_frames -# fps_max=None -# load_resize_keep_ratio=False - - - -# self.global_rank = global_rank -# self.all_rank = all_rank -# # self.subsample = subsample -# self.video_length = video_length -# self.resolution = [resolution, resolution] if isinstance(resolution, int) else resolution -# self.frame_stride = sample_frame_rate -# self.load_raw_resolution = load_raw_resolution -# self.fps_max = fps_max -# self.load_resize_keep_ratio = load_resize_keep_ratio -# print('start load meta data') -# self._load_metadata() -# print('load meta data done!!!') -# if spatial_transform is not None: -# if spatial_transform == "random_crop": -# self.spatial_transform = transforms_video.RandomCropVideo(crop_resolution) -# elif spatial_transform == "resize_center_crop": -# assert(self.resolution[0] == self.resolution[1]) -# self.spatial_transform = transforms.Compose([ -# transforms.Resize(resolution), -# transforms_video.CenterCropVideo(resolution), -# ]) -# elif spatial_transform == "center_crop": -# self.spatial_transform = transforms_video.CenterCropVideo(resolution) -# else: -# raise NotImplementedError -# else: -# self.spatial_transform = None - -# def _load_metadata(self): -# # clip_id frame_id caption -# last_clip_id = '' -# self.metadata = [] -# start_time = time.time() -# caption_path = self.meta_path -# count=-1 -# total_count = 8854264 #8856312 - 2048 - -# with open(caption_path, 'r',encoding="utf-8") as csvfile: #41s -# reader = csv.DictReader(csvfile) -# for row in reader: -# if row['clip_id'] != last_clip_id: -# count+=1 -# if count >= (total_count // self.all_rank)*self.all_rank: # drop last -# break -# last_clip_id = row['clip_id'] -# if count % self.all_rank == self.global_rank: -# self.metadata.append([('%02d'%int(row['part_id']))+row['clip_id']]) -# self.metadata[-1].append([row['caption']]) -# else: -# if count % self.all_rank == self.global_rank: -# self.metadata[-1][-1].append(row['caption']) -# # caption_data = pd.read_csv(caption_path) # use time 26+264s - -# # for index,row in caption_data.iterrows(): -# # if row['clip_id'] != last_clip_id: -# # last_clip_id = row['clip_id'] -# # meta_data[('%02d'%part_id)+row['clip_id']] = [row['caption']] -# # else: -# # meta_data[('%02d'%part_id)+row['clip_id']].append(row['caption']) -# end_time = time.time() -# print('load %d - %d items use time: %.1f;' % (len(self.metadata), count, end_time-start_time)) -# # self.metadata=meta_data - - -# def _get_video_path(self, sample): -# part_id = int(sample[0][:2]) -# clip_id = sample[0][2:] -# video_path = os.path.join(self.data_dir,'part_%d' % part_id, 'video_clips', clip_id) -# return video_path - -# def 
__getitem__(self, index): -# while True: - -# index = index % len(self.metadata) -# sample = self.metadata[index] -# video_path = self._get_video_path(sample) - -# try: -# if self.load_raw_resolution: -# video_reader = VideoReader(video_path, ctx=cpu(0)) -# elif self.load_resize_keep_ratio: -# # resize scale is according to the short side -# h, w, c = VideoReader(video_path, ctx=cpu(0))[0].shape -# if h < w: -# scale = h / self.resolution[0] -# else: -# scale = w / self.resolution[1] - -# h = math.ceil(h / scale) -# w = math.ceil(w / scale) -# video_reader = VideoReader(video_path, ctx=cpu(0), width=w, height=h) -# else: -# video_reader = VideoReader(video_path, ctx=cpu(0), width=self.resolution[1], height=self.resolution[0]) -# if len(video_reader) < self.video_length: -# print(f"video length ({len(video_reader)}) is smaller than target length({self.video_length})") -# index += 1 -# continue -# else: -# pass -# except: -# index += 1 -# print(f"Load video failed! path = {video_path}") -# continue -# fps_ori = video_reader.get_avg_fps() - -# fs = self.frame_stride -# allf = len(video_reader) -# if self.frame_stride != 1: -# all_frames = list(range(0, len(video_reader), self.frame_stride)) -# if len(all_frames) < self.video_length: -# fs = len(video_reader) // self.video_length -# assert(fs != 0) -# all_frames = list(range(0, len(video_reader), fs)) -# else: -# all_frames = list(range(len(video_reader))) - -# # select a random clip -# rand_idx = random.randint(0, len(all_frames) - self.video_length) -# frame_indices = all_frames[rand_idx:rand_idx+self.video_length] -# try: -# frames = video_reader.get_batch(frame_indices) -# break -# except: -# print(f"Get frames failed! path = {video_path}") -# index += 1 -# continue - -# assert(frames.shape[0] == self.video_length),f'{len(frames)}, self.video_length={self.video_length}' -# frames = torch.tensor(frames.asnumpy()).permute(3, 0, 1, 2).float() # [t,h,w,c] -> [c,t,h,w] - -# if self.spatial_transform is not None: -# frames = self.spatial_transform(frames) -# assert(frames.shape[2] == self.resolution[0] and frames.shape[3] == self.resolution[1]), f'frames={frames.shape}, self.resolution={self.resolution}' -# frames = frames.byte() -# # fps -# fps_clip = fps_ori // self.frame_stride -# if self.fps_max is not None and fps_clip > self.fps_max: -# fps_clip = self.fps_max - -# # caption index -# middle_idx = (rand_idx + self.video_length /2 )*fs -# big_cap_idx = (middle_idx // 64 +1) *64 -# small_cap_idx = (middle_idx // 64) *64 -# if big_cap_idx >= allf or ((big_cap_idx-middle_idx) >= (small_cap_idx-middle_idx)): -# cap_idx = small_cap_idx -# else: -# cap_idx = big_cap_idx -# # print(middle_idx, small_cap_idx, big_cap_idx,cap_idx) -# caption = sample[1][int(cap_idx//64)] - -# frames = frames.permute(1,0,2,3) -# skeleton_final = torch.zeros_like(frames).byte() -# frames = (frames / 127.5 - 1.0) -# skeleton_final = (skeleton_final / 127.5 - 1.0) -# example = {'pixel_values': frames, 'sentence': caption, 'pose': skeleton_final} - - - -# return example - -# def __len__(self): -# return len(self.metadata) -# # return 1 - - -# if __name__ == '__main__': -# if True: # val -# hd_data = HDVila('/apdcephfs_cq3/share_1290939/0_public_datasets/hd-vila-100m','/apdcephfs_cq3/share_1290939/0_public_datasets/hd-vila-100m/caption_2048_val.csv') -# else: -# hd_data = HDVila('/apdcephfs_cq3/share_1290939/0_public_datasets/hd-vila-100m','/apdcephfs_cq3/share_1290939/0_public_datasets/hd-vila-100m/caption_rm2048_train.csv') -# print(len(hd_data)) -# for i in 
range(len(hd_data)): -# # print(i) -# hd_data[i] \ No newline at end of file diff --git a/spaces/Yuliang/ECON/lib/torch_utils/training_stats.py b/spaces/Yuliang/ECON/lib/torch_utils/training_stats.py deleted file mode 100644 index a4fd0a4c3687aff712547b2688225ba1ec621f47..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/torch_utils/training_stats.py +++ /dev/null @@ -1,282 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re - -import numpy as np -import torch - -import lib.dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict( -) # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - - -#---------------------------------------------------------------------------- - - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. 
Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - - -#---------------------------------------------------------------------------- - - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - - -#---------------------------------------------------------------------------- - - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. 
- """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict( - num=self.num(name), mean=self.mean(name), std=self.std(name) - ) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - - -#---------------------------------------------------------------------------- - - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. 
- return [(name, _cumulative[name]) for name in names] - - -#---------------------------------------------------------------------------- diff --git a/spaces/a-v-bely/spanish-task-generator/utilities_cookies/build/static/js/2.422ca0c4.chunk.js b/spaces/a-v-bely/spanish-task-generator/utilities_cookies/build/static/js/2.422ca0c4.chunk.js deleted file mode 100644 index 1fd17f11d35fc387466e4141d5ff5ba07823b5ae..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/spanish-task-generator/utilities_cookies/build/static/js/2.422ca0c4.chunk.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! For license information please see 2.422ca0c4.chunk.js.LICENSE.txt */ -(this.webpackJsonpstreamlit_cookie_manager=this.webpackJsonpstreamlit_cookie_manager||[]).push([[2],[function(t,e,n){t.exports=n(10)},function(t,e,n){"use strict";t.exports=n(8)},function(t,e,n){"use strict";n.d(e,"a",(function(){return xf}));var r={};n.r(r),n.d(r,"memcpy",(function(){return Yt})),n.d(r,"joinUint8Arrays",(function(){return Wt})),n.d(r,"toArrayBufferView",(function(){return Ht})),n.d(r,"toInt8Array",(function(){return $t})),n.d(r,"toInt16Array",(function(){return Kt})),n.d(r,"toInt32Array",(function(){return Gt})),n.d(r,"toBigInt64Array",(function(){return qt})),n.d(r,"toUint8Array",(function(){return Jt})),n.d(r,"toUint16Array",(function(){return Zt})),n.d(r,"toUint32Array",(function(){return Qt})),n.d(r,"toBigUint64Array",(function(){return Xt})),n.d(r,"toFloat32Array",(function(){return te})),n.d(r,"toFloat64Array",(function(){return ee})),n.d(r,"toUint8ClampedArray",(function(){return ne})),n.d(r,"toArrayBufferViewIterator",(function(){return ie})),n.d(r,"toInt8ArrayIterator",(function(){return ae})),n.d(r,"toInt16ArrayIterator",(function(){return oe})),n.d(r,"toInt32ArrayIterator",(function(){return ue})),n.d(r,"toUint8ArrayIterator",(function(){return se})),n.d(r,"toUint16ArrayIterator",(function(){return ce})),n.d(r,"toUint32ArrayIterator",(function(){return fe})),n.d(r,"toFloat32ArrayIterator",(function(){return le})),n.d(r,"toFloat64ArrayIterator",(function(){return he})),n.d(r,"toUint8ClampedArrayIterator",(function(){return ye})),n.d(r,"toArrayBufferViewAsyncIterator",(function(){return pe})),n.d(r,"toInt8ArrayAsyncIterator",(function(){return ve})),n.d(r,"toInt16ArrayAsyncIterator",(function(){return be})),n.d(r,"toInt32ArrayAsyncIterator",(function(){return ge})),n.d(r,"toUint8ArrayAsyncIterator",(function(){return me})),n.d(r,"toUint16ArrayAsyncIterator",(function(){return ke})),n.d(r,"toUint32ArrayAsyncIterator",(function(){return we})),n.d(r,"toFloat32ArrayAsyncIterator",(function(){return _e})),n.d(r,"toFloat64ArrayAsyncIterator",(function(){return Ie})),n.d(r,"toUint8ClampedArrayAsyncIterator",(function(){return Se})),n.d(r,"rebaseValueOffsets",(function(){return xe})),n.d(r,"compareArrayLike",(function(){return Ae}));var i={};n.r(i),n.d(i,"getBool",(function(){return un})),n.d(i,"getBit",(function(){return sn})),n.d(i,"setBool",(function(){return cn})),n.d(i,"truncateBitmap",(function(){return fn})),n.d(i,"packBools",(function(){return ln})),n.d(i,"iterateBits",(function(){return hn})),n.d(i,"popcnt_bit_range",(function(){return yn})),n.d(i,"popcnt_array",(function(){return pn})),n.d(i,"popcnt_uint32",(function(){return dn}));var a={};n.r(a),n.d(a,"uint16ToFloat64",(function(){return Nr})),n.d(a,"float64ToUint16",(function(){return Cr}));var o={};n.r(o),n.d(o,"isArrowBigNumSymbol",(function(){return Hr})),n.d(o,"bignumToString",(function(){return Yr})),n.d(o,"bignumToBigInt",(function(){return 
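To make the interface of the deleted `training_stats.py` above easier to follow, here is a minimal usage sketch based only on that file's docstrings. It is an illustration, not part of the original repository: it assumes the module is importable as `lib.torch_utils.training_stats` (matching the deleted path), uses a hypothetical `compute_loss()` stand-in for the caller's real loss, and runs single-process; the `init_multiprocessing()` call needed for multi-GPU jobs is shown only as a comment.

import torch

from lib.torch_utils import training_stats  # assumed import path (mirrors the deleted file's location)


def compute_loss() -> torch.Tensor:
    """Hypothetical stand-in for the real training loss (any scalar works)."""
    return torch.rand([])


def main() -> None:
    # For multi-GPU training this would run once per process, after
    # torch.distributed.init_process_group(), e.g.:
    #   training_stats.init_multiprocessing(rank, torch.device('cuda', rank))
    collector = training_stats.Collector(regex='Loss/.*')

    for step in range(100):
        loss = compute_loss()
        # Cheap to call anywhere in the loop; accumulates
        # [count, sum, sum-of-squares] per unique name on the tensor's device.
        training_stats.report('Loss/total', loss)
        # Broadcast by rank 0 only; the name does not match 'Loss/.*',
        # so this particular collector ignores it.
        training_stats.report0('Progress/step', step)

        if (step + 1) % 10 == 0:
            # Sync counters across devices/processes and reset them, so the
            # queries below reflect only the last 10 steps.
            collector.update()
            print(f'step {step + 1}: '
                  f'n={collector.num("Loss/total")} '
                  f'mean={collector.mean("Loss/total"):.4f} '
                  f'std={collector.std("Loss/total"):.4f}')
            # collector.as_dict() returns {name: EasyDict(num, mean, std)}.


if __name__ == '__main__':
    main()

The design point the module's docstrings emphasize is that `report()` only updates per-device running counters, so it can be called from inside loss functions without forcing GPU-to-CPU transfers; all cross-device and cross-process synchronization is deferred to `Collector.update()`, which performs the transfers and a single `all_reduce` once per reporting interval.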
Wr})),n.d(o,"BN",(function(){return Xr}));var u={};n.r(u),n.d(u,"clampIndex",(function(){return Ci})),n.d(u,"clampRange",(function(){return Vi})),n.d(u,"createElementComparator",(function(){return Pi}));var s={};n.r(s),n.d(s,"BaseInt64",(function(){return ao})),n.d(s,"Uint64",(function(){return oo})),n.d(s,"Int64",(function(){return uo})),n.d(s,"Int128",(function(){return so}));n(3);var c=n(1),f=n.n(c),l=new WeakMap,h=new WeakMap;function y(t){var e=l.get(t);return console.assert(null!=e,"'this' is expected an Event object, but got",t),e}function p(t){null==t.passiveListener?t.event.cancelable&&(t.canceled=!0,"function"===typeof t.event.preventDefault&&t.event.preventDefault()):"undefined"!==typeof console&&"function"===typeof console.error&&console.error("Unable to preventDefault inside passive event listener invocation.",t.passiveListener)}function d(t,e){l.set(this,{eventTarget:t,event:e,eventPhase:2,currentTarget:t,canceled:!1,stopped:!1,immediateStopped:!1,passiveListener:null,timeStamp:e.timeStamp||Date.now()}),Object.defineProperty(this,"isTrusted",{value:!1,enumerable:!0});for(var n=Object.keys(e),r=0;r0){for(var t=new Array(arguments.length),e=0;et.length)&&(e=t.length);for(var n=0,r=new Array(e);n=t.length?{done:!0}:{done:!1,value:t[r++]}},e:function(t){throw t},f:i}}throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}var a,o=!0,u=!1;return{s:function(){n=n.call(t)},n:function(){var t=n.next();return o=t.done,t},e:function(t){u=!0,a=t},f:function(){try{o||null==n.return||n.return()}finally{if(u)throw a}}}}function D(t,e,n,r,i,a,o){try{var u=t[a](o),s=u.value}catch(c){return void n(c)}u.done?e(s):Promise.resolve(s).then(r,i)}function L(t){return function(){var e=this,n=arguments;return new Promise((function(r,i){var a=t.apply(e,n);function o(t){D(a,r,i,o,u,"next",t)}function u(t){D(a,r,i,o,u,"throw",t)}o(void 0)}))}}function F(t,e){if(!(t instanceof e))throw new TypeError("Cannot call a class as a function")}function M(t,e){for(var n=0;n>>0)+4294967296*this.high},W.Long.prototype.equals=function(t){return this.low==t.low&&this.high==t.high},W.Long.ZERO=new W.Long(0,0),W.Builder=function(t){if(t)e=t;else var e=1024;this.bb=W.ByteBuffer.allocate(e),this.space=e,this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},W.Builder.prototype.clear=function(){this.bb.clear(),this.space=this.bb.capacity(),this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},W.Builder.prototype.forceDefaults=function(t){this.force_defaults=t},W.Builder.prototype.dataBuffer=function(){return this.bb},W.Builder.prototype.asUint8Array=function(){return this.bb.bytes().subarray(this.bb.position(),this.bb.position()+this.offset())},W.Builder.prototype.prep=function(t,e){t>this.minalign&&(this.minalign=t);for(var n=1+~(this.bb.capacity()-this.space+e)&t-1;this.space=0&&0==this.vtable[e];e--);for(var n=e+1;e>=0;e--)this.addInt16(0!=this.vtable[e]?t-this.vtable[e]:0);this.addInt16(t-this.object_start);var r=(n+2)*W.SIZEOF_SHORT;this.addInt16(r);var i=0,a=this.space;t:for(e=0;e=0;r--)this.writeInt8(n.charCodeAt(r))}this.prep(this.minalign,W.SIZEOF_INT),this.addOffset(t),this.bb.setPosition(this.space)},W.Builder.prototype.requiredField=function(t,e){var 
n=this.bb.capacity()-t,r=n-this.bb.readInt32(n);if(!(0!=this.bb.readInt16(r+e)))throw new Error("FlatBuffers: field "+e+" must be set")},W.Builder.prototype.startVector=function(t,e,n){this.notNested(),this.vector_num_elems=e,this.prep(W.SIZEOF_INT,t*e),this.prep(n,t*e)},W.Builder.prototype.endVector=function(){return this.writeInt32(this.vector_num_elems),this.offset()},W.Builder.prototype.createString=function(t){if(t instanceof Uint8Array)var e=t;else{e=[];for(var n=0;n=56320)r=i;else r=(i<<10)+t.charCodeAt(n++)+-56613888;r<128?e.push(r):(r<2048?e.push(r>>6&31|192):(r<65536?e.push(r>>12&15|224):e.push(r>>18&7|240,r>>12&63|128),e.push(r>>6&63|128)),e.push(63&r|128))}}this.addInt8(0),this.startVector(1,e.length,1),this.bb.setPosition(this.space-=e.length);n=0;for(var a=this.space,o=this.bb.bytes();n>24},W.ByteBuffer.prototype.readUint8=function(t){return this.bytes_[t]},W.ByteBuffer.prototype.readInt16=function(t){return this.readUint16(t)<<16>>16},W.ByteBuffer.prototype.readUint16=function(t){return this.bytes_[t]|this.bytes_[t+1]<<8},W.ByteBuffer.prototype.readInt32=function(t){return this.bytes_[t]|this.bytes_[t+1]<<8|this.bytes_[t+2]<<16|this.bytes_[t+3]<<24},W.ByteBuffer.prototype.readUint32=function(t){return this.readInt32(t)>>>0},W.ByteBuffer.prototype.readInt64=function(t){return new W.Long(this.readInt32(t),this.readInt32(t+4))},W.ByteBuffer.prototype.readUint64=function(t){return new W.Long(this.readUint32(t),this.readUint32(t+4))},W.ByteBuffer.prototype.readFloat32=function(t){return W.int32[0]=this.readInt32(t),W.float32[0]},W.ByteBuffer.prototype.readFloat64=function(t){return W.int32[W.isLittleEndian?0:1]=this.readInt32(t),W.int32[W.isLittleEndian?1:0]=this.readInt32(t+4),W.float64[0]},W.ByteBuffer.prototype.writeInt8=function(t,e){this.bytes_[t]=e},W.ByteBuffer.prototype.writeUint8=function(t,e){this.bytes_[t]=e},W.ByteBuffer.prototype.writeInt16=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8},W.ByteBuffer.prototype.writeUint16=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8},W.ByteBuffer.prototype.writeInt32=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8,this.bytes_[t+2]=e>>16,this.bytes_[t+3]=e>>24},W.ByteBuffer.prototype.writeUint32=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8,this.bytes_[t+2]=e>>16,this.bytes_[t+3]=e>>24},W.ByteBuffer.prototype.writeInt64=function(t,e){this.writeInt32(t,e.low),this.writeInt32(t+4,e.high)},W.ByteBuffer.prototype.writeUint64=function(t,e){this.writeUint32(t,e.low),this.writeUint32(t+4,e.high)},W.ByteBuffer.prototype.writeFloat32=function(t,e){W.float32[0]=e,this.writeInt32(t,W.int32[0])},W.ByteBuffer.prototype.writeFloat64=function(t,e){W.float64[0]=e,this.writeInt32(t,W.int32[W.isLittleEndian?0:1]),this.writeInt32(t+4,W.int32[W.isLittleEndian?1:0])},W.ByteBuffer.prototype.getBufferIdentifier=function(){if(this.bytes_.length>10),56320+(1023&a)))}return r},W.ByteBuffer.prototype.__indirect=function(t){return t+this.readInt32(t)},W.ByteBuffer.prototype.__vector=function(t){return t+this.readInt32(t)+W.SIZEOF_INT},W.ByteBuffer.prototype.__vector_len=function(t){return this.readInt32(t+this.readInt32(t))},W.ByteBuffer.prototype.__has_identifier=function(t){if(t.length!=W.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: file identifier must be length "+W.FILE_IDENTIFIER_LENGTH);for(var e=0;e>6*n)+r];n>0;){var a=e>>6*(n-1);i.push(128|63&a),n-=1}return i}}Z.prototype={decode:function(t,e){var n;n="object"===typeof t&&t instanceof ArrayBuffer?new Uint8Array(t):"object"===typeof t&&"buffer"in t&&t.buffer instanceof 
ArrayBuffer?new Uint8Array(t.buffer,t.byteOffset,t.byteLength):new Uint8Array(0),e=$(e),this._streaming||(this._decoder=new X({fatal:this._fatal}),this._BOMseen=!1),this._streaming=Boolean(e.stream);for(var r,i=new K(n),a=[];!i.endOfStream()&&(r=this._decoder.handler(i,i.read()))!==G;)null!==r&&(Array.isArray(r)?a.push.apply(a,r):a.push(r));if(!this._streaming){do{if((r=this._decoder.handler(i,i.read()))===G)break;null!==r&&(Array.isArray(r)?a.push.apply(a,r):a.push(r))}while(!i.endOfStream());this._decoder=null}return a.length&&(-1===["utf-8"].indexOf(this.encoding)||this._ignoreBOM||this._BOMseen||(65279===a[0]?(this._BOMseen=!0,a.shift()):this._BOMseen=!0)),function(t){for(var e="",n=0;n>10),56320+(1023&r)))}return e}(a)}},Q.prototype={encode:function(t,e){t=t?String(t):"",e=$(e),this._streaming||(this._encoder=new tt(this._options)),this._streaming=Boolean(e.stream);for(var n,r=[],i=new K(function(t){for(var e=String(t),n=e.length,r=0,i=[];r57343)i.push(a);else if(56320<=a&&a<=57343)i.push(65533);else if(55296<=a&&a<=56319)if(r===n-1)i.push(65533);else{var o=t.charCodeAt(r+1);if(56320<=o&&o<=57343){var u=1023&a,s=1023&o;i.push(65536+(u<<10)+s),r+=1}else i.push(65533)}r+=1}return i}(t));!i.endOfStream()&&(n=this._encoder.handler(i,i.read()))!==G;)Array.isArray(n)?r.push.apply(r,n):r.push(n);if(!this._streaming){for(;(n=this._encoder.handler(i,i.read()))!==G;)Array.isArray(n)?r.push.apply(r,n):r.push(n);this._encoder=null}return new Uint8Array(r)}};var et="function"===typeof Buffer?Buffer:null,nt="function"===typeof TextDecoder&&"function"===typeof TextEncoder,rt=function(t){if(nt||!et){var e=new t("utf-8");return function(t){return e.decode(t)}}return function(t){var e=Jt(t),n=e.buffer,r=e.byteOffset,i=e.length;return et.from(n,r,i).toString()}}("undefined"!==typeof TextDecoder?TextDecoder:Z),it=function(t){if(nt||!et){var e=new t;return function(t){return e.encode(t)}}return function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:"";return Jt(et.from(t,"utf8"))}}("undefined"!==typeof TextEncoder?TextEncoder:Q);function at(t,e){return at=Object.setPrototypeOf||function(t,e){return t.__proto__=e,t},at(t,e)}function ot(t,e){if("function"!==typeof e&&null!==e)throw new TypeError("Super expression must either be null or a function");Object.defineProperty(t,"prototype",{value:Object.create(e&&e.prototype,{constructor:{value:t,writable:!0,configurable:!0}}),writable:!1}),e&&at(t,e)}function ut(t){return ut=Object.setPrototypeOf?Object.getPrototypeOf:function(t){return t.__proto__||Object.getPrototypeOf(t)},ut(t)}function st(){if("undefined"===typeof Reflect||!Reflect.construct)return!1;if(Reflect.construct.sham)return!1;if("function"===typeof Proxy)return!0;try{return Boolean.prototype.valueOf.call(Reflect.construct(Boolean,[],(function(){}))),!0}catch(t){return!1}}var ct=n(4),ft=n.n(ct);function lt(t){if(void 0===t)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return t}function ht(t,e){if(e&&("object"===ft()(e)||"function"===typeof e))return e;if(void 0!==e)throw new TypeError("Derived constructors may only return object or undefined");return lt(t)}function yt(t){var e=st();return function(){var n,r=ut(t);if(e){var i=ut(this).constructor;n=Reflect.construct(r,arguments,i)}else n=r.apply(this,arguments);return ht(this,n)}}var pt=Object.freeze({done:!0,value:void 0}),dt=function(){function t(e){F(this,t),this._json=e}return E(t,[{key:"schema",get:function(){return this._json.schema}},{key:"batches",get:function(){return 
this._json.batches||[]}},{key:"dictionaries",get:function(){return this._json.dictionaries||[]}}]),t}(),vt=function(){function t(){F(this,t)}return E(t,[{key:"tee",value:function(){return this._getDOMStream().tee()}},{key:"pipe",value:function(t,e){return this._getNodeStream().pipe(t,e)}},{key:"pipeTo",value:function(t,e){return this._getDOMStream().pipeTo(t,e)}},{key:"pipeThrough",value:function(t,e){return this._getDOMStream().pipeThrough(t,e)}},{key:"_getDOMStream",value:function(){return this._DOMStream||(this._DOMStream=this.toDOMStream())}},{key:"_getNodeStream",value:function(){return this._nodeStream||(this._nodeStream=this.toNodeStream())}}]),t}(),bt=function(t,e){ot(r,t);var n=yt(r);function r(){var t;return F(this,r),(t=n.call(this))._values=[],t.resolvers=[],t._closedPromise=new Promise((function(e){return t._closedPromiseResolve=e})),t}return E(r,[{key:"closed",get:function(){return this._closedPromise}},{key:"cancel",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.return(e);case 2:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"write",value:function(t){this._ensureOpen()&&(this.resolvers.length<=0?this._values.push(t):this.resolvers.shift().resolve({done:!1,value:t}))}},{key:"abort",value:function(t){this._closedPromiseResolve&&(this.resolvers.length<=0?this._error={error:t}:this.resolvers.shift().reject({done:!0,value:t}))}},{key:"close",value:function(){if(this._closedPromiseResolve){for(var t=this.resolvers;t.length>0;)t.shift().resolve(pt);this._closedPromiseResolve(),this._closedPromiseResolve=void 0}}},{key:e,value:function(){return this}},{key:"toDOMStream",value:function(t){return Be.toDOMStream(this._closedPromiseResolve||this._error?this:this._values,t)}},{key:"toNodeStream",value:function(t){return Be.toNodeStream(this._closedPromiseResolve||this._error?this:this._values,t)}},{key:"throw",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.abort(e);case 2:return t.abrupt("return",pt);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"return",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.close();case 2:return t.abrupt("return",pt);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"read",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"read");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"peek",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"peek");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"next",value:function(){var t=this;return this._values.length>0?Promise.resolve({done:!1,value:this._values.shift()}):this._error?Promise.reject({done:!0,value:this._error.error}):this._closedPromiseResolve?new Promise((function(e,n){t.resolvers.push({resolve:e,reject:n})})):Promise.resolve(pt)}},{key:"_ensureOpen",value:function(){if(this._closedPromiseResolve)return!0;throw new 
Error("".concat(this," is closed"))}}]),r}(vt,Symbol.asyncIterator),gt=U(function(){var t=function(){throw new Error("BigInt is not available in this environment")};function e(){throw t()}return e.asIntN=function(){throw t()},e.asUintN=function(){throw t()},"undefined"!==typeof BigInt?[BigInt,!0]:[e,!1]}(),2),mt=gt[0],kt=gt[1],wt=U(function(){var t=function(){throw new Error("BigInt64Array is not available in this environment")};return"undefined"!==typeof BigInt64Array?[BigInt64Array,!0]:[function(){function e(){throw F(this,e),t()}return E(e,null,[{key:"BYTES_PER_ELEMENT",get:function(){return 8}},{key:"of",value:function(){throw t()}},{key:"from",value:function(){throw t()}}]),e}(),!1]}(),2),_t=wt[0],It=(wt[1],U(function(){var t=function(){throw new Error("BigUint64Array is not available in this environment")};return"undefined"!==typeof BigUint64Array?[BigUint64Array,!0]:[function(){function e(){throw F(this,e),t()}return E(e,null,[{key:"BYTES_PER_ELEMENT",get:function(){return 8}},{key:"of",value:function(){throw t()}},{key:"from",value:function(){throw t()}}]),e}(),!1]}(),2)),St=It[0],xt=(It[1],function(t){return"number"===typeof t}),At=function(t){return"boolean"===typeof t},Tt=function(t){return"function"===typeof t},Bt=function(t){return null!=t&&Object(t)===t},Ot=function(t){return Bt(t)&&Tt(t.then)},Dt=function(t){return Bt(t)&&Tt(t[Symbol.iterator])},Lt=function(t){return Bt(t)&&Tt(t[Symbol.asyncIterator])},Ft=function(t){return Bt(t)&&Bt(t.schema)},Mt=function(t){return Bt(t)&&"done"in t&&"value"in t},Et=function(t){return Bt(t)&&Tt(t.stat)&&xt(t.fd)},Ut=function(t){return Bt(t)&&Ct(t.body)},Nt=function(t){return Bt(t)&&Tt(t.abort)&&Tt(t.getWriter)&&!(t instanceof vt)},Ct=function(t){return Bt(t)&&Tt(t.cancel)&&Tt(t.getReader)&&!(t instanceof vt)},Vt=function(t){return Bt(t)&&Tt(t.end)&&Tt(t.write)&&At(t.writable)&&!(t instanceof vt)},jt=function(t){return Bt(t)&&Tt(t.read)&&Tt(t.pipe)&&At(t.readable)&&!(t instanceof vt)},Rt=R.mark(ie),Pt=W.ByteBuffer,zt="undefined"!==typeof SharedArrayBuffer?SharedArrayBuffer:ArrayBuffer;function Yt(t,e){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:0,r=arguments.length>3&&void 0!==arguments[3]?arguments[3]:e.byteLength,i=t.byteLength,a=new Uint8Array(t.buffer,t.byteOffset,i),o=new Uint8Array(e.buffer,e.byteOffset,Math.min(r,i));return a.set(o,n),t}function Wt(t,e){for(var n,r,i,a=function(t){for(var e,n,r,i,a,o,u=t[0]?[t[0]]:[],s=0,c=0,f=t.length;++s0)do{if(t[n]!==e[n])return!1}while(++n0&&(r.push(i),u+=i.byteLength),!(e||o<=u)){y.next=22;break}case 16:return y.next=18,s();case 18:h=y.sent,a=h.cmd,o=h.size;case 21:if(o0&&(i.push(a),s+=a.byteLength),!(n||u<=s)){t.next=31;break}case 25:return t.next=27,c();case 27:y=t.sent,o=y.cmd,u=y.size;case 30:if(u0&&(i.push(Jt(a)),s+=a.byteLength),!(n||u<=s)){t.next=31;break}case 25:return t.next=27,c();case 27:y=t.sent,o=y.cmd,u=y.size;case 30:if(u=i)){t.next=2;break}return t.abrupt("return",{done:!1,value:new Uint8Array(n,0,i)});case 2:return t.next=4,e.read(new Uint8Array(n,r,i-r));case 4:if(a=t.sent,o=a.done,u=a.value,!((r+=u.byteLength)0&&(c.push(f),s+=f.byteLength)),!(i||u<=s)){t.next=36;break}case 30:return t.next=32,l();case 32:d=t.sent,o=d.cmd,u=d.size;case 35:if(u=0;n--)t.addInt32(e[n]);return t.endVector()}},{key:"startTypeIdsVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endUnion",value:function(t){return t.endObject()}},{key:"createUnion",value:function(t,n,r){return 
e.startUnion(t),e.addMode(t,n),e.addTypeIds(t,r),e.endUnion(t)}}]),e}();e.Union=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"bitWidth",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):0}},{key:"isSigned",value:function(){var t=this.bb.__offset(this.bb_pos,6);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}}],[{key:"getRootAsInt",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startInt",value:function(t){t.startObject(2)}},{key:"addBitWidth",value:function(t,e){t.addFieldInt32(0,e,0)}},{key:"addIsSigned",value:function(t,e){t.addFieldInt8(1,+e,0)}},{key:"endInt",value:function(t){return t.endObject()}},{key:"createInt",value:function(e,n,r){return t.startInt(e),t.addBitWidth(e,n),t.addIsSigned(e,r),t.endInt(e)}}]),t}();t.Int=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"precision",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.Precision.HALF}}],[{key:"getRootAsFloatingPoint",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startFloatingPoint",value:function(t){t.startObject(1)}},{key:"addPrecision",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.Precision.HALF)}},{key:"endFloatingPoint",value:function(t){return t.endObject()}},{key:"createFloatingPoint",value:function(t,n){return e.startFloatingPoint(t),e.addPrecision(t,n),e.endFloatingPoint(t)}}]),e}();e.FloatingPoint=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsUtf8",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startUtf8",value:function(t){t.startObject(0)}},{key:"endUtf8",value:function(t){return t.endObject()}},{key:"createUtf8",value:function(e){return t.startUtf8(e),t.endUtf8(e)}}]),t}();t.Utf8=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsBinary",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startBinary",value:function(t){t.startObject(0)}},{key:"endBinary",value:function(t){return t.endObject()}},{key:"createBinary",value:function(e){return t.startBinary(e),t.endBinary(e)}}]),t}();t.Binary=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return 
this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsLargeUtf8",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startLargeUtf8",value:function(t){t.startObject(0)}},{key:"endLargeUtf8",value:function(t){return t.endObject()}},{key:"createLargeUtf8",value:function(e){return t.startLargeUtf8(e),t.endLargeUtf8(e)}}]),t}();t.LargeUtf8=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsLargeBinary",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startLargeBinary",value:function(t){t.startObject(0)}},{key:"endLargeBinary",value:function(t){return t.endObject()}},{key:"createLargeBinary",value:function(e){return t.startLargeBinary(e),t.endLargeBinary(e)}}]),t}();t.LargeBinary=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"byteWidth",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):0}}],[{key:"getRootAsFixedSizeBinary",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startFixedSizeBinary",value:function(t){t.startObject(1)}},{key:"addByteWidth",value:function(t,e){t.addFieldInt32(0,e,0)}},{key:"endFixedSizeBinary",value:function(t){return t.endObject()}},{key:"createFixedSizeBinary",value:function(e,n){return t.startFixedSizeBinary(e),t.addByteWidth(e,n),t.endFixedSizeBinary(e)}}]),t}();t.FixedSizeBinary=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsBool",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startBool",value:function(t){t.startObject(0)}},{key:"endBool",value:function(t){return t.endObject()}},{key:"createBool",value:function(e){return t.startBool(e),t.endBool(e)}}]),t}();t.Bool=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"precision",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):0}},{key:"scale",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt32(this.bb_pos+t):0}}],[{key:"getRootAsDecimal",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startDecimal",value:function(t){t.startObject(2)}},{key:"addPrecision",value:function(t,e){t.addFieldInt32(0,e,0)}},{key:"addScale",value:function(t,e){t.addFieldInt32(1,e,0)}},{key:"endDecimal",value:function(t){return t.endObject()}},{key:"createDecimal",value:function(e,n,r){return 
t.startDecimal(e),t.addPrecision(e,n),t.addScale(e,r),t.endDecimal(e)}}]),t}();t.Decimal=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.DateUnit.MILLISECOND}}],[{key:"getRootAsDate",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDate",value:function(t){t.startObject(1)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.DateUnit.MILLISECOND)}},{key:"endDate",value:function(t){return t.endObject()}},{key:"createDate",value:function(t,n){return e.startDate(t),e.addUnit(t,n),e.endDate(t)}}]),e}();e.Date=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.TimeUnit.MILLISECOND}},{key:"bitWidth",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt32(this.bb_pos+t):32}}],[{key:"getRootAsTime",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startTime",value:function(t){t.startObject(2)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.TimeUnit.MILLISECOND)}},{key:"addBitWidth",value:function(t,e){t.addFieldInt32(1,e,32)}},{key:"endTime",value:function(t){return t.endObject()}},{key:"createTime",value:function(t,n,r){return e.startTime(t),e.addUnit(t,n),e.addBitWidth(t,r),e.endTime(t)}}]),e}();e.Time=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.TimeUnit.SECOND}},{key:"timezone",value:function(t){var e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}}],[{key:"getRootAsTimestamp",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startTimestamp",value:function(t){t.startObject(2)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.TimeUnit.SECOND)}},{key:"addTimezone",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"endTimestamp",value:function(t){return t.endObject()}},{key:"createTimestamp",value:function(t,n,r){return e.startTimestamp(t),e.addUnit(t,n),e.addTimezone(t,r),e.endTimestamp(t)}}]),e}();e.Timestamp=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return 
e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.IntervalUnit.YEAR_MONTH}}],[{key:"getRootAsInterval",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startInterval",value:function(t){t.startObject(1)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.IntervalUnit.YEAR_MONTH)}},{key:"endInterval",value:function(t){return t.endObject()}},{key:"createInterval",value:function(t,n){return e.startInterval(t),e.addUnit(t,n),e.endInterval(t)}}]),e}();e.Interval=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.TimeUnit.MILLISECOND}}],[{key:"getRootAsDuration",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDuration",value:function(t){t.startObject(1)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.TimeUnit.MILLISECOND)}},{key:"endDuration",value:function(t){return t.endObject()}},{key:"createDuration",value:function(t,n){return e.startDuration(t),e.addUnit(t,n),e.endDuration(t)}}]),e}();e.Duration=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"key",value:function(t){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}},{key:"value",value:function(t){var e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}}],[{key:"getRootAsKeyValue",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startKeyValue",value:function(t){t.startObject(2)}},{key:"addKey",value:function(t,e){t.addFieldOffset(0,e,0)}},{key:"addValue",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"endKeyValue",value:function(t){return t.endObject()}},{key:"createKeyValue",value:function(e,n,r){return t.startKeyValue(e),t.addKey(e,n),t.addValue(e,r),t.endKeyValue(e)}}]),t}();t.KeyValue=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"id",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"indexType",value:function(e){var n=this.bb.__offset(this.bb_pos,6);return n?(e||new t.apache.arrow.flatbuf.Int).__init(this.bb.__indirect(this.bb_pos+n),this.bb):null}},{key:"isOrdered",value:function(){var t=this.bb.__offset(this.bb_pos,8);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}}],[{key:"getRootAsDictionaryEncoding",value:function(t,n){return(n||new 
e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDictionaryEncoding",value:function(t){t.startObject(3)}},{key:"addId",value:function(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}},{key:"addIndexType",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"addIsOrdered",value:function(t,e){t.addFieldInt8(2,+e,0)}},{key:"endDictionaryEncoding",value:function(t){return t.endObject()}},{key:"createDictionaryEncoding",value:function(t,n,r,i){return e.startDictionaryEncoding(t),e.addId(t,n),e.addIndexType(t,r),e.addIsOrdered(t,i),e.endDictionaryEncoding(t)}}]),e}();e.DictionaryEncoding=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"name",value:function(t){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}},{key:"nullable",value:function(){var t=this.bb.__offset(this.bb_pos,6);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}},{key:"typeType",value:function(){var e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readUint8(this.bb_pos+e):t.apache.arrow.flatbuf.Type.NONE}},{key:"type",value:function(t){var e=this.bb.__offset(this.bb_pos,10);return e?this.bb.__union(t,this.bb_pos+e):null}},{key:"dictionary",value:function(e){var n=this.bb.__offset(this.bb_pos,12);return n?(e||new t.apache.arrow.flatbuf.DictionaryEncoding).__init(this.bb.__indirect(this.bb_pos+n),this.bb):null}},{key:"children",value:function(e,n){var r=this.bb.__offset(this.bb_pos,14);return r?(n||new t.apache.arrow.flatbuf.Field).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"childrenLength",value:function(){var t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"customMetadata",value:function(e,n){var r=this.bb.__offset(this.bb_pos,16);return r?(n||new t.apache.arrow.flatbuf.KeyValue).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"customMetadataLength",value:function(){var t=this.bb.__offset(this.bb_pos,16);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsField",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startField",value:function(t){t.startObject(7)}},{key:"addName",value:function(t,e){t.addFieldOffset(0,e,0)}},{key:"addNullable",value:function(t,e){t.addFieldInt8(1,+e,0)}},{key:"addTypeType",value:function(e,n){e.addFieldInt8(2,n,t.apache.arrow.flatbuf.Type.NONE)}},{key:"addType",value:function(t,e){t.addFieldOffset(3,e,0)}},{key:"addDictionary",value:function(t,e){t.addFieldOffset(4,e,0)}},{key:"addChildren",value:function(t,e){t.addFieldOffset(5,e,0)}},{key:"createChildrenVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startChildrenVector",value:function(t,e){t.startVector(4,e,4)}},{key:"addCustomMetadata",value:function(t,e){t.addFieldOffset(6,e,0)}},{key:"createCustomMetadataVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startCustomMetadataVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endField",value:function(t){return t.endObject()}},{key:"createField",value:function(t,n,r,i,a,o,u,s){return 
e.startField(t),e.addName(t,n),e.addNullable(t,r),e.addTypeType(t,i),e.addType(t,a),e.addDictionary(t,o),e.addChildren(t,u),e.addCustomMetadata(t,s),e.endField(t)}}]),e}();e.Field=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"offset",value:function(){return this.bb.readInt64(this.bb_pos)}},{key:"length",value:function(){return this.bb.readInt64(this.bb_pos+8)}}],[{key:"createBuffer",value:function(t,e,n){return t.prep(8,16),t.writeInt64(n),t.writeInt64(e),t.offset()}}]),t}();t.Buffer=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"endianness",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.Endianness.Little}},{key:"fields",value:function(e,n){var r=this.bb.__offset(this.bb_pos,6);return r?(n||new t.apache.arrow.flatbuf.Field).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"fieldsLength",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"customMetadata",value:function(e,n){var r=this.bb.__offset(this.bb_pos,8);return r?(n||new t.apache.arrow.flatbuf.KeyValue).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"customMetadataLength",value:function(){var t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsSchema",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startSchema",value:function(t){t.startObject(3)}},{key:"addEndianness",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.Endianness.Little)}},{key:"addFields",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"createFieldsVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startFieldsVector",value:function(t,e){t.startVector(4,e,4)}},{key:"addCustomMetadata",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"createCustomMetadataVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startCustomMetadataVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endSchema",value:function(t){return t.endObject()}},{key:"finishSchemaBuffer",value:function(t,e){t.finish(e)}},{key:"createSchema",value:function(t,n,r,i){return 
e.startSchema(t),e.addEndianness(t,n),e.addFields(t,r),e.addCustomMetadata(t,i),e.endSchema(t)}}]),e}();e.Schema=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){t.Schema=Ye.apache.arrow.flatbuf.Schema}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(t){!function(t){!function(t){!function(t){t[t.NONE=0]="NONE",t[t.Schema=1]="Schema",t[t.DictionaryBatch=2]="DictionaryBatch",t[t.RecordBatch=3]="RecordBatch",t[t.Tensor=4]="Tensor",t[t.SparseTensor=5]="SparseTensor"}(t.MessageHeader||(t.MessageHeader={}))}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"length",value:function(){return this.bb.readInt64(this.bb_pos)}},{key:"nullCount",value:function(){return this.bb.readInt64(this.bb_pos+8)}}],[{key:"createFieldNode",value:function(t,e,n){return t.prep(8,16),t.writeInt64(n),t.writeInt64(e),t.offset()}}]),t}();t.FieldNode=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"length",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"nodes",value:function(e,n){var r=this.bb.__offset(this.bb_pos,6);return r?(n||new t.apache.arrow.flatbuf.FieldNode).__init(this.bb.__vector(this.bb_pos+r)+16*e,this.bb):null}},{key:"nodesLength",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"buffers",value:function(t,e){var n=this.bb.__offset(this.bb_pos,8);return n?(e||new Ye.apache.arrow.flatbuf.Buffer).__init(this.bb.__vector(this.bb_pos+n)+16*t,this.bb):null}},{key:"buffersLength",value:function(){var t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsRecordBatch",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startRecordBatch",value:function(t){t.startObject(3)}},{key:"addLength",value:function(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}},{key:"addNodes",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"startNodesVector",value:function(t,e){t.startVector(16,e,8)}},{key:"addBuffers",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"startBuffersVector",value:function(t,e){t.startVector(16,e,8)}},{key:"endRecordBatch",value:function(t){return t.endObject()}},{key:"createRecordBatch",value:function(t,n,r,i){return e.startRecordBatch(t),e.addLength(t,n),e.addNodes(t,r),e.addBuffers(t,i),e.endRecordBatch(t)}}]),e}();e.RecordBatch=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"id",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"data",value:function(e){var n=this.bb.__offset(this.bb_pos,6);return n?(e||new 
t.apache.arrow.flatbuf.RecordBatch).__init(this.bb.__indirect(this.bb_pos+n),this.bb):null}},{key:"isDelta",value:function(){var t=this.bb.__offset(this.bb_pos,8);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}}],[{key:"getRootAsDictionaryBatch",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDictionaryBatch",value:function(t){t.startObject(3)}},{key:"addId",value:function(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}},{key:"addData",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"addIsDelta",value:function(t,e){t.addFieldInt8(2,+e,0)}},{key:"endDictionaryBatch",value:function(t){return t.endObject()}},{key:"createDictionaryBatch",value:function(t,n,r,i){return e.startDictionaryBatch(t),e.addId(t,n),e.addData(t,r),e.addIsDelta(t,i),e.endDictionaryBatch(t)}}]),e}();e.DictionaryBatch=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"version",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt16(this.bb_pos+t):Ye.apache.arrow.flatbuf.MetadataVersion.V1}},{key:"headerType",value:function(){var e=this.bb.__offset(this.bb_pos,6);return e?this.bb.readUint8(this.bb_pos+e):t.apache.arrow.flatbuf.MessageHeader.NONE}},{key:"header",value:function(t){var e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__union(t,this.bb_pos+e):null}},{key:"bodyLength",value:function(){var t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"customMetadata",value:function(t,e){var n=this.bb.__offset(this.bb_pos,12);return n?(e||new Ye.apache.arrow.flatbuf.KeyValue).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+n)+4*t),this.bb):null}},{key:"customMetadataLength",value:function(){var t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsMessage",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startMessage",value:function(t){t.startObject(5)}},{key:"addVersion",value:function(t,e){t.addFieldInt16(0,e,Ye.apache.arrow.flatbuf.MetadataVersion.V1)}},{key:"addHeaderType",value:function(e,n){e.addFieldInt8(1,n,t.apache.arrow.flatbuf.MessageHeader.NONE)}},{key:"addHeader",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"addBodyLength",value:function(t,e){t.addFieldInt64(3,e,t.createLong(0,0))}},{key:"addCustomMetadata",value:function(t,e){t.addFieldOffset(4,e,0)}},{key:"createCustomMetadataVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startCustomMetadataVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endMessage",value:function(t){return t.endObject()}},{key:"finishMessageBuffer",value:function(t,e){t.finish(e)}},{key:"createMessage",value:function(t,n,r,i,a,o){return e.startMessage(t),e.addVersion(t,n),e.addHeaderType(t,r),e.addHeader(t,i),e.addBodyLength(t,a),e.addCustomMetadata(t,o),e.endMessage(t)}}]),e}();e.Message=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={}));Ye.apache.arrow.flatbuf.Type;var 
Je,Ze,Qe=Ye.apache.arrow.flatbuf.DateUnit,Xe=Ye.apache.arrow.flatbuf.TimeUnit,tn=Ye.apache.arrow.flatbuf.Precision,en=Ye.apache.arrow.flatbuf.UnionMode,nn=Ye.apache.arrow.flatbuf.IntervalUnit,rn=Ge.apache.arrow.flatbuf.MessageHeader,an=Ye.apache.arrow.flatbuf.MetadataVersion;!function(t){t[t.NONE=0]="NONE",t[t.Null=1]="Null",t[t.Int=2]="Int",t[t.Float=3]="Float",t[t.Binary=4]="Binary",t[t.Utf8=5]="Utf8",t[t.Bool=6]="Bool",t[t.Decimal=7]="Decimal",t[t.Date=8]="Date",t[t.Time=9]="Time",t[t.Timestamp=10]="Timestamp",t[t.Interval=11]="Interval",t[t.List=12]="List",t[t.Struct=13]="Struct",t[t.Union=14]="Union",t[t.FixedSizeBinary=15]="FixedSizeBinary",t[t.FixedSizeList=16]="FixedSizeList",t[t.Map=17]="Map",t[t.Dictionary=-1]="Dictionary",t[t.Int8=-2]="Int8",t[t.Int16=-3]="Int16",t[t.Int32=-4]="Int32",t[t.Int64=-5]="Int64",t[t.Uint8=-6]="Uint8",t[t.Uint16=-7]="Uint16",t[t.Uint32=-8]="Uint32",t[t.Uint64=-9]="Uint64",t[t.Float16=-10]="Float16",t[t.Float32=-11]="Float32",t[t.Float64=-12]="Float64",t[t.DateDay=-13]="DateDay",t[t.DateMillisecond=-14]="DateMillisecond",t[t.TimestampSecond=-15]="TimestampSecond",t[t.TimestampMillisecond=-16]="TimestampMillisecond",t[t.TimestampMicrosecond=-17]="TimestampMicrosecond",t[t.TimestampNanosecond=-18]="TimestampNanosecond",t[t.TimeSecond=-19]="TimeSecond",t[t.TimeMillisecond=-20]="TimeMillisecond",t[t.TimeMicrosecond=-21]="TimeMicrosecond",t[t.TimeNanosecond=-22]="TimeNanosecond",t[t.DenseUnion=-23]="DenseUnion",t[t.SparseUnion=-24]="SparseUnion",t[t.IntervalDayTime=-25]="IntervalDayTime",t[t.IntervalYearMonth=-26]="IntervalYearMonth"}(Je||(Je={})),function(t){t[t.OFFSET=0]="OFFSET",t[t.DATA=1]="DATA",t[t.VALIDITY=2]="VALIDITY",t[t.TYPE=3]="TYPE"}(Ze||(Ze={}));var on=R.mark(hn);function un(t,e,n,r){return 0!==(n&1<>r}function cn(t,e,n){return n?!!(t[e>>3]|=1<>3]&=~(1<0||n.byteLength>3):ln(hn(n,t,e,null,un)).subarray(0,r)),i}return n}function ln(t){var e,n=[],r=0,i=0,a=0,o=O(t);try{for(o.s();!(e=o.n()).done;){e.value&&(a|=1<0)&&(n[r++]=a);var u=new Uint8Array(n.length+7&-8);return u.set(n),u}function hn(t,e,n,r,i){var a,o,u,s,c;return R.wrap((function(f){for(;;)switch(f.prev=f.next){case 0:a=e%8,o=e>>3,u=0,s=n;case 3:if(!(s>0)){f.next=11;break}c=t[o++];case 5:return f.next=7,i(r,u++,c,a);case 7:if(--s>0&&++a<8){f.next=5;break}case 8:a=0,f.next=3;break;case 11:case"end":return f.stop()}}),on)}function yn(t,e,n){if(n-e<=0)return 0;if(n-e<8){var r,i=0,a=O(hn(t,e,n-e,t,sn));try{for(a.s();!(r=a.n()).done;){i+=r.value}}catch(s){a.e(s)}finally{a.f()}return i}var o=n>>3<<3,u=e+(e%8===0?0:8-e%8);return yn(t,e,u)+yn(t,o,n)+pn(t,u>>3,o-u>>3)}function pn(t,e,n){for(var r=0,i=0|e,a=new DataView(t.buffer,t.byteOffset,t.byteLength),o=void 0===n?t.byteLength:i+n;o-i>=4;)r+=dn(a.getUint32(i)),i+=4;for(;o-i>=2;)r+=dn(a.getUint16(i)),i+=2;for(;o-i>=1;)r+=dn(a.getUint8(i)),i+=1;return r}function dn(t){var e=0|t;return 16843009*((e=(858993459&(e-=e>>>1&1431655765))+(e>>>2&858993459))+(e>>>4)&252645135)>>>24}function vn(t){return function(t){if(Array.isArray(t))return T(t)}(t)||function(t){if("undefined"!==typeof Symbol&&null!=t[Symbol.iterator]||null!=t["@@iterator"])return Array.from(t)}(t)||B(t)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}var bn=function(){function t(){F(this,t)}return E(t,[{key:"visitMany",value:function(t){for(var e=this,n=arguments.length,r=new Array(n>1?n-1:0),i=1;i1&&void 0!==arguments[1])||arguments[1];return 
gn(this,t,e)}},{key:"visitNull",value:function(t){return null}},{key:"visitBool",value:function(t){return null}},{key:"visitInt",value:function(t){return null}},{key:"visitFloat",value:function(t){return null}},{key:"visitUtf8",value:function(t){return null}},{key:"visitBinary",value:function(t){return null}},{key:"visitFixedSizeBinary",value:function(t){return null}},{key:"visitDate",value:function(t){return null}},{key:"visitTimestamp",value:function(t){return null}},{key:"visitTime",value:function(t){return null}},{key:"visitDecimal",value:function(t){return null}},{key:"visitList",value:function(t){return null}},{key:"visitStruct",value:function(t){return null}},{key:"visitUnion",value:function(t){return null}},{key:"visitDictionary",value:function(t){return null}},{key:"visitInterval",value:function(t){return null}},{key:"visitFixedSizeList",value:function(t){return null}},{key:"visitMap",value:function(t){return null}}]),t}();function gn(t,e){var n=!(arguments.length>2&&void 0!==arguments[2])||arguments[2],r=null,i=Je.NONE;switch(e instanceof yr||e instanceof qe?i=mn(e.type):e instanceof Fn?i=mn(e):"number"!==typeof(i=e)&&(i=Je[e]),i){case Je.Null:r=t.visitNull;break;case Je.Bool:r=t.visitBool;break;case Je.Int:r=t.visitInt;break;case Je.Int8:r=t.visitInt8||t.visitInt;break;case Je.Int16:r=t.visitInt16||t.visitInt;break;case Je.Int32:r=t.visitInt32||t.visitInt;break;case Je.Int64:r=t.visitInt64||t.visitInt;break;case Je.Uint8:r=t.visitUint8||t.visitInt;break;case Je.Uint16:r=t.visitUint16||t.visitInt;break;case Je.Uint32:r=t.visitUint32||t.visitInt;break;case Je.Uint64:r=t.visitUint64||t.visitInt;break;case Je.Float:r=t.visitFloat;break;case Je.Float16:r=t.visitFloat16||t.visitFloat;break;case Je.Float32:r=t.visitFloat32||t.visitFloat;break;case Je.Float64:r=t.visitFloat64||t.visitFloat;break;case Je.Utf8:r=t.visitUtf8;break;case Je.Binary:r=t.visitBinary;break;case Je.FixedSizeBinary:r=t.visitFixedSizeBinary;break;case Je.Date:r=t.visitDate;break;case Je.DateDay:r=t.visitDateDay||t.visitDate;break;case Je.DateMillisecond:r=t.visitDateMillisecond||t.visitDate;break;case Je.Timestamp:r=t.visitTimestamp;break;case Je.TimestampSecond:r=t.visitTimestampSecond||t.visitTimestamp;break;case Je.TimestampMillisecond:r=t.visitTimestampMillisecond||t.visitTimestamp;break;case Je.TimestampMicrosecond:r=t.visitTimestampMicrosecond||t.visitTimestamp;break;case Je.TimestampNanosecond:r=t.visitTimestampNanosecond||t.visitTimestamp;break;case Je.Time:r=t.visitTime;break;case Je.TimeSecond:r=t.visitTimeSecond||t.visitTime;break;case Je.TimeMillisecond:r=t.visitTimeMillisecond||t.visitTime;break;case Je.TimeMicrosecond:r=t.visitTimeMicrosecond||t.visitTime;break;case Je.TimeNanosecond:r=t.visitTimeNanosecond||t.visitTime;break;case Je.Decimal:r=t.visitDecimal;break;case Je.List:r=t.visitList;break;case Je.Struct:r=t.visitStruct;break;case Je.Union:r=t.visitUnion;break;case Je.DenseUnion:r=t.visitDenseUnion||t.visitUnion;break;case Je.SparseUnion:r=t.visitSparseUnion||t.visitUnion;break;case Je.Dictionary:r=t.visitDictionary;break;case Je.Interval:r=t.visitInterval;break;case Je.IntervalDayTime:r=t.visitIntervalDayTime||t.visitInterval;break;case Je.IntervalYearMonth:r=t.visitIntervalYearMonth||t.visitInterval;break;case Je.FixedSizeList:r=t.visitFixedSizeList;break;case Je.Map:r=t.visitMap}if("function"===typeof r)return r;if(!n)return function(){return null};throw new Error("Unrecognized type '".concat(Je[i],"'"))}function mn(t){switch(t.typeId){case Je.Null:return Je.Null;case Je.Int:var 
e=t.bitWidth,n=t.isSigned;switch(e){case 8:return n?Je.Int8:Je.Uint8;case 16:return n?Je.Int16:Je.Uint16;case 32:return n?Je.Int32:Je.Uint32;case 64:return n?Je.Int64:Je.Uint64}return Je.Int;case Je.Float:switch(t.precision){case tn.HALF:return Je.Float16;case tn.SINGLE:return Je.Float32;case tn.DOUBLE:return Je.Float64}return Je.Float;case Je.Binary:return Je.Binary;case Je.Utf8:return Je.Utf8;case Je.Bool:return Je.Bool;case Je.Decimal:return Je.Decimal;case Je.Time:switch(t.unit){case Xe.SECOND:return Je.TimeSecond;case Xe.MILLISECOND:return Je.TimeMillisecond;case Xe.MICROSECOND:return Je.TimeMicrosecond;case Xe.NANOSECOND:return Je.TimeNanosecond}return Je.Time;case Je.Timestamp:switch(t.unit){case Xe.SECOND:return Je.TimestampSecond;case Xe.MILLISECOND:return Je.TimestampMillisecond;case Xe.MICROSECOND:return Je.TimestampMicrosecond;case Xe.NANOSECOND:return Je.TimestampNanosecond}return Je.Timestamp;case Je.Date:switch(t.unit){case Qe.DAY:return Je.DateDay;case Qe.MILLISECOND:return Je.DateMillisecond}return Je.Date;case Je.Interval:switch(t.unit){case nn.DAY_TIME:return Je.IntervalDayTime;case nn.YEAR_MONTH:return Je.IntervalYearMonth}return Je.Interval;case Je.Map:return Je.Map;case Je.List:return Je.List;case Je.Struct:return Je.Struct;case Je.Union:switch(t.mode){case en.Dense:return Je.DenseUnion;case en.Sparse:return Je.SparseUnion}return Je.Union;case Je.FixedSizeBinary:return Je.FixedSizeBinary;case Je.FixedSizeList:return Je.FixedSizeList;case Je.Dictionary:return Je.Dictionary}throw new Error("Unrecognized type '".concat(Je[t.typeId],"'"))}bn.prototype.visitInt8=null,bn.prototype.visitInt16=null,bn.prototype.visitInt32=null,bn.prototype.visitInt64=null,bn.prototype.visitUint8=null,bn.prototype.visitUint16=null,bn.prototype.visitUint32=null,bn.prototype.visitUint64=null,bn.prototype.visitFloat16=null,bn.prototype.visitFloat32=null,bn.prototype.visitFloat64=null,bn.prototype.visitDateDay=null,bn.prototype.visitDateMillisecond=null,bn.prototype.visitTimestampSecond=null,bn.prototype.visitTimestampMillisecond=null,bn.prototype.visitTimestampMicrosecond=null,bn.prototype.visitTimestampNanosecond=null,bn.prototype.visitTimeSecond=null,bn.prototype.visitTimeMillisecond=null,bn.prototype.visitTimeMicrosecond=null,bn.prototype.visitTimeNanosecond=null,bn.prototype.visitDenseUnion=null,bn.prototype.visitSparseUnion=null,bn.prototype.visitIntervalDayTime=null,bn.prototype.visitIntervalYearMonth=null;var kn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"compareSchemas",value:function(t,e){return t===e||e instanceof t.constructor&&Ln.compareFields(t.fields,e.fields)}},{key:"compareFields",value:function(t,e){return t===e||Array.isArray(t)&&Array.isArray(e)&&t.length===e.length&&t.every((function(t,n){return Ln.compareField(t,e[n])}))}},{key:"compareField",value:function(t,e){return t===e||e instanceof t.constructor&&t.name===e.name&&t.nullable===e.nullable&&Ln.visit(t.type,e.type)}}]),n}(bn);function wn(t,e){return e instanceof t.constructor}function _n(t,e){return t===e||wn(t,e)}function In(t,e){return t===e||wn(t,e)&&t.bitWidth===e.bitWidth&&t.isSigned===e.isSigned}function Sn(t,e){return t===e||wn(t,e)&&t.precision===e.precision}function xn(t,e){return t===e||wn(t,e)&&t.unit===e.unit}function An(t,e){return t===e||wn(t,e)&&t.unit===e.unit&&t.timezone===e.timezone}function Tn(t,e){return t===e||wn(t,e)&&t.unit===e.unit&&t.bitWidth===e.bitWidth}function Bn(t,e){return 
t===e||wn(t,e)&&t.mode===e.mode&&t.typeIds.every((function(t,n){return t===e.typeIds[n]}))&&Ln.compareFields(t.children,e.children)}function On(t,e){return t===e||wn(t,e)&&t.unit===e.unit}kn.prototype.visitNull=_n,kn.prototype.visitBool=_n,kn.prototype.visitInt=In,kn.prototype.visitInt8=In,kn.prototype.visitInt16=In,kn.prototype.visitInt32=In,kn.prototype.visitInt64=In,kn.prototype.visitUint8=In,kn.prototype.visitUint16=In,kn.prototype.visitUint32=In,kn.prototype.visitUint64=In,kn.prototype.visitFloat=Sn,kn.prototype.visitFloat16=Sn,kn.prototype.visitFloat32=Sn,kn.prototype.visitFloat64=Sn,kn.prototype.visitUtf8=_n,kn.prototype.visitBinary=_n,kn.prototype.visitFixedSizeBinary=function(t,e){return t===e||wn(t,e)&&t.byteWidth===e.byteWidth},kn.prototype.visitDate=xn,kn.prototype.visitDateDay=xn,kn.prototype.visitDateMillisecond=xn,kn.prototype.visitTimestamp=An,kn.prototype.visitTimestampSecond=An,kn.prototype.visitTimestampMillisecond=An,kn.prototype.visitTimestampMicrosecond=An,kn.prototype.visitTimestampNanosecond=An,kn.prototype.visitTime=Tn,kn.prototype.visitTimeSecond=Tn,kn.prototype.visitTimeMillisecond=Tn,kn.prototype.visitTimeMicrosecond=Tn,kn.prototype.visitTimeNanosecond=Tn,kn.prototype.visitDecimal=_n,kn.prototype.visitList=function(t,e){return t===e||wn(t,e)&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)},kn.prototype.visitStruct=function(t,e){return t===e||wn(t,e)&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)},kn.prototype.visitUnion=Bn,kn.prototype.visitDenseUnion=Bn,kn.prototype.visitSparseUnion=Bn,kn.prototype.visitDictionary=function(t,e){return t===e||wn(t,e)&&t.id===e.id&&t.isOrdered===e.isOrdered&&Ln.visit(t.indices,e.indices)&&Ln.visit(t.dictionary,e.dictionary)},kn.prototype.visitInterval=On,kn.prototype.visitIntervalDayTime=On,kn.prototype.visitIntervalYearMonth=On,kn.prototype.visitFixedSizeList=function(t,e){return t===e||wn(t,e)&&t.listSize===e.listSize&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)},kn.prototype.visitMap=function(t,e){return t===e||wn(t,e)&&t.keysSorted===e.keysSorted&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)};var Dn,Ln=new kn,Fn=function(){function t(){F(this,t)}return E(t,[{key:"typeId",get:function(){return Je.NONE}},{key:"compareTo",value:function(t){return Ln.visit(this,t)}}],[{key:"isNull",value:function(t){return t&&t.typeId===Je.Null}},{key:"isInt",value:function(t){return t&&t.typeId===Je.Int}},{key:"isFloat",value:function(t){return t&&t.typeId===Je.Float}},{key:"isBinary",value:function(t){return t&&t.typeId===Je.Binary}},{key:"isUtf8",value:function(t){return t&&t.typeId===Je.Utf8}},{key:"isBool",value:function(t){return t&&t.typeId===Je.Bool}},{key:"isDecimal",value:function(t){return t&&t.typeId===Je.Decimal}},{key:"isDate",value:function(t){return t&&t.typeId===Je.Date}},{key:"isTime",value:function(t){return t&&t.typeId===Je.Time}},{key:"isTimestamp",value:function(t){return t&&t.typeId===Je.Timestamp}},{key:"isInterval",value:function(t){return t&&t.typeId===Je.Interval}},{key:"isList",value:function(t){return t&&t.typeId===Je.List}},{key:"isStruct",value:function(t){return t&&t.typeId===Je.Struct}},{key:"isUnion",value:function(t){return t&&t.typeId===Je.Union}},{key:"isFixedSizeBinary",value:function(t){return t&&t.typeId===Je.FixedSizeBinary}},{key:"isFixedSizeList",value:function(t){return t&&t.typeId===Je.FixedSizeList}},{key:"isMap",value:function(t){return 
t&&t.typeId===Je.Map}},{key:"isDictionary",value:function(t){return t&&t.typeId===Je.Dictionary}}]),t}();Fn[Symbol.toStringTag]=((Dn=Fn.prototype).children=null,Dn.ArrayType=Array,Dn[Symbol.toStringTag]="DataType");var Mn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"toString",value:function(){return"Null"}},{key:"typeId",get:function(){return Je.Null}}]),n}(Fn);Mn[Symbol.toStringTag]=function(t){return t[Symbol.toStringTag]="Null"}(Mn.prototype);var En=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).isSigned=t,i.bitWidth=r,i}return E(n,[{key:"typeId",get:function(){return Je.Int}},{key:"ArrayType",get:function(){switch(this.bitWidth){case 8:return this.isSigned?Int8Array:Uint8Array;case 16:return this.isSigned?Int16Array:Uint16Array;case 32:case 64:return this.isSigned?Int32Array:Uint32Array}throw new Error("Unrecognized ".concat(this[Symbol.toStringTag]," type"))}},{key:"toString",value:function(){return"".concat(this.isSigned?"I":"Ui","nt").concat(this.bitWidth)}}]),n}(Fn);En[Symbol.toStringTag]=function(t){return t.isSigned=null,t.bitWidth=null,t[Symbol.toStringTag]="Int"}(En.prototype);var Un=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,8)}return E(n)}(En),Nn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,16)}return E(n)}(En),Cn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,32)}return E(n)}(En),Vn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,64)}return E(n)}(En),jn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,8)}return E(n)}(En),Rn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,16)}return E(n)}(En),Pn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,32)}return E(n)}(En),zn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,64)}return E(n)}(En);Object.defineProperty(Un.prototype,"ArrayType",{value:Int8Array}),Object.defineProperty(Nn.prototype,"ArrayType",{value:Int16Array}),Object.defineProperty(Cn.prototype,"ArrayType",{value:Int32Array}),Object.defineProperty(Vn.prototype,"ArrayType",{value:Int32Array}),Object.defineProperty(jn.prototype,"ArrayType",{value:Uint8Array}),Object.defineProperty(Rn.prototype,"ArrayType",{value:Uint16Array}),Object.defineProperty(Pn.prototype,"ArrayType",{value:Uint32Array}),Object.defineProperty(zn.prototype,"ArrayType",{value:Uint32Array});var Yn=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).precision=t,r}return E(n,[{key:"typeId",get:function(){return Je.Float}},{key:"ArrayType",get:function(){switch(this.precision){case tn.HALF:return Uint16Array;case tn.SINGLE:return Float32Array;case tn.DOUBLE:return Float64Array}throw new Error("Unrecognized ".concat(this[Symbol.toStringTag]," type"))}},{key:"toString",value:function(){return"Float".concat(this.precision<<5||16)}}]),n}(Fn);Yn[Symbol.toStringTag]=function(t){return t.precision=null,t[Symbol.toStringTag]="Float"}(Yn.prototype);var Wn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,tn.HALF)}return E(n)}(Yn),Hn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,tn.SINGLE)}return E(n)}(Yn),$n=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,tn.DOUBLE)}return 
E(n)}(Yn);Object.defineProperty(Wn.prototype,"ArrayType",{value:Uint16Array}),Object.defineProperty(Hn.prototype,"ArrayType",{value:Float32Array}),Object.defineProperty($n.prototype,"ArrayType",{value:Float64Array});var Kn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this)}return E(n,[{key:"typeId",get:function(){return Je.Binary}},{key:"toString",value:function(){return"Binary"}}]),n}(Fn);Kn[Symbol.toStringTag]=function(t){return t.ArrayType=Uint8Array,t[Symbol.toStringTag]="Binary"}(Kn.prototype);var Gn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this)}return E(n,[{key:"typeId",get:function(){return Je.Utf8}},{key:"toString",value:function(){return"Utf8"}}]),n}(Fn);Gn[Symbol.toStringTag]=function(t){return t.ArrayType=Uint8Array,t[Symbol.toStringTag]="Utf8"}(Gn.prototype);var qn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this)}return E(n,[{key:"typeId",get:function(){return Je.Bool}},{key:"toString",value:function(){return"Bool"}}]),n}(Fn);qn[Symbol.toStringTag]=function(t){return t.ArrayType=Uint8Array,t[Symbol.toStringTag]="Bool"}(qn.prototype);var Jn=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).scale=t,i.precision=r,i}return E(n,[{key:"typeId",get:function(){return Je.Decimal}},{key:"toString",value:function(){return"Decimal[".concat(this.precision,"e").concat(this.scale>0?"+":"").concat(this.scale,"]")}}]),n}(Fn);Jn[Symbol.toStringTag]=function(t){return t.scale=null,t.precision=null,t.ArrayType=Uint32Array,t[Symbol.toStringTag]="Decimal"}(Jn.prototype);var Zn=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).unit=t,r}return E(n,[{key:"typeId",get:function(){return Je.Date}},{key:"toString",value:function(){return"Date".concat(32*(this.unit+1),"<").concat(Qe[this.unit],">")}}]),n}(Fn);Zn[Symbol.toStringTag]=function(t){return t.unit=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Date"}(Zn.prototype);var Qn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,Qe.DAY)}return E(n)}(Zn),Xn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,Qe.MILLISECOND)}return E(n)}(Zn),tr=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).unit=t,i.bitWidth=r,i}return E(n,[{key:"typeId",get:function(){return Je.Time}},{key:"toString",value:function(){return"Time".concat(this.bitWidth,"<").concat(Xe[this.unit],">")}}]),n}(Fn);tr[Symbol.toStringTag]=function(t){return t.unit=null,t.bitWidth=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Time"}(tr.prototype);var er=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).unit=t,i.timezone=r,i}return E(n,[{key:"typeId",get:function(){return Je.Timestamp}},{key:"toString",value:function(){return"Timestamp<".concat(Xe[this.unit]).concat(this.timezone?", ".concat(this.timezone):"",">")}}]),n}(Fn);er[Symbol.toStringTag]=function(t){return t.unit=null,t.timezone=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Timestamp"}(er.prototype);var nr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).unit=t,r}return E(n,[{key:"typeId",get:function(){return Je.Interval}},{key:"toString",value:function(){return"Interval<".concat(nn[this.unit],">")}}]),n}(Fn);nr[Symbol.toStringTag]=function(t){return t.unit=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Interval"}(nr.prototype);var rr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return 
F(this,n),(r=e.call(this)).children=[t],r}return E(n,[{key:"typeId",get:function(){return Je.List}},{key:"toString",value:function(){return"List<".concat(this.valueType,">")}},{key:"valueType",get:function(){return this.children[0].type}},{key:"valueField",get:function(){return this.children[0]}},{key:"ArrayType",get:function(){return this.valueType.ArrayType}}]),n}(Fn);rr[Symbol.toStringTag]=function(t){return t.children=null,t[Symbol.toStringTag]="List"}(rr.prototype);var ir=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).children=t,r}return E(n,[{key:"typeId",get:function(){return Je.Struct}},{key:"toString",value:function(){return"Struct<{".concat(this.children.map((function(t){return"".concat(t.name,":").concat(t.type)})).join(", "),"}>")}}]),n}(Fn);ir[Symbol.toStringTag]=function(t){return t.children=null,t[Symbol.toStringTag]="Struct"}(ir.prototype);var ar=function(t){ot(n,t);var e=yt(n);function n(t,r,i){var a;return F(this,n),(a=e.call(this)).mode=t,a.children=i,a.typeIds=r=Int32Array.from(r),a.typeIdToChildIndex=r.reduce((function(t,e,n){return(t[e]=n)&&t||t}),Object.create(null)),a}return E(n,[{key:"typeId",get:function(){return Je.Union}},{key:"toString",value:function(){return"".concat(this[Symbol.toStringTag],"<").concat(this.children.map((function(t){return"".concat(t.type)})).join(" | "),">")}}]),n}(Fn);ar[Symbol.toStringTag]=function(t){return t.mode=null,t.typeIds=null,t.children=null,t.typeIdToChildIndex=null,t.ArrayType=Int8Array,t[Symbol.toStringTag]="Union"}(ar.prototype);var or=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).byteWidth=t,r}return E(n,[{key:"typeId",get:function(){return Je.FixedSizeBinary}},{key:"toString",value:function(){return"FixedSizeBinary[".concat(this.byteWidth,"]")}}]),n}(Fn);or[Symbol.toStringTag]=function(t){return t.byteWidth=null,t.ArrayType=Uint8Array,t[Symbol.toStringTag]="FixedSizeBinary"}(or.prototype);var ur=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).listSize=t,i.children=[r],i}return E(n,[{key:"typeId",get:function(){return Je.FixedSizeList}},{key:"valueType",get:function(){return this.children[0].type}},{key:"valueField",get:function(){return this.children[0]}},{key:"ArrayType",get:function(){return this.valueType.ArrayType}},{key:"toString",value:function(){return"FixedSizeList[".concat(this.listSize,"]<").concat(this.valueType,">")}}]),n}(Fn);ur[Symbol.toStringTag]=function(t){return t.children=null,t.listSize=null,t[Symbol.toStringTag]="FixedSizeList"}(ur.prototype);var sr=function(t){ot(n,t);var e=yt(n);function n(t){var r,i=arguments.length>1&&void 0!==arguments[1]&&arguments[1];return F(this,n),(r=e.call(this)).children=[t],r.keysSorted=i,r}return E(n,[{key:"typeId",get:function(){return Je.Map}},{key:"keyType",get:function(){return this.children[0].type.children[0].type}},{key:"valueType",get:function(){return this.children[0].type.children[1].type}},{key:"toString",value:function(){return"Map<{".concat(this.children[0].type.children.map((function(t){return"".concat(t.name,":").concat(t.type)})).join(", "),"}>")}}]),n}(Fn);sr[Symbol.toStringTag]=function(t){return t.children=null,t.keysSorted=null,t[Symbol.toStringTag]="Map_"}(sr.prototype);var cr,fr=(cr=-1,function(){return++cr}),lr=function(t){ot(n,t);var e=yt(n);function n(t,r,i,a){var o;return F(this,n),(o=e.call(this)).indices=r,o.dictionary=t,o.isOrdered=a||!1,o.id=null==i?fr():"number"===typeof i?i:i.low,o}return 
E(n,[{key:"typeId",get:function(){return Je.Dictionary}},{key:"children",get:function(){return this.dictionary.children}},{key:"valueType",get:function(){return this.dictionary}},{key:"ArrayType",get:function(){return this.dictionary.ArrayType}},{key:"toString",value:function(){return"Dictionary<".concat(this.indices,", ").concat(this.dictionary,">")}}]),n}(Fn);function hr(t){var e=t;switch(t.typeId){case Je.Decimal:return 4;case Je.Timestamp:return 2;case Je.Date:case Je.Interval:return 1+e.unit;case Je.Int:case Je.Time:return+(e.bitWidth>32)+1;case Je.FixedSizeList:return e.listSize;case Je.FixedSizeBinary:return e.byteWidth;default:return 1}}lr[Symbol.toStringTag]=function(t){return t.id=null,t.indices=null,t.isOrdered=null,t.dictionary=null,t[Symbol.toStringTag]="Dictionary"}(lr.prototype);var yr=function(){function t(e,n,r,i,a,o,u){var s;F(this,t),this.type=e,this.dictionary=u,this.offset=Math.floor(Math.max(n||0,0)),this.length=Math.floor(Math.max(r||0,0)),this._nullCount=Math.floor(Math.max(i||0,-1)),this.childData=(o||[]).map((function(e){return e instanceof t?e:e.data})),a instanceof t?(this.stride=a.stride,this.values=a.values,this.typeIds=a.typeIds,this.nullBitmap=a.nullBitmap,this.valueOffsets=a.valueOffsets):(this.stride=hr(e),a&&((s=a[0])&&(this.valueOffsets=s),(s=a[1])&&(this.values=s),(s=a[2])&&(this.nullBitmap=s),(s=a[3])&&(this.typeIds=s)))}return E(t,[{key:"typeId",get:function(){return this.type.typeId}},{key:"ArrayType",get:function(){return this.type.ArrayType}},{key:"buffers",get:function(){return[this.valueOffsets,this.values,this.nullBitmap,this.typeIds]}},{key:"byteLength",get:function(){var t=0,e=this.valueOffsets,n=this.values,r=this.nullBitmap,i=this.typeIds;return e&&(t+=e.byteLength),n&&(t+=n.byteLength),r&&(t+=r.byteLength),i&&(t+=i.byteLength),this.childData.reduce((function(t,e){return t+e.byteLength}),t)}},{key:"nullCount",get:function(){var t,e=this._nullCount;return e<=-1&&(t=this.nullBitmap)&&(this._nullCount=e=this.length-yn(t,this.offset,this.offset+this.length)),e}},{key:"clone",value:function(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.offset,r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:this.length,i=arguments.length>3&&void 0!==arguments[3]?arguments[3]:this._nullCount,a=arguments.length>4&&void 0!==arguments[4]?arguments[4]:this,o=arguments.length>5&&void 0!==arguments[5]?arguments[5]:this.childData;return new t(e,n,r,i,a,o,this.dictionary)}},{key:"slice",value:function(t,e){var n=this.stride,r=this.typeId,i=this.childData,a=+(0===this._nullCount)-1,o=16===r?n:1,u=this._sliceBuffers(t,e,n,r);return this.clone(this.type,this.offset+t,e,a,u,!i.length||this.valueOffsets?i:this._sliceChildren(i,o*t,o*e))}},{key:"_changeLengthAndBackfillNullBitmap",value:function(t){if(this.typeId===Je.Null)return this.clone(this.type,0,t,0);var e=this.length,n=this.nullCount,r=new Uint8Array((t+63&-64)>>3).fill(255,0,e>>3);r[e>>3]=(1<0&&r.set(fn(this.offset,e,this.nullBitmap),0);var i=this.buffers;return i[Ze.VALIDITY]=r,this.clone(this.type,0,t,n+(t-e),i)}},{key:"_sliceBuffers",value:function(t,e,n,r){var i,a=this.buffers;return(i=a[Ze.TYPE])&&(a[Ze.TYPE]=i.subarray(t,t+e)),(i=a[Ze.OFFSET])&&(a[Ze.OFFSET]=i.subarray(t,t+e+1))||(i=a[Ze.DATA])&&(a[Ze.DATA]=6===r?i:i.subarray(n*t,n*(t+e))),a}},{key:"_sliceChildren",value:function(t,e,n){return t.map((function(t){return t.slice(e,n)}))}}],[{key:"new",value:function(e,n,r,i,a,o,u){switch(a instanceof t?a=a.buffers:a||(a=[]),e.typeId){case Je.Null:return t.Null(e,n,r);case 
Je.Int:return t.Int(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Dictionary:return t.Dictionary(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[],u);case Je.Float:return t.Float(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Bool:return t.Bool(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Decimal:return t.Decimal(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Date:return t.Date(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Time:return t.Time(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Timestamp:return t.Timestamp(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Interval:return t.Interval(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.FixedSizeBinary:return t.FixedSizeBinary(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Binary:return t.Binary(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],a[Ze.DATA]||[]);case Je.Utf8:return t.Utf8(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],a[Ze.DATA]||[]);case Je.List:return t.List(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],(o||[])[0]);case Je.FixedSizeList:return t.FixedSizeList(e,n,r,i||0,a[Ze.VALIDITY],(o||[])[0]);case Je.Struct:return t.Struct(e,n,r,i||0,a[Ze.VALIDITY],o||[]);case Je.Map:return t.Map(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],(o||[])[0]);case Je.Union:return t.Union(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.TYPE]||[],a[Ze.OFFSET]||o,o)}throw new Error("Unrecognized typeId ".concat(e.typeId))}},{key:"Null",value:function(e,n,r){return new t(e,n,r,0)}},{key:"Int",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Dictionary",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[void 0,Ht(e.indices.ArrayType,o),Jt(a)],[],u)}},{key:"Float",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Bool",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Decimal",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Date",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Time",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Timestamp",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Interval",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"FixedSizeBinary",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Binary",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),Jt(u),Jt(a)])}},{key:"Utf8",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),Jt(u),Jt(a)])}},{key:"List",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),void 0,Jt(a)],[u])}},{key:"FixedSizeList",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,void 0,Jt(a)],[o])}},{key:"Struct",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,void 0,Jt(a)],o)}},{key:"Map",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),void 0,Jt(a)],[u])}},{key:"Union",value:function(e,n,r,i,a,o,u,s){var c=[void 0,void 0,Jt(a),Ht(e.ArrayType,o)];return e.mode===en.Sparse?new t(e,n,r,i,c,u):(c[Ze.OFFSET]=Gt(u),new t(e,n,r,i,c,s))}}]),t}();yr.prototype.childData=Object.freeze([]);function pr(t){if(null===t)return"null";if(undefined===t)return"undefined";switch(typeof t){case"number":case"bigint":return"".concat(t);case"string":return'"'.concat(t,'"')}return"function"===typeof t[Symbol.toPrimitive]?t[Symbol.toPrimitive]("string"):ArrayBuffer.isView(t)?"[".concat(t,"]"):JSON.stringify(t)}function 
dr(t){if(!t||t.length<=0)return function(t){return!0};var e="",n=t.filter((function(t){return t===t}));return n.length>0&&(e="\n switch (x) {".concat(n.map((function(t){return"\n case ".concat(function(t){if("bigint"!==typeof t)return pr(t);if(kt)return"".concat(pr(t),"n");return'"'.concat(pr(t),'"')}(t),":")})).join(""),"\n return false;\n }")),t.length!==n.length&&(e="if (x !== x) return false;\n".concat(e)),new Function("x","".concat(e,"\nreturn true;"))}var vr=function(t,e){return(t*e+63&-64||64)/e},br=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:0;return t.length>=e?t.subarray(0,e):Yt(new t.constructor(e),t,0)},gr=function(){function t(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:1;F(this,t),this.buffer=e,this.stride=n,this.BYTES_PER_ELEMENT=e.BYTES_PER_ELEMENT,this.ArrayType=e.constructor,this._resize(this.length=e.length/n|0)}return E(t,[{key:"byteLength",get:function(){return this.length*this.stride*this.BYTES_PER_ELEMENT|0}},{key:"reservedLength",get:function(){return this.buffer.length/this.stride}},{key:"reservedByteLength",get:function(){return this.buffer.byteLength}},{key:"set",value:function(t,e){return this}},{key:"append",value:function(t){return this.set(this.length,t)}},{key:"reserve",value:function(t){if(t>0){this.length+=t;var e=this.stride,n=this.length*e,r=this.buffer.length;n>=r&&this._resize(vr(0===r?1*n:2*n,this.BYTES_PER_ELEMENT))}return this}},{key:"flush",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this.length;t=vr(t*this.stride,this.BYTES_PER_ELEMENT);var e=br(this.buffer,t);return this.clear(),e}},{key:"clear",value:function(){return this.length=0,this._resize(0),this}},{key:"_resize",value:function(t){return this.buffer=Yt(new this.ArrayType(t),this.buffer)}}]),t}();gr.prototype.offset=0;var mr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"last",value:function(){return this.get(this.length-1)}},{key:"get",value:function(t){return this.buffer[t]}},{key:"set",value:function(t,e){return this.reserve(t-this.length+1),this.buffer[t*this.stride]=e,this}}]),n}(gr),kr=function(t){ot(n,t);var e=yt(n);function n(){var t,r=arguments.length>0&&void 0!==arguments[0]?arguments[0]:new Uint8Array(0);return F(this,n),(t=e.call(this,r,1/8)).numValid=0,t}return E(n,[{key:"numInvalid",get:function(){return this.length-this.numValid}},{key:"get",value:function(t){return this.buffer[t>>3]>>t%8&1}},{key:"set",value:function(t,e){var n=this.reserve(t-this.length+1).buffer,r=t>>3,i=t%8,a=n[r]>>i&1;return e?0===a&&(n[r]|=1<0&&void 0!==arguments[0]?arguments[0]:new Int32Array(1);return F(this,n),e.call(this,t,1)}return E(n,[{key:"append",value:function(t){return this.set(this.length-1,t)}},{key:"set",value:function(t,e){var n=this.length-1,r=this.reserve(t-n+1).buffer;return n0&&void 0!==arguments[0]?arguments[0]:this.length-1;return t>this.length&&this.set(t-1,0),ze(ut(n.prototype),"flush",this).call(this,t+1)}}]),n}(mr),_r=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"ArrayType64",get:function(){return this._ArrayType64||(this._ArrayType64=this.buffer instanceof Int32Array?_t:St)}},{key:"set",value:function(t,e){switch(this.reserve(t-this.length+1),typeof e){case"bigint":this.buffer64[t]=e;break;case"number":this.buffer[t*this.stride]=e;break;default:this.buffer.set(e,t*this.stride)}return this}},{key:"_resize",value:function(t){var 
e=ze(ut(n.prototype),"_resize",this).call(this,t),r=e.byteLength/(this.BYTES_PER_ELEMENT*this.stride);return kt&&(this.buffer64=new this.ArrayType64(e.buffer,e.byteOffset,r)),e}}]),n}(gr),Ir=function(){function t(e){var n=e.type,r=e.nullValues;F(this,t),this.length=0,this.finished=!1,this.type=n,this.children=[],this.nullValues=r,this.stride=hr(n),this._nulls=new kr,r&&r.length>0&&(this._isValid=dr(r))}return E(t,[{key:"toVector",value:function(){return qe.new(this.flush())}},{key:"ArrayType",get:function(){return this.type.ArrayType}},{key:"nullCount",get:function(){return this._nulls.numInvalid}},{key:"numChildren",get:function(){return this.children.length}},{key:"byteLength",get:function(){var t=0;return this._offsets&&(t+=this._offsets.byteLength),this._values&&(t+=this._values.byteLength),this._nulls&&(t+=this._nulls.byteLength),this._typeIds&&(t+=this._typeIds.byteLength),this.children.reduce((function(t,e){return t+e.byteLength}),t)}},{key:"reservedLength",get:function(){return this._nulls.reservedLength}},{key:"reservedByteLength",get:function(){var t=0;return this._offsets&&(t+=this._offsets.reservedByteLength),this._values&&(t+=this._values.reservedByteLength),this._nulls&&(t+=this._nulls.reservedByteLength),this._typeIds&&(t+=this._typeIds.reservedByteLength),this.children.reduce((function(t,e){return t+e.reservedByteLength}),t)}},{key:"valueOffsets",get:function(){return this._offsets?this._offsets.buffer:null}},{key:"values",get:function(){return this._values?this._values.buffer:null}},{key:"nullBitmap",get:function(){return this._nulls?this._nulls.buffer:null}},{key:"typeIds",get:function(){return this._typeIds?this._typeIds.buffer:null}},{key:"append",value:function(t){return this.set(this.length,t)}},{key:"isValid",value:function(t){return this._isValid(t)}},{key:"set",value:function(t,e){return this.setValid(t,this.isValid(e))&&this.setValue(t,e),this}},{key:"setValue",value:function(t,e){this._setValue(this,t,e)}},{key:"setValid",value:function(t,e){return this.length=this._nulls.set(t,+e).length,e}},{key:"addChild",value:function(t){arguments.length>1&&void 0!==arguments[1]||"".concat(this.numChildren);throw new Error('Cannot append children to non-nested type "'.concat(this.type,'"'))}},{key:"getChildAt",value:function(t){return this.children[t]||null}},{key:"flush",value:function(){var t=[],e=this._values,n=this._offsets,r=this._typeIds,i=this.length,a=this.nullCount;r?(t[Ze.TYPE]=r.flush(i),n&&(t[Ze.OFFSET]=n.flush(i))):n?(e&&(t[Ze.DATA]=e.flush(n.last())),t[Ze.OFFSET]=n.flush(i)):e&&(t[Ze.DATA]=e.flush(i)),a>0&&(t[Ze.VALIDITY]=this._nulls.flush(i));var o=yr.new(this.type,0,i,a,t,this.children.map((function(t){return t.flush()})));return this.clear(),o}},{key:"finish",value:function(){return this.finished=!0,this.children.forEach((function(t){return t.finish()})),this}},{key:"clear",value:function(){return this.length=0,this._offsets&&this._offsets.clear(),this._values&&this._values.clear(),this._nulls&&this._nulls.clear(),this._typeIds&&this._typeIds.clear(),this.children.forEach((function(t){return t.clear()})),this}}],[{key:"new",value:function(t){}},{key:"throughNode",value:function(t){throw new Error('"throughNode" not available in this environment')}},{key:"throughDOM",value:function(t){throw new Error('"throughDOM" not available in this environment')}},{key:"throughIterable",value:function(t){return function(t){var e=t.queueingStrategy,n=void 0===e?"count":e,r=t.highWaterMark,i=void 
0===r?"bytes"!==n?1e3:Math.pow(2,14):r,a="bytes"!==n?"length":"byteLength";return R.mark((function e(n){var r,o,u,s,c;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:r=0,o=Ir.new(t),u=O(n),e.prev=3,u.s();case 5:if((s=u.n()).done){e.next=14;break}if(c=s.value,!(o.append(c)[a]>=i)){e.next=12;break}if(e.t0=++r,!e.t0){e.next=12;break}return e.next=12,o.toVector();case 12:e.next=5;break;case 14:e.next=19;break;case 16:e.prev=16,e.t1=e.catch(3),u.e(e.t1);case 19:return e.prev=19,u.f(),e.finish(19);case 22:if(!(o.finish().length>0||0===r)){e.next=25;break}return e.next=25,o.toVector();case 25:case"end":return e.stop()}}),e,null,[[3,16,19,22]])}))}(t)}},{key:"throughAsyncIterable",value:function(t){return function(t){var e=t.queueingStrategy,n=void 0===e?"count":e,r=t.highWaterMark,i=void 0===r?"bytes"!==n?1e3:Math.pow(2,14):r,a="bytes"!==n?"length":"byteLength";return function(){var e=j(R.mark((function e(n){var r,o,u,s,c,f,l,h;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:r=0,o=Ir.new(t),u=!1,s=!1,e.prev=4,f=P(n);case 6:return e.next=8,C(f.next());case 8:if(!(u=!(l=e.sent).done)){e.next=18;break}if(h=l.value,!(o.append(h)[a]>=i)){e.next=15;break}if(e.t0=++r,!e.t0){e.next=15;break}return e.next=15,o.toVector();case 15:u=!1,e.next=6;break;case 18:e.next=24;break;case 20:e.prev=20,e.t1=e.catch(4),s=!0,c=e.t1;case 24:if(e.prev=24,e.prev=25,!u||null==f.return){e.next=29;break}return e.next=29,C(f.return());case 29:if(e.prev=29,!s){e.next=32;break}throw c;case 32:return e.finish(29);case 33:return e.finish(24);case 34:if(!(o.finish().length>0||0===r)){e.next=37;break}return e.next=37,o.toVector();case 37:case"end":return e.stop()}}),e,null,[[4,20,24,34],[25,,29,33]])})));return function(t){return e.apply(this,arguments)}}()}(t)}}]),t}();Ir.prototype.length=1,Ir.prototype.stride=1,Ir.prototype.children=null,Ir.prototype.finished=!1,Ir.prototype.nullValues=null,Ir.prototype._isValid=function(){return!0};var Sr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new mr(new r.ArrayType(0),r.stride),r}return E(n,[{key:"setValue",value:function(t,e){var r=this._values;return r.reserve(t-r.length+1),ze(ut(n.prototype),"setValue",this).call(this,t,e)}}]),n}(Ir),xr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._pendingLength=0,r._offsets=new wr,r}return E(n,[{key:"setValue",value:function(t,e){var n=this._pending||(this._pending=new Map),r=n.get(t);r&&(this._pendingLength-=r.length),this._pendingLength+=e.length,n.set(t,e)}},{key:"setValid",value:function(t,e){return!!ze(ut(n.prototype),"setValid",this).call(this,t,e)||((this._pending||(this._pending=new Map)).set(t,void 0),!1)}},{key:"clear",value:function(){return this._pendingLength=0,this._pending=void 0,ze(ut(n.prototype),"clear",this).call(this)}},{key:"flush",value:function(){return this._flush(),ze(ut(n.prototype),"flush",this).call(this)}},{key:"finish",value:function(){return this._flush(),ze(ut(n.prototype),"finish",this).call(this)}},{key:"_flush",value:function(){var t=this._pending,e=this._pendingLength;return this._pendingLength=0,this._pending=void 0,t&&t.size>0&&this._flushPending(t,e),this}}]),n}(Ir);var Ar=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new kr,r}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,+e)}}]),n}(Ir),Tr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return 
E(n,[{key:"setValue",value:function(t,e){}},{key:"setValid",value:function(t,e){return this.length=Math.max(t+1,this.length),e}}]),n}(Ir),Br=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),Or=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Br),Dr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Br),Lr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),Fr=function(t){ot(n,t);var e=yt(n);function n(t){var r,i=t.type,a=t.nullValues,o=t.dictionaryHashFunction;return F(this,n),(r=e.call(this,{type:new lr(i.dictionary,i.indices,i.id,i.isOrdered)}))._nulls=null,r._dictionaryOffset=0,r._keysToIndices=Object.create(null),r.indices=Ir.new({type:r.type.indices,nullValues:a}),r.dictionary=Ir.new({type:r.type.dictionary,nullValues:null}),"function"===typeof o&&(r.valueToKey=o),r}return E(n,[{key:"values",get:function(){return this.indices.values}},{key:"nullCount",get:function(){return this.indices.nullCount}},{key:"nullBitmap",get:function(){return this.indices.nullBitmap}},{key:"byteLength",get:function(){return this.indices.byteLength+this.dictionary.byteLength}},{key:"reservedLength",get:function(){return this.indices.reservedLength+this.dictionary.reservedLength}},{key:"reservedByteLength",get:function(){return this.indices.reservedByteLength+this.dictionary.reservedByteLength}},{key:"isValid",value:function(t){return this.indices.isValid(t)}},{key:"setValid",value:function(t,e){var n=this.indices;return e=n.setValid(t,e),this.length=n.length,e}},{key:"setValue",value:function(t,e){var n=this._keysToIndices,r=this.valueToKey(e),i=n[r];return void 0===i&&(n[r]=i=this._dictionaryOffset+this.dictionary.append(e).length-1),this.indices.setValue(t,i)}},{key:"flush",value:function(){var t=this.type,e=this._dictionary,n=this.dictionary.toVector(),r=this.indices.flush().clone(t);return r.dictionary=e?e.concat(n):n,this.finished||(this._dictionaryOffset+=n.length),this._dictionary=r.dictionary,this.clear(),r}},{key:"finish",value:function(){return this.indices.finish(),this.dictionary.finish(),this._dictionaryOffset=0,this._keysToIndices=Object.create(null),ze(ut(n.prototype),"finish",this).call(this)}},{key:"clear",value:function(){return this.indices.clear(),this.dictionary.clear(),ze(ut(n.prototype),"clear",this).call(this)}},{key:"valueToKey",value:function(t){return"string"===typeof t?t:"".concat(t)}}]),n}(Ir),Mr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),Er=new Float64Array(1),Ur=new Uint32Array(Er.buffer);function Nr(t){var e=(31744&t)>>10,n=(1023&t)/1024,r=Math.pow(-1,(32768&t)>>15);switch(e){case 31:return r*(n?NaN:1/0);case 0:return r*(n?6103515625e-14*n:0)}return r*Math.pow(2,e-15)*(1+n)}function Cr(t){if(t!==t)return 32256;Er[0]=t;var e=(2147483648&Ur[1])>>16&65535,n=2146435072&Ur[1],r=0;return n>=1089470464?Ur[0]>0?n=31744:(n=(2080374784&n)>>16,r=(1048575&Ur[1])>>10):n<=1056964608?(r=1048576+((r=1048576+(1048575&Ur[1]))<<(n>>20)-998)>>21,n=0):(n=n-1056964608>>10,r=512+(1048575&Ur[1])>>10),e|n|65535&r}var Vr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),jr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,Cr(e))}}]),n}(Vr),Rr=function(t){ot(n,t);var e=yt(n);function 
n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,e)}}]),n}(Vr),Pr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,e)}}]),n}(Vr);function zr(t,e,n){return zr=st()?Reflect.construct:function(t,e,n){var r=[null];r.push.apply(r,e);var i=new(Function.bind.apply(t,r));return n&&at(i,n.prototype),i},zr.apply(null,arguments)}var Yr,Wr,Hr=Symbol.for("isArrowBigNum");function $r(t){for(var e=arguments.length,n=new Array(e>1?e-1:0),r=1;r>>=0),s+=(n>>>0)+e*Math.pow(c,32);return s}function Zr(t){var e="",n=new Uint32Array(2),r=new Uint16Array(t.buffer,t.byteOffset,t.byteLength/2),i=new Uint32Array((r=new Uint16Array(r).reverse()).buffer),a=-1,o=r.length-1;do{for(n[0]=r[a=0];a0&&void 0!==arguments[0]?arguments[0]:"default";switch(t){case"number":return Jr(this);case"string":return Yr(this);case"default":return Wr(this)}return Yr(this)},Object.setPrototypeOf(Kr.prototype,Object.create(Int32Array.prototype)),Object.setPrototypeOf(Gr.prototype,Object.create(Uint32Array.prototype)),Object.setPrototypeOf(qr.prototype,Object.create(Uint32Array.prototype)),Object.assign(Kr.prototype,$r.prototype,{constructor:Kr,signed:!0,TypedArray:Int32Array,BigIntArray:_t}),Object.assign(Gr.prototype,$r.prototype,{constructor:Gr,signed:!1,TypedArray:Uint32Array,BigIntArray:St}),Object.assign(qr.prototype,$r.prototype,{constructor:qr,signed:!0,TypedArray:Uint32Array,BigIntArray:St}),kt?(Wr=function(t){return 8===t.byteLength?new t.BigIntArray(t.buffer,t.byteOffset,1)[0]:Zr(t)},Yr=function(t){return 8===t.byteLength?"".concat(new t.BigIntArray(t.buffer,t.byteOffset,1)[0]):Zr(t)}):Wr=Yr=Zr;var Qr,Xr=function(){function t(e,n){return F(this,t),t.new(e,n)}return E(t,null,[{key:"new",value:function(t,e){switch(e){case!0:return new Kr(t);case!1:return new Gr(t)}switch(t.constructor){case Int8Array:case Int16Array:case Int32Array:case _t:return new Kr(t)}return 16===t.byteLength?new qr(t):new Gr(t)}},{key:"signed",value:function(t){return new Kr(t)}},{key:"unsigned",value:function(t){return new Gr(t)}},{key:"decimal",value:function(t){return new qr(t)}}]),t}(),ti=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,e)}}]),n}(Sr),ei=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ni=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ri=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ii=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),t.nullValues&&(t.nullValues=t.nullValues.map(ci)),(r=e.call(this,t))._values=new _r(new Int32Array(0),2),r}return E(n,[{key:"values64",get:function(){return this._values.buffer64}},{key:"isValid",value:function(t){return ze(ut(n.prototype),"isValid",this).call(this,ci(t))}}]),n}(ti),ai=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),oi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ui=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),si=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),t.nullValues&&(t.nullValues=t.nullValues.map(ci)),(r=e.call(this,t))._values=new _r(new 
Uint32Array(0),2),r}return E(n,[{key:"values64",get:function(){return this._values.buffer64}},{key:"isValid",value:function(t){return ze(ut(n.prototype),"isValid",this).call(this,ci(t))}}]),n}(ti),ci=(Qr={BigIntArray:_t},function(t){return ArrayBuffer.isView(t)&&(Qr.buffer=t.buffer,Qr.byteOffset=t.byteOffset,Qr.byteLength=t.byteLength,t=Wr(Qr),Qr.buffer=null),t}),fi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),li=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),hi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),yi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),pi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),di=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),vi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),bi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),gi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),mi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),ki=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),wi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ki),_i=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ki),Ii=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new gr(new Uint8Array(0)),r}return E(n,[{key:"byteLength",get:function(){var t=this._pendingLength+4*this.length;return this._offsets&&(t+=this._offsets.byteLength),this._values&&(t+=this._values.byteLength),this._nulls&&(t+=this._nulls.byteLength),t}},{key:"setValue",value:function(t,e){return ze(ut(n.prototype),"setValue",this).call(this,t,Jt(e))}},{key:"_flushPending",value:function(t,e){var n,r,i=this._offsets,a=this._values.reserve(e).buffer,o=0,u=0,s=0,c=O(t);try{for(c.s();!(r=c.n()).done;){var f=U(r.value,2);o=f[0],void 0===(n=f[1])?i.set(o,0):(u=n.length,a.set(n,s),i.set(o,u),s+=u)}}catch(l){c.e(l)}finally{c.f()}}}]),n}(xr),Si=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new gr(new Uint8Array(0)),r}return E(n,[{key:"byteLength",get:function(){var t=this._pendingLength+4*this.length;return this._offsets&&(t+=this._offsets.byteLength),this._values&&(t+=this._values.byteLength),this._nulls&&(t+=this._nulls.byteLength),t}},{key:"setValue",value:function(t,e){return ze(ut(n.prototype),"setValue",this).call(this,t,it(e))}},{key:"_flushPending",value:function(t,e){}}]),n}(xr);Si.prototype._flushPending=Ii.prototype._flushPending;var xi=function(){function t(){F(this,t)}return E(t,[{key:"length",get:function(){return this._values.length}},{key:"get",value:function(t){return this._values[t]}},{key:"clear",value:function(){return this._values=null,this}},{key:"bind",value:function(t){return t instanceof qe?t:(this._values=t,this)}}]),t}(),Ai=Symbol.for("parent"),Ti=Symbol.for("rowIndex"),Bi=Symbol.for("keyToIdx"),Oi=Symbol.for("idxToVal"),Di=Symbol.for("nodejs.util.inspect.custom"),Li=function(t){function 
e(t,n){F(this,e),this[Ai]=t,this.size=n}return E(e,[{key:"entries",value:function(){return this[Symbol.iterator]()}},{key:"has",value:function(t){return void 0!==this.get(t)}},{key:"get",value:function(t){var e=void 0;if(null!==t&&void 0!==t){var n=this[Bi]||(this[Bi]=new Map),r=n.get(t);if(void 0!==r){var i=this[Oi]||(this[Oi]=new Array(this.size));void 0!==(e=i[r])||(i[r]=e=this.getValue(r))}else if((r=this.getIndex(t))>-1){n.set(t,r);var a=this[Oi]||(this[Oi]=new Array(this.size));void 0!==(e=a[r])||(a[r]=e=this.getValue(r))}}return e}},{key:"set",value:function(t,e){if(null!==t&&void 0!==t){var n=this[Bi]||(this[Bi]=new Map),r=n.get(t);if(void 0===r&&n.set(t,r=this.getIndex(t)),r>-1)(this[Oi]||(this[Oi]=new Array(this.size)))[r]=this.setValue(r,e)}return this}},{key:"clear",value:function(){throw new Error("Clearing ".concat(this[Symbol.toStringTag]," not supported."))}},{key:"delete",value:function(t){throw new Error("Deleting ".concat(this[Symbol.toStringTag]," values not supported."))}},{key:Symbol.iterator,value:R.mark((function t(){var e,n,r,i,a,o,u,s,c;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=this.keys(),n=this.values(),r=this[Bi]||(this[Bi]=new Map),i=this[Oi]||(this[Oi]=new Array(this.size)),u=0;case 5:if((s=e.next()).done||(c=n.next()).done){t.next=15;break}return a=s.value,o=c.value,i[u]=o,r.has(a)||r.set(a,u),t.next=12,[a,o];case 12:++u,t.next=5;break;case 15:case"end":return t.stop()}}),t,this)}))},{key:"forEach",value:function(t,e){for(var n,r,i,a,o=this.keys(),u=this.values(),s=void 0===e?t:function(n,r,i){return t.call(e,n,r,i)},c=this[Bi]||(this[Bi]=new Map),f=this[Oi]||(this[Oi]=new Array(this.size)),l=0;!(i=o.next()).done&&!(a=u.next()).done;++l)n=i.value,r=a.value,f[l]=r,c.has(n)||c.set(n,l),s(r,n,this)}},{key:"toArray",value:function(){return vn(this.values())}},{key:"toJSON",value:function(){var t={};return this.forEach((function(e,n){return t[n]=e})),t}},{key:"inspect",value:function(){return this.toString()}},{key:Di,value:function(){return this.toString()}},{key:"toString",value:function(){var t=[];return this.forEach((function(e,n){n=pr(n),e=pr(e),t.push("".concat(n,": ").concat(e))})),"{ ".concat(t.join(", ")," }")}}]),e}();Li[Symbol.toStringTag]=function(t){var e;return Object.defineProperties(t,(Ve(e={size:{writable:!0,enumerable:!1,configurable:!1,value:0}},Ai,{writable:!0,enumerable:!1,configurable:!1,value:null}),Ve(e,Ti,{writable:!0,enumerable:!1,configurable:!1,value:-1}),e)),t[Symbol.toStringTag]="Row"}(Li.prototype);var Fi=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),ht(r=e.call(this,t,t.length),Ni(lt(r)))}return E(n,[{key:"keys",value:function(){return this[Ai].getChildAt(0)[Symbol.iterator]()}},{key:"values",value:function(){return this[Ai].getChildAt(1)[Symbol.iterator]()}},{key:"getKey",value:function(t){return this[Ai].getChildAt(0).get(t)}},{key:"getIndex",value:function(t){return this[Ai].getChildAt(0).indexOf(t)}},{key:"getValue",value:function(t){return this[Ai].getChildAt(1).get(t)}},{key:"setValue",value:function(t,e){this[Ai].getChildAt(1).set(t,e)}}]),n}(Li),Mi=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),ht(r=e.call(this,t,t.type.children.length),Ui(lt(r)))}return E(n,[{key:"keys",value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=O(this[Ai].type.children),t.prev=1,e.s();case 3:if((n=e.n()).done){t.next=9;break}return r=n.value,t.next=7,r.name;case 7:t.next=3;break;case 9:t.next=14;break;case 
11:t.prev=11,t.t0=t.catch(1),e.e(t.t0);case 14:return t.prev=14,e.f(),t.finish(14);case 17:case"end":return t.stop()}}),t,this,[[1,11,14,17]])}))},{key:"values",value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=O(this[Ai].type.children),t.prev=1,e.s();case 3:if((n=e.n()).done){t.next=9;break}return r=n.value,t.next=7,this[r.name];case 7:t.next=3;break;case 9:t.next=14;break;case 11:t.prev=11,t.t0=t.catch(1),e.e(t.t0);case 14:return t.prev=14,e.f(),t.finish(14);case 17:case"end":return t.stop()}}),t,this,[[1,11,14,17]])}))},{key:"getKey",value:function(t){return this[Ai].type.children[t].name}},{key:"getIndex",value:function(t){return this[Ai].type.children.findIndex((function(e){return e.name===t}))}},{key:"getValue",value:function(t){return this[Ai].getChildAt(t).get(this[Ti])}},{key:"setValue",value:function(t,e){return this[Ai].getChildAt(t).set(this[Ti],e)}}]),n}(Li);Object.setPrototypeOf(Li.prototype,Map.prototype);var Ei,Ui=function(){var t={enumerable:!0,configurable:!1,get:null,set:null};return function(e){var n,r=-1,i=e[Bi]||(e[Bi]=new Map),a=function(t){return function(){return this.get(t)}},o=function(t){return function(e){return this.set(t,e)}},u=O(e.keys());try{for(u.s();!(n=u.n()).done;){var s=n.value;i.set(s,++r),t.get=a(s),t.set=o(s),e.hasOwnProperty(s)||(t.enumerable=!0,Object.defineProperty(e,s,t)),e.hasOwnProperty(r)||(t.enumerable=!1,Object.defineProperty(e,r,t))}}catch(c){u.e(c)}finally{u.f()}return t.get=t.set=null,e}}(),Ni=function(){if("undefined"===typeof Proxy)return Ui;var t=Li.prototype.has,e=Li.prototype.get,n=Li.prototype.set,r=Li.prototype.getKey,i={isExtensible:function(){return!1},deleteProperty:function(){return!1},preventExtensions:function(){return!0},ownKeys:function(t){return vn(t.keys()).map((function(t){return"".concat(t)}))},has:function(t,e){switch(e){case"getKey":case"getIndex":case"getValue":case"setValue":case"toArray":case"toJSON":case"inspect":case"constructor":case"isPrototypeOf":case"propertyIsEnumerable":case"toString":case"toLocaleString":case"valueOf":case"size":case"has":case"get":case"set":case"clear":case"delete":case"keys":case"values":case"entries":case"forEach":case"__proto__":case"__defineGetter__":case"__defineSetter__":case"hasOwnProperty":case"__lookupGetter__":case"__lookupSetter__":case Symbol.iterator:case Symbol.toStringTag:case Ai:case Ti:case Oi:case Bi:case Di:return!0}return"number"!==typeof e||t.has(e)||(e=t.getKey(e)),t.has(e)},get:function(n,i,a){switch(i){case"getKey":case"getIndex":case"getValue":case"setValue":case"toArray":case"toJSON":case"inspect":case"constructor":case"isPrototypeOf":case"propertyIsEnumerable":case"toString":case"toLocaleString":case"valueOf":case"size":case"has":case"get":case"set":case"clear":case"delete":case"keys":case"values":case"entries":case"forEach":case"__proto__":case"__defineGetter__":case"__defineSetter__":case"hasOwnProperty":case"__lookupGetter__":case"__lookupSetter__":case Symbol.iterator:case Symbol.toStringTag:case Ai:case Ti:case Oi:case Bi:case Di:return Reflect.get(n,i,a)}return"number"!==typeof i||t.call(a,i)||(i=r.call(a,i)),e.call(a,i)},set:function(e,i,a,o){switch(i){case Ai:case Ti:case Oi:case Bi:return 
Reflect.set(e,i,a,o);case"getKey":case"getIndex":case"getValue":case"setValue":case"toArray":case"toJSON":case"inspect":case"constructor":case"isPrototypeOf":case"propertyIsEnumerable":case"toString":case"toLocaleString":case"valueOf":case"size":case"has":case"get":case"set":case"clear":case"delete":case"keys":case"values":case"entries":case"forEach":case"__proto__":case"__defineGetter__":case"__defineSetter__":case"hasOwnProperty":case"__lookupGetter__":case"__lookupSetter__":case Symbol.iterator:case Symbol.toStringTag:return!1}return"number"!==typeof i||t.call(o,i)||(i=r.call(o,i)),!!t.call(o,i)&&!!n.call(o,i,a)}};return function(t){return new Proxy(t,i)}}();function Ci(t,e,n){var r=t.length,i=e>-1?e:r+e%r;return n?n(t,i):i}function Vi(t,e,n,r){var i=t.length,a=void 0===i?0:i,o="number"!==typeof e?0:e,u="number"!==typeof n?a:n;return o<0&&(o=(o%a+a)%a),u<0&&(u=(u%a+a)%a),ua&&(u=a),r?r(t,o,u):[o,u]}var ji=kt?mt(0):0,Ri=function(t){return t!==t};function Pi(t){var e=typeof t;if("object"!==e||null===t)return Ri(t)?Ri:"bigint"!==e?function(e){return e===t}:function(e){return ji+e===t};if(t instanceof Date){var n=t.valueOf();return function(t){return t instanceof Date&&t.valueOf()===n}}return ArrayBuffer.isView(t)?function(e){return!!e&&Ae(t,e)}:t instanceof Map?function(t){var e=-1,n=[];return t.forEach((function(t){return n[++e]=Pi(t)})),zi(n)}(t):Array.isArray(t)?function(t){for(var e=[],n=-1,r=t.length;++n1&&void 0!==arguments[1]?arguments[1]:[],a=arguments.length>2&&void 0!==arguments[2]?arguments[2]:Hi(i);return F(this,r),(e=n.call(this))._nullCount=-1,e._type=t,e._chunks=i,e._chunkOffsets=a,e._length=a[a.length-1],e._numChildren=(e._type.children||[]).length,e}return E(r,[{key:"type",get:function(){return this._type}},{key:"length",get:function(){return this._length}},{key:"chunks",get:function(){return this._chunks}},{key:"typeId",get:function(){return this._type.typeId}},{key:"VectorName",get:function(){return"Chunked<".concat(this._type,">")}},{key:"data",get:function(){return this._chunks[0]?this._chunks[0].data:null}},{key:"ArrayType",get:function(){return this._type.ArrayType}},{key:"numChildren",get:function(){return this._numChildren}},{key:"stride",get:function(){return this._chunks[0]?this._chunks[0].stride:1}},{key:"byteLength",get:function(){return this._chunks.reduce((function(t,e){return t+e.byteLength}),0)}},{key:"nullCount",get:function(){var t=this._nullCount;return t<0&&(this._nullCount=t=this._chunks.reduce((function(t,e){return t+e.nullCount}),0)),t}},{key:"indices",get:function(){if(Fn.isDictionary(this._type)){if(!this._indices){var t=this._chunks;this._indices=1===t.length?t[0].indices:r.concat.apply(r,vn(t.map((function(t){return t.indices}))))}return this._indices}return null}},{key:"dictionary",get:function(){return Fn.isDictionary(this._type)?this._chunks[this._chunks.length-1].data.dictionary:null}},{key:e,value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=O(this._chunks),t.prev=1,e.s();case 3:if((n=e.n()).done){t.next=8;break}return r=n.value,t.delegateYield(r,"t0",6);case 6:t.next=3;break;case 8:t.next=13;break;case 10:t.prev=10,t.t1=t.catch(1),e.e(t.t1);case 13:return t.prev=13,e.f(),t.finish(13);case 16:case"end":return t.stop()}}),t,this,[[1,10,13,16]])}))},{key:"clone",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this._chunks;return new r(this._type,t)}},{key:"concat",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n=this._numChildren)return null;var 
e,n,i,a=this._children||(this._children=[]);return(e=a[t])?e:(n=(this._type.children||[])[t])&&(i=this._chunks.map((function(e){return e.getChildAt(t)})).filter((function(t){return null!=t}))).length>0?a[t]=new r(n.type,i):null}},{key:"search",value:function(t,e){var n=t,r=this._chunkOffsets,i=r.length-1;if(n<0)return null;if(n>=r[i])return null;if(i<=1)return e?e(this,0,n):[0,n];var a=0,o=0,u=0;do{if(a+1===i)return e?e(this,a,n-o):[a,n-o];n>=r[u=a+(i-a)/2|0]?a=u:i=u}while(n=(o=r[a]));return null}},{key:"isValid",value:function(t){return!!this.search(t,this.isValidInternal)}},{key:"get",value:function(t){return this.search(t,this.getInternal)}},{key:"set",value:function(t,e){this.search(t,(function(t,n,r){return t.chunks[n].set(r,e)}))}},{key:"indexOf",value:function(t,e){var n=this;return e&&"number"===typeof e?this.search(e,(function(e,r,i){return n.indexOfInternal(e,r,i,t)})):this.indexOfInternal(this,0,Math.max(0,e||0),t)}},{key:"toArray",value:function(){var t=this.chunks,e=t.length,n=this._type.ArrayType;if(e<=0)return new n(0);if(e<=1)return t[0].toArray();for(var r=0,i=new Array(e),a=-1;++a=n)break;if(!(e>=f+c))if(f>=e&&f+c<=n)r.push(s);else{var l=Math.max(0,e-f),h=Math.min(n-f,c);r.push(s.slice(l,h))}}return t.clone(r)}}],[{key:"flatten",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n1&&void 0!==arguments[1]?arguments[1]:[],a=arguments.length>2?arguments[2]:void 0;return F(this,n),i=Wi.flatten.apply(Wi,vn(i)),(r=e.call(this,t.type,i,a))._field=t,1!==i.length||lt(r)instanceof qi?r:ht(r,new qi(t,i[0],r._chunkOffsets))}return E(n,[{key:"field",get:function(){return this._field}},{key:"name",get:function(){return this._field.name}},{key:"nullable",get:function(){return this._field.nullable}},{key:"metadata",get:function(){return this._field.metadata}},{key:"clone",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this._chunks;return new n(this._field,t)}},{key:"getChildAt",value:function(t){if(t<0||t>=this.numChildren)return null;var e,r,i,a=this._children||(this._children=[]);return(e=a[t])?e:(r=(this.type.children||[])[t])&&(i=this._chunks.map((function(e){return e.getChildAt(t)})).filter((function(t){return null!=t}))).length>0?a[t]=new n(r,i):null}}],[{key:"new",value:function(t,e){for(var r=arguments.length,i=new Array(r>2?r-2:0),a=2;a0}))&&(t=t.clone({nullable:!0}));return new n(t,o)}}]),n}(Wi),qi=function(t){ot(n,t);var e=yt(n);function n(t,r,i){var a;return F(this,n),(a=e.call(this,t,[r],i))._chunk=r,a}return E(n,[{key:"search",value:function(t,e){return e?e(this,0,t):[0,t]}},{key:"isValid",value:function(t){return this._chunk.isValid(t)}},{key:"get",value:function(t){return this._chunk.get(t)}},{key:"set",value:function(t,e){this._chunk.set(t,e)}},{key:"indexOf",value:function(t,e){return this._chunk.indexOf(t,e)}}]),n}(Gi),Ji=Array.isArray,Zi=function(t,e){return na(t,e,[],0)},Qi=function(t){var e=U(oa(t,[[],[]]),2),n=e[0];return e[1].map((function(t,e){return t instanceof Gi?Gi.new(t.field.clone(n[e]),t):t instanceof qe?Gi.new(n[e],t):Gi.new(n[e],[])}))},Xi=function(t){return oa(t,[[],[]])},ta=function(t,e){return ra(t,e,[],0)},ea=function(t,e){return ia(t,e,[],0)};function na(t,e,n,r){for(var i,a=r,o=-1,u=e.length;++o0&&void 0!==arguments[0]?arguments[0]:[],n=arguments.length>1?arguments[1]:void 0,r=arguments.length>2?arguments[2]:void 0;F(this,e),this.fields=t||[],this.metadata=n||new Map,r||(r=fa(t)),this.dictionaries=r}return 
E(e,[{key:Symbol.toStringTag,get:function(){return"Schema"}},{key:"toString",value:function(){return"Schema<{ ".concat(this.fields.map((function(t,e){return"".concat(e,": ").concat(t)})).join(", ")," }>")}},{key:"compareTo",value:function(t){return Ln.compareSchemas(this,t)}},{key:"select",value:function(){for(var t=arguments.length,n=new Array(t),r=0;r2&&void 0!==arguments[2]&&arguments[2],i=arguments.length>3?arguments[3]:void 0;F(this,e),this.name=t,this.type=n,this.nullable=r,this.metadata=i||new Map}return E(e,[{key:"typeId",get:function(){return this.type.typeId}},{key:Symbol.toStringTag,get:function(){return"Field"}},{key:"toString",value:function(){return"".concat(this.name,": ").concat(this.type)}},{key:"compareTo",value:function(t){return Ln.compareField(this,t)}},{key:"clone",value:function(){for(var t,n,r,i,a,o,u,s,c,f,l=arguments.length,h=new Array(l),y=0;y1&&void 0!==arguments[1]?arguments[1]:new Map,n=-1,r=t.length;++n0&&fa(a.children,e)}return e}ua.prototype.fields=null,ua.prototype.metadata=null,ua.prototype.dictionaries=null,sa.prototype.type=null,sa.prototype.name=null,sa.prototype.nullable=null,sa.prototype.metadata=null;var la=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._run=new xi,r._offsets=new wr,r}return E(n,[{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"0";if(this.numChildren>0)throw new Error("ListBuilder can only have one child.");return this.children[this.numChildren]=t,this.type=new rr(new sa(e,t.type,!0)),this.numChildren-1}},{key:"clear",value:function(){return this._run.clear(),ze(ut(n.prototype),"clear",this).call(this)}},{key:"_flushPending",value:function(t){var e,n,r=this._run,i=this._offsets,a=this._setValue,o=0,u=O(t);try{for(u.s();!(n=u.n()).done;){var s=U(n.value,2);o=s[0],void 0===(e=s[1])?i.set(o,0):(i.set(o,e.length),a(this,o,r.bind(e)))}}catch(c){u.e(c)}finally{u.f()}}}]),n}(xr),ha=function(t){ot(n,t);var e=yt(n);function n(){var t;return F(this,n),(t=e.apply(this,arguments))._run=new xi,t}return E(n,[{key:"setValue",value:function(t,e){ze(ut(n.prototype),"setValue",this).call(this,t,this._run.bind(e))}},{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"0";if(this.numChildren>0)throw new Error("FixedSizeListBuilder can only have one child.");var n=this.children.push(t);return this.type=new ur(this.type.listSize,new sa(e,t.type,!0)),n}},{key:"clear",value:function(){return this._run.clear(),ze(ut(n.prototype),"clear",this).call(this)}}]),n}(Ir),ya=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"set",value:function(t,e){return ze(ut(n.prototype),"set",this).call(this,t,e)}},{key:"setValue",value:function(t,e){e=e instanceof Map?e:new Map(Object.entries(e));var n=this._pending||(this._pending=new Map),r=n.get(t);r&&(this._pendingLength-=r.size),this._pendingLength+=e.size,n.set(t,e)}},{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"".concat(this.numChildren);if(this.numChildren>0)throw new Error("ListBuilder can only have one child.");return this.children[this.numChildren]=t,this.type=new sr(new sa(e,t.type,!0),this.type.keysSorted),this.numChildren-1}},{key:"_flushPending",value:function(t){var e=this,n=this._offsets,r=this._setValue;t.forEach((function(t,i){void 0===t?n.set(i,0):(n.set(i,t.size),r(e,i,t))}))}}]),n}(xr),pa=function(t){ot(n,t);var e=yt(n);function n(){return 
F(this,n),e.apply(this,arguments)}return E(n,[{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"".concat(this.numChildren),n=this.children.push(t);return this.type=new ir([].concat(vn(this.type.children),[new sa(e,t.type,!0)])),n}}]),n}(Ir),da=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._typeIds=new mr(new Int8Array(0),1),"function"===typeof t.valueToChildTypeId&&(r._valueToChildTypeId=t.valueToChildTypeId),r}return E(n,[{key:"typeIdToChildIndex",get:function(){return this.type.typeIdToChildIndex}},{key:"append",value:function(t,e){return this.set(this.length,t,e)}},{key:"set",value:function(t,e,n){return void 0===n&&(n=this._valueToChildTypeId(this,e,t)),this.setValid(t,this.isValid(e))&&this.setValue(t,e,n),this}},{key:"setValue",value:function(t,e,r){this._typeIds.set(t,r),ze(ut(n.prototype),"setValue",this).call(this,t,e)}},{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"".concat(this.children.length),n=this.children.push(t),r=this.type,i=r.children,a=r.mode,o=r.typeIds,u=[].concat(vn(i),[new sa(e,t.type)]);return this.type=new ar(a,[].concat(vn(o),[n]),u),n}},{key:"_valueToChildTypeId",value:function(t,e,n){throw new Error("Cannot map UnionBuilder value to child typeId. Pass the `childTypeId` as the second argument to unionBuilder.append(), or supply a `valueToChildTypeId` function as part of the UnionBuilder constructor options.")}}]),n}(Ir),va=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(da),ba=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._offsets=new mr(new Int32Array(0)),r}return E(n,[{key:"setValue",value:function(t,e,r){var i=this.type.typeIdToChildIndex[r];return this._offsets.set(t,this.getChildAt(i).length),ze(ut(n.prototype),"setValue",this).call(this,t,e,r)}}]),n}(da),ga=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(bn),ma=function(t,e,n){t[e]=n%4294967296|0,t[e+1]=n/4294967296|0},ka=function(t,e,n,r){var i=e[n],a=e[n+1];null!=i&&null!=a&&t.set(r.subarray(0,a-i),i)},wa=function(t,e,n){!function(t,e,n){t[e]=n/864e5|0}(t.values,e,n.valueOf())},_a=function(t,e,n){var r=t.values;ma(r,2*e,n.valueOf())},Ia=function(t,e,n){var r=t.stride;t.values[r*e]=n},Sa=function(t,e,n){var r=t.stride;t.values[r*e]=Cr(n)},xa=function(t,e,n){switch(typeof n){case"bigint":t.values64[e]=n;break;case"number":t.values[e*t.stride]=n;break;default:var r=n,i=t.stride,a=Ht(t.ArrayType,r);t.values.set(a.subarray(0,i),i*e)}},Aa=function(t,e,n){var r=t.values;return ma(r,2*e,n/1e3)},Ta=function(t,e,n){var r=t.values;return ma(r,2*e,n)},Ba=function(t,e,n){return function(t,e,n){t[e]=1e3*n%4294967296|0,t[e+1]=1e3*n/4294967296|0}(t.values,2*e,n)},Oa=function(t,e,n){return function(t,e,n){t[e]=1e6*n%4294967296|0,t[e+1]=1e6*n/4294967296|0}(t.values,2*e,n)},Da=function(t,e,n){t.values[t.stride*e]=n},La=function(t,e,n){t.values[t.stride*e]=n},Fa=function(t,e,n){t.values.set(n.subarray(0,2),2*e)},Ma=function(t,e,n){t.values.set(n.subarray(0,2),2*e)},Ea=function(t,e,n){var r=t.typeIdToChildIndex[t.typeIds[e]],i=t.getChildAt(r);i&&i.set(t.valueOffsets[e],n)},Ua=function(t,e,n){var r=t.typeIdToChildIndex[t.typeIds[e]],i=t.getChildAt(r);i&&i.set(e,n)},Na=function(t,e,n){t.values.set(n.subarray(0,2),2*e)},Ca=function(t,e,n){t.values[e]=12*n[0]+n[1]%12};ga.prototype.visitBool=function(t,e,n){var 
r=t.offset,i=t.values,a=r+e;n?i[a>>3]|=1<>3]&=~(1<0){var i=e.children||[],a={nullValues:e.nullValues},o=Array.isArray(i)?function(t,e){return i[e]||a}:function(t){var e=t.name;return i[e]||a};n.children.forEach((function(e,n){var i=e.type,a=o(e,n);r.children.push(t(Re(Re({},a),{},{type:i})))}))}return r},Object.keys(Je).map((function(t){return Je[t]})).filter((function(t){return"number"===typeof t&&t!==Je.NONE})).forEach((function(t){Pa.visit(t).prototype._setValue=ja.getVisitFn(t)})),Si.prototype._setValue=ja.visitBinary,function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"version",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt16(this.bb_pos+t):Ye.apache.arrow.flatbuf.MetadataVersion.V1}},{key:"schema",value:function(t){var e=this.bb.__offset(this.bb_pos,6);return e?(t||new Ye.apache.arrow.flatbuf.Schema).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}},{key:"dictionaries",value:function(e,n){var r=this.bb.__offset(this.bb_pos,8);return r?(n||new t.apache.arrow.flatbuf.Block).__init(this.bb.__vector(this.bb_pos+r)+24*e,this.bb):null}},{key:"dictionariesLength",value:function(){var t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"recordBatches",value:function(e,n){var r=this.bb.__offset(this.bb_pos,10);return r?(n||new t.apache.arrow.flatbuf.Block).__init(this.bb.__vector(this.bb_pos+r)+24*e,this.bb):null}},{key:"recordBatchesLength",value:function(){var t=this.bb.__offset(this.bb_pos,10);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsFooter",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startFooter",value:function(t){t.startObject(4)}},{key:"addVersion",value:function(t,e){t.addFieldInt16(0,e,Ye.apache.arrow.flatbuf.MetadataVersion.V1)}},{key:"addSchema",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"addDictionaries",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"startDictionariesVector",value:function(t,e){t.startVector(24,e,8)}},{key:"addRecordBatches",value:function(t,e){t.addFieldOffset(3,e,0)}},{key:"startRecordBatchesVector",value:function(t,e){t.startVector(24,e,8)}},{key:"endFooter",value:function(t){return t.endObject()}},{key:"finishFooterBuffer",value:function(t,e){t.finish(e)}},{key:"createFooter",value:function(t,n,r,i,a){return e.startFooter(t),e.addVersion(t,n),e.addSchema(t,r),e.addDictionaries(t,i),e.addRecordBatches(t,a),e.endFooter(t)}}]),e}();e.Footer=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Va||(Va={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"offset",value:function(){return this.bb.readInt64(this.bb_pos)}},{key:"metaDataLength",value:function(){return this.bb.readInt32(this.bb_pos+8)}},{key:"bodyLength",value:function(){return this.bb.readInt64(this.bb_pos+16)}}],[{key:"createBlock",value:function(t,e,n,r){return t.prep(8,24),t.writeInt64(r),t.pad(4),t.writeInt32(n),t.writeInt64(e),t.offset()}}]),t}();t.Block=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Va||(Va={}));var za=W.Long,Ya=W.Builder,Wa=W.ByteBuffer,Ha=Va.apache.arrow.flatbuf.Block,$a=Va.apache.arrow.flatbuf.Footer,Ka=function(){function t(e){var 
n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:an.V4,r=arguments.length>2?arguments[2]:void 0,i=arguments.length>3?arguments[3]:void 0;F(this,t),this.schema=e,this.version=n,r&&(this._recordBatches=r),i&&(this._dictionaryBatches=i)}return E(t,[{key:"numRecordBatches",get:function(){return this._recordBatches.length}},{key:"numDictionaries",get:function(){return this._dictionaryBatches.length}},{key:"recordBatches",value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:n=-1,r=this.numRecordBatches;case 1:if(!(++n=0&&t=0&&t=0&&t=0&&t0)return ze(ut(n.prototype),"write",this).call(this,t)}},{key:"toString",value:function(){var t=arguments.length>0&&void 0!==arguments[0]&&arguments[0];return t?rt(this.toUint8Array(!0)):this.toUint8Array(!1).then(rt)}},{key:"toUint8Array",value:function(){var t=this,e=arguments.length>0&&void 0!==arguments[0]&&arguments[0];return e?Wt(this._values)[0]:L(R.mark((function e(){var n,r,i,a,o,u,s,c;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:n=[],r=0,i=!1,a=!1,e.prev=3,u=P(t);case 5:return e.next=7,u.next();case 7:if(!(i=!(s=e.sent).done)){e.next=14;break}c=s.value,n.push(c),r+=c.byteLength;case 11:i=!1,e.next=5;break;case 14:e.next=20;break;case 16:e.prev=16,e.t0=e.catch(3),a=!0,o=e.t0;case 20:if(e.prev=20,e.prev=21,!i||null==u.return){e.next=25;break}return e.next=25,u.return();case 25:if(e.prev=25,!a){e.next=28;break}throw o;case 28:return e.finish(25);case 29:return e.finish(20);case 30:return e.abrupt("return",Wt(n,r)[0]);case 31:case"end":return e.stop()}}),e,null,[[3,16,20,30],[21,,25,29]])})))()}}]),n}(bt),Za=function(t){function e(t){F(this,e),t&&(this.source=new Xa(Be.fromIterable(t)))}return E(e,[{key:Symbol.iterator,value:function(){return this}},{key:"next",value:function(t){return this.source.next(t)}},{key:"throw",value:function(t){return this.source.throw(t)}},{key:"return",value:function(t){return this.source.return(t)}},{key:"peek",value:function(t){return this.source.peek(t)}},{key:"read",value:function(t){return this.source.read(t)}}]),e}(),Qa=function(t){function e(t){F(this,e),t instanceof e?this.source=t.source:t instanceof Ja?this.source=new to(Be.fromAsyncIterable(t)):jt(t)?this.source=new to(Be.fromNodeStream(t)):Ct(t)?this.source=new to(Be.fromDOMStream(t)):Ut(t)?this.source=new to(Be.fromDOMStream(t.body)):Dt(t)?this.source=new to(Be.fromIterable(t)):(Ot(t)||Lt(t))&&(this.source=new to(Be.fromAsyncIterable(t)))}return E(e,[{key:Symbol.asyncIterator,value:function(){return this}},{key:"next",value:function(t){return this.source.next(t)}},{key:"throw",value:function(t){return this.source.throw(t)}},{key:"return",value:function(t){return this.source.return(t)}},{key:"closed",get:function(){return this.source.closed}},{key:"cancel",value:function(t){return this.source.cancel(t)}},{key:"peek",value:function(t){return this.source.peek(t)}},{key:"read",value:function(t){return this.source.read(t)}}]),e}(),Xa=function(){function t(e){F(this,t),this.source=e}return E(t,[{key:"cancel",value:function(t){this.return(t)}},{key:"peek",value:function(t){return this.next(t,"peek").value}},{key:"read",value:function(t){return this.next(t,"read").value}},{key:"next",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"read";return this.source.next({cmd:e,size:t})}},{key:"throw",value:function(t){return Object.create(this.source.throw&&this.source.throw(t)||pt)}},{key:"return",value:function(t){return 
Object.create(this.source.return&&this.source.return(t)||pt)}}]),t}(),to=function(){function t(e){var n=this;F(this,t),this.source=e,this._closedPromise=new Promise((function(t){return n._closedPromiseResolve=t}))}return E(t,[{key:"cancel",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.return(e);case 2:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"closed",get:function(){return this._closedPromise}},{key:"read",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"read");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"peek",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"peek");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"next",value:function(){var t=L(R.mark((function t(e){var n,r=arguments;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return n=r.length>1&&void 0!==r[1]?r[1]:"read",t.next=3,this.source.next({cmd:n,size:e});case 3:return t.abrupt("return",t.sent);case 4:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"throw",value:function(){var t=L(R.mark((function t(e){var n;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(t.t1=this.source.throw,!t.t1){t.next=5;break}return t.next=4,this.source.throw(e);case 4:t.t1=t.sent;case 5:if(t.t0=t.t1,t.t0){t.next=8;break}t.t0=pt;case 8:return n=t.t0,this._closedPromiseResolve&&this._closedPromiseResolve(),this._closedPromiseResolve=void 0,t.abrupt("return",Object.create(n));case 12:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"return",value:function(){var t=L(R.mark((function t(e){var n;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(t.t1=this.source.return,!t.t1){t.next=5;break}return t.next=4,this.source.return(e);case 4:t.t1=t.sent;case 5:if(t.t0=t.t1,t.t0){t.next=8;break}t.t0=pt;case 8:return n=t.t0,this._closedPromiseResolve&&this._closedPromiseResolve(),this._closedPromiseResolve=void 0,t.abrupt("return",Object.create(n));case 12:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()}]),t}(),eo=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).position=0,i.buffer=Jt(t),i.size="undefined"===typeof r?i.buffer.byteLength:r,i}return E(n,[{key:"readInt32",value:function(t){var e=this.readAt(t,4),n=e.buffer,r=e.byteOffset;return new DataView(n,r).getInt32(0,!0)}},{key:"seek",value:function(t){return this.position=Math.min(t,this.size),t>>16,65535&this.buffer[1],this.buffer[0]>>>16,65535&this.buffer[0]]),n=new Uint32Array([t.buffer[1]>>>16,65535&t.buffer[1],t.buffer[0]>>>16,65535&t.buffer[0]]),r=e[3]*n[3];this.buffer[0]=65535&r;var i=r>>>16;return i+=r=e[2]*n[3],i+=r=e[3]*n[2]>>>0,this.buffer[0]+=i<<16,this.buffer[1]=i>>>0>>16,this.buffer[1]+=e[1]*n[3]+e[2]*n[2]+e[3]*n[1],this.buffer[1]+=e[0]*n[3]+e[1]*n[2]+e[2]*n[1]+e[3]*n[0]<<16,this}},{key:"_plus",value:function(t){var 
e=this.buffer[0]+t.buffer[0]>>>0;this.buffer[1]+=t.buffer[1],e>>0&&++this.buffer[1],this.buffer[0]=e}},{key:"lessThan",value:function(t){return this.buffer[1]1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString("string"===typeof t?t:t.toString(),e)}},{key:"fromNumber",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString(t.toString(),e)}},{key:"fromString",value:function(t){for(var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2),r=t.length,i=new n(e),a=0;a1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString("string"===typeof t?t:t.toString(),e)}},{key:"fromNumber",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString(t.toString(),e)}},{key:"fromString",value:function(t){for(var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2),r=t.startsWith("-"),i=t.length,a=new n(e),o=r?1:0;o>>0,e[2]=this.buffer[2]+t.buffer[2]>>>0,e[1]=this.buffer[1]+t.buffer[1]>>>0,e[0]=this.buffer[0]+t.buffer[0]>>>0,e[0]>>0&&++e[1],e[1]>>0&&++e[2],e[2]>>0&&++e[3],this.buffer[3]=e[3],this.buffer[2]=e[2],this.buffer[1]=e[1],this.buffer[0]=e[0],this}},{key:"hex",value:function(){return"".concat(ro(this.buffer[3])," ").concat(ro(this.buffer[2])," ").concat(ro(this.buffer[1])," ").concat(ro(this.buffer[0]))}}],[{key:"multiply",value:function(e,n){return new t(new Uint32Array(e.buffer)).times(n)}},{key:"add",value:function(e,n){return new t(new Uint32Array(e.buffer)).plus(n)}},{key:"from",value:function(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(4);return t.fromString("string"===typeof e?e:e.toString(),n)}},{key:"fromNumber",value:function(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(4);return t.fromString(e.toString(),n)}},{key:"fromString",value:function(e){for(var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(4),r=e.startsWith("-"),i=e.length,a=new t(n),o=r?1:0;o1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length;return yr.Null(t,0,n)}},{key:"visitBool",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Bool(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitInt",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Int(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitFloat",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Float(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitUtf8",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Utf8(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.readData(t))}},{key:"visitBinary",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Binary(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.readData(t))}},{key:"visitFixedSizeBinary",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.FixedSizeBinary(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitDate",value:function(t){var 
e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Date(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitTimestamp",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Timestamp(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitTime",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Time(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitDecimal",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Decimal(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitList",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.List(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.visit(t.children[0]))}},{key:"visitStruct",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Struct(t,0,n,r,this.readNullBitmap(t,r),this.visitMany(t.children))}},{key:"visitUnion",value:function(t){return t.mode===en.Sparse?this.visitSparseUnion(t):this.visitDenseUnion(t)}},{key:"visitDenseUnion",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Union(t,0,n,r,this.readNullBitmap(t,r),this.readTypeIds(t),this.readOffsets(t),this.visitMany(t.children))}},{key:"visitSparseUnion",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Union(t,0,n,r,this.readNullBitmap(t,r),this.readTypeIds(t),this.visitMany(t.children))}},{key:"visitDictionary",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Dictionary(t,0,n,r,this.readNullBitmap(t,r),this.readData(t.indices),this.readDictionary(t))}},{key:"visitInterval",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Interval(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitFixedSizeList",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.FixedSizeList(t,0,n,r,this.readNullBitmap(t,r),this.visit(t.children[0]))}},{key:"visitMap",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Map(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.visit(t.children[0]))}},{key:"nextFieldNode",value:function(){return this.nodes[++this.nodesIndex]}},{key:"nextBufferRange",value:function(){return this.buffers[++this.buffersIndex]}},{key:"readNullBitmap",value:function(t,e){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:this.nextBufferRange();return e>0&&this.readData(t,n)||new Uint8Array(0)}},{key:"readOffsets",value:function(t,e){return this.readData(t,e)}},{key:"readTypeIds",value:function(t,e){return this.readData(t,e)}},{key:"readData",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.length,r=e.offset;return this.bytes.subarray(r,r+n)}},{key:"readDictionary",value:function(t){return 
this.dictionaries.get(t.id)}}]),n}(bn),fo=function(t){ot(n,t);var e=yt(n);function n(t,r,i,a){var o;return F(this,n),(o=e.call(this,new Uint8Array(0),r,i,a)).sources=t,o}return E(n,[{key:"readNullBitmap",value:function(t,e){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:this.nextBufferRange(),r=n.offset;return e<=0?new Uint8Array(0):ln(this.sources[r])}},{key:"readOffsets",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.offset;return Ht(Uint8Array,Ht(Int32Array,this.sources[n]))}},{key:"readTypeIds",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.offset;return Ht(Uint8Array,Ht(t.ArrayType,this.sources[n]))}},{key:"readData",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.offset,r=this.sources;return Fn.isTimestamp(t)||(Fn.isInt(t)||Fn.isTime(t))&&64===t.bitWidth||Fn.isDate(t)&&t.unit===Qe.MILLISECOND?Ht(Uint8Array,uo.convertArray(r[n])):Fn.isDecimal(t)?Ht(Uint8Array,so.convertArray(r[n])):Fn.isBinary(t)||Fn.isFixedSizeBinary(t)?lo(r[n]):Fn.isBool(t)?ln(r[n]):Fn.isUtf8(t)?it(r[n].join("")):Ht(Uint8Array,Ht(t.ArrayType,r[n].map((function(t){return+t}))))}}]),n}(co);function lo(t){for(var e=t.join(""),n=new Uint8Array(e.length/2),r=0;r>1]=parseInt(e.substr(r,2),16);return n}var ho=W.Long,yo=Ye.apache.arrow.flatbuf.Null,po=Ye.apache.arrow.flatbuf.Int,vo=Ye.apache.arrow.flatbuf.FloatingPoint,bo=Ye.apache.arrow.flatbuf.Binary,go=Ye.apache.arrow.flatbuf.Bool,mo=Ye.apache.arrow.flatbuf.Utf8,ko=Ye.apache.arrow.flatbuf.Decimal,wo=Ye.apache.arrow.flatbuf.Date,_o=Ye.apache.arrow.flatbuf.Time,Io=Ye.apache.arrow.flatbuf.Timestamp,So=Ye.apache.arrow.flatbuf.Interval,xo=Ye.apache.arrow.flatbuf.List,Ao=Ye.apache.arrow.flatbuf.Struct_,To=Ye.apache.arrow.flatbuf.Union,Bo=Ye.apache.arrow.flatbuf.DictionaryEncoding,Oo=Ye.apache.arrow.flatbuf.FixedSizeBinary,Do=Ye.apache.arrow.flatbuf.FixedSizeList,Lo=Ye.apache.arrow.flatbuf.Map,Fo=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"visit",value:function(t,e){return null==t||null==e?void 0:ze(ut(n.prototype),"visit",this).call(this,t,e)}},{key:"visitNull",value:function(t,e){return yo.startNull(e),yo.endNull(e)}},{key:"visitInt",value:function(t,e){return po.startInt(e),po.addBitWidth(e,t.bitWidth),po.addIsSigned(e,t.isSigned),po.endInt(e)}},{key:"visitFloat",value:function(t,e){return vo.startFloatingPoint(e),vo.addPrecision(e,t.precision),vo.endFloatingPoint(e)}},{key:"visitBinary",value:function(t,e){return bo.startBinary(e),bo.endBinary(e)}},{key:"visitBool",value:function(t,e){return go.startBool(e),go.endBool(e)}},{key:"visitUtf8",value:function(t,e){return mo.startUtf8(e),mo.endUtf8(e)}},{key:"visitDecimal",value:function(t,e){return ko.startDecimal(e),ko.addScale(e,t.scale),ko.addPrecision(e,t.precision),ko.endDecimal(e)}},{key:"visitDate",value:function(t,e){return wo.startDate(e),wo.addUnit(e,t.unit),wo.endDate(e)}},{key:"visitTime",value:function(t,e){return _o.startTime(e),_o.addUnit(e,t.unit),_o.addBitWidth(e,t.bitWidth),_o.endTime(e)}},{key:"visitTimestamp",value:function(t,e){var n=t.timezone&&e.createString(t.timezone)||void 0;return Io.startTimestamp(e),Io.addUnit(e,t.unit),void 0!==n&&Io.addTimezone(e,n),Io.endTimestamp(e)}},{key:"visitInterval",value:function(t,e){return So.startInterval(e),So.addUnit(e,t.unit),So.endInterval(e)}},{key:"visitList",value:function(t,e){return 
xo.startList(e),xo.endList(e)}},{key:"visitStruct",value:function(t,e){return Ao.startStruct_(e),Ao.endStruct_(e)}},{key:"visitUnion",value:function(t,e){To.startTypeIdsVector(e,t.typeIds.length);var n=To.createTypeIdsVector(e,t.typeIds);return To.startUnion(e),To.addMode(e,t.mode),To.addTypeIds(e,n),To.endUnion(e)}},{key:"visitDictionary",value:function(t,e){var n=this.visit(t.indices,e);return Bo.startDictionaryEncoding(e),Bo.addId(e,new ho(t.id,0)),Bo.addIsOrdered(e,t.isOrdered),void 0!==n&&Bo.addIndexType(e,n),Bo.endDictionaryEncoding(e)}},{key:"visitFixedSizeBinary",value:function(t,e){return Oo.startFixedSizeBinary(e),Oo.addByteWidth(e,t.byteWidth),Oo.endFixedSizeBinary(e)}},{key:"visitFixedSizeList",value:function(t,e){return Do.startFixedSizeList(e),Do.addListSize(e,t.listSize),Do.endFixedSizeList(e)}},{key:"visitMap",value:function(t,e){return Lo.startMap(e),Lo.addKeysSorted(e,t.keysSorted),Lo.endMap(e)}}]),n}(bn),Mo=new Fo;function Eo(t){return new nu(t.count,Co(t.columns),Vo(t.columns))}function Uo(t,e){return(t.fields||[]).filter(Boolean).map((function(t){return sa.fromJSON(t,e)}))}function No(t,e){return(t.children||[]).filter(Boolean).map((function(t){return sa.fromJSON(t,e)}))}function Co(t){return(t||[]).reduce((function(t,e){return[].concat(vn(t),[new au(e.count,(n=e.VALIDITY,(n||[]).reduce((function(t,e){return t+ +(0===e)}),0)))],vn(Co(e.children)));var n}),[])}function Vo(t){for(var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:[],n=-1,r=(t||[]).length;++n1&&void 0!==arguments[1]?arguments[1]:0;if(e instanceof ua)return new t(0,an.V4,rn.Schema,e);if(e instanceof nu)return new t(n,an.V4,rn.RecordBatch,e);if(e instanceof ru)return new t(n,an.V4,rn.DictionaryBatch,e);throw new Error("Unrecognized Message header: ".concat(e))}}]),t}(),nu=function(){function t(e,n,r){F(this,t),this._nodes=n,this._buffers=r,this._length="number"===typeof e?e:e.low}return E(t,[{key:"nodes",get:function(){return this._nodes}},{key:"length",get:function(){return this._length}},{key:"buffers",get:function(){return this._buffers}}]),t}(),ru=function(){function t(e,n){var r=arguments.length>2&&void 0!==arguments[2]&&arguments[2];F(this,t),this._data=e,this._isDelta=r,this._id="number"===typeof n?n:n.low}return E(t,[{key:"id",get:function(){return this._id}},{key:"data",get:function(){return this._data}},{key:"isDelta",get:function(){return this._isDelta}},{key:"length",get:function(){return this.data.length}},{key:"nodes",get:function(){return this.data.nodes}},{key:"buffers",get:function(){return this.data.buffers}}]),t}(),iu=E((function t(e,n){F(this,t),this.offset="number"===typeof e?e:e.low,this.length="number"===typeof n?n:n.low})),au=E((function t(e,n){F(this,t),this.length="number"===typeof e?e:e.low,this.nullCount="number"===typeof n?n:n.low}));function ou(t){for(var e,n=[],r=-1,i=-1,a=t.nodesLength();++r0?$o.createCustomMetadataVector(t,vn(e.metadata).map((function(e){var n=U(e,2),r=n[0],i=n[1],a=t.createString("".concat(r)),o=t.createString("".concat(i));return Jo.startKeyValue(t),Jo.addKey(t,a),Jo.addValue(t,o),Jo.endKeyValue(t)}))):-1;e.name&&(n=t.createString(e.name));$o.startField(t),$o.addType(t,r),$o.addTypeType(t,o),$o.addChildren(t,s),$o.addNullable(t,!!e.nullable),-1!==n&&$o.addName(t,n);-1!==i&&$o.addDictionary(t,i);-1!==c&&$o.addCustomMetadata(t,c);return $o.endField(t)},sa.decode=function(t,e){var n,r,i,a,o,u;e&&(u=t.dictionary())?e.has(n=u.id().low)?(a=(a=u.indexType())?lu(a):new Cn,o=new lr(e.get(n),a,n,u.isOrdered()),r=new 
sa(t.name(),o,t.nullable(),fu(t))):(a=(a=u.indexType())?lu(a):new Cn,e.set(n,i=hu(t,cu(t,e))),o=new lr(i,a,n,u.isOrdered()),r=new sa(t.name(),o,t.nullable(),fu(t))):(i=hu(t,cu(t,e)),r=new sa(t.name(),i,t.nullable(),fu(t)));return r||null},sa.fromJSON=function(t,e){var n,r,i,a,o,u;return e&&(a=t.dictionary)?e.has(n=a.id)?(r=(r=a.indexType)?Ro(r):new Cn,u=new lr(e.get(n),r,n,a.isOrdered),i=new sa(t.name,u,t.nullable,jo(t.customMetadata))):(r=(r=a.indexType)?Ro(r):new Cn,e.set(n,o=Po(t,No(t,e))),u=new lr(o,r,n,a.isOrdered),i=new sa(t.name,u,t.nullable,jo(t.customMetadata))):(o=Po(t,No(t,e)),i=new sa(t.name,o,t.nullable,jo(t.customMetadata))),i||null},ua.encode=function(t,e){var n=e.fields.map((function(e){return sa.encode(t,e)}));Ko.startFieldsVector(t,n.length);var r=Ko.createFieldsVector(t,n),i=e.metadata&&e.metadata.size>0?Ko.createCustomMetadataVector(t,vn(e.metadata).map((function(e){var n=U(e,2),r=n[0],i=n[1],a=t.createString("".concat(r)),o=t.createString("".concat(i));return Jo.startKeyValue(t),Jo.addKey(t,a),Jo.addValue(t,o),Jo.endKeyValue(t)}))):-1;Ko.startSchema(t),Ko.addFields(t,r),Ko.addEndianness(t,yu?Qo.Little:Qo.Big),-1!==i&&Ko.addCustomMetadata(t,i);return Ko.endSchema(t)},ua.decode=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Map,n=su(t,e);return new ua(n,fu(t),e)},ua.fromJSON=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Map;return new ua(Uo(t,e),jo(t.customMetadata),e)},nu.encode=function(t,e){var n=e.nodes||[],r=e.buffers||[];Xo.startNodesVector(t,n.length),n.slice().reverse().forEach((function(e){return au.encode(t,e)}));var i=t.endVector();Xo.startBuffersVector(t,r.length),r.slice().reverse().forEach((function(e){return iu.encode(t,e)}));var a=t.endVector();return Xo.startRecordBatch(t),Xo.addLength(t,new zo(e.length,0)),Xo.addNodes(t,i),Xo.addBuffers(t,a),Xo.endRecordBatch(t)},nu.decode=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:an.V4;return new nu(t.length(),ou(t),uu(t,e))},nu.fromJSON=Eo,ru.encode=function(t,e){var n=nu.encode(t,e.data);return tu.startDictionaryBatch(t),tu.addId(t,new zo(e.id,0)),tu.addIsDelta(t,e.isDelta),tu.addData(t,n),tu.endDictionaryBatch(t)},ru.decode=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:an.V4;return new ru(nu.decode(t.data(),e),t.id(),t.isDelta())},ru.fromJSON=function(t){return new ru(Eo(t.data),t.id,t.isDelta)},au.encode=function(t,e){return Zo.createFieldNode(t,new zo(e.length,0),new zo(e.nullCount,0))},au.decode=function(t){return new au(t.length(),t.nullCount())},iu.encode=function(t,e){return Go.createBuffer(t,new zo(e.offset,0),new zo(e.length,0))},iu.decode=function(t){return new iu(t.offset(),t.length())};for(var yu=function(){var t=new ArrayBuffer(2);return new DataView(t).setInt16(0,256,!0),256===new Int16Array(t)[0]}(),pu=W.ByteBuffer,du=function(t){return"Expected ".concat(rn[t]," Message in stream, but was null or length 0.")},vu=function(t){return"Header pointer of flatbuffer-encoded ".concat(rn[t]," Message is null or length 0.")},bu=function(t,e){return"Expected to read ".concat(t," metadata bytes, but only read ").concat(e,".")},gu=function(t,e){return"Expected to read ".concat(t," bytes for message body, but only read ").concat(e,".")},mu=function(t){function e(t){F(this,e),this.source=t instanceof Za?t:new Za(t)}return E(e,[{key:Symbol.iterator,value:function(){return this}},{key:"next",value:function(){var 
t;return(t=this.readMetadataLength()).done||-1===t.value&&(t=this.readMetadataLength()).done||(t=this.readMetadata(t.value)).done?pt:t}},{key:"throw",value:function(t){return this.source.throw(t)}},{key:"return",value:function(t){return this.source.return(t)}},{key:"readMessage",value:function(t){var e;if((e=this.next()).done)return null;if(null!=t&&e.value.headerType!==t)throw new Error(du(t));return e.value}},{key:"readMessageBody",value:function(t){if(t<=0)return new Uint8Array(0);var e=Jt(this.source.read(t));if(e.byteLength0&&void 0!==arguments[0]&&arguments[0],e=rn.Schema,n=this.readMessage(e),r=n&&n.header();if(t&&!r)throw new Error(vu(e));return r}},{key:"readMetadataLength",value:function(){var t=this.source.read(_u),e=t&&new pu(t),n=e&&e.readInt32(0)||0;return{done:0===n,value:n}}},{key:"readMetadata",value:function(t){var e=this.source.read(t);if(!e)return pt;if(e.byteLength0&&void 0!==a[0]&&a[0],n=rn.Schema,t.next=4,this.readMessage(n);case 4:if(r=t.sent,i=r&&r.header(),!e||i){t.next=8;break}throw new Error(vu(n));case 8:return t.abrupt("return",i);case 9:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"readMetadataLength",value:function(){var t=L(R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.source.read(_u);case 2:return e=t.sent,n=e&&new pu(e),r=n&&n.readInt32(0)||0,t.abrupt("return",{done:0===r,value:r});case 6:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"readMetadata",value:function(){var t=L(R.mark((function t(e){var n;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.source.read(e);case 2:if(n=t.sent){t.next=5;break}return t.abrupt("return",pt);case 5:if(!(n.byteLength1&&void 0!==arguments[1]?arguments[1]:0,n=-1,r=Su.length;++n2147483647)throw new RangeError("Cannot write arrays larger than 2^31 - 1 in length");Fn.isNull(t.type)||Lu.call(this,i<=0?new Uint8Array(0):fn(e.offset,r,e.nullBitmap)),this.nodes.push(new au(r,i))}return ze(ut(n.prototype),"visit",this).call(this,t)}},{key:"visitNull",value:function(t){return this}},{key:"visitDictionary",value:function(t){return this.visit(t.indices)}},{key:"nodes",get:function(){return this._nodes}},{key:"buffers",get:function(){return this._buffers}},{key:"byteLength",get:function(){return this._byteLength}},{key:"bufferRegions",get:function(){return this._bufferRegions}}],[{key:"assemble",value:function(){for(var t=new n,e=arguments.length,r=new Array(e),i=0;i=t.length?Lu.call(this,new Uint8Array(0)):(e=t.values)instanceof Uint8Array?Lu.call(this,fn(t.offset,t.length,e)):Lu.call(this,ln(t))},Du.prototype.visitInt=Fu,Du.prototype.visitFloat=Fu,Du.prototype.visitUtf8=Mu,Du.prototype.visitBinary=Mu,Du.prototype.visitFixedSizeBinary=Fu,Du.prototype.visitDate=Fu,Du.prototype.visitTimestamp=Fu,Du.prototype.visitTime=Fu,Du.prototype.visitDecimal=Fu,Du.prototype.visitList=Eu,Du.prototype.visitStruct=Uu,Du.prototype.visitUnion=function(t){var e=t.type,n=t.length,r=t.typeIds,i=t.valueOffsets;if(Lu.call(this,r),e.mode===en.Sparse)return Uu.call(this,t);if(e.mode===en.Dense){if(t.offset<=0)return Lu.call(this,i),Uu.call(this,t);for(var a,o,u=r.reduce((function(t,e){return Math.max(t,e)}),r[0]),s=new Int32Array(u+1),c=new Int32Array(u+1).fill(-1),f=new Int32Array(n),l=xe(-i[0],n,i),h=-1;++h0&&void 0!==arguments[0]&&arguments[0];return this._sink.toString(t)}},{key:"toUint8Array",value:function(){var t=arguments.length>0&&void 
0!==arguments[0]&&arguments[0];return this._sink.toUint8Array(t)}},{key:"writeAll",value:function(t){var e=this;return Ot(t)?t.then((function(t){return e.writeAll(t)})):Lt(t)?Ru(this,t):ju(this,t)}},{key:"closed",get:function(){return this._sink.closed}},{key:e,value:function(){return this._sink[Symbol.asyncIterator]()}},{key:"toDOMStream",value:function(t){return this._sink.toDOMStream(t)}},{key:"toNodeStream",value:function(t){return this._sink.toNodeStream(t)}},{key:"close",value:function(){return this.reset()._sink.close()}},{key:"abort",value:function(t){return this.reset()._sink.abort(t)}},{key:"finish",value:function(){return this._autoDestroy?this.close():this.reset(this._sink,this._schema),this}},{key:"reset",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this._sink,e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:null;return t===this._sink||t instanceof Ja?this._sink=t:(this._sink=new Ja,t&&Nt(t)?this.toDOMStream({type:"bytes"}).pipeTo(t):t&&Vt(t)&&this.toNodeStream({objectMode:!1}).pipe(t)),this._started&&this._schema&&this._writeFooter(this._schema),this._started=!1,this._dictionaryBlocks=[],this._recordBatchBlocks=[],this._dictionaryDeltaOffsets=new Map,e&&e.compareTo(this._schema)||(null===e?(this._position=0,this._schema=null):(this._started=!0,this._schema=e,this._writeSchema(e))),this}},{key:"write",value:function(t){var e=null;if(!this._sink)throw new Error("RecordBatchWriter is closed");if(null===t||void 0===t)return this.finish()&&void 0;if(t instanceof Ec&&!(e=t.schema))return this.finish()&&void 0;if(t instanceof Uc&&!(e=t.schema))return this.finish()&&void 0;if(e&&!e.compareTo(this._schema)){if(this._started&&this._autoDestroy)return this.close();this.reset(this._sink,e)}t instanceof Uc?t instanceof Nc||this._writeRecordBatch(t):t instanceof Ec?this.writeAll(t.chunks):Dt(t)&&this.writeAll(t)}},{key:"_writeMessage",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:8,n=e-1,r=eu.encode(t),i=r.byteLength,a=this._writeLegacyIpcFormat?4:8,o=i+a+n&~n,u=o-i-a;return t.headerType===rn.RecordBatch?this._recordBatchBlocks.push(new qa(o,t.bodyLength,this._position)):t.headerType===rn.DictionaryBatch&&this._dictionaryBlocks.push(new qa(o,t.bodyLength,this._position)),this._writeLegacyIpcFormat||this._write(Int32Array.of(-1)),this._write(Int32Array.of(o-a)),i>0&&this._write(r),this._writePadding(u)}},{key:"_write",value:function(t){if(this._started){var e=Jt(t);e&&e.byteLength>0&&(this._sink.write(e),this._position+=e.byteLength)}return this}},{key:"_writeSchema",value:function(t){return this._writeMessage(eu.from(t))}},{key:"_writeFooter",value:function(t){return this._writeLegacyIpcFormat?this._write(Int32Array.of(0)):this._write(Int32Array.of(-1,0))}},{key:"_writeMagic",value:function(){return this._write(Su)}},{key:"_writePadding",value:function(t){return t>0?this._write(new Uint8Array(t)):this}},{key:"_writeRecordBatch",value:function(t){var e=Du.assemble(t),n=e.byteLength,r=e.nodes,i=e.bufferRegions,a=e.buffers,o=new nu(t.length,r,i),u=eu.from(o,n);return this._writeDictionaries(t)._writeMessage(u)._writeBodyBuffers(a)}},{key:"_writeDictionaryBatch",value:function(t,e){var n=arguments.length>2&&void 0!==arguments[2]&&arguments[2];this._dictionaryDeltaOffsets.set(e,t.length+(this._dictionaryDeltaOffsets.get(e)||0));var r=Du.assemble(t),i=r.byteLength,a=r.nodes,o=r.bufferRegions,u=r.buffers,s=new nu(t.length,a,o),c=new ru(s,e,n),f=eu.from(c,i);return 
this._writeMessage(f)._writeBodyBuffers(u)}},{key:"_writeBodyBuffers",value:function(t){for(var e,n,r,i=-1,a=t.length;++i0&&(this._write(e),(r=(n+7&-8)-n)>0&&this._writePadding(r));return this}},{key:"_writeDictionaries",value:function(t){var e,n=O(t.dictionaries);try{for(n.s();!(e=n.n()).done;){var r=U(e.value,2),i=r[0],a=r[1],o=this._dictionaryDeltaOffsets.get(i)||0;if(0===o||(a=a.slice(o)).length>0){var u,s=O("chunks"in a?a.chunks:[a]);try{for(s.s();!(u=s.n()).done;){var c=u.value;this._writeDictionaryBatch(c,i,o>0),o+=c.length}}catch(f){s.e(f)}finally{s.f()}}}}catch(f){n.e(f)}finally{n.f()}return this}}],[{key:"throughNode",value:function(t){throw new Error('"throughNode" not available in this environment')}},{key:"throughDOM",value:function(t,e){throw new Error('"throughDOM" not available in this environment')}}]),r}(vt,Symbol.asyncIterator),Cu=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,null,[{key:"writeAll",value:function(t,e){var r=new n(e);return Ot(t)?t.then((function(t){return r.writeAll(t)})):Lt(t)?Ru(r,t):ju(r,t)}}]),n}(Nu),Vu=function(t){ot(n,t);var e=yt(n);function n(){var t;return F(this,n),(t=e.call(this))._autoDestroy=!0,t}return E(n,[{key:"_writeSchema",value:function(t){return this._writeMagic()._writePadding(2)}},{key:"_writeFooter",value:function(t){var e=Ka.encode(new Ka(t,an.V4,this._recordBatchBlocks,this._dictionaryBlocks));return ze(ut(n.prototype),"_writeFooter",this).call(this,t)._write(e)._write(Int32Array.of(e.byteLength))._writeMagic()}}],[{key:"writeAll",value:function(t){var e=new n;return Ot(t)?t.then((function(t){return e.writeAll(t)})):Lt(t)?Ru(e,t):ju(e,t)}}]),n}(Nu);function ju(t,e){var n=e;e instanceof Ec&&(n=e.chunks,t.reset(void 0,e.schema));var r,i=O(n);try{for(i.s();!(r=i.n()).done;){var a=r.value;t.write(a)}}catch(o){i.e(o)}finally{i.f()}return t.finish()}function Ru(t,e){return Pu.apply(this,arguments)}function Pu(){return(Pu=L(R.mark((function t(e,n){var r,i,a,o,u,s;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:r=!1,i=!1,t.prev=2,o=P(n);case 4:return t.next=6,o.next();case 6:if(!(r=!(u=t.sent).done)){t.next=12;break}s=u.value,e.write(s);case 9:r=!1,t.next=4;break;case 12:t.next=18;break;case 14:t.prev=14,t.t0=t.catch(2),i=!0,a=t.t0;case 18:if(t.prev=18,t.prev=19,!r||null==o.return){t.next=23;break}return t.next=23,o.return();case 23:if(t.prev=23,!i){t.next=26;break}throw a;case 26:return t.finish(23);case 27:return t.finish(18);case 28:return t.abrupt("return",e.finish());case 29:case"end":return t.stop()}}),t,null,[[2,14,18,28],[19,,23,27]])})))).apply(this,arguments)}var zu=new Uint8Array(0),Yu=function(t){return[zu,zu,new Uint8Array(t),zu]};function Wu(t,e){for(var n,r,i=arguments.length>2&&void 0!==arguments[2]?arguments[2]:e.reduce((function(t,e){return Math.max(t,e.length)}),0),a=-1,o=e.length,u=vn(t.fields),s=[],c=(i+63&-64)>>3;++a0;){for(u=Number.POSITIVE_INFINITY,s=-1;++s0&&(i[o++]=[u,f.slice()]))}return[t=new ua(r,t.metadata),i.map((function(e){return zr(Uc,[t].concat(vn(e)))}))]}(t,e.map((function(t){return t instanceof Wi?t.chunks.map((function(t){return t.data})):[t.data]})))}function Ku(t,e,n,r,i){for(var a,o,u=0,s=-1,c=r.length,f=(e+63&-64)>>3;++s=e?u===e?n[s]=a:(n[s]=a.slice(0,e),a=a.slice(e,u-e),i.numBatches=Math.max(i.numBatches,r[s].unshift(a))):((o=t[s]).nullable||(t[s]=o.clone({nullable:!0})),n[s]=a?a._changeLengthAndBackfillNullBitmap(e):yr.new(o.type,0,e,e,Yu(f)));return n}function Gu(t,e){if(null==t)return{};var 
n,r,i=function(t,e){if(null==t)return{};var n,r,i={},a=Object.keys(t);for(r=0;r=0||(i[n]=t[n]);return i}(t,e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(t);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(t,n)&&(i[n]=t[n])}return i}var qu=function(t,e){ot(r,t);var n=yt(r);function r(t,e){var i;return F(this,r),(i=n.call(this))._children=e,i.numChildren=t.childData.length,i._bindDataAccessors(i.data=t),i}return E(r,[{key:"type",get:function(){return this.data.type}},{key:"typeId",get:function(){return this.data.typeId}},{key:"length",get:function(){return this.data.length}},{key:"offset",get:function(){return this.data.offset}},{key:"stride",get:function(){return this.data.stride}},{key:"nullCount",get:function(){return this.data.nullCount}},{key:"byteLength",get:function(){return this.data.byteLength}},{key:"VectorName",get:function(){return"".concat(Je[this.typeId],"Vector")}},{key:"ArrayType",get:function(){return this.type.ArrayType}},{key:"values",get:function(){return this.data.values}},{key:"typeIds",get:function(){return this.data.typeIds}},{key:"nullBitmap",get:function(){return this.data.nullBitmap}},{key:"valueOffsets",get:function(){return this.data.valueOffsets}},{key:e,get:function(){return"".concat(this.VectorName,"<").concat(this.type[Symbol.toStringTag],">")}},{key:"clone",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this._children;return qe.new(t,e)}},{key:"concat",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n0){var e=this.offset+t;return 0!==(this.nullBitmap[e>>3]&1<=this.numChildren?null:(this._children||(this._children=[]))[t]||(this._children[t]=qe.new(this.data.childData[t]))}},{key:"toJSON",value:function(){return vn(this)}},{key:"_sliceInternal",value:function(t,e,n){return t.clone(t.data.slice(e,n-e),null)}},{key:"_bindDataAccessors",value:function(t){}}]),r}(qe,Symbol.toStringTag);qu.prototype[Symbol.isConcatSpreadable]=!0;var Ju=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"asUtf8",value:function(){return qe.new(this.data.clone(new Gn))}}]),n}(qu),Zu=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,null,[{key:"from",value:function(t){return Mc((function(){return new qn}),t)}}]),n}(qu),Qu=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,null,[{key:"from",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n>>0)},Js=function(t){return new Date(t)},Zs=function(t,e,n){var r=e[n],i=e[n+1];return null!=r&&null!=i?t.subarray(r,i):null},Qs=function(t,e){return function(t,e){return Js(function(t,e){return 864e5*t[e]}(t,e))}(t.values,e)},Xs=function(t,e){return function(t,e){return Js(qs(t,e))}(t.values,2*e)},tc=function(t,e){var n=t.stride;return t.values[n*e]},ec=function(t,e){var n=t.stride;return Nr(t.values[n*e])},nc=function(t,e){var n=t.stride,r=t.values,i=t.type;return Xr.new(r.subarray(n*e,n*(e+1)),i.isSigned)},rc=function(t,e){var n=t.values;return 1e3*qs(n,2*e)},ic=function(t,e){var n=t.values;return qs(n,2*e)},ac=function(t,e){return function(t,e){return t[e+1]/1e3*4294967296+(t[e]>>>0)/1e3}(t.values,2*e)},oc=function(t,e){return function(t,e){return t[e+1]/1e6*4294967296+(t[e]>>>0)/1e6}(t.values,2*e)},uc=function(t,e){return t.values[t.stride*e]},sc=function(t,e){return t.values[t.stride*e]},cc=function(t,e){var n=t.values;return Xr.signed(n.subarray(2*e,2*(e+1)))},fc=function(t,e){var n=t.values;return 
Xr.signed(n.subarray(2*e,2*(e+1)))},lc=function(t,e){var n=t.typeIdToChildIndex[t.typeIds[e]],r=t.getChildAt(n);return r?r.get(t.valueOffsets[e]):null},hc=function(t,e){var n=t.typeIdToChildIndex[t.typeIds[e]],r=t.getChildAt(n);return r?r.get(e):null},yc=function(t,e){return t.values.subarray(2*e,2*(e+1))},pc=function(t,e){var n=t.values[e],r=new Int32Array(2);return r[0]=n/12|0,r[1]=n%12|0,r};Gs.prototype.visitNull=function(t,e){return null},Gs.prototype.visitBool=function(t,e){var n=t.offset+e;return 0!==(t.values[n>>3]&1<0?0:-1},vc.prototype.visitBool=bc,vc.prototype.visitInt=bc,vc.prototype.visitInt8=bc,vc.prototype.visitInt16=bc,vc.prototype.visitInt32=bc,vc.prototype.visitInt64=bc,vc.prototype.visitUint8=bc,vc.prototype.visitUint16=bc,vc.prototype.visitUint32=bc,vc.prototype.visitUint64=bc,vc.prototype.visitFloat=bc,vc.prototype.visitFloat16=bc,vc.prototype.visitFloat32=bc,vc.prototype.visitFloat64=bc,vc.prototype.visitUtf8=bc,vc.prototype.visitBinary=bc,vc.prototype.visitFixedSizeBinary=bc,vc.prototype.visitDate=bc,vc.prototype.visitDateDay=bc,vc.prototype.visitDateMillisecond=bc,vc.prototype.visitTimestamp=bc,vc.prototype.visitTimestampSecond=bc,vc.prototype.visitTimestampMillisecond=bc,vc.prototype.visitTimestampMicrosecond=bc,vc.prototype.visitTimestampNanosecond=bc,vc.prototype.visitTime=bc,vc.prototype.visitTimeSecond=bc,vc.prototype.visitTimeMillisecond=bc,vc.prototype.visitTimeMicrosecond=bc,vc.prototype.visitTimeNanosecond=bc,vc.prototype.visitDecimal=bc,vc.prototype.visitList=bc,vc.prototype.visitStruct=bc,vc.prototype.visitUnion=bc,vc.prototype.visitDenseUnion=gc,vc.prototype.visitSparseUnion=gc,vc.prototype.visitDictionary=bc,vc.prototype.visitInterval=bc,vc.prototype.visitIntervalDayTime=bc,vc.prototype.visitIntervalYearMonth=bc,vc.prototype.visitFixedSizeList=bc,vc.prototype.visitMap=bc;var mc=new vc,kc=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(bn);function wc(t){if(t.nullCount>0)return function(t){var e=dc.getVisitFn(t);return hn(t.nullBitmap,t.offset,t.length,t,(function(t,n,r,i){return 0!==(r&1<0)?t.values.subarray(0,r)[Symbol.iterator]():R.mark((function e(n){var i;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:i=-1;case 1:if(!(++i1?e-1:0),r=1;r0&&(this.get=(e=this.get,function(t){return this.isValid(t)?e.call(this,t):null}),this.set=function(t){return function(e,n){cn(this.nullBitmap,this.offset+e,!(null===n||void 0===n))&&t.call(this,e,n)}}(this.set));var e},Object.keys(Je).map((function(t){return Je[t]})).filter((function(t){return"number"===typeof t})).filter((function(t){return t!==Je.NONE})).forEach((function(t){var e,n=Lc.visit(t);n.prototype.get=(e=dc.getVisitFn(t),function(t){return e(this,t)}),n.prototype.set=Ks(ja.getVisitFn(t)),n.prototype.indexOf=Ks(mc.getVisitFn(t)),n.prototype.toArray=$s(xc.getVisitFn(t)),n.prototype.getByteWidth=function(t){return function(){return t(this.type)}}(Oc.getVisitFn(t)),n.prototype[Symbol.iterator]=$s(_c.getVisitFn(t))}));var Ec=function(t){ot(n,t);var e=yt(n);function n(){var t;F(this,n);for(var r=null,i=arguments.length,a=new Array(i),o=0;o0&&void 0!==arguments[0]?arguments[0]:this._chunks;return new n(this._schema,t)}},{key:"getColumn",value:function(t){return this.getColumnAt(this.getColumnIndex(t))}},{key:"getColumnAt",value:function(t){return this.getChildAt(t)}},{key:"getColumnIndex",value:function(t){return this._schema.fields.findIndex((function(e){return e.name===t}))}},{key:"getChildAt",value:function(t){if(t<0||t>=this.numChildren)return 
null;var e,n,r=this._schema.fields,i=this._children||(this._children=[]);if(n=i[t])return n;if(e=r[t]){var a=this._chunks.map((function(e){return e.getChildAt(t)})).filter((function(t){return null!=t}));if(a.length>0)return i[t]=new Gi(e,a)}return null}},{key:"serialize",value:function(){var t=!(arguments.length>1&&void 0!==arguments[1])||arguments[1],e=t?Cu:Vu;return e.writeAll(this).toUint8Array(!0)}},{key:"count",value:function(){return this._length}},{key:"select",value:function(){for(var t=this._schema.fields.reduce((function(t,e,n){return t.set(e.name,n)}),new Map),e=arguments.length,n=new Array(e),r=0;r-1}))))}},{key:"selectAt",value:function(){for(var t,e=arguments.length,r=new Array(e),i=0;i3&&void 0!==arguments[3]?arguments[3]:u[r];return void 0===a?e.getColumnAt(r):t.getColumnAt(a)}))),vn(o.map((function(e){return t.getColumnAt(e)})))).filter(Boolean);return zr(n,vn($u(s,c)))}}],[{key:"empty",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:new ua([]);return new n(t,[])}},{key:"from",value:function(t){if(!t)return n.empty();if("object"===typeof t){var e=Dt(t.values)?function(t){if(t.type instanceof ir)return Ec.fromStruct(Ls.from(t));return null}(t):Lt(t.values)?function(t){if(t.type instanceof ir)return Ls.from(t).then((function(t){return Ec.fromStruct(t)}));return null}(t):null;if(null!==e)return e}var r=jc.from(t);return Ot(r)?L(R.mark((function t(){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.t0=n,t.next=3,r;case 3:return t.t1=t.sent,t.next=6,t.t0.from.call(t.t0,t.t1);case 6:return t.abrupt("return",t.sent);case 7:case"end":return t.stop()}}),t)})))():r.isSync()&&(r=r.open())?r.schema?new n(r.schema,vn(r)):n.empty():function(){var t=L(R.mark((function t(e){var r,i,a,o,u,s,c,f,l;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,e;case 2:if(r=t.sent,i=r.schema,a=[],!i){t.next=35;break}o=!1,u=!1,t.prev=8,c=P(r);case 10:return t.next=12,c.next();case 12:if(!(o=!(f=t.sent).done)){t.next=18;break}l=f.value,a.push(l);case 15:o=!1,t.next=10;break;case 18:t.next=24;break;case 20:t.prev=20,t.t0=t.catch(8),u=!0,s=t.t0;case 24:if(t.prev=24,t.prev=25,!o||null==c.return){t.next=29;break}return t.next=29,c.return();case 29:if(t.prev=29,!u){t.next=32;break}throw s;case 32:return t.finish(29);case 33:return t.finish(24);case 34:return t.abrupt("return",new n(i,a));case 35:return t.abrupt("return",n.empty());case 36:case"end":return t.stop()}}),t,null,[[8,20,24,34],[25,,29,33]])})));return function(e){return t.apply(this,arguments)}}()(r.open())}},{key:"fromAsync",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,n.from(e);case 2:return t.abrupt("return",t.sent);case 3:case"end":return t.stop()}}),t)})));return function(e){return t.apply(this,arguments)}}()},{key:"fromStruct",value:function(t){return n.new(t.data.childData,t.type.children)}},{key:"new",value:function(){for(var t=arguments.length,e=new Array(t),r=0;r1&&void 0!==arguments[1]?arguments[1]:this._children;return new n(this._schema,t,e)}},{key:"concat",value:function(){for(var t=arguments.length,e=new Array(t),r=0;r-1}))))}},{key:"selectAt",value:function(){for(var t,e=this,r=arguments.length,i=new Array(r),a=0;a0&&this.dictionaries.set(e.id,n),this}}],[{key:"collect",value:function(t){return(new n).visit(t.data,new ir(t.schema.fields)).dictionaries}}]),n}(bn),Vc=R.mark(Zc),jc=function(t,e,n){ot(i,t);var r=yt(i);function i(t){var e;return 
F(this,i),(e=r.call(this))._impl=t,e}return E(i,[{key:"closed",get:function(){return this._impl.closed}},{key:"schema",get:function(){return this._impl.schema}},{key:"autoDestroy",get:function(){return this._impl.autoDestroy}},{key:"dictionaries",get:function(){return this._impl.dictionaries}},{key:"numDictionaries",get:function(){return this._impl.numDictionaries}},{key:"numRecordBatches",get:function(){return this._impl.numRecordBatches}},{key:"footer",get:function(){return this._impl.isFile()?this._impl.footer:null}},{key:"isSync",value:function(){return this._impl.isSync()}},{key:"isAsync",value:function(){return this._impl.isAsync()}},{key:"isFile",value:function(){return this._impl.isFile()}},{key:"isStream",value:function(){return this._impl.isStream()}},{key:"next",value:function(){return this._impl.next()}},{key:"throw",value:function(t){return this._impl.throw(t)}},{key:"return",value:function(t){return this._impl.return(t)}},{key:"cancel",value:function(){return this._impl.cancel()}},{key:"reset",value:function(t){return this._impl.reset(t),this._DOMStream=void 0,this._nodeStream=void 0,this}},{key:"open",value:function(t){var e=this,n=this._impl.open(t);return Ot(n)?n.then((function(){return e})):this}},{key:"readRecordBatch",value:function(t){return this._impl.isFile()?this._impl.readRecordBatch(t):null}},{key:e,value:function(){return this._impl[Symbol.iterator]()}},{key:n,value:function(){return this._impl[Symbol.asyncIterator]()}},{key:"toDOMStream",value:function(){var t=this;return Be.toDOMStream(this.isSync()?Ve({},Symbol.iterator,(function(){return t})):Ve({},Symbol.asyncIterator,(function(){return t})))}},{key:"toNodeStream",value:function(){var t=this;return Be.toNodeStream(this.isSync()?Ve({},Symbol.iterator,(function(){return t})):Ve({},Symbol.asyncIterator,(function(){return t})),{objectMode:!0})}}],[{key:"throughNode",value:function(t){throw new Error('"throughNode" not available in this environment')}},{key:"throughDOM",value:function(t,e){throw new Error('"throughDOM" not available in this environment')}},{key:"from",value:function(t){return t instanceof i?t:Ft(t)?function(t){return new Rc(new qc(t))}(t):Et(t)?function(t){return ef.apply(this,arguments)}(t):Ot(t)?L(R.mark((function e(){return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.t0=i,e.next=3,t;case 3:return e.t1=e.sent,e.next=6,e.t0.from.call(e.t0,e.t1);case 6:return e.abrupt("return",e.sent);case 7:case"end":return e.stop()}}),e)})))():Ut(t)||Ct(t)||jt(t)||Lt(t)?function(t){return tf.apply(this,arguments)}(new Qa(t)):function(t){var e=t.peek(Tu+7&-8);return e&&e.byteLength>=4?Au(e)?new zc(new Kc(t.read())):new Rc(new Hc(t)):new Rc(new Hc(R.mark((function t(){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:case"end":return t.stop()}}),t)}))()))}(new Za(t))}},{key:"readAll",value:function(t){return t instanceof i?t.isSync()?Zc(t):Qc(t):Ft(t)||ArrayBuffer.isView(t)||Dt(t)||Mt(t)?Zc(t):Qc(t)}}]),i}(vt,Symbol.iterator,Symbol.asyncIterator),Rc=function(t,e,n){ot(i,t);var r=yt(i);function i(t){var e;return F(this,i),(e=r.call(this,t))._impl=t,e}return E(i,[{key:e,value:function(){return this._impl[Symbol.iterator]()}},{key:n,value:function(){var t=this;return j(R.mark((function e(){return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.delegateYield(Y(P(t[Symbol.iterator]()),C),"t0",1);case 1:case"end":return e.stop()}}),e)})))()}}]),i}(jc,Symbol.iterator,Symbol.asyncIterator),Pc=function(t,e,n){ot(i,t);var r=yt(i);function i(t){var e;return 
F(this,i),(e=r.call(this,t))._impl=t,e}return E(i,[{key:e,value:function(){throw new Error("AsyncRecordBatchStreamReader is not Iterable")}},{key:n,value:function(){return this._impl[Symbol.asyncIterator]()}}]),i}(jc,Symbol.iterator,Symbol.asyncIterator),zc=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._impl=t,r}return E(n)}(Rc),Yc=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._impl=t,r}return E(n)}(Pc),Wc=function(){function t(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:new Map;F(this,t),this.closed=!1,this.autoDestroy=!0,this._dictionaryIndex=0,this._recordBatchIndex=0,this.dictionaries=e}return E(t,[{key:"numDictionaries",get:function(){return this._dictionaryIndex}},{key:"numRecordBatches",get:function(){return this._recordBatchIndex}},{key:"isSync",value:function(){return!1}},{key:"isAsync",value:function(){return!1}},{key:"isFile",value:function(){return!1}},{key:"isStream",value:function(){return!1}},{key:"reset",value:function(t){return this._dictionaryIndex=0,this._recordBatchIndex=0,this.schema=t,this.dictionaries=new Map,this}},{key:"_loadRecordBatch",value:function(t,e){return new Uc(this.schema,t.length,this._loadVectors(t,e,this.schema.fields))}},{key:"_loadDictionaryBatch",value:function(t,e){var n=t.id,r=t.isDelta,i=t.data,a=this.dictionaries,o=this.schema,u=a.get(n);if(r||!u){var s=o.dictionaries.get(n);return u&&r?u.concat(qe.new(this._loadVectors(i,e,[s])[0])):qe.new(this._loadVectors(i,e,[s])[0])}return u}},{key:"_loadVectors",value:function(t,e,n){return new co(e,t.nodes,t.buffers,this.dictionaries).visitMany(n)}}]),t}(),Hc=function(t,e){ot(r,t);var n=yt(r);function r(t,e){var i;return F(this,r),(i=n.call(this,e))._reader=Ft(t)?new wu(i._handle=t):new mu(i._handle=t),i}return E(r,[{key:"isSync",value:function(){return!0}},{key:"isStream",value:function(){return!0}},{key:e,value:function(){return this}},{key:"cancel",value:function(){!this.closed&&(this.closed=!0)&&(this.reset()._reader.return(),this._reader=null,this.dictionaries=null)}},{key:"open",value:function(t){return this.closed||(this.autoDestroy=Jc(this,t),this.schema||(this.schema=this._reader.readSchema())||this.cancel()),this}},{key:"throw",value:function(t){return!this.closed&&this.autoDestroy&&(this.closed=!0)?this.reset()._reader.throw(t):pt}},{key:"return",value:function(t){return!this.closed&&this.autoDestroy&&(this.closed=!0)?this.reset()._reader.return(t):pt}},{key:"next",value:function(){if(this.closed)return pt;for(var t,e=this._reader;t=this._readNextMessageAndValidate();)if(t.isSchema())this.reset(t.header());else{if(t.isRecordBatch()){this._recordBatchIndex++;var n=t.header(),r=e.readMessageBody(t.bodyLength);return{done:!1,value:this._loadRecordBatch(n,r)}}if(t.isDictionaryBatch()){this._dictionaryIndex++;var i=t.header(),a=e.readMessageBody(t.bodyLength),o=this._loadDictionaryBatch(i,a);this.dictionaries.set(i.id,o)}}return this.schema&&0===this._recordBatchIndex?(this._recordBatchIndex++,{done:!1,value:new Nc(this.schema)}):this.return()}},{key:"_readNextMessageAndValidate",value:function(t){return this._reader.readMessage(t)}}]),r}(Wc,Symbol.iterator),$c=function(t,e){ot(r,t);var n=yt(r);function r(t,e){var i;return F(this,r),(i=n.call(this,e))._reader=new ku(i._handle=t),i}return E(r,[{key:"isAsync",value:function(){return!0}},{key:"isStream",value:function(){return!0}},{key:e,value:function(){return this}},{key:"cancel",value:function(){var t=L(R.mark((function t(){return 
R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed||!(this.closed=!0)){t.next=5;break}return t.next=3,this.reset()._reader.return();case 3:this._reader=null,this.dictionaries=null;case 5:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"open",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed){t.next=10;break}if(this.autoDestroy=Jc(this,e),t.t0=this.schema,t.t0){t.next=7;break}return t.next=6,this._reader.readSchema();case 6:t.t0=this.schema=t.sent;case 7:if(t.t0){t.next=10;break}return t.next=10,this.cancel();case 10:return t.abrupt("return",this);case 11:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"throw",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed||!this.autoDestroy||!(this.closed=!0)){t.next=4;break}return t.next=3,this.reset()._reader.throw(e);case 3:return t.abrupt("return",t.sent);case 4:return t.abrupt("return",pt);case 5:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"return",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed||!this.autoDestroy||!(this.closed=!0)){t.next=4;break}return t.next=3,this.reset()._reader.return(e);case 3:return t.abrupt("return",t.sent);case 4:return t.abrupt("return",pt);case 5:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"next",value:function(){var t=L(R.mark((function t(){var e,n,r,i,a,o,u,s;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(!this.closed){t.next=2;break}return t.abrupt("return",pt);case 2:n=this._reader;case 3:return t.next=5,this._readNextMessageAndValidate();case 5:if(!(e=t.sent)){t.next=31;break}if(!e.isSchema()){t.next=11;break}return t.next=9,this.reset(e.header());case 9:t.next=29;break;case 11:if(!e.isRecordBatch()){t.next=21;break}return this._recordBatchIndex++,r=e.header(),t.next=16,n.readMessageBody(e.bodyLength);case 16:return i=t.sent,a=this._loadRecordBatch(r,i),t.abrupt("return",{done:!1,value:a});case 21:if(!e.isDictionaryBatch()){t.next=29;break}return this._dictionaryIndex++,o=e.header(),t.next=26,n.readMessageBody(e.bodyLength);case 26:u=t.sent,s=this._loadDictionaryBatch(o,u),this.dictionaries.set(o.id,s);case 29:t.next=3;break;case 31:if(!this.schema||0!==this._recordBatchIndex){t.next=34;break}return this._recordBatchIndex++,t.abrupt("return",{done:!1,value:new Nc(this.schema)});case 34:return t.next=36,this.return();case 36:return t.abrupt("return",t.sent);case 37:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"_readNextMessageAndValidate",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this._reader.readMessage(e);case 2:return t.abrupt("return",t.sent);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()}]),r}(Wc,Symbol.asyncIterator),Kc=function(t){ot(n,t);var e=yt(n);function n(t,r){return F(this,n),e.call(this,t instanceof eo?t:new eo(t),r)}return E(n,[{key:"footer",get:function(){return this._footer}},{key:"numDictionaries",get:function(){return this._footer?this._footer.numDictionaries:0}},{key:"numRecordBatches",get:function(){return 
this._footer?this._footer.numRecordBatches:0}},{key:"isSync",value:function(){return!0}},{key:"isFile",value:function(){return!0}},{key:"open",value:function(t){if(!this.closed&&!this._footer){this.schema=(this._footer=this._readFooter()).schema;var e,r=O(this._footer.dictionaryBatches());try{for(r.s();!(e=r.n()).done;){e.value&&this._readDictionaryBatch(this._dictionaryIndex++)}}catch(i){r.e(i)}finally{r.f()}}return ze(ut(n.prototype),"open",this).call(this,t)}},{key:"readRecordBatch",value:function(t){if(this.closed)return null;this._footer||this.open();var e=this._footer&&this._footer.getRecordBatch(t);if(e&&this._handle.seek(e.offset)){var n=this._reader.readMessage(rn.RecordBatch);if(n&&n.isRecordBatch()){var r=n.header(),i=this._reader.readMessageBody(n.bodyLength);return this._loadRecordBatch(r,i)}}return null}},{key:"_readDictionaryBatch",value:function(t){var e=this._footer&&this._footer.getDictionaryBatch(t);if(e&&this._handle.seek(e.offset)){var n=this._reader.readMessage(rn.DictionaryBatch);if(n&&n.isDictionaryBatch()){var r=n.header(),i=this._reader.readMessageBody(n.bodyLength),a=this._loadDictionaryBatch(r,i);this.dictionaries.set(r.id,a)}}}},{key:"_readFooter",value:function(){var t=this._handle,e=t.size-Bu,n=t.readInt32(e),r=t.readAt(e-n,n);return Ka.decode(r)}},{key:"_readNextMessageAndValidate",value:function(t){if(this._footer||this.open(),this._footer&&this._recordBatchIndex1?r-1:0),a=1;a=4)){t.next=18;break}if(Au(n)){t.next=8;break}t.t1=new Pc(new $c(e)),t.next=15;break;case 8:return t.t2=zc,t.t3=Kc,t.next=12,e.read();case 12:t.t4=t.sent,t.t5=new t.t3(t.t4),t.t1=new t.t2(t.t5);case 15:t.t0=t.t1,t.next=19;break;case 18:t.t0=new Pc(new $c(j(R.mark((function t(){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:case"end":return t.stop()}}),t)})))()));case 19:return t.abrupt("return",t.t0);case 20:case"end":return t.stop()}}),t)})))).apply(this,arguments)}function ef(){return(ef=L(R.mark((function t(e){var n,r,i;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,e.stat();case 2:if(n=t.sent,r=n.size,i=new no(e,r),!(r>=Ou)){t.next=12;break}return t.t0=Au,t.next=9,i.readAt(0,Tu+7&-8);case 9:if(t.t1=t.sent,!(0,t.t0)(t.t1)){t.next=12;break}return t.abrupt("return",new Yc(new Gc(i)));case 12:return t.abrupt("return",new Pc(new $c(i)));case 13:case"end":return t.stop()}}),t)})))).apply(this,arguments)}var nf=["readableStrategy","writableStrategy","queueingStrategy"];var rf=function(){function t(e){var n,r,i=this;F(this,t),this._numChunks=0,this._finished=!1,this._bufferedSize=0;var a=e.readableStrategy,o=e.writableStrategy,u=e.queueingStrategy,s=void 0===u?"count":u,c=Gu(e,nf);this._controller=null,this._builder=Ir.new(c),this._getSize="bytes"!==s?af:of;var f=Re({},a).highWaterMark,l=void 0===f?"bytes"===s?Math.pow(2,14):1e3:f,h=Re({},o).highWaterMark,y=void 0===h?"bytes"===s?Math.pow(2,14):1e3:h;this.readable=new ReadableStream((Ve(n={},"cancel",(function(){i._builder.clear()})),Ve(n,"pull",(function(t){i._maybeFlush(i._builder,i._controller=t)})),Ve(n,"start",(function(t){i._maybeFlush(i._builder,i._controller=t)})),n),{highWaterMark:l,size:"bytes"!==s?af:of}),this.writable=new WritableStream((Ve(r={},"abort",(function(){i._builder.clear()})),Ve(r,"write",(function(){i._maybeFlush(i._builder,i._controller)})),Ve(r,"close",(function(){i._maybeFlush(i._builder.finish(),i._controller)})),r),{highWaterMark:y,size:function(t){return i._writeValueAndReturnChunkSize(t)}})}return E(t,[{key:"_writeValueAndReturnChunkSize",value:function(t){var 
e=this._bufferedSize;return this._bufferedSize=this._getSize(this._builder.append(t)),this._bufferedSize-e}},{key:"_maybeFlush",value:function(t,e){null!==e&&(this._bufferedSize>=e.desiredSize&&++this._numChunks&&this._enqueue(e,t.toVector()),t.finished&&((t.length>0||0===this._numChunks)&&++this._numChunks&&this._enqueue(e,t.toVector()),!this._finished&&(this._finished=!0)&&this._enqueue(e,null)))}},{key:"_enqueue",value:function(t,e){this._bufferedSize=0,this._controller=null,null===e?t.close():t.enqueue(e)}}]),t}(),af=function(t){return t.length},of=function(t){return t.byteLength};var uf=function(){function t(){F(this,t)}return E(t,[{key:"eq",value:function(e){return e instanceof t||(e=new sf(e)),new df(this,e)}},{key:"le",value:function(e){return e instanceof t||(e=new sf(e)),new vf(this,e)}},{key:"ge",value:function(e){return e instanceof t||(e=new sf(e)),new bf(this,e)}},{key:"lt",value:function(t){return new gf(this.ge(t))}},{key:"gt",value:function(t){return new gf(this.le(t))}},{key:"ne",value:function(t){return new gf(this.eq(t))}}]),t}(),sf=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).v=t,r}return E(n)}(uf),cf=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).name=t,r}return E(n,[{key:"bind",value:function(t){if(!this.colidx){this.colidx=-1;for(var e=t.schema.fields,n=-1;++n=n.v;return function(){return r}}},{key:"_bindColCol",value:function(t,e,n){var r=e.bind(t),i=n.bind(t);return function(t,e){return r(t,e)>=i(t,e)}}},{key:"_bindColLit",value:function(t,e,n){var r=e.bind(t);return function(t,e){return r(t,e)>=n.v}}},{key:"_bindLitCol",value:function(t,e,n){var r=n.bind(t);return function(t,n){return e.v>=r(t,n)}}}]),n}(lf),gf=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).child=t,r}return E(n,[{key:"bind",value:function(t){var e=this.child.bind(t);return function(t,n){return!e(t,n)}}}]),n}(ff);Ec.prototype.countBy=function(t){return new mf(this.chunks).countBy(t)},Ec.prototype.scan=function(t,e){return new mf(this.chunks).scan(t,e)},Ec.prototype.scanReverse=function(t,e){return new mf(this.chunks).scanReverse(t,e)},Ec.prototype.filter=function(t){return new mf(this.chunks).filter(t)};var mf=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"filter",value:function(t){return new wf(this.chunks,t)}},{key:"scan",value:function(t,e){for(var n=this.chunks,r=n.length,i=-1;++i=0;){var i=n[r];e&&e(i);for(var a=i.length;--a>=0;)t(a,i)}}},{key:"countBy",value:function(t){var e=this.chunks,n=e.length,r="string"===typeof t?new cf(t):t;r.bind(e[n-1]);var i=r.vector;if(!Fn.isDictionary(i.type))throw new Error("countBy currently only supports dictionary-encoded columns");for(var a=Math.ceil(Math.log(i.length)/Math.log(256)),o=new(4==a?Uint32Array:a>=2?Uint16Array:Uint8Array)(i.dictionary.length),u=-1;++u=0;)for(var i=n[r],a=this._predicate.bind(i),o=!1,u=i.length;--u>=0;)a(u,i)&&(e&&!o&&(e(i),o=!0),t(u,i))}},{key:"count",value:function(){for(var t=0,e=this._chunks,n=e.length,r=-1;++r=2?Uint16Array:Uint8Array)(i.dictionary.length),u=-1;++u=i.headerRows&&e=i.headerColumns;if(n){var o=["blank"];return e>0&&o.push("level"+t),{type:"blank",classNames:o.join(" "),content:""}}if(a)return{type:"columns",classNames:(o=["col_heading","level"+t,"col"+(s=e-i.headerColumns)]).join(" 
"),content:i.getContent(i.columnsTable,s,t)};if(r){o=["row_heading","level"+e,"row"+(u=t-i.headerRows)];return{type:"index",id:"T_"+i.uuid+"level"+e+"_row"+u,classNames:o.join(" "),content:i.getContent(i.indexTable,u,e)}}o=["data","row"+(u=t-i.headerRows),"col"+(s=e-i.headerColumns)];var u,s,c=i.styler?i.getContent(i.styler.displayValuesTable,u,s):i.getContent(i.dataTable,u,s);return{type:"data",id:"T_"+i.uuid+"row"+u+"_col"+s,classNames:o.join(" "),content:c}},this.getContent=function(t,e,n){var r=t.getColumnAt(n);return null===r?"":i.getColumnTypeId(t,n)===Je.Timestamp?i.nanosToDate(r.get(e)):r.get(e)},this.dataTable=Ec.from(t),this.indexTable=Ec.from(e),this.columnsTable=Ec.from(n),this.styler=r?{caption:r.caption,displayValuesTable:Ec.from(r.displayValues),styles:r.styles,uuid:r.uuid}:void 0}return Object.defineProperty(t.prototype,"rows",{get:function(){return this.indexTable.length+this.columnsTable.numCols},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"columns",{get:function(){return this.indexTable.numCols+this.columnsTable.length},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"headerRows",{get:function(){return this.rows-this.dataRows},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"headerColumns",{get:function(){return this.columns-this.dataColumns},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"dataRows",{get:function(){return this.dataTable.length},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"dataColumns",{get:function(){return this.dataTable.numCols},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"uuid",{get:function(){return this.styler&&this.styler.uuid},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"caption",{get:function(){return this.styler&&this.styler.caption},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"styles",{get:function(){return this.styler&&this.styler.styles},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"table",{get:function(){return this.dataTable},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"index",{get:function(){return this.indexTable},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"columnTable",{get:function(){return this.columnsTable},enumerable:!0,configurable:!0}),t.prototype.serialize=function(){return{data:this.dataTable.serialize(),index:this.indexTable.serialize(),columns:this.columnsTable.serialize()}},t.prototype.getColumnTypeId=function(t,e){return t.schema.fields[e].type.typeId},t.prototype.nanosToDate=function(t){return new Date(t/1e6)},t}(),Sf=function(){return Sf=Object.assign||function(t){for(var e,n=1,r=arguments.length;n0?t.argsDataframeToObject(e.dfs):{};n=Sf(Sf({},n),r);var i=Boolean(e.disabled),a=e.theme;a&&Af(a);var o={disabled:i,args:n,theme:a},u=new CustomEvent(t.RENDER_EVENT,{detail:o});t.events.dispatchEvent(u)},t.argsDataframeToObject=function(e){var n=e.map((function(e){var n=e.key,r=e.value;return[n,t.toArrowTable(r)]}));return Object.fromEntries(n)},t.toArrowTable=function(t){var e=t.data,n=e.data,r=e.index,i=e.columns,a=e.styler;return new If(n,r,i,a)},t.sendBackMsg=function(t,e){window.parent.postMessage(Sf({isStreamlitMessage:!0,type:t},e),"*")},t}(),Af=function(t){var e=document.createElement("style");document.head.appendChild(e),e.innerHTML="\n :root {\n --primary-color: "+t.primaryColor+";\n --background-color: "+t.backgroundColor+";\n --secondary-background-color: "+t.secondaryBackgroundColor+";\n 
--text-color: "+t.textColor+";\n --font: "+t.font+";\n }\n\n body {\n background-color: var(--background-color);\n color: var(--text-color);\n }\n "};var Tf=function(){var t=function(e,n){return t=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])},t(e,n)};return function(e,n){function r(){this.constructor=e}t(e,n),e.prototype=null===n?Object.create(n):(r.prototype=n.prototype,new r)}}();!function(t){function e(){return null!==t&&t.apply(this,arguments)||this}Tf(e,t),e.prototype.componentDidMount=function(){xf.setFrameHeight()},e.prototype.componentDidUpdate=function(){xf.setFrameHeight()}}(f.a.PureComponent)},function(t,e,n){"use strict";var r=n(6),i={childContextTypes:!0,contextType:!0,contextTypes:!0,defaultProps:!0,displayName:!0,getDefaultProps:!0,getDerivedStateFromError:!0,getDerivedStateFromProps:!0,mixins:!0,propTypes:!0,type:!0},a={name:!0,length:!0,prototype:!0,caller:!0,callee:!0,arguments:!0,arity:!0},o={$$typeof:!0,compare:!0,defaultProps:!0,displayName:!0,propTypes:!0,type:!0},u={};function s(t){return r.isMemo(t)?o:u[t.$$typeof]||i}u[r.ForwardRef]={$$typeof:!0,render:!0,defaultProps:!0,displayName:!0,propTypes:!0},u[r.Memo]=o;var c=Object.defineProperty,f=Object.getOwnPropertyNames,l=Object.getOwnPropertySymbols,h=Object.getOwnPropertyDescriptor,y=Object.getPrototypeOf,p=Object.prototype;t.exports=function t(e,n,r){if("string"!==typeof n){if(p){var i=y(n);i&&i!==p&&t(e,i,r)}var o=f(n);l&&(o=o.concat(l(n)));for(var u=s(e),d=s(n),v=0;vD.length&&D.push(t)}function M(t,e,n,r){var i=typeof t;"undefined"!==i&&"boolean"!==i||(t=null);var u=!1;if(null===t)u=!0;else switch(i){case"string":case"number":u=!0;break;case"object":switch(t.$$typeof){case a:case o:u=!0}}if(u)return n(r,t,""===e?"."+U(t,0):e),1;if(u=0,e=""===e?".":e+":",Array.isArray(t))for(var s=0;s=0;--a){var o=this.tryEntries[a],u=o.completion;if("root"===o.tryLoc)return i("end");if(o.tryLoc<=this.prev){var s=r.call(o,"catchLoc"),c=r.call(o,"finallyLoc");if(s&&c){if(this.prev=0;--n){var i=this.tryEntries[n];if(i.tryLoc<=this.prev&&r.call(i,"finallyLoc")&&this.prev=0;--e){var n=this.tryEntries[e];if(n.finallyLoc===t)return this.complete(n.completion,n.afterLoc),T(n),d}},catch:function(t){for(var e=this.tryEntries.length-1;e>=0;--e){var n=this.tryEntries[e];if(n.tryLoc===t){var r=n.completion;if("throw"===r.type){var i=r.arg;T(n)}return i}}throw new Error("illegal catch attempt")},delegateYield:function(t,n,r){return this.delegate={iterator:O(t),resultName:n,nextLoc:r},"next"===this.method&&(this.arg=e),d}},t}(t.exports);try{regeneratorRuntime=r}catch(i){"object"===typeof globalThis?globalThis.regeneratorRuntime=r:Function("r","regeneratorRuntime = r")(r)}}]]); -//# sourceMappingURL=2.422ca0c4.chunk.js.map \ No newline at end of file diff --git a/spaces/abcde1234www/ChatGPT-prompt-generator/app.py b/spaces/abcde1234www/ChatGPT-prompt-generator/app.py deleted file mode 100644 index 5da2e5088053267553b6f5af9760a0a7d58c2a1f..0000000000000000000000000000000000000000 --- a/spaces/abcde1234www/ChatGPT-prompt-generator/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompts-bart-long") -model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompts-bart-long", from_tf=True) - -def generate(prompt): - - batch = tokenizer(prompt, return_tensors="pt") - generated_ids = 
model.generate(batch["input_ids"], max_new_tokens=150) - output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer") -output_component = gr.Textbox(label = "Prompt") -examples = [["photographer"], ["developer"]] -description = "This app generates ChatGPT prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). 📓 Simply enter a persona that you want the prompt to be generated based on. 🧙🏻🧑🏻‍🚀🧑🏻‍🎨🧑🏻‍🔬🧑🏻‍💻🧑🏼‍🏫🧑🏽‍🌾" -gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "👨🏻‍🎤 ChatGPT Prompt Generator 👨🏻‍🎤", description=description).launch() diff --git a/spaces/abhishek/first-order-motion-model/reconstruction.py b/spaces/abhishek/first-order-motion-model/reconstruction.py deleted file mode 100644 index cb211df02d502352e227982bfd40c16e2676af6e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/first-order-motion-model/reconstruction.py +++ /dev/null @@ -1,67 +0,0 @@ -import os -from tqdm import tqdm -import torch -from torch.utils.data import DataLoader -from logger import Logger, Visualizer -import numpy as np -import imageio -from sync_batchnorm import DataParallelWithCallback - - -def reconstruction(config, generator, kp_detector, checkpoint, log_dir, dataset): - png_dir = os.path.join(log_dir, 'reconstruction/png') - log_dir = os.path.join(log_dir, 'reconstruction') - - if checkpoint is not None: - Logger.load_cpk(checkpoint, generator=generator, kp_detector=kp_detector) - else: - raise AttributeError("Checkpoint should be specified for mode='reconstruction'.") - dataloader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=1) - - if not os.path.exists(log_dir): - os.makedirs(log_dir) - - if not os.path.exists(png_dir): - os.makedirs(png_dir) - - loss_list = [] - if torch.cuda.is_available(): - generator = DataParallelWithCallback(generator) - kp_detector = DataParallelWithCallback(kp_detector) - - generator.eval() - kp_detector.eval() - - for it, x in tqdm(enumerate(dataloader)): - if config['reconstruction_params']['num_videos'] is not None: - if it > config['reconstruction_params']['num_videos']: - break - with torch.no_grad(): - predictions = [] - visualizations = [] - if torch.cuda.is_available(): - x['video'] = x['video'].cuda() - kp_source = kp_detector(x['video'][:, :, 0]) - for frame_idx in range(x['video'].shape[2]): - source = x['video'][:, :, 0] - driving = x['video'][:, :, frame_idx] - kp_driving = kp_detector(driving) - out = generator(source, kp_source=kp_source, kp_driving=kp_driving) - out['kp_source'] = kp_source - out['kp_driving'] = kp_driving - del out['sparse_deformed'] - predictions.append(np.transpose(out['prediction'].data.cpu().numpy(), [0, 2, 3, 1])[0]) - - visualization = Visualizer(**config['visualizer_params']).visualize(source=source, - driving=driving, out=out) - visualizations.append(visualization) - - loss_list.append(torch.abs(out['prediction'] - driving).mean().cpu().numpy()) - - predictions = np.concatenate(predictions, axis=1) - imageio.imsave(os.path.join(png_dir, x['name'][0] + '.png'), (255 * predictions).astype(np.uint8)) - - image_name = x['name'][0] + config['reconstruction_params']['format'] - imageio.mimsave(os.path.join(log_dir, image_name), visualizations) - - print("Reconstruction loss: %s" % np.mean(loss_list)) diff --git 
a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/utils/logger.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/utils/logger.py deleted file mode 100644 index d0e8a77c3991b55463f0d18dbfda14cef325b1b0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/utils/logger.py +++ /dev/null @@ -1,19 +0,0 @@ -import logging - -from annotator.uniformer.mmcv.utils import get_logger - - -def get_root_logger(log_file=None, log_level=logging.INFO): - """Get root logger. - - Args: - log_file (str, optional): File path of log. Defaults to None. - log_level (int, optional): The level of logger. - Defaults to logging.INFO. - - Returns: - :obj:`logging.Logger`: The obtained logger - """ - logger = get_logger(name='mmdet', log_file=log_file, log_level=log_level) - - return logger diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/options/option_vq.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/options/option_vq.py deleted file mode 100644 index 08a53ff1270facc10ab44ec0647e673ed1336d0d..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/options/option_vq.py +++ /dev/null @@ -1,61 +0,0 @@ -import argparse - -def get_args_parser(): - parser = argparse.ArgumentParser(description='Optimal Transport AutoEncoder training for AIST', - add_help=True, - formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - ## dataloader - parser.add_argument('--dataname', type=str, default='kit', help='dataset directory') - parser.add_argument('--batch-size', default=128, type=int, help='batch size') - parser.add_argument('--window-size', type=int, default=64, help='training motion length') - - ## optimization - parser.add_argument('--total-iter', default=200000, type=int, help='number of total iterations to run') - parser.add_argument('--warm-up-iter', default=1000, type=int, help='number of total iterations for warmup') - parser.add_argument('--lr', default=2e-4, type=float, help='max learning rate') - parser.add_argument('--lr-scheduler', default=[50000, 400000], nargs="+", type=int, help="learning rate schedule (iterations)") - parser.add_argument('--gamma', default=0.05, type=float, help="learning rate decay") - - parser.add_argument('--weight-decay', default=0.0, type=float, help='weight decay') - parser.add_argument("--commit", type=float, default=0.02, help="hyper-parameter for the commitment loss") - parser.add_argument('--loss-vel', type=float, default=0.1, help='hyper-parameter for the velocity loss') - parser.add_argument('--recons-loss', type=str, default='l2', help='reconstruction loss') - - ## vqvae arch - parser.add_argument("--code-dim", type=int, default=512, help="embedding dimension") - parser.add_argument("--nb-code", type=int, default=512, help="nb of embedding") - parser.add_argument("--mu", type=float, default=0.99, help="exponential moving average to update the codebook") - parser.add_argument("--down-t", type=int, default=2, help="downsampling rate") - parser.add_argument("--stride-t", type=int, default=2, help="stride size") - parser.add_argument("--width", type=int, default=512, help="width of the network") - parser.add_argument("--depth", type=int, default=3, help="depth of the network") - parser.add_argument("--dilation-growth-rate", type=int, default=3, help="dilation growth rate") - parser.add_argument("--output-emb-width", type=int, default=512, help="output embedding width") - parser.add_argument('--vq-act', type=str, default='relu', choices = 
['relu', 'silu', 'gelu'], help='dataset directory') - parser.add_argument('--vq-norm', type=str, default=None, help='dataset directory') - - ## quantizer - parser.add_argument("--quantizer", type=str, default='ema_reset', choices = ['ema', 'orig', 'ema_reset', 'reset'], help="eps for optimal transport") - parser.add_argument('--beta', type=float, default=1.0, help='commitment loss in standard VQ') - - ## resume - parser.add_argument("--resume-pth", type=str, default=None, help='resume pth for VQ') - parser.add_argument("--resume-gpt", type=str, default=None, help='resume pth for GPT') - - - ## output directory - parser.add_argument('--out-dir', type=str, default='output_vqfinal/', help='output directory') - parser.add_argument('--results-dir', type=str, default='visual_results/', help='output directory') - parser.add_argument('--visual-name', type=str, default='baseline', help='output directory') - parser.add_argument('--exp-name', type=str, default='exp_debug', help='name of the experiment, will create a file inside out-dir') - ## other - parser.add_argument('--print-iter', default=200, type=int, help='print frequency') - parser.add_argument('--eval-iter', default=1000, type=int, help='evaluation frequency') - parser.add_argument('--seed', default=123, type=int, help='seed for initializing training.') - - parser.add_argument('--vis-gt', action='store_true', help='whether visualize GT motions') - parser.add_argument('--nb-vis', default=20, type=int, help='nb of visualizations') - - - return parser.parse_args() \ No newline at end of file diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh deleted file mode 100644 index 85f8fb7abae3629921e5711db2cbd212dc4fa933..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/csmsc/voc1/local/data_download.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -download_dir=$1 - -# check arguments -if [ $# != 1 ]; then - echo "Usage: $0 " - exit 1 -fi - -set -euo pipefail - -# download dataset -cwd=$(pwd) -if [ ! -e "${download_dir}/CSMSC" ]; then - mkdir -p "${download_dir}" - cd "${download_dir}" - wget https://weixinxcxdb.oss-cn-beijing.aliyuncs.com/gwYinPinKu/BZNSYP.rar - mkdir CSMSC && cd CSMSC && unrar x ../BZNSYP.rar - # convert new line code - find ./PhoneLabeling -name "*.interval" | while read -r line; do - nkf -Lu --overwrite "${line}" - done - rm ../BZNSYP.rar - cd "${cwd}" - echo "Successfully finished download." -else - echo "Already exists. Skip download." -fi diff --git a/spaces/akhaliq/stylegan3_clip/viz/capture_widget.py b/spaces/akhaliq/stylegan3_clip/viz/capture_widget.py deleted file mode 100644 index 311ae885cf2413f37f716d2c1b71cd8bec1ca889..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/viz/capture_widget.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -import os -import re -import numpy as np -import imgui -import PIL.Image -from gui_utils import imgui_utils -from . import renderer - -#---------------------------------------------------------------------------- - -class CaptureWidget: - def __init__(self, viz): - self.viz = viz - self.path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '_screenshots')) - self.dump_image = False - self.dump_gui = False - self.defer_frames = 0 - self.disabled_time = 0 - - def dump_png(self, image): - viz = self.viz - try: - _height, _width, channels = image.shape - assert channels in [1, 3] - assert image.dtype == np.uint8 - os.makedirs(self.path, exist_ok=True) - file_id = 0 - for entry in os.scandir(self.path): - if entry.is_file(): - match = re.fullmatch(r'(\d+).*', entry.name) - if match: - file_id = max(file_id, int(match.group(1)) + 1) - if channels == 1: - pil_image = PIL.Image.fromarray(image[:, :, 0], 'L') - else: - pil_image = PIL.Image.fromarray(image, 'RGB') - pil_image.save(os.path.join(self.path, f'{file_id:05d}.png')) - except: - viz.result.error = renderer.CapturedException() - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - if show: - with imgui_utils.grayed_out(self.disabled_time != 0): - imgui.text('Capture') - imgui.same_line(viz.label_w) - _changed, self.path = imgui_utils.input_text('##path', self.path, 1024, - flags=(imgui.INPUT_TEXT_AUTO_SELECT_ALL | imgui.INPUT_TEXT_ENTER_RETURNS_TRUE), - width=(-1 - viz.button_w * 2 - viz.spacing * 2), - help_text='PATH') - if imgui.is_item_hovered() and not imgui.is_item_active() and self.path != '': - imgui.set_tooltip(self.path) - imgui.same_line() - if imgui_utils.button('Save image', width=viz.button_w, enabled=(self.disabled_time == 0 and 'image' in viz.result)): - self.dump_image = True - self.defer_frames = 2 - self.disabled_time = 0.5 - imgui.same_line() - if imgui_utils.button('Save GUI', width=-1, enabled=(self.disabled_time == 0)): - self.dump_gui = True - self.defer_frames = 2 - self.disabled_time = 0.5 - - self.disabled_time = max(self.disabled_time - viz.frame_delta, 0) - if self.defer_frames > 0: - self.defer_frames -= 1 - elif self.dump_image: - if 'image' in viz.result: - self.dump_png(viz.result.image) - self.dump_image = False - elif self.dump_gui: - viz.capture_next_frame() - self.dump_gui = False - captured_frame = viz.pop_captured_frame() - if captured_frame is not None: - self.dump_png(captured_frame) - -#---------------------------------------------------------------------------- diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/cookies.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/cookies.py deleted file mode 100644 index 56fccd9c2570d2a31365ed11278fd1b6ecc2aa54..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/requests/cookies.py +++ /dev/null @@ -1,549 +0,0 @@ -# -*- coding: utf-8 -*- - -""" -requests.cookies -~~~~~~~~~~~~~~~~ - -Compatibility code to be able to use `cookielib.CookieJar` with requests. - -requests.utils imports from here, so be careful with imports. 
-""" - -import copy -import time -import calendar - -from ._internal_utils import to_native_string -from .compat import cookielib, urlparse, urlunparse, Morsel, MutableMapping - -try: - import threading -except ImportError: - import dummy_threading as threading - - -class MockRequest(object): - """Wraps a `requests.Request` to mimic a `urllib2.Request`. - - The code in `cookielib.CookieJar` expects this interface in order to correctly - manage cookie policies, i.e., determine whether a cookie can be set, given the - domains of the request and the cookie. - - The original request object is read-only. The client is responsible for collecting - the new headers via `get_new_headers()` and interpreting them appropriately. You - probably want `get_cookie_header`, defined below. - """ - - def __init__(self, request): - self._r = request - self._new_headers = {} - self.type = urlparse(self._r.url).scheme - - def get_type(self): - return self.type - - def get_host(self): - return urlparse(self._r.url).netloc - - def get_origin_req_host(self): - return self.get_host() - - def get_full_url(self): - # Only return the response's URL if the user hadn't set the Host - # header - if not self._r.headers.get('Host'): - return self._r.url - # If they did set it, retrieve it and reconstruct the expected domain - host = to_native_string(self._r.headers['Host'], encoding='utf-8') - parsed = urlparse(self._r.url) - # Reconstruct the URL as we expect it - return urlunparse([ - parsed.scheme, host, parsed.path, parsed.params, parsed.query, - parsed.fragment - ]) - - def is_unverifiable(self): - return True - - def has_header(self, name): - return name in self._r.headers or name in self._new_headers - - def get_header(self, name, default=None): - return self._r.headers.get(name, self._new_headers.get(name, default)) - - def add_header(self, key, val): - """cookielib has no legitimate use for this method; add it back if you find one.""" - raise NotImplementedError("Cookie headers should be added with add_unredirected_header()") - - def add_unredirected_header(self, name, value): - self._new_headers[name] = value - - def get_new_headers(self): - return self._new_headers - - @property - def unverifiable(self): - return self.is_unverifiable() - - @property - def origin_req_host(self): - return self.get_origin_req_host() - - @property - def host(self): - return self.get_host() - - -class MockResponse(object): - """Wraps a `httplib.HTTPMessage` to mimic a `urllib.addinfourl`. - - ...what? Basically, expose the parsed HTTP headers from the server response - the way `cookielib` expects to see them. - """ - - def __init__(self, headers): - """Make a MockResponse for `cookielib` to read. - - :param headers: a httplib.HTTPMessage or analogous carrying the headers - """ - self._headers = headers - - def info(self): - return self._headers - - def getheaders(self, name): - self._headers.getheaders(name) - - -def extract_cookies_to_jar(jar, request, response): - """Extract the cookies from the response into a CookieJar. 
- - :param jar: cookielib.CookieJar (not necessarily a RequestsCookieJar) - :param request: our own requests.Request object - :param response: urllib3.HTTPResponse object - """ - if not (hasattr(response, '_original_response') and - response._original_response): - return - # the _original_response field is the wrapped httplib.HTTPResponse object, - req = MockRequest(request) - # pull out the HTTPMessage with the headers and put it in the mock: - res = MockResponse(response._original_response.msg) - jar.extract_cookies(res, req) - - -def get_cookie_header(jar, request): - """ - Produce an appropriate Cookie header string to be sent with `request`, or None. - - :rtype: str - """ - r = MockRequest(request) - jar.add_cookie_header(r) - return r.get_new_headers().get('Cookie') - - -def remove_cookie_by_name(cookiejar, name, domain=None, path=None): - """Unsets a cookie by name, by default over all domains and paths. - - Wraps CookieJar.clear(), is O(n). - """ - clearables = [] - for cookie in cookiejar: - if cookie.name != name: - continue - if domain is not None and domain != cookie.domain: - continue - if path is not None and path != cookie.path: - continue - clearables.append((cookie.domain, cookie.path, cookie.name)) - - for domain, path, name in clearables: - cookiejar.clear(domain, path, name) - - -class CookieConflictError(RuntimeError): - """There are two cookies that meet the criteria specified in the cookie jar. - Use .get and .set and include domain and path args in order to be more specific. - """ - - -class RequestsCookieJar(cookielib.CookieJar, MutableMapping): - """Compatibility class; is a cookielib.CookieJar, but exposes a dict - interface. - - This is the CookieJar we create by default for requests and sessions that - don't specify one, since some clients may expect response.cookies and - session.cookies to support dict operations. - - Requests does not use the dict interface internally; it's just for - compatibility with external client code. All requests code should work - out of the box with externally provided instances of ``CookieJar``, e.g. - ``LWPCookieJar`` and ``FileCookieJar``. - - Unlike a regular CookieJar, this class is pickleable. - - .. warning:: dictionary operations that are normally O(1) may be O(n). - """ - - def get(self, name, default=None, domain=None, path=None): - """Dict-like get() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - - .. warning:: operation is O(n), not O(1). - """ - try: - return self._find_no_duplicates(name, domain, path) - except KeyError: - return default - - def set(self, name, value, **kwargs): - """Dict-like set() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - """ - # support client code that unsets cookies by assignment of a None value: - if value is None: - remove_cookie_by_name(self, name, domain=kwargs.get('domain'), path=kwargs.get('path')) - return - - if isinstance(value, Morsel): - c = morsel_to_cookie(value) - else: - c = create_cookie(name, value, **kwargs) - self.set_cookie(c) - return c - - def iterkeys(self): - """Dict-like iterkeys() that returns an iterator of names of cookies - from the jar. - - .. seealso:: itervalues() and iteritems(). - """ - for cookie in iter(self): - yield cookie.name - - def keys(self): - """Dict-like keys() that returns a list of names of cookies from the - jar. - - .. seealso:: values() and items(). 
- """ - return list(self.iterkeys()) - - def itervalues(self): - """Dict-like itervalues() that returns an iterator of values of cookies - from the jar. - - .. seealso:: iterkeys() and iteritems(). - """ - for cookie in iter(self): - yield cookie.value - - def values(self): - """Dict-like values() that returns a list of values of cookies from the - jar. - - .. seealso:: keys() and items(). - """ - return list(self.itervalues()) - - def iteritems(self): - """Dict-like iteritems() that returns an iterator of name-value tuples - from the jar. - - .. seealso:: iterkeys() and itervalues(). - """ - for cookie in iter(self): - yield cookie.name, cookie.value - - def items(self): - """Dict-like items() that returns a list of name-value tuples from the - jar. Allows client-code to call ``dict(RequestsCookieJar)`` and get a - vanilla python dict of key value pairs. - - .. seealso:: keys() and values(). - """ - return list(self.iteritems()) - - def list_domains(self): - """Utility method to list all the domains in the jar.""" - domains = [] - for cookie in iter(self): - if cookie.domain not in domains: - domains.append(cookie.domain) - return domains - - def list_paths(self): - """Utility method to list all the paths in the jar.""" - paths = [] - for cookie in iter(self): - if cookie.path not in paths: - paths.append(cookie.path) - return paths - - def multiple_domains(self): - """Returns True if there are multiple domains in the jar. - Returns False otherwise. - - :rtype: bool - """ - domains = [] - for cookie in iter(self): - if cookie.domain is not None and cookie.domain in domains: - return True - domains.append(cookie.domain) - return False # there is only one domain in jar - - def get_dict(self, domain=None, path=None): - """Takes as an argument an optional domain and path and returns a plain - old Python dict of name-value pairs of cookies that meet the - requirements. - - :rtype: dict - """ - dictionary = {} - for cookie in iter(self): - if ( - (domain is None or cookie.domain == domain) and - (path is None or cookie.path == path) - ): - dictionary[cookie.name] = cookie.value - return dictionary - - def __contains__(self, name): - try: - return super(RequestsCookieJar, self).__contains__(name) - except CookieConflictError: - return True - - def __getitem__(self, name): - """Dict-like __getitem__() for compatibility with client code. Throws - exception if there are more than one cookie with name. In that case, - use the more explicit get() method instead. - - .. warning:: operation is O(n), not O(1). - """ - return self._find_no_duplicates(name) - - def __setitem__(self, name, value): - """Dict-like __setitem__ for compatibility with client code. Throws - exception if there is already a cookie of that name in the jar. In that - case, use the more explicit set() method instead. - """ - self.set(name, value) - - def __delitem__(self, name): - """Deletes a cookie given a name. Wraps ``cookielib.CookieJar``'s - ``remove_cookie_by_name()``. 
- """ - remove_cookie_by_name(self, name) - - def set_cookie(self, cookie, *args, **kwargs): - if hasattr(cookie.value, 'startswith') and cookie.value.startswith('"') and cookie.value.endswith('"'): - cookie.value = cookie.value.replace('\\"', '') - return super(RequestsCookieJar, self).set_cookie(cookie, *args, **kwargs) - - def update(self, other): - """Updates this jar with cookies from another CookieJar or dict-like""" - if isinstance(other, cookielib.CookieJar): - for cookie in other: - self.set_cookie(copy.copy(cookie)) - else: - super(RequestsCookieJar, self).update(other) - - def _find(self, name, domain=None, path=None): - """Requests uses this method internally to get cookie values. - - If there are conflicting cookies, _find arbitrarily chooses one. - See _find_no_duplicates if you want an exception thrown if there are - conflicting cookies. - - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :return: cookie.value - """ - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - return cookie.value - - raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path)) - - def _find_no_duplicates(self, name, domain=None, path=None): - """Both ``__get_item__`` and ``get`` call this function: it's never - used elsewhere in Requests. - - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :raises KeyError: if cookie is not found - :raises CookieConflictError: if there are multiple cookies - that match name and optionally domain and path - :return: cookie.value - """ - toReturn = None - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - if toReturn is not None: # if there are multiple cookies that meet passed in criteria - raise CookieConflictError('There are multiple cookies with name, %r' % (name)) - toReturn = cookie.value # we will eventually return this as long as no cookie conflict - - if toReturn: - return toReturn - raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path)) - - def __getstate__(self): - """Unlike a normal CookieJar, this class is pickleable.""" - state = self.__dict__.copy() - # remove the unpickleable RLock object - state.pop('_cookies_lock') - return state - - def __setstate__(self, state): - """Unlike a normal CookieJar, this class is pickleable.""" - self.__dict__.update(state) - if '_cookies_lock' not in self.__dict__: - self._cookies_lock = threading.RLock() - - def copy(self): - """Return a copy of this RequestsCookieJar.""" - new_cj = RequestsCookieJar() - new_cj.set_policy(self.get_policy()) - new_cj.update(self) - return new_cj - - def get_policy(self): - """Return the CookiePolicy instance used.""" - return self._policy - - -def _copy_cookie_jar(jar): - if jar is None: - return None - - if hasattr(jar, 'copy'): - # We're dealing with an instance of RequestsCookieJar - return jar.copy() - # We're dealing with a generic CookieJar instance - new_jar = copy.copy(jar) - new_jar.clear() - for cookie in jar: - new_jar.set_cookie(copy.copy(cookie)) - return new_jar - - -def create_cookie(name, value, **kwargs): - """Make a cookie from underspecified parameters. 
- - By default, the pair of `name` and `value` will be set for the domain '' - and sent on every request (this is sometimes called a "supercookie"). - """ - result = { - 'version': 0, - 'name': name, - 'value': value, - 'port': None, - 'domain': '', - 'path': '/', - 'secure': False, - 'expires': None, - 'discard': True, - 'comment': None, - 'comment_url': None, - 'rest': {'HttpOnly': None}, - 'rfc2109': False, - } - - badargs = set(kwargs) - set(result) - if badargs: - err = 'create_cookie() got unexpected keyword arguments: %s' - raise TypeError(err % list(badargs)) - - result.update(kwargs) - result['port_specified'] = bool(result['port']) - result['domain_specified'] = bool(result['domain']) - result['domain_initial_dot'] = result['domain'].startswith('.') - result['path_specified'] = bool(result['path']) - - return cookielib.Cookie(**result) - - -def morsel_to_cookie(morsel): - """Convert a Morsel object into a Cookie containing the one k/v pair.""" - - expires = None - if morsel['max-age']: - try: - expires = int(time.time() + int(morsel['max-age'])) - except ValueError: - raise TypeError('max-age: %s must be integer' % morsel['max-age']) - elif morsel['expires']: - time_template = '%a, %d-%b-%Y %H:%M:%S GMT' - expires = calendar.timegm( - time.strptime(morsel['expires'], time_template) - ) - return create_cookie( - comment=morsel['comment'], - comment_url=bool(morsel['comment']), - discard=False, - domain=morsel['domain'], - expires=expires, - name=morsel.key, - path=morsel['path'], - port=None, - rest={'HttpOnly': morsel['httponly']}, - rfc2109=False, - secure=bool(morsel['secure']), - value=morsel.value, - version=morsel['version'] or 0, - ) - - -def cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True): - """Returns a CookieJar from a key/value dictionary. - - :param cookie_dict: Dict of key/values to insert into CookieJar. - :param cookiejar: (optional) A cookiejar to add the cookies to. - :param overwrite: (optional) If False, will not replace cookies - already in the jar with new ones. - :rtype: CookieJar - """ - if cookiejar is None: - cookiejar = RequestsCookieJar() - - if cookie_dict is not None: - names_from_jar = [cookie.name for cookie in cookiejar] - for name in cookie_dict: - if overwrite or (name not in names_from_jar): - cookiejar.set_cookie(create_cookie(name, cookie_dict[name])) - - return cookiejar - - -def merge_cookies(cookiejar, cookies): - """Add cookies to cookiejar and returns a merged CookieJar. - - :param cookiejar: CookieJar object to add the cookies to. - :param cookies: Dictionary or CookieJar object to be added. 
- :rtype: CookieJar - """ - if not isinstance(cookiejar, cookielib.CookieJar): - raise ValueError('You can only merge into CookieJar') - - if isinstance(cookies, dict): - cookiejar = cookiejar_from_dict( - cookies, cookiejar=cookiejar, overwrite=False) - elif isinstance(cookies, cookielib.CookieJar): - try: - cookiejar.update(cookies) - except AttributeError: - for cookie_in_jar in cookies: - cookiejar.set_cookie(cookie_in_jar) - - return cookiejar diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/cells.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/cells.py deleted file mode 100644 index e824ea2a6df91e2a6af08e5d5cc2f6703f8853d8..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/cells.py +++ /dev/null @@ -1,147 +0,0 @@ -from functools import lru_cache -import re -from typing import Dict, List - -from ._cell_widths import CELL_WIDTHS -from ._lru_cache import LRUCache - -# Regex to match sequence of the most common character ranges -_is_single_cell_widths = re.compile("^[\u0020-\u006f\u00a0\u02ff\u0370-\u0482]*$").match - - -def cell_len(text: str, _cache: Dict[str, int] = LRUCache(1024 * 4)) -> int: - """Get the number of cells required to display text. - - Args: - text (str): Text to display. - - Returns: - int: Get the number of cells required to display text. - """ - - if _is_single_cell_widths(text): - return len(text) - else: - cached_result = _cache.get(text, None) - if cached_result is not None: - return cached_result - _get_size = get_character_cell_size - total_size = sum(_get_size(character) for character in text) - if len(text) <= 64: - _cache[text] = total_size - return total_size - - -@lru_cache(maxsize=4096) -def get_character_cell_size(character: str) -> int: - """Get the cell size of a character. - - Args: - character (str): A single character. - - Returns: - int: Number of cells (0, 1 or 2) occupied by that character. - """ - if _is_single_cell_widths(character): - return 1 - - return _get_codepoint_cell_size(ord(character)) - - -@lru_cache(maxsize=4096) -def _get_codepoint_cell_size(codepoint: int) -> int: - """Get the cell size of a character. - - Args: - character (str): A single character. - - Returns: - int: Number of cells (0, 1 or 2) occupied by that character. 
- """ - - _table = CELL_WIDTHS - lower_bound = 0 - upper_bound = len(_table) - 1 - index = (lower_bound + upper_bound) // 2 - while True: - start, end, width = _table[index] - if codepoint < start: - upper_bound = index - 1 - elif codepoint > end: - lower_bound = index + 1 - else: - return 0 if width == -1 else width - if upper_bound < lower_bound: - break - index = (lower_bound + upper_bound) // 2 - return 1 - - -def set_cell_size(text: str, total: int) -> str: - """Set the length of a string to fit within given number of cells.""" - - if _is_single_cell_widths(text): - size = len(text) - if size < total: - return text + " " * (total - size) - return text[:total] - - if not total: - return "" - cell_size = cell_len(text) - if cell_size == total: - return text - if cell_size < total: - return text + " " * (total - cell_size) - - start = 0 - end = len(text) - - # Binary search until we find the right size - while True: - pos = (start + end) // 2 - before = text[: pos + 1] - before_len = cell_len(before) - if before_len == total + 1 and cell_len(before[-1]) == 2: - return before[:-1] + " " - if before_len == total: - return before - if before_len > total: - end = pos - else: - start = pos - - -# TODO: This is inefficient -# TODO: This might not work with CWJ type characters -def chop_cells(text: str, max_size: int, position: int = 0) -> List[str]: - """Break text in to equal (cell) length strings.""" - _get_character_cell_size = get_character_cell_size - characters = [ - (character, _get_character_cell_size(character)) for character in text - ][::-1] - total_size = position - lines: List[List[str]] = [[]] - append = lines[-1].append - - pop = characters.pop - while characters: - character, size = pop() - if total_size + size > max_size: - lines.append([character]) - append = lines[-1].append - total_size = size - else: - total_size += size - append(character) - return ["".join(line) for line in lines] - - -if __name__ == "__main__": # pragma: no cover - - print(get_character_cell_size("😽")) - for line in chop_cells("""这是对亚洲语言支持的测试。面对模棱两可的想法,拒绝猜测的诱惑。""", 8): - print(line) - for n in range(80, 1, -1): - print(set_cell_size("""这是对亚洲语言支持的测试。面对模棱两可的想法,拒绝猜测的诱惑。""", n) + "|") - print("x" * n) diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/README.md b/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/README.md deleted file mode 100644 index b16159add8b0c1ce4ca42a47f832134c5cce7d69..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# InfiniBatch - -To view the documentation, please clone the repository and go to docs/infinibatch/index.html - -To run unit tests, run the following command. -``` -python -m unittest discover -s test -``` - -When working on the documentation, install pdoc: -``` -pip install pdoc3 -``` -You can then start a local http server that dynamically updates the documentation: -``` -pdoc --template-dir docs --http : infinibatch -``` - -We currently haven't set up the CI to automatically generate the documentation. 
-Before you merge anything into master, please delete the existing documentation in docs/infinibatch and run -``` -pdoc -o docs --template-dir docs --html infinibatch -``` \ No newline at end of file diff --git a/spaces/amitkot/he2en/app.py b/spaces/amitkot/he2en/app.py deleted file mode 100644 index 801bc8bdb1bb1c256fb8d2ef81f969da2c17bf16..0000000000000000000000000000000000000000 --- a/spaces/amitkot/he2en/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import gradio as gr - -from transformers import pipeline - -pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-mul-en") - -def predict(text): - return pipe(text)[0]["translation_text"] - -title = "Hebrew to English Translation" - -iface = gr.Interface( - fn=predict, - inputs=[gr.inputs.Textbox(label="text", lines=3)], - outputs='text', - title=title, -) - -iface.launch() \ No newline at end of file diff --git a/spaces/andreped/AeroPath/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/andreped/AeroPath/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index bbcbbe7d61558adde3cbfd0c7a63a67c27ed6d30..0000000000000000000000000000000000000000 --- a/spaces/andreped/AeroPath/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -name: Feature request -about: Suggest an idea for this project -title: '' -labels: '' -assignees: '' - ---- - -**Is your feature request related to a problem? Please describe.** -A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - -**Describe the solution you'd like** -A clear and concise description of what you want to happen. - -**Describe alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. - -**Additional context** -Add any other context or screenshots about the feature request here. 
diff --git a/spaces/anisub/movie-poster-generator/README.md b/spaces/anisub/movie-poster-generator/README.md deleted file mode 100644 index ceb07a9f4d645e0f151915f0be7f61d48e6fe693..0000000000000000000000000000000000000000 --- a/spaces/anisub/movie-poster-generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Movie Poster Generator -emoji: 🦀 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ankur-bohra/AliShaker-layoutlmv3-finetuned-wildreceipt/app.py b/spaces/ankur-bohra/AliShaker-layoutlmv3-finetuned-wildreceipt/app.py deleted file mode 100644 index 8ef62c987584d6ea2676e6459168e5b39d7d9881..0000000000000000000000000000000000000000 --- a/spaces/ankur-bohra/AliShaker-layoutlmv3-finetuned-wildreceipt/app.py +++ /dev/null @@ -1,131 +0,0 @@ -# https://huggingface.co/spaces/Theivaprakasham/wildreceipt/raw/main/app.py - -import os -os.system('pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu') - -import gradio as gr -import numpy as np -from transformers import AutoModelForTokenClassification -from datasets.features import ClassLabel -from transformers import AutoProcessor -from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D -import torch -from datasets import load_metric -from transformers import LayoutLMv3ForTokenClassification -from transformers.data.data_collator import default_data_collator - - -from transformers import AutoModelForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont - - -processor = AutoProcessor.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-wildreceipt", apply_ocr=True) -model = AutoModelForTokenClassification.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-wildreceipt") - - - -# load image example -dataset = load_dataset("Theivaprakasham/wildreceipt", split="test") -Image.open(dataset[20]["image_path"]).convert("RGB").save("example1.png") -Image.open(dataset[13]["image_path"]).convert("RGB").save("example2.png") -Image.open(dataset[15]["image_path"]).convert("RGB").save("example3.png") - -# define id2label, label2color -labels = dataset.features['ner_tags'].feature.names -id2label = {v: k for v, k in enumerate(labels)} -label2color = { - "Date_key": 'red', - "Date_value": 'green', - "Ignore": 'orange', - "Others": 'orange', - "Prod_item_key": 'red', - "Prod_item_value": 'green', - "Prod_price_key": 'red', - "Prod_price_value": 'green', - "Prod_quantity_key": 'red', - "Prod_quantity_value": 'green', - "Store_addr_key": 'red', - "Store_addr_value": 'green', - "Store_name_key": 'red', - "Store_name_value": 'green', - "Subtotal_key": 'red', - "Subtotal_value": 'green', - "Tax_key": 'red', - "Tax_value": 'green', - "Tel_key": 'red', - "Tel_value": 'green', - "Time_key": 'red', - "Time_value": 'green', - "Tips_key": 'red', - "Tips_value": 'green', - "Total_key": 'red', - "Total_value": 'blue' - } - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - - -def iob_to_label(label): - return label - - - -def process_image(image): - - print(type(image)) - width, height = image.size - - # encode - encoding = processor(image, truncation=True, return_offsets_mapping=True, return_tensors="pt") - offset_mapping = 
encoding.pop('offset_mapping') - - # forward pass - outputs = model(**encoding) - - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 - true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] - true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]] - - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction) - draw.rectangle(box, outline=label2color[predicted_label]) - draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font) - - return image - - -title = "Restaurant/ Hotel Bill information extraction using LayoutLMv3 model" -description = "Restaurant/ Hotel Bill information extraction - We use Microsoft's LayoutLMv3 trained on WildReceipt Dataset to predict the Store_name_value, Store_name_key, Store_addr_value, Store_addr_key, Tel_value, Tel_key, Date_value, Date_key, Time_value, Time_key, Prod_item_value, Prod_item_key, Prod_quantity_value, Prod_quantity_key, Prod_price_value, Prod_price_key, Subtotal_value, Subtotal_key, Tax_value, Tax_key, Tips_value, Tips_key, Total_value, Total_key. To use it, simply upload an image or use the example image below. Results will show up in a few seconds." - -article="References
    [1] Y. Xu et al., “LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking.” 2022. Paper Link
    [2] LayoutLMv3 training and inference
    [3] Hongbin Sun, Zhanghui Kuang, Xiaoyu Yue, Chenhao Lin, and Wayne Zhang. 2021. Spatial Dual-Modality Graph Reasoning for Key Information Extraction. arXiv. DOI:https://doi.org/10.48550/ARXIV.2103.14470 Paper Link" - -examples =[['example1.png'],['example2.png'],['example3.png']] - -css = """.output_image, .input_image {height: 600px !important}""" - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - analytics_enabled = True, enable_queue=True) - -iface.launch(inline=False, share=False, debug=False) \ No newline at end of file diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/setup.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/setup.py deleted file mode 100644 index 2bf28ffe269cba3033af263db5f98313772818f0..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/setup.py +++ /dev/null @@ -1,30 +0,0 @@ -from setuptools import setup - -with open("README.md", "r", encoding="utf-8") as readme_file: - readme = readme_file.read() - -requirements = [ - "numpy", - "scipy", - "matplotlib", - "torch", - "torchvision", - "opencv-python", - "CLIP @ git+https://github.com/openai/CLIP.git" -] - -setup( - name='clipseg', - packages=['clipseg'], - package_dir={'clipseg': 'models'}, - package_data={'clipseg': [ - "../weights/*.pth", - ]}, - version='0.0.1', - url='https://github.com/timojl/clipseg', - python_requires='>=3.9', - install_requires=requirements, - description='This repository contains the code used in the paper "Image Segmentation Using Text and Image Prompts".', - long_description=readme, - long_description_content_type="text/markdown", -) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/korean/phonemizer.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/korean/phonemizer.py deleted file mode 100644 index ed70fc35f6950b98ec715577a3303c5a271fbb0e..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/korean/phonemizer.py +++ /dev/null @@ -1,36 +0,0 @@ -from jamo import hangul_to_jamo - -from TTS.tts.utils.text.korean.korean import normalize - -g2p = None - - -def korean_text_to_phonemes(text, character: str = "hangeul") -> str: - """ - - The input and output values look the same, but they are different in Unicode. 
- - example : - - input = '하늘' (Unicode : \ud558\ub298), (하 + 늘) - output = '하늘' (Unicode :\u1112\u1161\u1102\u1173\u11af), (ᄒ + ᅡ + ᄂ + ᅳ + ᆯ) - - """ - global g2p # pylint: disable=global-statement - if g2p is None: - from g2pkk import G2p - - g2p = G2p() - - if character == "english": - from anyascii import anyascii - - text = normalize(text) - text = g2p(text) - text = anyascii(text) - return text - - text = normalize(text) - text = g2p(text) - text = list(hangul_to_jamo(text)) # '하늘' --> ['ᄒ', 'ᅡ', 'ᄂ', 'ᅳ', 'ᆯ'] - return "".join(text) diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/forward_tts.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/forward_tts.md deleted file mode 100644 index f8f941c2fd3baf11497945ee1b6ac0a3ceb9a8f6..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/forward_tts.md +++ /dev/null @@ -1,65 +0,0 @@ -# Forward TTS model(s) - -A general feed-forward TTS model implementation that can be configured to different architectures by setting different -encoder and decoder networks. It can be trained with either pre-computed durations (from pre-trained Tacotron) or -an alignment network that learns the text to audio alignment from the input data. - -Currently we provide the following pre-configured architectures: - -- **FastSpeech:** - - It's a feed-forward model TTS model that uses Feed Forward Transformer (FFT) modules as the encoder and decoder. - -- **FastPitch:** - - It uses the same FastSpeech architecture that is conditioned on fundamental frequency (f0) contours with the - promise of more expressive speech. - -- **SpeedySpeech:** - - It uses Residual Convolution layers instead of Transformers that leads to a more compute friendly model. - -- **FastSpeech2 (TODO):** - - Similar to FastPitch but it also uses a spectral energy values as an addition. - -## Important resources & papers -- FastPitch: https://arxiv.org/abs/2006.06873 -- SpeedySpeech: https://arxiv.org/abs/2008.03802 -- FastSpeech: https://arxiv.org/pdf/1905.09263 -- FastSpeech2: https://arxiv.org/abs/2006.04558 -- Aligner Network: https://arxiv.org/abs/2108.10447 -- What is Pitch: https://www.britannica.com/topic/pitch-speech - - -## ForwardTTSArgs -```{eval-rst} -.. autoclass:: TTS.tts.models.forward_tts.ForwardTTSArgs - :members: -``` - -## ForwardTTS Model -```{eval-rst} -.. autoclass:: TTS.tts.models.forward_tts.ForwardTTS - :members: -``` - -## FastPitchConfig -```{eval-rst} -.. autoclass:: TTS.tts.configs.fast_pitch_config.FastPitchConfig - :members: -``` - -## SpeedySpeechConfig -```{eval-rst} -.. autoclass:: TTS.tts.configs.speedy_speech_config.SpeedySpeechConfig - :members: -``` - -## FastSpeechConfig -```{eval-rst} -.. autoclass:: TTS.tts.configs.fast_speech_config.FastSpeechConfig - :members: -``` - - diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_GCM.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_GCM.py deleted file mode 100644 index dd8da2fd804c8d3e4ba852d0b5b4f8fe012a7dc9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_GCM.py +++ /dev/null @@ -1,951 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2015, Legrandin -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from __future__ import print_function - -import unittest -from binascii import unhexlify - -from Crypto.SelfTest.st_common import list_test_cases -from Crypto.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof - -from Crypto.Util.py3compat import tobytes, bchr -from Crypto.Cipher import AES -from Crypto.Hash import SHAKE128, SHA256 - -from Crypto.Util.strxor import strxor - - -def get_tag_random(tag, length): - return SHAKE128.new(data=tobytes(tag)).read(length) - - -class GcmTests(unittest.TestCase): - - key_128 = get_tag_random("key_128", 16) - nonce_96 = get_tag_random("nonce_128", 12) - data = get_tag_random("data", 128) - - def test_loopback_128(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - pt = get_tag_random("plaintext", 16 * 100) - ct = cipher.encrypt(pt) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - pt2 = cipher.decrypt(ct) - self.assertEqual(pt, pt2) - - def test_nonce(self): - # Nonce is optional (a random one will be created) - AES.new(self.key_128, AES.MODE_GCM) - - cipher = AES.new(self.key_128, AES.MODE_GCM, self.nonce_96) - ct = cipher.encrypt(self.data) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertEqual(ct, cipher.encrypt(self.data)) - - def test_nonce_must_be_bytes(self): - self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, - nonce=u'test12345678') - - def test_nonce_length(self): - # nonce can be of any length (but not empty) - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, - nonce=b"") - - for x in range(1, 128): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=bchr(1) * x) - cipher.encrypt(bchr(1)) - - def test_block_size_128(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertEqual(cipher.block_size, AES.block_size) - - def test_nonce_attribute(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertEqual(cipher.nonce, self.nonce_96) - - # By default, a 15 bytes long nonce is randomly generated - nonce1 = AES.new(self.key_128, AES.MODE_GCM).nonce - nonce2 = AES.new(self.key_128, AES.MODE_GCM).nonce 
- self.assertEqual(len(nonce1), 16) - self.assertNotEqual(nonce1, nonce2) - - def test_unknown_parameters(self): - self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, - self.nonce_96, 7) - self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, - nonce=self.nonce_96, unknown=7) - - # But some are only known by the base cipher - # (e.g. use_aesni consumed by the AES module) - AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96, - use_aesni=False) - - def test_null_encryption_decryption(self): - for func in "encrypt", "decrypt": - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - result = getattr(cipher, func)(b"") - self.assertEqual(result, b"") - - def test_either_encrypt_or_decrypt(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.encrypt(b"") - self.assertRaises(TypeError, cipher.decrypt, b"") - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.decrypt(b"") - self.assertRaises(TypeError, cipher.encrypt, b"") - - def test_data_must_be_bytes(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') - - def test_mac_len(self): - # Invalid MAC length - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, - nonce=self.nonce_96, mac_len=3) - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, - nonce=self.nonce_96, mac_len=16+1) - - # Valid MAC length - for mac_len in range(5, 16 + 1): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96, - mac_len=mac_len) - _, mac = cipher.encrypt_and_digest(self.data) - self.assertEqual(len(mac), mac_len) - - # Default MAC length - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - _, mac = cipher.encrypt_and_digest(self.data) - self.assertEqual(len(mac), 16) - - def test_invalid_mac(self): - from Crypto.Util.strxor import strxor_c - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - ct, mac = cipher.encrypt_and_digest(self.data) - - invalid_mac = strxor_c(mac, 0x01) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, - invalid_mac) - - def test_hex_mac(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - mac_hex = cipher.hexdigest() - self.assertEqual(cipher.digest(), unhexlify(mac_hex)) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.hexverify(mac_hex) - - def test_message_chunks(self): - # Validate that both associated data and plaintext/ciphertext - # can be broken up in chunks of arbitrary length - - auth_data = get_tag_random("authenticated data", 127) - plaintext = get_tag_random("plaintext", 127) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.update(auth_data) - ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) - - def break_up(data, chunk_length): - return [data[i:i+chunk_length] for i in range(0, len(data), - chunk_length)] - - # Encryption - for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - - for chunk in break_up(auth_data, chunk_length): - cipher.update(chunk) - pt2 = b"" - for chunk in break_up(ciphertext, chunk_length): - pt2 += cipher.decrypt(chunk) - self.assertEqual(plaintext, pt2) - 
cipher.verify(ref_mac) - - # Decryption - for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - - for chunk in break_up(auth_data, chunk_length): - cipher.update(chunk) - ct2 = b"" - for chunk in break_up(plaintext, chunk_length): - ct2 += cipher.encrypt(chunk) - self.assertEqual(ciphertext, ct2) - self.assertEqual(cipher.digest(), ref_mac) - - def test_bytearray(self): - - # Encrypt - key_ba = bytearray(self.key_128) - nonce_ba = bytearray(self.nonce_96) - header_ba = bytearray(self.data) - data_ba = bytearray(self.data) - - cipher1 = AES.new(self.key_128, - AES.MODE_GCM, - nonce=self.nonce_96) - cipher1.update(self.data) - ct = cipher1.encrypt(self.data) - tag = cipher1.digest() - - cipher2 = AES.new(key_ba, - AES.MODE_GCM, - nonce=nonce_ba) - key_ba[:3] = b"\xFF\xFF\xFF" - nonce_ba[:3] = b"\xFF\xFF\xFF" - cipher2.update(header_ba) - header_ba[:3] = b"\xFF\xFF\xFF" - ct_test = cipher2.encrypt(data_ba) - data_ba[:3] = b"\xFF\xFF\xFF" - tag_test = cipher2.digest() - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key_ba = bytearray(self.key_128) - nonce_ba = bytearray(self.nonce_96) - header_ba = bytearray(self.data) - del data_ba - - cipher4 = AES.new(key_ba, - AES.MODE_GCM, - nonce=nonce_ba) - key_ba[:3] = b"\xFF\xFF\xFF" - nonce_ba[:3] = b"\xFF\xFF\xFF" - cipher4.update(header_ba) - header_ba[:3] = b"\xFF\xFF\xFF" - pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) - - self.assertEqual(self.data, pt_test) - - def test_memoryview(self): - - # Encrypt - key_mv = memoryview(bytearray(self.key_128)) - nonce_mv = memoryview(bytearray(self.nonce_96)) - header_mv = memoryview(bytearray(self.data)) - data_mv = memoryview(bytearray(self.data)) - - cipher1 = AES.new(self.key_128, - AES.MODE_GCM, - nonce=self.nonce_96) - cipher1.update(self.data) - ct = cipher1.encrypt(self.data) - tag = cipher1.digest() - - cipher2 = AES.new(key_mv, - AES.MODE_GCM, - nonce=nonce_mv) - key_mv[:3] = b"\xFF\xFF\xFF" - nonce_mv[:3] = b"\xFF\xFF\xFF" - cipher2.update(header_mv) - header_mv[:3] = b"\xFF\xFF\xFF" - ct_test = cipher2.encrypt(data_mv) - data_mv[:3] = b"\xFF\xFF\xFF" - tag_test = cipher2.digest() - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key_mv = memoryview(bytearray(self.key_128)) - nonce_mv = memoryview(bytearray(self.nonce_96)) - header_mv = memoryview(bytearray(self.data)) - del data_mv - - cipher4 = AES.new(key_mv, - AES.MODE_GCM, - nonce=nonce_mv) - key_mv[:3] = b"\xFF\xFF\xFF" - nonce_mv[:3] = b"\xFF\xFF\xFF" - cipher4.update(header_mv) - header_mv[:3] = b"\xFF\xFF\xFF" - pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) - - self.assertEqual(self.data, pt_test) - - def test_output_param(self): - - pt = b'5' * 128 - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - ct = cipher.encrypt(pt) - tag = cipher.digest() - - output = bytearray(128) - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - res = cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - self.assertEqual(res, None) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - res = cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - self.assertEqual(res, None) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - res, tag_out = 
cipher.encrypt_and_digest(pt, output=output) - self.assertEqual(ct, output) - self.assertEqual(res, None) - self.assertEqual(tag, tag_out) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - res = cipher.decrypt_and_verify(ct, tag, output=output) - self.assertEqual(pt, output) - self.assertEqual(res, None) - - def test_output_param_memoryview(self): - - pt = b'5' * 128 - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - ct = cipher.encrypt(pt) - - output = memoryview(bytearray(128)) - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - - def test_output_param_neg(self): - LEN_PT = 128 - - pt = b'5' * LEN_PT - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - ct = cipher.encrypt(pt) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) - - shorter_output = bytearray(LEN_PT - 1) - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) - - -class GcmFSMTests(unittest.TestCase): - - key_128 = get_tag_random("key_128", 16) - nonce_96 = get_tag_random("nonce_128", 12) - data = get_tag_random("data", 128) - - def test_valid_init_encrypt_decrypt_digest_verify(self): - # No authenticated data, fixed plaintext - # Verify path INIT->ENCRYPT->DIGEST - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - ct = cipher.encrypt(self.data) - mac = cipher.digest() - - # Verify path INIT->DECRYPT->VERIFY - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.verify(mac) - - def test_valid_init_update_digest_verify(self): - # No plaintext, fixed authenticated data - # Verify path INIT->UPDATE->DIGEST - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - cipher.update(self.data) - mac = cipher.digest() - - # Verify path INIT->UPDATE->VERIFY - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.verify(mac) - - def test_valid_full_path(self): - # Fixed authenticated data, fixed plaintext - # Verify path INIT->UPDATE->ENCRYPT->DIGEST - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - cipher.update(self.data) - ct = cipher.encrypt(self.data) - mac = cipher.digest() - - # Verify path INIT->UPDATE->DECRYPT->VERIFY - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.decrypt(ct) - cipher.verify(mac) - - def test_valid_init_digest(self): - # Verify path INIT->DIGEST - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.digest() - - def test_valid_init_verify(self): - # Verify path INIT->VERIFY - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - mac = cipher.digest() - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.verify(mac) - - def test_valid_multiple_encrypt_or_decrypt(self): - for method_name in "encrypt", "decrypt": - 
for auth_data in (None, b"333", self.data, - self.data + b"3"): - if auth_data is None: - assoc_len = None - else: - assoc_len = len(auth_data) - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - if auth_data is not None: - cipher.update(auth_data) - method = getattr(cipher, method_name) - method(self.data) - method(self.data) - method(self.data) - method(self.data) - - def test_valid_multiple_digest_or_verify(self): - # Multiple calls to digest - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.update(self.data) - first_mac = cipher.digest() - for x in range(4): - self.assertEqual(first_mac, cipher.digest()) - - # Multiple calls to verify - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.update(self.data) - for x in range(5): - cipher.verify(first_mac) - - def test_valid_encrypt_and_digest_decrypt_and_verify(self): - # encrypt_and_digest - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.update(self.data) - ct, mac = cipher.encrypt_and_digest(self.data) - - # decrypt_and_verify - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.update(self.data) - pt = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(self.data, pt) - - def test_invalid_mixing_encrypt_decrypt(self): - # Once per method, with or without assoc. data - for method1_name, method2_name in (("encrypt", "decrypt"), - ("decrypt", "encrypt")): - for assoc_data_present in (True, False): - cipher = AES.new(self.key_128, AES.MODE_GCM, - nonce=self.nonce_96) - if assoc_data_present: - cipher.update(self.data) - getattr(cipher, method1_name)(self.data) - self.assertRaises(TypeError, getattr(cipher, method2_name), - self.data) - - def test_invalid_encrypt_or_update_after_digest(self): - for method_name in "encrypt", "update": - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.encrypt(self.data) - cipher.digest() - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.encrypt_and_digest(self.data) - - def test_invalid_decrypt_or_update_after_verify(self): - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - ct = cipher.encrypt(self.data) - mac = cipher.digest() - - for method_name in "decrypt", "update": - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.verify(mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data) - - cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) - cipher.decrypt_and_verify(ct, mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data) - - -class TestVectors(unittest.TestCase): - """Class exercising the GCM test vectors found in - http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf""" - - # List of test vectors, each made up of: - # - authenticated data - # - plaintext - # - ciphertext - # - MAC - # - AES key - # - nonce - test_vectors_hex = [ - ( - '', - '', - '', - '58e2fccefa7e3061367f1d57a4e7455a', - '00000000000000000000000000000000', - '000000000000000000000000' - ), - ( - '', - '00000000000000000000000000000000', - '0388dace60b6a392f328c2b971b2fe78', - 'ab6e47d42cec13bdf53a67b21257bddf', - '00000000000000000000000000000000', - '000000000000000000000000' - ), - ( - '', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - 
'1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', - '42831ec2217774244b7221b784d0d49ce3aa212f2c02a4e035c17e2329aca12e' + - '21d514b25466931c7d8f6a5aac84aa051ba30b396a0aac973d58e091473f5985', - '4d5c2af327cd64a62cf35abd2ba6fab4', - 'feffe9928665731c6d6a8f9467308308', - 'cafebabefacedbaddecaf888' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '42831ec2217774244b7221b784d0d49ce3aa212f2c02a4e035c17e2329aca12e' + - '21d514b25466931c7d8f6a5aac84aa051ba30b396a0aac973d58e091', - '5bc94fbc3221a5db94fae95ae7121a47', - 'feffe9928665731c6d6a8f9467308308', - 'cafebabefacedbaddecaf888' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '61353b4c2806934a777ff51fa22a4755699b2a714fcdc6f83766e5f97b6c7423' + - '73806900e49f24b22b097544d4896b424989b5e1ebac0f07c23f4598', - '3612d2e79e3b0785561be14aaca2fccb', - 'feffe9928665731c6d6a8f9467308308', - 'cafebabefacedbad' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '8ce24998625615b603a033aca13fb894be9112a5c3a211a8ba262a3cca7e2ca7' + - '01e4a9a4fba43c90ccdcb281d48c7c6fd62875d2aca417034c34aee5', - '619cc5aefffe0bfa462af43c1699d050', - 'feffe9928665731c6d6a8f9467308308', - '9313225df88406e555909c5aff5269aa' + - '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + - '16aedbf5a0de6a57a637b39b' - ), - ( - '', - '', - '', - 'cd33b28ac773f74ba00ed1f312572435', - '000000000000000000000000000000000000000000000000', - '000000000000000000000000' - ), - ( - '', - '00000000000000000000000000000000', - '98e7247c07f0fe411c267e4384b0f600', - '2ff58d80033927ab8ef4d4587514f0fb', - '000000000000000000000000000000000000000000000000', - '000000000000000000000000' - ), - ( - '', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', - '3980ca0b3c00e841eb06fac4872a2757859e1ceaa6efd984628593b40ca1e19c' + - '7d773d00c144c525ac619d18c84a3f4718e2448b2fe324d9ccda2710acade256', - '9924a7c8587336bfb118024db8674a14', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c', - 'cafebabefacedbaddecaf888' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '3980ca0b3c00e841eb06fac4872a2757859e1ceaa6efd984628593b40ca1e19c' + - '7d773d00c144c525ac619d18c84a3f4718e2448b2fe324d9ccda2710', - '2519498e80f1478f37ba55bd6d27618c', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c', - 'cafebabefacedbaddecaf888' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '0f10f599ae14a154ed24b36e25324db8c566632ef2bbb34f8347280fc4507057' + - 'fddc29df9a471f75c66541d4d4dad1c9e93a19a58e8b473fa0f062f7', - '65dcc57fcf623a24094fcca40d3533f8', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c', - 'cafebabefacedbad' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - 
'd27e88681ce3243c4830165a8fdcf9ff1de9a1d8e6b447ef6ef7b79828666e45' + - '81e79012af34ddd9e2f037589b292db3e67c036745fa22e7e9b7373b', - 'dcf566ff291c25bbb8568fc3d376a6d9', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c', - '9313225df88406e555909c5aff5269aa' + - '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + - '16aedbf5a0de6a57a637b39b' - ), - ( - '', - '', - '', - '530f8afbc74536b9a963b4f1c4cb738b', - '0000000000000000000000000000000000000000000000000000000000000000', - '000000000000000000000000' - ), - ( - '', - '00000000000000000000000000000000', - 'cea7403d4d606b6e074ec5d3baf39d18', - 'd0d1c8a799996bf0265b98b5d48ab919', - '0000000000000000000000000000000000000000000000000000000000000000', - '000000000000000000000000' - ), - ( '', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', - '522dc1f099567d07f47f37a32a84427d643a8cdcbfe5c0c97598a2bd2555d1aa' + - '8cb08e48590dbb3da7b08b1056828838c5f61e6393ba7a0abcc9f662898015ad', - 'b094dac5d93471bdec1a502270e3cc6c', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', - 'cafebabefacedbaddecaf888' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '522dc1f099567d07f47f37a32a84427d643a8cdcbfe5c0c97598a2bd2555d1aa' + - '8cb08e48590dbb3da7b08b1056828838c5f61e6393ba7a0abcc9f662', - '76fc6ece0f4e1768cddf8853bb2d551b', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', - 'cafebabefacedbaddecaf888' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - 'c3762df1ca787d32ae47c13bf19844cbaf1ae14d0b976afac52ff7d79bba9de0' + - 'feb582d33934a4f0954cc2363bc73f7862ac430e64abe499f47c9b1f', - '3a337dbf46a792c45e454913fe2ea8f2', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', - 'cafebabefacedbad' - ), - ( - 'feedfacedeadbeeffeedfacedeadbeefabaddad2', - 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + - '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', - '5a8def2f0c9e53f1f75d7853659e2a20eeb2b22aafde6419a058ab4f6f746bf4' + - '0fc0c3b780f244452da3ebf1c5d82cdea2418997200ef82e44ae7e3f', - 'a44a8266ee1c8eb0c8b5d4cf5ae9f19a', - 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', - '9313225df88406e555909c5aff5269aa' + - '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + - '16aedbf5a0de6a57a637b39b' - ) - ] - - test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] - - def runTest(self): - for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: - - # Encrypt - cipher = AES.new(key, AES.MODE_GCM, nonce, mac_len=len(mac)) - cipher.update(assoc_data) - ct2, mac2 = cipher.encrypt_and_digest(pt) - self.assertEqual(ct, ct2) - self.assertEqual(mac, mac2) - - # Decrypt - cipher = AES.new(key, AES.MODE_GCM, nonce, mac_len=len(mac)) - cipher.update(assoc_data) - pt2 = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(pt, pt2) - - -class TestVectorsGueronKrasnov(unittest.TestCase): - """Class exercising the GCM test vectors found in - 'The fragility of AES-GCM authentication algorithm', Gueron, Krasnov - https://eprint.iacr.org/2013/157.pdf""" - - def test_1(self): - key = unhexlify("3da6c536d6295579c0959a7043efb503") - iv = 
unhexlify("2b926197d34e091ef722db94") - aad = unhexlify("00000000000000000000000000000000" + - "000102030405060708090a0b0c0d0e0f" + - "101112131415161718191a1b1c1d1e1f" + - "202122232425262728292a2b2c2d2e2f" + - "303132333435363738393a3b3c3d3e3f") - digest = unhexlify("69dd586555ce3fcc89663801a71d957b") - - cipher = AES.new(key, AES.MODE_GCM, iv).update(aad) - self.assertEqual(digest, cipher.digest()) - - def test_2(self): - key = unhexlify("843ffcf5d2b72694d19ed01d01249412") - iv = unhexlify("dbcca32ebf9b804617c3aa9e") - aad = unhexlify("00000000000000000000000000000000" + - "101112131415161718191a1b1c1d1e1f") - pt = unhexlify("000102030405060708090a0b0c0d0e0f" + - "101112131415161718191a1b1c1d1e1f" + - "202122232425262728292a2b2c2d2e2f" + - "303132333435363738393a3b3c3d3e3f" + - "404142434445464748494a4b4c4d4e4f") - ct = unhexlify("6268c6fa2a80b2d137467f092f657ac0" + - "4d89be2beaa623d61b5a868c8f03ff95" + - "d3dcee23ad2f1ab3a6c80eaf4b140eb0" + - "5de3457f0fbc111a6b43d0763aa422a3" + - "013cf1dc37fe417d1fbfc449b75d4cc5") - digest = unhexlify("3b629ccfbc1119b7319e1dce2cd6fd6d") - - cipher = AES.new(key, AES.MODE_GCM, iv).update(aad) - ct2, digest2 = cipher.encrypt_and_digest(pt) - - self.assertEqual(ct, ct2) - self.assertEqual(digest, digest2) - - -class NISTTestVectorsGCM(unittest.TestCase): - - def __init__(self, a): - self.use_clmul = True - unittest.TestCase.__init__(self, a) - - -class NISTTestVectorsGCM_no_clmul(unittest.TestCase): - - def __init__(self, a): - self.use_clmul = False - unittest.TestCase.__init__(self, a) - - -test_vectors_nist = load_test_vectors( - ("Cipher", "AES"), - "gcmDecrypt128.rsp", - "GCM decrypt", - {"count": lambda x: int(x)}) or [] - -test_vectors_nist += load_test_vectors( - ("Cipher", "AES"), - "gcmEncryptExtIV128.rsp", - "GCM encrypt", - {"count": lambda x: int(x)}) or [] - -for idx, tv in enumerate(test_vectors_nist): - - # The test vector file contains some directive lines - if isinstance(tv, str): - continue - - def single_test(self, tv=tv): - - self.description = tv.desc - cipher = AES.new(tv.key, AES.MODE_GCM, nonce=tv.iv, - mac_len=len(tv.tag), use_clmul=self.use_clmul) - cipher.update(tv.aad) - if "FAIL" in tv.others: - self.assertRaises(ValueError, cipher.decrypt_and_verify, - tv.ct, tv.tag) - else: - pt = cipher.decrypt_and_verify(tv.ct, tv.tag) - self.assertEqual(pt, tv.pt) - - setattr(NISTTestVectorsGCM, "test_%d" % idx, single_test) - setattr(NISTTestVectorsGCM_no_clmul, "test_%d" % idx, single_test) - - -class TestVectorsWycheproof(unittest.TestCase): - - def __init__(self, wycheproof_warnings, **extra_params): - unittest.TestCase.__init__(self) - self._wycheproof_warnings = wycheproof_warnings - self._extra_params = extra_params - self._id = "None" - - def setUp(self): - - def filter_tag(group): - return group['tagSize'] // 8 - - self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), - "aes_gcm_test.json", - "Wycheproof GCM", - group_tag={'tag_size': filter_tag}) - - def shortDescription(self): - return self._id - - def warn(self, tv): - if tv.warning and self._wycheproof_warnings: - import warnings - warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) - - def test_encrypt(self, tv): - self._id = "Wycheproof Encrypt GCM Test #" + str(tv.id) - - try: - cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, - **self._extra_params) - except ValueError as e: - if len(tv.iv) == 0 and "Nonce cannot be empty" in str(e): - return - raise e - - cipher.update(tv.aad) - ct, tag = 
cipher.encrypt_and_digest(tv.msg) - if tv.valid: - self.assertEqual(ct, tv.ct) - self.assertEqual(tag, tv.tag) - self.warn(tv) - - def test_decrypt(self, tv): - self._id = "Wycheproof Decrypt GCM Test #" + str(tv.id) - - try: - cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, - **self._extra_params) - except ValueError as e: - if len(tv.iv) == 0 and "Nonce cannot be empty" in str(e): - return - raise e - - cipher.update(tv.aad) - try: - pt = cipher.decrypt_and_verify(tv.ct, tv.tag) - except ValueError: - assert not tv.valid - else: - assert tv.valid - self.assertEqual(pt, tv.msg) - self.warn(tv) - - def test_corrupt_decrypt(self, tv): - self._id = "Wycheproof Corrupt Decrypt GCM Test #" + str(tv.id) - if len(tv.iv) == 0 or len(tv.ct) < 1: - return - cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, - **self._extra_params) - cipher.update(tv.aad) - ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) - - def runTest(self): - - for tv in self.tv: - self.test_encrypt(tv) - self.test_decrypt(tv) - self.test_corrupt_decrypt(tv) - - -class TestVariableLength(unittest.TestCase): - - def __init__(self, **extra_params): - unittest.TestCase.__init__(self) - self._extra_params = extra_params - - def runTest(self): - key = b'0' * 16 - h = SHA256.new() - - for length in range(160): - nonce = '{0:04d}'.format(length).encode('utf-8') - data = bchr(length) * length - cipher = AES.new(key, AES.MODE_GCM, nonce=nonce, **self._extra_params) - ct, tag = cipher.encrypt_and_digest(data) - h.update(ct) - h.update(tag) - - self.assertEqual(h.hexdigest(), "7b7eb1ffbe67a2e53a912067c0ec8e62ebc7ce4d83490ea7426941349811bdf4") - - -def get_tests(config={}): - from Crypto.Util import _cpu_features - - wycheproof_warnings = config.get('wycheproof_warnings') - - tests = [] - tests += list_test_cases(GcmTests) - tests += list_test_cases(GcmFSMTests) - tests += [TestVectors()] - tests += [TestVectorsWycheproof(wycheproof_warnings)] - tests += list_test_cases(TestVectorsGueronKrasnov) - tests += [TestVariableLength()] - if config.get('slow_tests'): - tests += list_test_cases(NISTTestVectorsGCM) - - if _cpu_features.have_clmul(): - tests += [TestVectorsWycheproof(wycheproof_warnings, use_clmul=False)] - tests += [TestVariableLength(use_clmul=False)] - if config.get('slow_tests'): - tests += list_test_cases(NISTTestVectorsGCM_no_clmul) - else: - print("Skipping test of PCLMULDQD in AES GCM") - - return tests - - -if __name__ == '__main__': - def suite(): - unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/asafAdge/Detic/detic/modeling/backbone/timm.py b/spaces/asafAdge/Detic/detic/modeling/backbone/timm.py deleted file mode 100644 index f06b25c8036d99bb6b9518662ab1664a4521b8f5..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/detic/modeling/backbone/timm.py +++ /dev/null @@ -1,200 +0,0 @@ - #!/usr/bin/env python -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
-import math -from os.path import join -import numpy as np -import copy -from functools import partial - -import torch -from torch import nn -import torch.utils.model_zoo as model_zoo -import torch.nn.functional as F -import fvcore.nn.weight_init as weight_init - -from detectron2.modeling.backbone import FPN -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.layers.batch_norm import get_norm, FrozenBatchNorm2d -from detectron2.modeling.backbone import Backbone - -from timm import create_model -from timm.models.helpers import build_model_with_cfg -from timm.models.registry import register_model -from timm.models.resnet import ResNet, Bottleneck -from timm.models.resnet import default_cfgs as default_cfgs_resnet - - -class CustomResNet(ResNet): - def __init__(self, **kwargs): - self.out_indices = kwargs.pop('out_indices') - super().__init__(**kwargs) - - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.act1(x) - x = self.maxpool(x) - ret = [x] - x = self.layer1(x) - ret.append(x) - x = self.layer2(x) - ret.append(x) - x = self.layer3(x) - ret.append(x) - x = self.layer4(x) - ret.append(x) - return [ret[i] for i in self.out_indices] - - - def load_pretrained(self, cached_file): - data = torch.load(cached_file, map_location='cpu') - if 'state_dict' in data: - self.load_state_dict(data['state_dict']) - else: - self.load_state_dict(data) - - -model_params = { - 'resnet50': dict(block=Bottleneck, layers=[3, 4, 6, 3]), - 'resnet50_in21k': dict(block=Bottleneck, layers=[3, 4, 6, 3]), -} - - -def create_timm_resnet(variant, out_indices, pretrained=False, **kwargs): - params = model_params[variant] - default_cfgs_resnet['resnet50_in21k'] = \ - copy.deepcopy(default_cfgs_resnet['resnet50']) - default_cfgs_resnet['resnet50_in21k']['url'] = \ - 'https://miil-public-eu.oss-eu-central-1.aliyuncs.com/model-zoo/ImageNet_21K_P/models/resnet50_miil_21k.pth' - default_cfgs_resnet['resnet50_in21k']['num_classes'] = 11221 - - return build_model_with_cfg( - CustomResNet, variant, pretrained, - default_cfg=default_cfgs_resnet[variant], - out_indices=out_indices, - pretrained_custom_load=True, - **params, - **kwargs) - - -class LastLevelP6P7_P5(nn.Module): - """ - """ - def __init__(self, in_channels, out_channels): - super().__init__() - self.num_levels = 2 - self.in_feature = "p5" - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -def freeze_module(x): - """ - """ - for p in x.parameters(): - p.requires_grad = False - FrozenBatchNorm2d.convert_frozen_batchnorm(x) - return x - - -class TIMM(Backbone): - def __init__(self, base_name, out_levels, freeze_at=0, norm='FrozenBN'): - super().__init__() - out_indices = [x - 1 for x in out_levels] - if 'resnet' in base_name: - self.base = create_timm_resnet( - base_name, out_indices=out_indices, - pretrained=False) - elif 'eff' in base_name: - self.base = create_model( - base_name, features_only=True, - out_indices=out_indices, pretrained=True) - else: - assert 0, base_name - feature_info = [dict(num_chs=f['num_chs'], reduction=f['reduction']) \ - for i, f in enumerate(self.base.feature_info)] - self._out_features = ['layer{}'.format(x) for x in out_levels] - self._out_feature_channels = { - 'layer{}'.format(l): feature_info[l - 1]['num_chs'] for l in out_levels} - 
self._out_feature_strides = { - 'layer{}'.format(l): feature_info[l - 1]['reduction'] for l in out_levels} - self._size_divisibility = max(self._out_feature_strides.values()) - if 'resnet' in base_name: - self.freeze(freeze_at) - if norm == 'FrozenBN': - self = FrozenBatchNorm2d.convert_frozen_batchnorm(self) - - def freeze(self, freeze_at=0): - """ - """ - if freeze_at >= 1: - print('Frezing', self.base.conv1) - self.base.conv1 = freeze_module(self.base.conv1) - if freeze_at >= 2: - print('Frezing', self.base.layer1) - self.base.layer1 = freeze_module(self.base.layer1) - - def forward(self, x): - features = self.base(x) - ret = {k: v for k, v in zip(self._out_features, features)} - return ret - - @property - def size_divisibility(self): - return self._size_divisibility - - -@BACKBONE_REGISTRY.register() -def build_timm_backbone(cfg, input_shape): - model = TIMM( - cfg.MODEL.TIMM.BASE_NAME, - cfg.MODEL.TIMM.OUT_LEVELS, - freeze_at=cfg.MODEL.TIMM.FREEZE_AT, - norm=cfg.MODEL.TIMM.NORM, - ) - return model - - -@BACKBONE_REGISTRY.register() -def build_p67_timm_fpn_backbone(cfg, input_shape): - """ - """ - bottom_up = build_timm_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_p35_timm_fpn_backbone(cfg, input_shape): - """ - """ - bottom_up = build_timm_backbone(cfg, input_shape) - - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=None, - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone \ No newline at end of file diff --git a/spaces/ashercn97/AsherTesting/modules/evaluate.py b/spaces/ashercn97/AsherTesting/modules/evaluate.py deleted file mode 100644 index d94863d978e51e3240b967df622a5fd313713501..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/evaluate.py +++ /dev/null @@ -1,154 +0,0 @@ -import datetime -from pathlib import Path - -import pandas as pd -import torch -from datasets import load_dataset -from tqdm import tqdm - -from modules import shared -from modules.models import load_model, unload_model -from modules.models_settings import ( - get_model_settings_from_yamls, - update_model_parameters -) -from modules.text_generation import encode - - -def load_past_evaluations(): - if Path('logs/evaluations.csv').exists(): - df = pd.read_csv(Path('logs/evaluations.csv'), dtype=str) - df['Perplexity'] = pd.to_numeric(df['Perplexity']) - return df - else: - return pd.DataFrame(columns=['Model', 'LoRAs', 'Dataset', 'Perplexity', 'stride', 'max_length', 'Date', 'Comment']) - - -past_evaluations = load_past_evaluations() - - -def save_past_evaluations(df): - global past_evaluations - past_evaluations = df - filepath = Path('logs/evaluations.csv') - filepath.parent.mkdir(parents=True, exist_ok=True) - df.to_csv(filepath, index=False) - - -def calculate_perplexity(models, input_dataset, stride, _max_length): - ''' - Based on: - https://huggingface.co/docs/transformers/perplexity#calculating-ppl-with-fixedlength-models - ''' - - global past_evaluations - cumulative_log = '' - cumulative_log += "Loading the input dataset...\n\n" - yield 
cumulative_log - - # Copied from https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/triton/utils/datautils.py - if input_dataset == 'wikitext': - data = load_dataset('wikitext', 'wikitext-2-raw-v1', split='test') - text = "\n\n".join(data['text']) - elif input_dataset == 'ptb': - data = load_dataset('ptb_text_only', 'penn_treebank', split='validation') - text = "\n\n".join(data['sentence']) - elif input_dataset == 'ptb_new': - data = load_dataset('ptb_text_only', 'penn_treebank', split='test') - text = " ".join(data['sentence']) - else: - with open(Path(f'training/datasets/{input_dataset}.txt'), 'r', encoding='utf-8') as f: - text = f.read() - - for model in models: - if is_in_past_evaluations(model, input_dataset, stride, _max_length): - cumulative_log += f"{model} has already been tested. Ignoring.\n\n" - yield cumulative_log - continue - - if model != 'current model': - try: - yield cumulative_log + f"Loading {model}...\n\n" - model_settings = get_model_settings_from_yamls(model) - shared.settings.update(model_settings) # hijacking the interface defaults - update_model_parameters(model_settings) # hijacking the command-line arguments - shared.model_name = model - unload_model() - shared.model, shared.tokenizer = load_model(shared.model_name) - except: - cumulative_log += f"Failed to load {model}. Moving on.\n\n" - yield cumulative_log - continue - - cumulative_log += f"Processing {shared.model_name}...\n\n" - yield cumulative_log + "Tokenizing the input dataset...\n\n" - encodings = encode(text, add_special_tokens=False) - seq_len = encodings.shape[1] - if _max_length: - max_length = _max_length - elif hasattr(shared.model.config, 'max_position_embeddings'): - max_length = shared.model.config.max_position_embeddings - else: - max_length = 2048 - - nlls = [] - prev_end_loc = 0 - for begin_loc in tqdm(range(0, seq_len, stride)): - yield cumulative_log + f"Evaluating... {100*begin_loc/seq_len:.2f}%" - end_loc = min(begin_loc + max_length, seq_len) - trg_len = end_loc - prev_end_loc # may be different from stride on last loop - input_ids = encodings[:, begin_loc:end_loc] - target_ids = input_ids.clone() - target_ids[:, :-trg_len] = -100 - - with torch.no_grad(): - outputs = shared.model(input_ids=input_ids, labels=target_ids) - - # loss is calculated using CrossEntropyLoss which averages over valid labels - # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels - # to the left by 1. 
- neg_log_likelihood = outputs.loss - - nlls.append(neg_log_likelihood) - - prev_end_loc = end_loc - if end_loc == seq_len: - break - - ppl = torch.exp(torch.stack(nlls).mean()) - add_entry_to_past_evaluations(float(ppl), shared.model_name, input_dataset, stride, _max_length) - save_past_evaluations(past_evaluations) - cumulative_log += f"The perplexity for {shared.model_name} is: {float(ppl)}\n\n" - yield cumulative_log - - -def add_entry_to_past_evaluations(perplexity, model, dataset, stride, max_length): - global past_evaluations - entry = { - 'Model': model, - 'LoRAs': ', '.join(shared.lora_names) or '-', - 'Dataset': dataset, - 'Perplexity': perplexity, - 'stride': str(stride), - 'max_length': str(max_length), - 'Date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), - 'Comment': '' - } - past_evaluations = pd.concat([past_evaluations, pd.DataFrame([entry])], ignore_index=True) - - -def is_in_past_evaluations(model, dataset, stride, max_length): - entries = past_evaluations[(past_evaluations['Model'] == model) & - (past_evaluations['Dataset'] == dataset) & - (past_evaluations['max_length'] == str(max_length)) & - (past_evaluations['stride'] == str(stride))] - - if entries.shape[0] > 0: - return True - else: - return False - - -def generate_markdown_table(): - sorted_df = past_evaluations.sort_values(by=['Dataset', 'stride', 'Perplexity', 'Date']) - return sorted_df diff --git a/spaces/ashishraics/FillTheBlanks/app.py b/spaces/ashishraics/FillTheBlanks/app.py deleted file mode 100644 index 5746c46f468c16e1a8edc0af8880e4543281f67b..0000000000000000000000000000000000000000 --- a/spaces/ashishraics/FillTheBlanks/app.py +++ /dev/null @@ -1,195 +0,0 @@ -import logging -import streamlit as st -from annotated_text import annotated_text -import nltk -nltk.download('stopwords') -nltk.download('wordnet') -nltk.download('punkt') -from nltk.corpus import stopwords,wordnet -from nltk.tokenize import sent_tokenize -from flashtext import KeywordProcessor -import regex as re -import string -import subprocess -from PIL import Image -import multiprocessing -total_threads=multiprocessing.cpu_count() - -try: - import pke - logging.error("importing pke info") -except: - logging.error("installing pke info") - subprocess.run(['pip3', 'install','git+https://github.com/boudinfl/pke.git']) - subprocess.run(['python3' ,'-m' ,'spacy' ,'download' ,'en']) - import pke - -st.set_page_config( # Alternate names: setup_page, page, layout - layout="wide", # Can be "centered" or "wide". In the future also "dashboard", etc. - initial_sidebar_state="auto", # Can be "auto", "expanded", "collapsed" - page_title='None', # String or None. Strings get appended with "• Streamlit". -) - -def set_page_title(title): - st.sidebar.markdown(unsafe_allow_html=True, body=f""" - -
    - - - \ No newline at end of file diff --git a/spaces/ppsingh/cpu-demo/appStore/multiapp.py b/spaces/ppsingh/cpu-demo/appStore/multiapp.py deleted file mode 100644 index 60c87c8fced14801adef8915972036657f7bf97c..0000000000000000000000000000000000000000 --- a/spaces/ppsingh/cpu-demo/appStore/multiapp.py +++ /dev/null @@ -1,67 +0,0 @@ -"""Frameworks for running multiple Streamlit applications as a single app. -""" -import streamlit as st -from PIL import Image -from utils.uploadAndExample import add_upload - -class MultiApp: - """Framework for combining multiple streamlit applications. - Usage: - def foo(): - st.title("Hello Foo") - def bar(): - st.title("Hello Bar") - app = MultiApp() - app.add_app("Foo", foo) - app.add_app("Bar", bar) - app.run() - It is also possible keep each application in a separate file. - import foo - import bar - app = MultiApp() - app.add_app("Foo", foo.app) - app.add_app("Bar", bar.app) - app.run() - """ - def __init__(self): - self.apps = [] - - def add_app(self,title,icon, func): - """Adds a new application. - Parameters - ---------- - func: - the python function to render this app. - title: - title of the app. Appears in the dropdown in the sidebar. - """ - self.apps.append({ - "title": title, - "icon": icon, - "function": func - }) - - def run(self): - - st.sidebar.write(format_func=lambda app: app['title']) - #image = Image.open('docStore/img/dsc_giz.png') - #st.sidebar.image(image, width =200) - - with st.sidebar: - selected = st.selectbox("Select the Task to perform", [page["title"] for page in self.apps],) - st.markdown("---") - - - for index, item in enumerate(self.apps): - if item["title"] == selected: - self.apps[index]["function"]() - break - - - choice = st.sidebar.radio(label = 'Select the Document', - help = 'You can upload the document \ - or else you can try a example document', - options = ('Upload Document', 'Try Example'), - horizontal = True) - add_upload(choice) - \ No newline at end of file diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/propkeydef.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/propkeydef.h deleted file mode 100644 index a361044e56a71f1139e10d0fd1d9579c75a1ec2b..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/propkeydef.h +++ /dev/null @@ -1,26 +0,0 @@ -#ifndef PID_FIRST_USABLE -#define PID_FIRST_USABLE 2 -#endif - -#ifndef REFPROPERTYKEY -#ifdef __cplusplus -#define REFPROPERTYKEY const PROPERTYKEY & -#else // !__cplusplus -#define REFPROPERTYKEY const PROPERTYKEY * __MIDL_CONST -#endif // __cplusplus -#endif //REFPROPERTYKEY - -#ifdef DEFINE_PROPERTYKEY -#undef DEFINE_PROPERTYKEY -#endif - -#ifdef INITGUID -#define DEFINE_PROPERTYKEY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8, pid) EXTERN_C const PROPERTYKEY DECLSPEC_SELECTANY name = { { l, w1, w2, { b1, b2, b3, b4, b5, b6, b7, b8 } }, pid } -#else -#define DEFINE_PROPERTYKEY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8, pid) EXTERN_C const PROPERTYKEY name -#endif // INITGUID - -#ifndef IsEqualPropertyKey -#define IsEqualPropertyKey(a, b) (((a).pid == (b).pid) && IsEqualIID((a).fmtid, (b).fmtid) ) -#endif // IsEqualPropertyKey - diff --git a/spaces/priyank-m/m_OCR/app.py b/spaces/priyank-m/m_OCR/app.py deleted file mode 100644 index 41e0e22aa97d2deeb684eb54f57e4bf0ab977c68..0000000000000000000000000000000000000000 --- a/spaces/priyank-m/m_OCR/app.py +++ /dev/null @@ -1,23 +0,0 @@ -from transformers import ViTFeatureExtractor, BertTokenizer, 
VisionEncoderDecoderModel, AutoTokenizer, AutoFeatureExtractor -import gradio as gr - -title="Multilingual OCR (currently recognises: English and Chinese)" -description="m_OCR(multilingual OCR) is a Vision-Encoder-Decoder model (based on the concept of TrOCR) which uses pre-trained facebook's vit-mae-large as the encoder and xlm-roberta-base as the decoder. \nIt has been trained on IAM, SROIE 2019, text_renderer Chinese (synthetic) and TRDG (synthetic) datasets (amounting to approx 1.4 Million samples) for English and Chinese document text-recognition." -examples =[["demo_image/img1.png"], ["demo_image/img2.jpeg"], ["demo_image/img3.jpeg"], ["demo_image/img4.jpeg"], ["demo_image/img5.jpeg"], ["demo_image/img6.jpeg"]] - - - -model=VisionEncoderDecoderModel.from_pretrained("priyank-m/m_OCR") -tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") -feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-mae-large") - -def run_ocr(image): - pixel_values = feature_extractor(image, return_tensors="pt").pixel_values - # autoregressively generate caption (uses greedy decoding by default ) - generated_ids = model.generate(pixel_values, max_new_tokens=50) - generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] - return generated_text - - -demo = gr.Interface(fn=run_ocr, inputs="image", outputs="text", title=title, description=description, examples=examples) -demo.launch() \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/XbmImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/XbmImagePlugin.py deleted file mode 100644 index 71cd57d74da7e2b16a3d661b41eb37ec91fd90f4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/XbmImagePlugin.py +++ /dev/null @@ -1,94 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# XBM File handling -# -# History: -# 1995-09-08 fl Created -# 1996-11-01 fl Added save support -# 1997-07-07 fl Made header parser more tolerant -# 1997-07-22 fl Fixed yet another parser bug -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.4) -# 2001-05-13 fl Added hotspot handling (based on code from Bernhard Herzog) -# 2004-02-24 fl Allow some whitespace before first #define -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import re - -from . import Image, ImageFile - -# XBM header -xbm_head = re.compile( - rb"\s*#define[ \t]+.*_width[ \t]+(?P[0-9]+)[\r\n]+" - b"#define[ \t]+.*_height[ \t]+(?P[0-9]+)[\r\n]+" - b"(?P" - b"#define[ \t]+[^_]*_x_hot[ \t]+(?P[0-9]+)[\r\n]+" - b"#define[ \t]+[^_]*_y_hot[ \t]+(?P[0-9]+)[\r\n]+" - b")?" - rb"[\000-\377]*_bits\[]" -) - - -def _accept(prefix): - return prefix.lstrip()[:7] == b"#define" - - -## -# Image plugin for X11 bitmaps. 
- - -class XbmImageFile(ImageFile.ImageFile): - format = "XBM" - format_description = "X11 Bitmap" - - def _open(self): - m = xbm_head.match(self.fp.read(512)) - - if not m: - msg = "not a XBM file" - raise SyntaxError(msg) - - xsize = int(m.group("width")) - ysize = int(m.group("height")) - - if m.group("hotspot"): - self.info["hotspot"] = (int(m.group("xhot")), int(m.group("yhot"))) - - self._mode = "1" - self._size = xsize, ysize - - self.tile = [("xbm", (0, 0) + self.size, m.end(), None)] - - -def _save(im, fp, filename): - if im.mode != "1": - msg = f"cannot write mode {im.mode} as XBM" - raise OSError(msg) - - fp.write(f"#define im_width {im.size[0]}\n".encode("ascii")) - fp.write(f"#define im_height {im.size[1]}\n".encode("ascii")) - - hotspot = im.encoderinfo.get("hotspot") - if hotspot: - fp.write(f"#define im_x_hot {hotspot[0]}\n".encode("ascii")) - fp.write(f"#define im_y_hot {hotspot[1]}\n".encode("ascii")) - - fp.write(b"static char im_bits[] = {\n") - - ImageFile._save(im, fp, [("xbm", (0, 0) + im.size, 0, None)]) - - fp.write(b"};\n") - - -Image.register_open(XbmImageFile.format, XbmImageFile, _accept) -Image.register_save(XbmImageFile.format, _save) - -Image.register_extension(XbmImageFile.format, ".xbm") - -Image.register_mime(XbmImageFile.format, "image/xbm") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/tooltip/src/index.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/tooltip/src/index.ts deleted file mode 100644 index b89ac870e5bbf6d90058c6e25cb9d74b626b3809..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/tooltip/src/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { tooltip } from "./tooltip"; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/row.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/row.py deleted file mode 100644 index 7af6922478118e1fc28569218ac081a6dc3d34a8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/layouts/row.py +++ /dev/null @@ -1,66 +0,0 @@ -from __future__ import annotations - -from typing import Literal - -from gradio_client.documentation import document, set_documentation_group - -from gradio.blocks import BlockContext -from gradio.component_meta import ComponentMeta - -set_documentation_group("layout") - - -@document() -class Row(BlockContext, metaclass=ComponentMeta): - """ - Row is a layout element within Blocks that renders all children horizontally. - Example: - with gr.Blocks() as demo: - with gr.Row(): - gr.Image("lion.jpg", scale=2) - gr.Image("tiger.jpg", scale=1) - demo.launch() - Guides: controlling-layout - """ - - EVENTS = [] - - def __init__( - self, - *, - variant: Literal["default", "panel", "compact"] = "default", - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - equal_height: bool = True, - ): - """ - Parameters: - variant: row type, 'default' (no background), 'panel' (gray background color and rounded corners), or 'compact' (rounded corners and no internal gap). - visible: If False, row will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. 
- elem_classes: An optional string or list of strings that are assigned as the class of this component in the HTML DOM. Can be used for targeting CSS styles. - render: If False, this layout will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. - equal_height: If True, makes every child element have equal height - """ - self.variant = variant - self.equal_height = equal_height - if variant == "compact": - self.allow_expected_parents = False - BlockContext.__init__( - self, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - ) - - @staticmethod - def update( - visible: bool | None = None, - ): - return { - "visible": visible, - "__type__": "update", - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-9cefd3b7.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-9cefd3b7.css deleted file mode 100644 index 0058e6525e65d83003fa1fb51e3792599e90d91a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-9cefd3b7.css +++ /dev/null @@ -1 +0,0 @@ -.gallery.svelte-zvfedn{padding:var(--size-2)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/idna/core.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/idna/core.py deleted file mode 100644 index 4f3003711020eac05ef5a19ab29ba5670d89f642..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/idna/core.py +++ /dev/null @@ -1,400 +0,0 @@ -from . import idnadata -import bisect -import unicodedata -import re -from typing import Union, Optional -from .intranges import intranges_contain - -_virama_combining_class = 9 -_alabel_prefix = b'xn--' -_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]') - -class IDNAError(UnicodeError): - """ Base exception for all IDNA-encoding related problems """ - pass - - -class IDNABidiError(IDNAError): - """ Exception when bidirectional requirements are not satisfied """ - pass - - -class InvalidCodepoint(IDNAError): - """ Exception when a disallowed or unallocated codepoint is used """ - pass - - -class InvalidCodepointContext(IDNAError): - """ Exception when the codepoint is not valid in the context it is used """ - pass - - -def _combining_class(cp: int) -> int: - v = unicodedata.combining(chr(cp)) - if v == 0: - if not unicodedata.name(chr(cp)): - raise ValueError('Unknown character in unicodedata') - return v - -def _is_script(cp: str, script: str) -> bool: - return intranges_contain(ord(cp), idnadata.scripts[script]) - -def _punycode(s: str) -> bytes: - return s.encode('punycode') - -def _unot(s: int) -> str: - return 'U+{:04X}'.format(s) - - -def valid_label_length(label: Union[bytes, str]) -> bool: - if len(label) > 63: - return False - return True - - -def valid_string_length(label: Union[bytes, str], trailing_dot: bool) -> bool: - if len(label) > (254 if trailing_dot else 253): - return False - return True - - -def check_bidi(label: str, check_ltr: bool = False) -> bool: - # Bidi rules should only be applied if string contains RTL characters - bidi_label = False - for (idx, cp) in enumerate(label, 1): - direction = unicodedata.bidirectional(cp) - if direction == '': - # String likely comes from a newer version of Unicode - raise IDNABidiError('Unknown directionality in label {} at position 
{}'.format(repr(label), idx)) - if direction in ['R', 'AL', 'AN']: - bidi_label = True - if not bidi_label and not check_ltr: - return True - - # Bidi rule 1 - direction = unicodedata.bidirectional(label[0]) - if direction in ['R', 'AL']: - rtl = True - elif direction == 'L': - rtl = False - else: - raise IDNABidiError('First codepoint in label {} must be directionality L, R or AL'.format(repr(label))) - - valid_ending = False - number_type = None # type: Optional[str] - for (idx, cp) in enumerate(label, 1): - direction = unicodedata.bidirectional(cp) - - if rtl: - # Bidi rule 2 - if not direction in ['R', 'AL', 'AN', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']: - raise IDNABidiError('Invalid direction for codepoint at position {} in a right-to-left label'.format(idx)) - # Bidi rule 3 - if direction in ['R', 'AL', 'EN', 'AN']: - valid_ending = True - elif direction != 'NSM': - valid_ending = False - # Bidi rule 4 - if direction in ['AN', 'EN']: - if not number_type: - number_type = direction - else: - if number_type != direction: - raise IDNABidiError('Can not mix numeral types in a right-to-left label') - else: - # Bidi rule 5 - if not direction in ['L', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']: - raise IDNABidiError('Invalid direction for codepoint at position {} in a left-to-right label'.format(idx)) - # Bidi rule 6 - if direction in ['L', 'EN']: - valid_ending = True - elif direction != 'NSM': - valid_ending = False - - if not valid_ending: - raise IDNABidiError('Label ends with illegal codepoint directionality') - - return True - - -def check_initial_combiner(label: str) -> bool: - if unicodedata.category(label[0])[0] == 'M': - raise IDNAError('Label begins with an illegal combining character') - return True - - -def check_hyphen_ok(label: str) -> bool: - if label[2:4] == '--': - raise IDNAError('Label has disallowed hyphens in 3rd and 4th position') - if label[0] == '-' or label[-1] == '-': - raise IDNAError('Label must not start or end with a hyphen') - return True - - -def check_nfc(label: str) -> None: - if unicodedata.normalize('NFC', label) != label: - raise IDNAError('Label must be in Normalization Form C') - - -def valid_contextj(label: str, pos: int) -> bool: - cp_value = ord(label[pos]) - - if cp_value == 0x200c: - - if pos > 0: - if _combining_class(ord(label[pos - 1])) == _virama_combining_class: - return True - - ok = False - for i in range(pos-1, -1, -1): - joining_type = idnadata.joining_types.get(ord(label[i])) - if joining_type == ord('T'): - continue - if joining_type in [ord('L'), ord('D')]: - ok = True - break - - if not ok: - return False - - ok = False - for i in range(pos+1, len(label)): - joining_type = idnadata.joining_types.get(ord(label[i])) - if joining_type == ord('T'): - continue - if joining_type in [ord('R'), ord('D')]: - ok = True - break - return ok - - if cp_value == 0x200d: - - if pos > 0: - if _combining_class(ord(label[pos - 1])) == _virama_combining_class: - return True - return False - - else: - - return False - - -def valid_contexto(label: str, pos: int, exception: bool = False) -> bool: - cp_value = ord(label[pos]) - - if cp_value == 0x00b7: - if 0 < pos < len(label)-1: - if ord(label[pos - 1]) == 0x006c and ord(label[pos + 1]) == 0x006c: - return True - return False - - elif cp_value == 0x0375: - if pos < len(label)-1 and len(label) > 1: - return _is_script(label[pos + 1], 'Greek') - return False - - elif cp_value == 0x05f3 or cp_value == 0x05f4: - if pos > 0: - return _is_script(label[pos - 1], 'Hebrew') - return False - - elif cp_value == 
0x30fb: - for cp in label: - if cp == '\u30fb': - continue - if _is_script(cp, 'Hiragana') or _is_script(cp, 'Katakana') or _is_script(cp, 'Han'): - return True - return False - - elif 0x660 <= cp_value <= 0x669: - for cp in label: - if 0x6f0 <= ord(cp) <= 0x06f9: - return False - return True - - elif 0x6f0 <= cp_value <= 0x6f9: - for cp in label: - if 0x660 <= ord(cp) <= 0x0669: - return False - return True - - return False - - -def check_label(label: Union[str, bytes, bytearray]) -> None: - if isinstance(label, (bytes, bytearray)): - label = label.decode('utf-8') - if len(label) == 0: - raise IDNAError('Empty Label') - - check_nfc(label) - check_hyphen_ok(label) - check_initial_combiner(label) - - for (pos, cp) in enumerate(label): - cp_value = ord(cp) - if intranges_contain(cp_value, idnadata.codepoint_classes['PVALID']): - continue - elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTJ']): - try: - if not valid_contextj(label, pos): - raise InvalidCodepointContext('Joiner {} not allowed at position {} in {}'.format( - _unot(cp_value), pos+1, repr(label))) - except ValueError: - raise IDNAError('Unknown codepoint adjacent to joiner {} at position {} in {}'.format( - _unot(cp_value), pos+1, repr(label))) - elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTO']): - if not valid_contexto(label, pos): - raise InvalidCodepointContext('Codepoint {} not allowed at position {} in {}'.format(_unot(cp_value), pos+1, repr(label))) - else: - raise InvalidCodepoint('Codepoint {} at position {} of {} not allowed'.format(_unot(cp_value), pos+1, repr(label))) - - check_bidi(label) - - -def alabel(label: str) -> bytes: - try: - label_bytes = label.encode('ascii') - ulabel(label_bytes) - if not valid_label_length(label_bytes): - raise IDNAError('Label too long') - return label_bytes - except UnicodeEncodeError: - pass - - if not label: - raise IDNAError('No Input') - - label = str(label) - check_label(label) - label_bytes = _punycode(label) - label_bytes = _alabel_prefix + label_bytes - - if not valid_label_length(label_bytes): - raise IDNAError('Label too long') - - return label_bytes - - -def ulabel(label: Union[str, bytes, bytearray]) -> str: - if not isinstance(label, (bytes, bytearray)): - try: - label_bytes = label.encode('ascii') - except UnicodeEncodeError: - check_label(label) - return label - else: - label_bytes = label - - label_bytes = label_bytes.lower() - if label_bytes.startswith(_alabel_prefix): - label_bytes = label_bytes[len(_alabel_prefix):] - if not label_bytes: - raise IDNAError('Malformed A-label, no Punycode eligible content found') - if label_bytes.decode('ascii')[-1] == '-': - raise IDNAError('A-label must not end with a hyphen') - else: - check_label(label_bytes) - return label_bytes.decode('ascii') - - try: - label = label_bytes.decode('punycode') - except UnicodeError: - raise IDNAError('Invalid A-label') - check_label(label) - return label - - -def uts46_remap(domain: str, std3_rules: bool = True, transitional: bool = False) -> str: - """Re-map the characters in the string according to UTS46 processing.""" - from .uts46data import uts46data - output = '' - - for pos, char in enumerate(domain): - code_point = ord(char) - try: - uts46row = uts46data[code_point if code_point < 256 else - bisect.bisect_left(uts46data, (code_point, 'Z')) - 1] - status = uts46row[1] - replacement = None # type: Optional[str] - if len(uts46row) == 3: - replacement = uts46row[2] # type: ignore - if (status == 'V' or - (status == 'D' and not transitional) or - (status 
== '3' and not std3_rules and replacement is None)): - output += char - elif replacement is not None and (status == 'M' or - (status == '3' and not std3_rules) or - (status == 'D' and transitional)): - output += replacement - elif status != 'I': - raise IndexError() - except IndexError: - raise InvalidCodepoint( - 'Codepoint {} not allowed at position {} in {}'.format( - _unot(code_point), pos + 1, repr(domain))) - - return unicodedata.normalize('NFC', output) - - -def encode(s: Union[str, bytes, bytearray], strict: bool = False, uts46: bool = False, std3_rules: bool = False, transitional: bool = False) -> bytes: - if isinstance(s, (bytes, bytearray)): - try: - s = s.decode('ascii') - except UnicodeDecodeError: - raise IDNAError('should pass a unicode string to the function rather than a byte string.') - if uts46: - s = uts46_remap(s, std3_rules, transitional) - trailing_dot = False - result = [] - if strict: - labels = s.split('.') - else: - labels = _unicode_dots_re.split(s) - if not labels or labels == ['']: - raise IDNAError('Empty domain') - if labels[-1] == '': - del labels[-1] - trailing_dot = True - for label in labels: - s = alabel(label) - if s: - result.append(s) - else: - raise IDNAError('Empty label') - if trailing_dot: - result.append(b'') - s = b'.'.join(result) - if not valid_string_length(s, trailing_dot): - raise IDNAError('Domain too long') - return s - - -def decode(s: Union[str, bytes, bytearray], strict: bool = False, uts46: bool = False, std3_rules: bool = False) -> str: - try: - if isinstance(s, (bytes, bytearray)): - s = s.decode('ascii') - except UnicodeDecodeError: - raise IDNAError('Invalid ASCII in A-label') - if uts46: - s = uts46_remap(s, std3_rules, False) - trailing_dot = False - result = [] - if not strict: - labels = _unicode_dots_re.split(s) - else: - labels = s.split('.') - if not labels or labels == ['']: - raise IDNAError('Empty domain') - if not labels[-1]: - del labels[-1] - trailing_dot = True - for label in labels: - s = ulabel(label) - if s: - result.append(s) - else: - raise IDNAError('Empty label') - if trailing_dot: - result.append('') - return '.'.join(result) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/rules_block/fence.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/rules_block/fence.py deleted file mode 100644 index 263f1b8de8dcdd0dd736eeafab2d9da34ec2c205..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/rules_block/fence.py +++ /dev/null @@ -1,101 +0,0 @@ -# fences (``` lang, ~~~ lang) -import logging - -from .state_block import StateBlock - -LOGGER = logging.getLogger(__name__) - - -def fence(state: StateBlock, startLine: int, endLine: int, silent: bool) -> bool: - LOGGER.debug("entering fence: %s, %s, %s, %s", state, startLine, endLine, silent) - - haveEndMarker = False - pos = state.bMarks[startLine] + state.tShift[startLine] - maximum = state.eMarks[startLine] - - if state.is_code_block(startLine): - return False - - if pos + 3 > maximum: - return False - - marker = state.src[pos] - - if marker not in ("~", "`"): - return False - - # scan marker length - mem = pos - pos = state.skipCharsStr(pos, marker) - - length = pos - mem - - if length < 3: - return False - - markup = state.src[mem:pos] - params = state.src[pos:maximum] - - if marker == "`" and marker in params: - return False - - # Since start is found, we can report success here in validation mode - if silent: - return True - - # 
search end of block - nextLine = startLine - - while True: - nextLine += 1 - if nextLine >= endLine: - # unclosed block should be autoclosed by end of document. - # also block seems to be autoclosed by end of parent - break - - pos = mem = state.bMarks[nextLine] + state.tShift[nextLine] - maximum = state.eMarks[nextLine] - - if pos < maximum and state.sCount[nextLine] < state.blkIndent: - # non-empty line with negative indent should stop the list: - # - ``` - # test - break - - try: - if state.src[pos] != marker: - continue - except IndexError: - break - - if state.is_code_block(nextLine): - continue - - pos = state.skipCharsStr(pos, marker) - - # closing code fence must be at least as long as the opening one - if pos - mem < length: - continue - - # make sure tail has spaces only - pos = state.skipSpaces(pos) - - if pos < maximum: - continue - - haveEndMarker = True - # found! - break - - # If a fence has heading spaces, they should be removed from its inner block - length = state.sCount[startLine] - - state.line = nextLine + (1 if haveEndMarker else 0) - - token = state.push("fence", "code", 0) - token.info = params - token.content = state.getLines(startLine + 1, nextLine, length, True) - token.markup = markup - token.map = [startLine, state.line] - - return True diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_core/_internal.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_core/_internal.py deleted file mode 100644 index 52a8e907292ebbadb481c78be2522aa37a5ba533..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_core/_internal.py +++ /dev/null @@ -1,6 +0,0 @@ -from numpy.core import _internal - -_globals = globals() - -for item in _internal.__dir__(): - _globals[item] = getattr(_internal, item) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/__init__.py deleted file mode 100644 index 7cd55b7240d54a9289f05aa4223b702d49e2ec94..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/__init__.py +++ /dev/null @@ -1,94 +0,0 @@ -""" -Base test suite for extension arrays. - -These tests are intended for third-party libraries to subclass to validate -that their extension arrays and dtypes satisfy the interface. Moving or -renaming the tests should not be done lightly. - -Libraries are expected to implement a few pytest fixtures to provide data -for the tests. The fixtures may be located in either - -* The same module as your test class. -* A ``conftest.py`` in the same directory as your test class. - -The full list of fixtures may be found in the ``conftest.py`` next to this -file. - -.. code-block:: python - - import pytest - from pandas.tests.extension.base import BaseDtypeTests - - - @pytest.fixture - def dtype(): - return MyDtype() - - - class TestMyDtype(BaseDtypeTests): - pass - - -Your class ``TestDtype`` will inherit all the tests defined on -``BaseDtypeTests``. pytest's fixture discover will supply your ``dtype`` -wherever the test requires it. You're free to implement additional tests. 
- -""" -from pandas.tests.extension.base.accumulate import BaseAccumulateTests -from pandas.tests.extension.base.casting import BaseCastingTests -from pandas.tests.extension.base.constructors import BaseConstructorsTests -from pandas.tests.extension.base.dim2 import ( # noqa: F401 - Dim2CompatTests, - NDArrayBacked2DTests, -) -from pandas.tests.extension.base.dtype import BaseDtypeTests -from pandas.tests.extension.base.getitem import BaseGetitemTests -from pandas.tests.extension.base.groupby import BaseGroupbyTests -from pandas.tests.extension.base.index import BaseIndexTests -from pandas.tests.extension.base.interface import BaseInterfaceTests -from pandas.tests.extension.base.io import BaseParsingTests -from pandas.tests.extension.base.methods import BaseMethodsTests -from pandas.tests.extension.base.missing import BaseMissingTests -from pandas.tests.extension.base.ops import ( # noqa: F401 - BaseArithmeticOpsTests, - BaseComparisonOpsTests, - BaseOpsUtil, - BaseUnaryOpsTests, -) -from pandas.tests.extension.base.printing import BasePrintingTests -from pandas.tests.extension.base.reduce import ( # noqa: F401 - BaseBooleanReduceTests, - BaseNoReduceTests, - BaseNumericReduceTests, - BaseReduceTests, -) -from pandas.tests.extension.base.reshaping import BaseReshapingTests -from pandas.tests.extension.base.setitem import BaseSetitemTests - - -# One test class that you can inherit as an alternative to inheriting all the -# test classes above. -# Note 1) this excludes Dim2CompatTests and NDArrayBacked2DTests. -# Note 2) this uses BaseReduceTests and and _not_ BaseBooleanReduceTests, -# BaseNoReduceTests, or BaseNumericReduceTests -class ExtensionTests( - BaseAccumulateTests, - BaseCastingTests, - BaseConstructorsTests, - BaseDtypeTests, - BaseGetitemTests, - BaseGroupbyTests, - BaseIndexTests, - BaseInterfaceTests, - BaseParsingTests, - BaseMethodsTests, - BaseMissingTests, - BaseArithmeticOpsTests, - BaseComparisonOpsTests, - BaseUnaryOpsTests, - BasePrintingTests, - BaseReduceTests, - BaseReshapingTests, - BaseSetitemTests, -): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_iterrows.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_iterrows.py deleted file mode 100644 index 0bd0bed76dc9dea5df4d0afb76ebaf0760a23ecc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_iterrows.py +++ /dev/null @@ -1,16 +0,0 @@ -from pandas import ( - DataFrame, - Timedelta, -) - - -def test_no_overflow_of_freq_and_time_in_dataframe(): - # GH 35665 - df = DataFrame( - { - "some_string": ["2222Y3"], - "time": [Timedelta("0 days 00:00:00.990000")], - } - ) - for _, row in df.iterrows(): - assert row.dtype == "object" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/fruity.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/fruity.py deleted file mode 100644 index 1ce396099d259279d414bc5d81d636fbf6c0a4bf..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/fruity.py +++ /dev/null @@ -1,41 +0,0 @@ -""" - pygments.styles.fruity - ~~~~~~~~~~~~~~~~~~~~~~ - - pygments version of my "fruity" vim theme. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Token, Comment, Name, Keyword, \ - Generic, Number, String, Whitespace - -class FruityStyle(Style): - """ - Pygments version of the "native" vim theme. - """ - - background_color = '#111111' - highlight_color = '#333333' - - styles = { - Whitespace: '#888888', - Token: '#ffffff', - Generic.Output: '#444444 bg:#222222', - Keyword: '#fb660a bold', - Keyword.Pseudo: 'nobold', - Number: '#0086f7 bold', - Name.Tag: '#fb660a bold', - Name.Variable: '#fb660a', - Comment: '#008800 bg:#0f140f italic', - Name.Attribute: '#ff0086 bold', - String: '#0086d2', - Name.Function: '#ff0086 bold', - Generic.Heading: '#ffffff bold', - Keyword.Type: '#cdcaa9 bold', - Generic.Subheading: '#ffffff bold', - Name.Constant: '#0086d2', - Comment.Preproc: '#ff0007 bold' - } diff --git a/spaces/pytorch/MiDaS/README.md b/spaces/pytorch/MiDaS/README.md deleted file mode 100644 index f589543b9e8c14ceb1d5ef92ed88c70e85eb44a9..0000000000000000000000000000000000000000 --- a/spaces/pytorch/MiDaS/README.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: MiDaS -emoji: 😻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 2.8.1 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/qile0317/Bacteria-Classification/app.py b/spaces/qile0317/Bacteria-Classification/app.py deleted file mode 100644 index 0c174734fec05877ca835b3ee1698967e825f93a..0000000000000000000000000000000000000000 --- a/spaces/qile0317/Bacteria-Classification/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -from fastai import * -#import pathlib - -#temp = pathlib.PosixPath -#pathlib.PosixPath = pathlib.WindowsPath - -learn = load_learner('Bacteria-Classifier.pkl') - -categories = ('Acinetobacter.baumanii', -'Actinetobacter.israeli', -'Bacteroides.fragilis', -'Bifidobacterium.spp', -'Candida.albicans', -'Clostridium.perfringens', -'Enterococcus.faecalis', -'Enterococcus.faecium', -'Escherichia.coli', -'Fusobacterium', -'Lactobacillus.casei', -'Lactobacillus.crispatus', -'Lactobacillus.delbrueckii', -'Lactobacillus.gasseri', -'Lactobacillus.jehnsenii', -'Lactobacillus.johnsonii', -'Lactobacillus.paracasei', -'Lactobacillus.plantarum', -'Lactobacillus.reuteri', -'Lactobacillus.rhamnosus', -'Lactobacillus.salivarius', -'Listeria.monocytogenes', -'Micrococcus.spp', -'Neisseria.gonorrhoeae', -'Porfyromonas.gingivalis', -'Propionibacterium.acnes', -'Proteus', -'Pseudomonas.aeruginosa', -'Staphylococcus.aureus', -'Staphylococcus.epidermidis', -'Staphylococcus.saprophiticus', -'Streptococcus.agalactiae', -'Veionella') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories,map(float,probs))) - -image = gr.Image(shape=(244,244)) -label = gr.Label() -examples = ['Fusobacterium_sample.png','Veionella_sample.png','Clostridium.perfringens_sample.png'] - -demo = gr.Interface( - fn=classify_image, - inputs=image, - outputs=label, - examples=examples) -demo.launch(inline=False) \ No newline at end of file diff --git a/spaces/qisan/Depressed_sentimental_analysis/README.md b/spaces/qisan/Depressed_sentimental_analysis/README.md deleted file mode 100644 index 23c81446bd0ece3472c916c488cc02d7eb807522..0000000000000000000000000000000000000000 --- a/spaces/qisan/Depressed_sentimental_analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Depressed Sentimental Analysis -emoji: 🌍 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/quidiaMuxgu/Expedit-SAM/All Episodes Of The Suite Life Of Zack And Cody In Hindi.md b/spaces/quidiaMuxgu/Expedit-SAM/All Episodes Of The Suite Life Of Zack And Cody In Hindi.md deleted file mode 100644 index d5fdef6f14c8223a484ea4ce5a2c9a88cc597c56..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/All Episodes Of The Suite Life Of Zack And Cody In Hindi.md +++ /dev/null @@ -1,6 +0,0 @@ -

    All Episodes Of The Suite Life Of Zack And Cody In Hindi


    DOWNLOAD ……… https://geags.com/2uCr3T



    -
    -The subscribers of Disney+ Hotstar Premium will receive all the benefits of ... Hotstar VIP subscribers can enjoy content in Hindi, Tamil and Telugu. All ... (2003); The Suite Life of Zack & Cody (2005); Hannah Montana (2006) ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Baixar Contagem Regressiva Dublado 720p.md b/spaces/quidiaMuxgu/Expedit-SAM/Baixar Contagem Regressiva Dublado 720p.md deleted file mode 100644 index d421eb003e80a65fb851956c37f2315ffcbfcc0d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Baixar Contagem Regressiva Dublado 720p.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Baixar Contagem Regressiva Dublado 720p


Download Zip: https://geags.com/2uCrKT



    -
    - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Mdaemon 11 Full Crack __TOP__.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Mdaemon 11 Full Crack __TOP__.md deleted file mode 100644 index 9a97b4295ba6613c16b5a7acd7adf0c3554eb336..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Mdaemon 11 Full Crack __TOP__.md +++ /dev/null @@ -1,184 +0,0 @@ -
    -

    Download mdaemon 11 full crack: The Best Email Server Software for Your Business

    - -

    If you are looking for a reliable, easy and feature-rich email server software for your small or medium business, then you should consider downloading mdaemon 11 full crack. MDaemon is a popular email server software that offers many features and benefits for your business communication needs. In this article, we will tell you why you should download mdaemon 11 full crack, what are its features and benefits, and how to download and install it on your Windows PC.

    - -

    Why You Should Download mdaemon 11 full crack

    - -

    MDaemon is one of the best email server software for small and medium businesses. It has been in the market for over 20 years and has been trusted by millions of users worldwide. Here are some of the reasons why you should download mdaemon 11 full crack:

    -

    Download mdaemon 11 full crack


Download File: https://geags.com/2uCsRF



    - -
      -
    • It is easy to use: MDaemon has a simple and intuitive user interface that makes it easy to configure and manage your email server. You don't need to have any technical skills or knowledge to use MDaemon. You can also access MDaemon from any web browser using its web-based administration tool.
    • -
    • It is comprehensive and up-to-date: MDaemon has everything you need to run a successful email server for your business. It supports all the standard email protocols such as SMTP, POP3, IMAP4, HTTPMail and LDAP. It also supports the latest features and enhancements such as Web databases, Web Browser Control, Image Gallery and more. It also supports multiple domains, aliases, mailing lists, groups and accounts.
    • -
    • It is practical and relevant: MDaemon is designed to meet the specific needs and challenges of small and medium businesses. It allows you to create and manage databases for various purposes such as inventory management, student records, employee records, survey analysis, event planning and more. It also allows you to publish Web databases online using Access 2010.
    • -
    • It is affordable and accessible: MDaemon is available for download from the internet for free. You can also get the sample files from the website for free. You can also leave your comments and feedback on the website for free. You can access MDaemon anytime and anywhere you want.
    • -
    - -

    These are some of the reasons why you should download mdaemon 11 full crack. It is a software that will help you run your email server with ease and efficiency.

    - -

    Features and Benefits of MDaemon

    - -

    MDaemon has many features and benefits that make it a powerful and versatile email server software. Here are some of them:

    - -
      -
    • Email Security: MDaemon provides a high level of email security for your business communication. It has built-in antivirus and antispam features that protect your email from viruses, malware, phishing, spoofing and other threats. It also has encryption and decryption features that secure your email from unauthorized access.
    • -
    • Email Management: MDaemon allows you to easily manage your email accounts, domains, aliases, groups, mailing lists and more. You can also create rules, filters, schedules, backups and restores for your email server. You can also monitor your email server performance and activity using its reports and logs.
    • -
    • Email Integration: MDaemon allows you to integrate your email server with other applications and services such as Microsoft Exchange Server, Sendmail, Microsoft's MTA, or MDaemon's own MTA. You can also import and export data from other sources and formats such as SQL Server, SQLite, CSV, XML and more.
    • -
    • Email Customization: MDaemon allows you to customize your email server according to your preferences and needs. You can use switchboards, custom menus and toolbars, Ribbon customization and more. You can also use data validation rules, input masks, lookup fields and more.
    • -
    • Email Automation: MDaemon allows you to automate your email server tasks using macros, modules, VBA code, variables, constants, loops and conditions. You can also attach macros to events such as opening or closing a database or form.
    • -
    - -

    These are some of the features and benefits of MDaemon. It is a software that will help you create and manage your email server with ease and efficiency.

    - -

    How to Download and Install MDaemon

    - -

If you are interested in downloading mdaemon 11 full crack, you can follow these simple steps:

    - -
      -
    1. Go to the website https://selsoft.net/cracked/mdaemon-free-mail-server-for-windows-1103-/133786.html
    2. -
    3. Click on the Download button to start downloading mdaemon 11 full crack.
    4. -
    5. Save the file on your computer.
    6. -
    7. Run the file to start installing mdaemon 11 full crack.
    8. -
    9. Follow the instructions on the screen to complete the installation.
    10. -
    11. Enjoy using mdaemon 11 full crack.
    12. -
    - -

    You can also get the sample files from the website https://www.vstudents.org/serv/Urdu-tutorial-courses/MS-Access-urdu-tutorial/

    -

    - -

    You can also leave your comments and feedback on the website https://lexcliq.com/download-mdaemon-11-full-crack-__exclusive__/

    - -

    Conclusion

    - -

MDaemon is a powerful email server software that offers many features and benefits for small and medium businesses. It is easy to use, comprehensive and up-to-date, practical and relevant, affordable and accessible. It is a software that will help you run your email server with ease and efficiency. If you are interested in downloading mdaemon 11 full crack, you can follow the simple steps we have provided in this article. We hope you enjoy using mdaemon 11 full crack as much as we enjoyed writing this article for you.

    -


    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Driver Pci Simple Communications Controller Acer Aspire 4752 Zip.md b/spaces/quidiaMuxgu/Expedit-SAM/Driver Pci Simple Communications Controller Acer Aspire 4752 Zip.md deleted file mode 100644 index a1b02591c4bb7c267204bf26204b48abbcf0b336..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Driver Pci Simple Communications Controller Acer Aspire 4752 Zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

    driver pci simple communications controller acer aspire 4752 zip


    Download ———>>> https://geags.com/2uCrSQ



- -This is the driver for the Acer Aspire 4752 for Windows 7 64-bit. Choose the driver you want ... two downloads for windows – windows installer and windows zip as shown in the image below. ... Pci simple communications controller driver win7 hp. 1fdad05405
    -
    -
    -

    diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Audio-Assault GrindMachine V1.2 VST AU RTAS WiN OSX RETAiL-SYNTH Serial Key.md b/spaces/raedeXanto/academic-chatgpt-beta/Audio-Assault GrindMachine V1.2 VST AU RTAS WiN OSX RETAiL-SYNTH Serial Key.md deleted file mode 100644 index 3c785d7ed26f6d0d2d6962ce823cd1b87cb303d1..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Audio-Assault GrindMachine V1.2 VST AU RTAS WiN OSX RETAiL-SYNTH Serial Key.md +++ /dev/null @@ -1,65 +0,0 @@ - -

    Audio-Assault GrindMachine v1.2 VST AU RTAS WiN OSX RETAiL-SYNTH Serial Key

    -

    If you are looking for a powerful, aggressive, and versatile guitar amp plugin that can deliver some of the most sought-after high-gain tones, then you might want to check out Audio-Assault GrindMachine. This plugin is designed to give you a set of pure high-gain tones with just a few clicks of a mouse, without compromising on quality or realism. In this article, we will give you an overview of what Audio-Assault GrindMachine is, what it can do, and how you can get the serial key and install it on your system. We will also show you how to use the plugin in different situations and genres, and share some tips and tricks for getting the most out of it.

    -

    Overview of Audio-Assault GrindMachine

    -

    Audio-Assault GrindMachine is a guitar amp plugin that was released in 2014 by Audio Assault, a company that specializes in creating high-quality audio plugins for music production. The plugin is based on one of the most legendary hi-gain amps ever made, the Animal amplifier, which is known for its tight, punchy, and aggressive sound. The plugin also comes with 3 dynamically modeled cabinets that match the tone and character of the amp perfectly. In addition, the plugin features a DJENT pedal that adds more distortion and bite to your sound, as well as a noise gate that eliminates unwanted noise and feedback.

    -

    Audio-Assault GrindMachine v1.2 VST AU RTAS WiN OSX RETAiL-SYNTH Serial Key


    Download File ––– https://tinourl.com/2uL0Ss



    -

    Audio-Assault GrindMachine is compatible with Windows (32-bit and 64-bit) and Mac OS X (32-bit, 64-bit, VST, AU, RTAS) platforms. It supports both mono and stereo processing, as well as sample rates up to 192 kHz. The plugin has a simple and intuitive interface that lets you control all the parameters easily. You can also choose from a variety of presets that cover different styles and genres, or create your own custom ones.

    -

    Audio-Assault GrindMachine is one of the most affordable guitar amp plugins on the market, costing only $29.99. However, you can get it for free if you have a valid serial key that you can obtain from various sources online. For example, you can find one here: [text](^5^). Alternatively, you can also download a free version of the plugin that includes only one amp (the Animal) and one cabinet (the American) here: [text](^1^).

    -

    -

    Audio-Assault GrindMachine is not just another guitar amp plugin that tries to emulate a real amp. It is a plugin that offers a unique and original sound that can suit any genre and style. Whether you are into metal, rock, punk, or even pop, you can find a tone that fits your needs and preferences. Audio-Assault GrindMachine is also a plugin that stands out from the crowd with its dynamic and responsive behavior. Unlike some other plugins that sound static and lifeless, Audio-Assault GrindMachine reacts to your playing and input, giving you a more realistic and expressive experience. Audio-Assault GrindMachine is a plugin that can give you the ultimate high-gain tone that you have always dreamed of.

    -

    How to use Audio-Assault GrindMachine

    -

    Now that you have an idea of what Audio-Assault GrindMachine is and what it can do, let's see how you can use it in your music production. Here are the steps that you need to follow to download, install, and activate the plugin with the serial key:

    -
      -
    1. Go to the official website of Audio Assault and create an account or log in if you already have one. You can do that here: [text].
    2. -
    3. Go to the product page of Audio-Assault GrindMachine and add it to your cart. You can do that here: [text].
    4. -
    5. Proceed to checkout and enter the serial key that you have obtained from the source mentioned above or any other source. You can also use the coupon code "GRIND" to get a 50% discount on the plugin.
    6. -
    7. Complete the payment process and download the plugin installer for your system (Windows or Mac OS X).
    8. -
    9. Run the installer and follow the instructions to install the plugin on your system.
    10. -
    11. Launch your DAW (Digital Audio Workstation) of choice and scan for new plugins. You should see Audio-Assault GrindMachine in your plugin list.
    12. -
    13. Load the plugin on a track or a bus and start using it.
    14. -
    -

    To use the plugin, you need to have an audio input source, such as a guitar, a bass, or a synth. You can also use a MIDI keyboard or controller to trigger the plugin. The plugin has four main sections: the amp, the cabinet, the pedal, and the gate. Here is how you can use each section:

    -
      -
    • The amp section lets you choose from 15 different amp models that are based on real amps from various brands and eras. You can also adjust the gain, volume, bass, mid, treble, and presence knobs to shape your tone.
    • -
    • The cabinet section lets you choose from 15 different cabinet models that are based on real cabinets from various brands and eras. You can also adjust the mic position, distance, pan, level, and phase knobs to shape your tone.
    • -
    • The pedal section lets you activate or deactivate the DJENT pedal that adds more distortion and bite to your tone. You can also adjust the drive, tone, level, and mix knobs to shape your tone.
    • -
    • The gate section lets you activate or deactivate the noise gate that eliminates unwanted noise and feedback from your signal. You can also adjust the threshold and release knobs to shape your tone.
    • -
    -

    You can also use the preset menu to choose from a variety of presets that cover different styles and genres, or create your own custom ones by clicking on the save button. You can also use the A/B button to compare two different settings and choose the best one for your sound.

    -

    Tips and tricks for getting the most out of Audio-Assault GrindMachine

    -

    Audio-Assault GrindMachine is a powerful and versatile plugin that can give you amazing results if you know how to use it properly. Here are some tips and tricks that can help you get the most out of it:

    -
      -
    • Optimize your CPU performance and avoid latency issues by using lower sample rates (44.1 kHz or 48 kHz) and buffer sizes (128 or 256 samples) when using the plugin. You can also disable any unnecessary effects or plugins in your DAW or system.
    • -
    • Avoid feedback and noise problems with the noise gate by setting the threshold knob just above the level of your input signal when it is silent. You can also adjust the release knob to make the gate close smoothly without cutting off any sustain or decay.
    • -
    • Use the plugin in stereo or mono mode depending on your needs. If you want a wider and fuller sound, use stereo mode and pan each amp/cabinet pair differently. If you want a tighter and more focused sound, use mono mode and blend each amp/cabinet pair together.
    • -
    • Use automation and modulation to add more dynamics and expression to your sound. You can automate or modulate any parameter of the plugin using your DAW or MIDI controller. For example, you can automate the gain knob to create volume swells or fades, or modulate the tone knob to create wah-wah effects.
    • -
    • Use the plugin with other effects and plugins in your DAW to enhance your sound. For example, you can use a compressor to smooth out the dynamics, an EQ to fine-tune the frequency balance, a reverb to add some ambience, or a delay to create some echo.
    • -
    -

    Conclusion

    -

    Audio-Assault GrindMachine is a guitar amp plugin that can give you a pure high-gain tone with just a few clicks of a mouse. It is based on the legendary Animal amplifier and comes with 3 dynamically modeled cabinets and a DJENT pedal. It is compatible with Windows and Mac OS X platforms and supports VST, AU, and RTAS formats. It has a simple and intuitive interface that lets you control all the parameters easily. It also has a variety of presets that cover different styles and genres, or you can create your own custom ones. It is one of the most affordable guitar amp plugins on the market, costing only $29.99. However, you can get it for free if you have a valid serial key that you can obtain from various sources online.

    -

    If you are looking for a powerful, aggressive, and versatile guitar amp plugin that can deliver some of the most sought-after high-gain tones, then you should definitely try out Audio-Assault GrindMachine. It is a plugin that offers a unique and original sound that can suit any genre and style. It is also a plugin that reacts to your playing and input, giving you a more realistic and expressive experience. It is a plugin that can give you the ultimate high-gain tone that you have always dreamed of.

    -

    So what are you waiting for? Go to the official website of Audio Assault and get your copy of Audio-Assault GrindMachine today. You will not regret it. And don't forget to share your feedback and opinions with us and other users. We would love to hear from you.

    -

    The official website of Audio Assault is: [text].

    -

    FAQs

    -

    Here are some of the frequently asked questions about Audio-Assault GrindMachine:

    -
      -
    1. Q: What are the system requirements for Audio-Assault GrindMachine?
      -A: The minimum system requirements for Audio-Assault GrindMachine are:
        -
      • Windows XP/Vista/7/8/10 (32-bit or 64-bit)
      • -
      • Mac OS X 10.6 or higher (32-bit or 64-bit)
      • -
      • A DAW that supports VST, AU, or RTAS formats
      • -
      • An audio interface with ASIO or CoreAudio drivers
      • -
      • A guitar or any other audio input source
      • -
    2. -
    3. Q: How do I update Audio-Assault GrindMachine?
      -A: You can update Audio-Assault GrindMachine by downloading the latest version from the official website of Audio Assault and installing it over the previous version. You do not need to uninstall the previous version or enter the serial key again.
    4. -
    5. Q: How do I uninstall Audio-Assault GrindMachine?
      -A: You can uninstall Audio-Assault GrindMachine by running the uninstaller that comes with the plugin installer or by deleting the plugin files from your system manually.
    6. -
    7. Q: How do I contact Audio Assault for support?
      -A: You can contact Audio Assault for support by sending an email to [text] or by filling out the contact form on their website here: [text]. You can also check their FAQ page here: [text] for more information.
    8. -
    9. Q: How do I get more plugins from Audio Assault?
      -A: You can get more plugins from Audio Assault by visiting their website here: [text] and browsing their product catalog. They have a range of plugins for different purposes and genres, such as bass amps, drum machines, compressors, reverbs, distortions, and more.
    10. -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Descarga gratis el PDF de principios de ingenieria de los bioprocesos doran aprende sobre la aplicacin de la ingeniera a los sistemas biolgicos.md b/spaces/raedeXanto/academic-chatgpt-beta/Descarga gratis el PDF de principios de ingenieria de los bioprocesos doran aprende sobre la aplicacin de la ingeniera a los sistemas biolgicos.md deleted file mode 100644 index 8d6013e6848c80ba613e0892a10592525ee3075b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Descarga gratis el PDF de principios de ingenieria de los bioprocesos doran aprende sobre la aplicacin de la ingeniera a los sistemas biolgicos.md +++ /dev/null @@ -1,191 +0,0 @@ - -

Bioprocess Engineering Principles by Pauline M. Doran

    -

Are you interested in learning more about the fascinating field of biochemical engineering? Do you want to know how to apply engineering principles to design and optimize bioprocesses for various applications? If so, you might want to check out the book Bioprocess Engineering Principles by Pauline M. Doran.

    -

    principios de ingenieria de los bioprocesos doran descargar


    DOWNLOAD --->>> https://tinourl.com/2uL0xX



    -

    This book is a comprehensive and accessible introduction to the fundamentals of biochemical engineering, covering topics such as material and energy balances, fluid flow and mixing, heat and mass transfer, reaction kinetics and reactor design, bioreactor operation and control, downstream processing, and bioprocess economics.

    -

    In this article, we will give you an overview of the book's content, structure, and features, as well as some information on how to download it for free. Let's get started!

    -

    Introduction

    -

    What is biochemical engineering?

    -

    Biochemical engineering is a branch of engineering that deals with the use of biological systems or processes to produce useful products or services. Biochemical engineers apply their knowledge of biology, chemistry, physics, mathematics, and engineering to design, operate, and optimize bioprocesses that involve microorganisms, enzymes, cells, tissues, or biomolecules.

    -

    Some examples of bioprocesses are fermentation, cell culture, enzyme catalysis, biosynthesis, bioconversion, bioseparation, bioremediation, biofuels production, biosensors development, and tissue engineering.

    -

    Why is it important?

    -

    Biochemical engineering is important because it contributes to the development of sustainable and innovative solutions for various challenges in fields such as health care, food and agriculture, energy and environment, materials and nanotechnology, and biotechnology.

    -

    Biochemical engineers can help create new drugs and vaccines, improve food quality and safety, produce renewable fuels and chemicals from biomass or waste, remediate environmental pollutants, develop novel biomaterials and devices, enhance biotechnology products and processes, and engineer artificial organs and tissues.

    -

    What are the main challenges and opportunities?

    -

    Biochemical engineering faces many challenges and opportunities in the 21st century. Some of the challenges are:

    -
      -
    • Dealing with complex and dynamic biological systems that are often nonlinear, heterogeneous, stochastic, multiscale, and multifunctional.
    • -
    • Integrating biological knowledge with engineering principles across different disciplines and scales.
    • -
    • Developing robust and reliable bioprocesses that are scalable, efficient, economical, safe, ethical, and environmentally friendly.
    • -
    • Adapting to the rapid advances in biotechnology tools such as genomics, proteomics, metabolomics, synthetic biology, and bioinformatics.
    • -
    -

    Some of the opportunities are:

    -
      -
    • Exploiting the diversity and potential of biological systems for novel applications.
    • -
    • Leveraging the power of computational modeling, simulation, optimization, and control for bioprocess design and operation.
    • -
    • Innovating new bioreactor configurations, modes, and strategies for improved performance.
    • -
    • Enhancing downstream processing techniques for higher product recovery and purity.
    • -
    • Creating value-added products from renewable resources and waste streams.
    • -
    -

    Part I: Introduction

    -

    Development of bioprocesses, an interdisciplinary challenge

    -

    In this chapter, the author introduces the concept and scope of bioprocess development, which involves the integration of biological sciences, engineering sciences, and industrial practices to transform a biological discovery into a commercial product or service.

    -

    The author also discusses the main steps and stages of bioprocess development, such as strain selection and improvement, media formulation and optimization, bioreactor design and scale-up, downstream processing and purification, product formulation and stabilization, quality control and assurance, and economic analysis and feasibility.

    -

    The author emphasizes the importance of interdisciplinary collaboration and communication among different experts and stakeholders involved in bioprocess development, such as microbiologists, biochemists, molecular biologists, genetic engineers, biochemical engineers, chemical engineers, process engineers, instrumentation engineers, mechanical engineers, electrical engineers, computer scientists, mathematicians, statisticians, economists, managers, regulators, customers, and end-users.

    -

    Introduction to engineering calculations

    -

    In this chapter, the author reviews some basic mathematical concepts and tools that are essential for engineering calculations. These include:

    -
      -
    • Units and dimensions: the author explains the importance of using consistent units and dimensions in engineering calculations, as well as some common systems of units such as SI (International System), CGS (Centimeter-Gram-Second), FPS (Foot-Pound-Second), and MKS (Meter-Kilogram-Second).
    • -
    • Dimensional analysis: the author demonstrates how to use dimensional analysis to check the validity of equations or expressions involving physical quantities or variables with different dimensions or units.
    • -
• Solving equations: the author shows how to solve different types of equations such as linear equations (using methods such as substitution or elimination), quadratic equations (using methods such as factoring or the quadratic formula), polynomial equations (using methods such as synthetic division or the rational root theorem), exponential equations (using methods such as logarithms or change of base), logarithmic equations (using methods such as exponentiation or change of base), trigonometric equations (using methods such as inverse trigonometric functions or identities), simultaneous equations (using methods such as matrix inversion or Cramer's rule), differential equations (using methods such as separation of variables or integrating factors), or integral equations (using methods such as integration by parts or substitution). A short worked sketch of the quadratic case follows this list.
    • -
    • Solving problems: the author provides some general guidelines on how to approach and solve engineering problems systematically. These include defining the problem clearly; identifying the given data; stating the assumptions; selecting an appropriate model; applying relevant principles; deriving necessary equations; simplifying or manipulating equations; solving for unknowns; checking results for accuracy; interpreting results physically; reporting results clearly; verifying results experimentally; evaluating results critically; suggesting improvements or alternatives; documenting solutions thoroughly.
    • -
    -
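
    To make the "solving equations" item concrete, here is a minimal Python sketch that solves a quadratic equation with the quadratic formula and checks the roots by substitution. The coefficients are invented for illustration and are not taken from the book.

    ```python
    import math

    # Solve a*x^2 + b*x + c = 0 with the quadratic formula.
    # The coefficients below are arbitrary illustrative values.
    a, b, c = 2.0, -3.0, -5.0

    discriminant = b**2 - 4*a*c
    if discriminant < 0:
        raise ValueError("No real roots for these coefficients")

    root1 = (-b + math.sqrt(discriminant)) / (2*a)
    root2 = (-b - math.sqrt(discriminant)) / (2*a)

    # Check the solutions by substituting them back into the equation.
    for x in (root1, root2):
        residual = a*x**2 + b*x + c
        print(f"x = {x:.4f}, residual = {residual:.2e}")
    ```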

    Presentation and analysis of data

    -

    In this chapter, the author covers some basic concepts and techniques for presenting and analyzing data. These include:

    -
      -
    • Data types: the author distinguishes between different types of data such as qualitative data (which describe attributes or characteristics) or quantitative data (which measure values or quantities); discrete data (which take only certain values) or continuous data (which take any value within a range); nominal data (which have no order) or ordinal data (which have an order); interval data (which have equal intervals) or ratio data (which have a true zero point).
    • -
    • Data presentation: the author explains how to present data effectively using different formats such as tables (which display data in rows and columns), graphs (which display data using symbols or lines), charts (which display data using bars or pies), diagrams (which display data using shapes or arrows), maps (which display data using regions or colors), images (which display data using pixels or colors), animations (which display data using motion or sound), videos (which display data using frames or sound).
    • -
• Data analysis: the author describes how to analyze data using different methods such as descriptive statistics (which summarize data using measures such as mean, median, mode, range, standard deviation, variance, coefficient of variation, skewness, kurtosis, percentiles, quartiles, boxplots, histograms, frequency distributions, scatter plots, correlation, regression, ANOVA), inferential statistics (which test hypotheses using methods such as confidence intervals, t tests, ANOVA, chi-square tests, correlation tests, regression tests), or multivariate statistics (which analyze multiple variables simultaneously using methods such as principal component analysis, factor analysis, cluster analysis, discriminant analysis). A brief descriptive-statistics sketch follows this list.
    • -
    -
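
    As a small illustration of the descriptive-statistics part of this list, the following sketch summarizes a set of replicate measurements; the numbers are hypothetical and chosen only to show the arithmetic.

    ```python
    import statistics

    # Hypothetical replicate measurements of a product concentration (g/L)
    replicates = [4.8, 5.1, 4.9, 5.3, 5.0]

    mean = statistics.mean(replicates)
    stdev = statistics.stdev(replicates)     # sample standard deviation
    cv = 100 * stdev / mean                  # coefficient of variation, %

    print(f"mean = {mean:.2f} g/L")
    print(f"standard deviation = {stdev:.2f} g/L")
    print(f"coefficient of variation = {cv:.1f} %")
    ```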

    Part II: Material and energy balances

    -

    Material balances

    -

    In this chapter, the author introduces the concept and application of material balances, which are equations that describe the conservation of mass in a bioprocess system.

    -

    The author explains how to perform material balances for different types of systems, such as batch systems (which have no input or output streams), continuous systems (which have constant input and output streams), or semi-batch systems (which have either input or output streams).

    -

    The author also discusses how to account for different types of reactions, such as monogenic reactions (which involve only one reactant or product), heterogeneous reactions (which involve more than one phase), or biochemical reactions (which involve complex biological molecules).

    -

    The author illustrates how to apply material balances to various bioprocess examples, such as fermentation, cell growth, enzyme kinetics, substrate utilization, product formation, biomass yield, and metabolic pathways.
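
    As a hedged illustration of a steady-state material balance (the stream values and yield coefficient below are assumptions, not data from the book), consider a continuous fermenter where glucose entering equals glucose leaving plus glucose consumed, and the consumed substrate is converted to biomass with an assumed yield:

    ```python
    # Steady-state substrate balance around a continuous fermenter:
    #   F * S_in = F * S_out + glucose_consumed
    # All numbers are illustrative assumptions.
    F = 100.0        # feed flow rate, L/h
    S_in = 20.0      # glucose in feed, g/L
    S_out = 2.0      # residual glucose in outlet, g/L
    Y_xs = 0.5       # assumed biomass yield, g biomass per g glucose

    glucose_consumed = F * (S_in - S_out)       # g/h
    biomass_produced = Y_xs * glucose_consumed  # g/h

    print(f"glucose consumed: {glucose_consumed:.0f} g/h")
    print(f"biomass produced: {biomass_produced:.0f} g/h")
    ```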

    -

    Energy balances

    -

    In this chapter, the author introduces the concept and application of energy balances, which are equations that describe the conservation of energy in a bioprocess system.

    -

    The author explains how to perform energy balances for different types of systems, such as adiabatic systems (which have no heat transfer), isothermal systems (which have constant temperature), or non-isothermal systems (which have variable temperature).

    -

    The author also discusses how to account for different forms of energy, such as kinetic energy (which is related to motion), potential energy (which is related to position), internal energy (which is related to molecular structure), enthalpy (which is related to heat content), entropy (which is related to disorder), or Gibbs free energy (which is related to spontaneity).

    -

    The author illustrates how to apply energy balances to various bioprocess examples, such as heat generation, heat transfer, heat exchangers, heat sterilization, refrigeration, and thermodynamics.
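
    For example, a steady-state energy balance over a simple heater or sterilizer reduces to Q = m·cp·ΔT. The sketch below uses invented stream values (roughly water-like heat capacity) purely to show the arithmetic:

    ```python
    # Sensible-heat duty for heating a broth stream: Q = m_dot * cp * dT
    # Values are illustrative assumptions, not data from the book.
    m_dot = 1.5                  # mass flow rate of broth, kg/s
    cp = 4.0e3                   # heat capacity, J/(kg*K), close to water
    T_in, T_out = 30.0, 121.0    # inlet and outlet temperatures, deg C

    Q = m_dot * cp * (T_out - T_in)   # heat duty, W
    print(f"heat duty Q = {Q/1e3:.0f} kW")
    ```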

    -

    Material and energy balances in non-stationary state

    -

    In this chapter, the author extends the concepts and applications of material and energy balances to non-stationary state systems, which are systems that change with time.

    -

    The author explains how to perform material and energy balances for non-stationary state systems using differential equations, which are equations that relate the rate of change of a variable to its value.

    -

    The author also discusses how to solve differential equations using different methods, such as analytical methods (which involve finding exact solutions using algebra or calculus), numerical methods (which involve finding approximate solutions using algorithms or software), or graphical methods (which involve finding visual solutions using plots or charts).

    -

    The author illustrates how to apply material and energy balances in non-stationary state systems to various bioprocess examples, such as batch reactors, fed-batch reactors, continuous stirred tank reactors, plug flow reactors, and packed bed reactors.
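
    As a sketch of an unsteady-state balance (a simplified, assumed example rather than one from the book): washout of an inert tracer from an ideally mixed tank follows dC/dt = D·(C_in − C), which can be integrated with a simple explicit Euler loop and compared against the analytical solution:

    ```python
    import math

    # Unsteady-state balance on a tracer in a continuous stirred tank:
    #   dC/dt = D * (C_in - C),  where D = F / V is the dilution rate.
    # Parameter values are illustrative assumptions.
    D = 0.2          # dilution rate, 1/h
    C_in = 0.0       # tracer concentration in the feed, g/L
    C = 1.0          # initial tracer concentration in the tank, g/L
    dt = 0.01        # time step, h

    for step in range(int(10.0 / dt)):   # simulate 10 h
        C += dt * D * (C_in - C)

    # Analytical solution for comparison: C(t) = C0 * exp(-D*t)
    C_exact = 1.0 * math.exp(-D * 10.0)
    print(f"numerical C(10 h) = {C:.4f} g/L, analytical = {C_exact:.4f} g/L")
    ```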

    -

    Part III: Physical processes

    -

    Fluid flow and mixing

    -

    In this chapter, the author introduces the concepts and principles of fluid flow and mixing, which are important for bioprocess design and operation.

    -

    The author explains how to characterize fluid flow and mixing using different parameters, such as velocity (which measures the speed and direction of fluid movement), flow rate (which measures the volume or mass of fluid passing through a cross-sectional area per unit time), pressure (which measures the force exerted by fluid per unit area), viscosity (which measures the resistance of fluid to deformation or flow), density (which measures the mass of fluid per unit volume), Reynolds number (which measures the ratio of inertial forces to viscous forces in fluid flow), Froude number (which measures the ratio of inertial forces to gravitational forces in fluid flow), or power number (which measures the ratio of power input to fluid agitation).

    -

    The author also discusses how to analyze fluid flow and mixing using different models, such as laminar flow (which is smooth and orderly), turbulent flow (which is chaotic and disorderly), Newtonian fluids (which have constant viscosity), non-Newtonian fluids (which have variable viscosity), ideal fluids (which have no viscosity or compressibility), real fluids (which have viscosity and compressibility), ideal mixers (which have perfect homogeneity), real mixers (which have imperfect homogeneity), or compartment models (which divide a system into discrete zones).

    -

    The author illustrates how to apply fluid flow and mixing concepts and principles to various bioprocess examples, such as pipe flow, pump selection, valve operation, nozzle design, impeller design, mixing time, mixing efficiency, scale-up criteria, and rheology.
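
    As a small worked example of these parameters (the pipe dimensions and fluid properties below are assumed, roughly water at room temperature), the Reynolds number for pipe flow is Re = ρ·v·D/μ, and its value indicates whether the flow is laminar or turbulent:

    ```python
    # Reynolds number for pipe flow: Re = rho * v * D / mu
    # Property values are illustrative assumptions.
    rho = 1000.0     # density, kg/m^3
    v = 1.2          # average velocity, m/s
    D = 0.05         # pipe diameter, m
    mu = 1.0e-3      # viscosity, Pa*s

    Re = rho * v * D / mu
    regime = "laminar" if Re < 2100 else "turbulent" if Re > 4000 else "transitional"
    print(f"Re = {Re:.0f} ({regime})")
    ```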

    -

    Heat transfer

    -

    In this chapter, the author introduces the concepts and principles of heat transfer, which are important for bioprocess design and operation.

    -

    The author explains how to characterize heat transfer using different parameters, such as temperature (which measures the average kinetic energy of molecules in a system), heat capacity (which measures the amount of heat required to raise the temperature of a substance by one degree), thermal conductivity (which measures the ability of a substance to conduct heat), heat flux (which measures the rate of heat transfer per unit area), heat transfer coefficient (which measures the rate of heat transfer per unit area per unit temperature difference), or Biot number (which measures the ratio of internal resistance to external resistance in heat transfer).

    -

    The author also discusses how to analyze heat transfer using different modes, such as conduction (which involves heat transfer by direct contact between molecules), convection (which involves heat transfer by bulk movement of fluid), radiation (which involves heat transfer by electromagnetic waves), or phase change (which involves heat transfer by latent heat).

    -

    The author illustrates how to apply heat transfer concepts and principles to various bioprocess examples, such as sterilization, pasteurization, evaporation, condensation, distillation, crystallization, drying, freezing, thawing, and lyophilization.
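
    To illustrate one of these calculations, here is a hedged sketch of the rate equation Q = U·A·ΔT_lm for a countercurrent heat exchanger; the stream temperatures, area, and overall heat transfer coefficient are assumed values chosen only to show the steps:

    ```python
    import math

    # Heat exchanger rate equation: Q = U * A * dT_lm
    # All values are illustrative assumptions.
    U = 800.0                             # overall heat transfer coefficient, W/(m^2*K)
    A = 12.0                              # heat transfer area, m^2
    T_hot_in, T_hot_out = 95.0, 60.0      # hot stream temperatures, deg C
    T_cold_in, T_cold_out = 25.0, 50.0    # cold stream temperatures, deg C

    dT1 = T_hot_in - T_cold_out           # temperature difference at one end
    dT2 = T_hot_out - T_cold_in           # temperature difference at the other end
    dT_lm = (dT1 - dT2) / math.log(dT1 / dT2)   # log-mean temperature difference

    Q = U * A * dT_lm
    print(f"dT_lm = {dT_lm:.1f} K, Q = {Q/1e3:.0f} kW")
    ```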

    -

    Mass transfer

    -

    In this chapter, the author introduces the concepts and principles of mass transfer, which are important for bioprocess design and operation.

    -

    The author explains how to characterize mass transfer using different parameters, such as concentration (which measures the amount of solute per unit volume of solution), partial pressure (which measures the pressure exerted by a component in a gas mixture), mole fraction (which measures the ratio of moles of a component to the total moles of a mixture), molar flux (which measures the rate of mass transfer per unit area), mass transfer coefficient (which measures the rate of mass transfer per unit area per unit concentration or pressure difference), or Sherwood number (which measures the ratio of convective to diffusive mass transfer).

    -

    The author also discusses how to analyze mass transfer using different modes, such as diffusion (which involves mass transfer by random molecular motion), convection (which involves mass transfer by bulk fluid motion), or interphase mass transfer (which involves mass transfer between two phases such as gas-liquid, liquid-liquid, or solid-liquid).

    -

    The author illustrates how to apply mass transfer concepts and principles to various bioprocess examples, such as oxygen transfer, carbon dioxide removal, nutrient uptake, product secretion, membrane separation, extraction, adsorption, ion exchange, chromatography, and electrophoresis.
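
    A standard application of these ideas is the oxygen transfer rate in an aerated bioreactor, OTR = kLa·(C* − C_L). The sketch below uses assumed values for the volumetric mass transfer coefficient and the oxygen concentrations:

    ```python
    # Oxygen transfer rate in an aerated bioreactor: OTR = kLa * (C_star - C_L)
    # Values are illustrative assumptions.
    kLa = 120.0        # volumetric mass transfer coefficient, 1/h
    C_star = 7.5e-3    # oxygen saturation concentration, g/L
    C_L = 2.0e-3       # dissolved oxygen concentration in the broth, g/L

    OTR = kLa * (C_star - C_L)     # g O2 per litre per hour
    print(f"OTR = {OTR*1000:.0f} mg O2 per litre per hour")
    ```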

    -

    Basic operations

    -

    In this chapter, the author summarizes some basic operations that are commonly used in bioprocess engineering. These include:

    -
      -
    • Filtration: a process that separates solid particles from a fluid by passing it through a porous medium.
    • -
    • Centrifugation: a process that separates solid particles from a fluid by applying a centrifugal force.
    • -
    • Sedimentation: a process that separates solid particles from a fluid by gravity.
    • -
    • Flocculation: a process that enhances the aggregation of solid particles in a fluid by adding chemicals or biological agents.
    • -
    • Precipitation: a process that forms solid particles in a fluid by changing the temperature, pH, or concentration of solutes.
    • -
    • Crystallization: a process that forms solid particles with a regular shape and structure in a fluid by changing the temperature or concentration of solutes.
    • -
    • Drying: a process that removes moisture from a solid or a fluid by applying heat or air flow.
    • -
    • Freezing: a process that lowers the temperature of a solid or a fluid below its freezing point.
    • -
    • Thawing: a process that raises the temperature of a frozen solid or fluid above its freezing point.
    • -
    • Lyophilization: a process that removes moisture from a frozen solid or fluid by sublimation under vacuum.
    • -
    -

    Part IV: Reactions and reactors

    -

    Monogenic reactions

    -

    In this chapter, the author introduces the concepts and principles of monogenic reactions, which are reactions that involve only one reactant or product. These include:

    -
      -
    • First-order reactions: reactions that have a rate proportional to the concentration of one reactant.
    • -
    • Second-order reactions: reactions that have a rate proportional to the square of the concentration of one reactant or to the product of the concentrations of two reactants.
    • -
    • Zero-order reactions: reactions that have a constant rate independent of the concentration of any reactant.
    • -
    • Half-life: the time required for the concentration of a reactant to decrease by half.
    • -
    • Reaction order: the exponent of the concentration term in the rate equation.
    • -
    • Rate constant: the proportionality constant in the rate equation.
    • -
    • Rate equation: an equation that relates the rate of reaction to the concentrations of reactants and products.
    • -
    -

The author explains how to determine the reaction order and rate constant experimentally using methods such as the differential method (plotting reaction rate against concentration), the integral method (plotting concentration against time), or the graphical method (plotting the logarithm of concentration against time).
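
    The graphical method can be sketched in a few lines of Python: for a first-order reaction, a plot of ln(C) against time is a straight line with slope −k, so the rate constant and half-life can be estimated by linear regression. The concentration data below are invented for illustration.

    ```python
    import math
    import numpy as np

    # Hypothetical concentration-versus-time data for a first-order reaction
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # time, h
    C = np.array([10.0, 7.4, 5.5, 4.1, 3.0])   # concentration, g/L

    # For first-order kinetics, ln(C) = ln(C0) - k*t, so the slope is -k.
    slope, intercept = np.polyfit(t, np.log(C), 1)
    k = -slope                      # rate constant, 1/h
    half_life = math.log(2) / k     # half-life, h

    print(f"k = {k:.3f} 1/h, half-life = {half_life:.2f} h")
    ```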

    -

    The author illustrates how to apply monogenic reaction concepts and principles to various bioprocess examples, such as enzyme kinetics, substrate utilization, product formation, and microbial growth.

    -

    Heterogeneous reactions

    -

    In this chapter, the author introduces the concepts and principles of heterogeneous reactions, which are reactions that involve more than one phase. These include:

    -
      -
    • Liquid-solid reactions: reactions that occur between a liquid and a solid phase, such as dissolution, precipitation, adsorption, or leaching.
    • -
    • Liquid-liquid reactions: reactions that occur between two immiscible liquid phases, such as extraction, emulsification, or esterification.
    • -
    • Gas-liquid reactions: reactions that occur between a gas and a liquid phase, such as absorption, stripping, or oxidation.
    • -
    • Gas-solid reactions: reactions that occur between a gas and a solid phase, such as desorption, drying, or combustion.
    • -
    -

    The author explains how to characterize heterogeneous reactions using different parameters, such as interfacial area (which measures the contact area between two phases), driving force (which measures the difference in concentration or partial pressure between two phases), or mass transfer rate (which measures the amount of mass transferred per unit time).

    -

The author illustrates how to apply heterogeneous reaction concepts and principles to various bioprocess examples, such as dissolution and leaching of solids, adsorption, liquid-liquid extraction, gas absorption and oxidation, drying, and combustion.

    -

    Reaction engineering

    -

    In this chapter, the author introduces the concepts and principles of reaction engineering, which are important for bioprocess design and operation.

    -

    The author explains how to characterize reaction engineering using different parameters, such as reaction rate (which measures the change in concentration or partial pressure of a reactant or product per unit time), reaction order (which measures the dependence of reaction rate on concentration or partial pressure), rate constant (which measures the proportionality factor between reaction rate and concentration or partial pressure), activation energy (which measures the minimum energy required for a reaction to occur), Arrhenius equation (which relates the rate constant to temperature and activation energy), or reaction mechanism (which describes the sequence of elementary steps that constitute a complex reaction).
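
    As a quick illustration of the Arrhenius equation, k = A·exp(−Ea/(R·T)), the sketch below estimates how much a rate constant increases when the temperature is raised by ten degrees. The activation energy and base rate constant are assumed values, not figures from the book.

    ```python
    import math

    # Arrhenius equation: k = A * exp(-Ea / (R * T))
    # Ratio of rate constants at two temperatures:
    #   k2 / k1 = exp(-Ea/R * (1/T2 - 1/T1))
    # All parameter values are illustrative assumptions.
    R = 8.314                    # gas constant, J/(mol*K)
    Ea = 60.0e3                  # activation energy, J/mol
    T1, T2 = 298.15, 308.15      # temperatures, K (25 and 35 deg C)
    k1 = 0.05                    # rate constant at T1, 1/min

    k2 = k1 * math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))
    print(f"k at {T1:.0f} K = {k1:.3f} 1/min, k at {T2:.0f} K = {k2:.3f} 1/min")
    ```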

    -

    The author also discusses how to analyze reaction engineering using different models, such as batch reactors (which operate without any input or output streams), continuous stirred tank reactors (which operate with constant input and output streams and perfect mixing), plug flow reactors (which operate with constant input and output streams and no axial mixing), or packed bed reactors (which operate with solid catalyst particles packed in a tube).
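
    To make the reactor models concrete, this sketch compares, for an assumed first-order liquid-phase reaction at constant density, the batch reaction time and the CSTR residence time needed to reach the same conversion, using the standard design equations t_batch = ln(1/(1−X))/k and τ_CSTR = X/(k·(1−X)). The rate constant and target conversion are invented values.

    ```python
    import math

    # Compare an ideal batch reactor and an ideal CSTR for a first-order
    # reaction A -> products with rate r = k*C_A (constant density assumed).
    # Parameter values are illustrative assumptions.
    k = 0.5      # first-order rate constant, 1/h
    X = 0.90     # desired fractional conversion of A

    t_batch = math.log(1.0 / (1.0 - X)) / k   # batch reaction time, h
    tau_cstr = X / (k * (1.0 - X))            # CSTR residence time, h

    print(f"batch time for X = {X:.0%}: {t_batch:.2f} h")
    print(f"CSTR residence time for X = {X:.0%}: {tau_cstr:.2f} h")
    ```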

    -

    The author illustrates how to apply reaction engineering concepts and principles to various bioprocess examples, such as enzyme kinetics, substrate utilization, product formation, biomass yield, and metabolic pathways.

    -

    Conclusion

    -

In this article, we have given you an overview of the book Bioprocess Engineering Principles by Pauline M. Doran, which is a comprehensive and accessible introduction to the fundamentals of biochemical engineering.

    -

    We have summarized the main topics covered in each part of the book, such as material and energy balances, physical processes, reactions and reactors, and their applications to various bioprocess examples.

    -

    We hope that this article has sparked your interest in learning more about biochemical engineering and its importance for sustainable and innovative solutions for various challenges in fields such as health care, food and agriculture, energy and environment, materials and nanotechnology, and biotechnology.

    -

    If you want to download the book for free, you can visit this link: https://b-ok.cc/book/2270326/8f9a0a

    -

    FAQs

    -
      -
    • What is biochemical engineering?
    • -
    • Biochemical engineering is a branch of engineering that deals with the use of biological systems or processes to produce useful products or services.
    • -
    • What are some examples of bioprocesses?
    • -
    • Some examples of bioprocesses are fermentation, cell culture, enzyme catalysis, biosynthesis, bioconversion, bioseparation, bioremediation, biofuels production, biosensors development, and tissue engineering.
    • -
    • What are some methods of heat transfer?
    • -
    • Some methods of heat transfer are conduction, convection, radiation, and phase change.
    • -
    • What are some types of reactors?
    • -
    • Some types of reactors are batch reactors, continuous stirred tank reactors, plug flow reactors, and packed bed reactors.
    • -
    • What are some factors that affect reaction rate?
    • -
    • Some factors that affect reaction rate are concentration, temperature, catalysts, inhibitors, and reaction mechanism.
    • -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/rafaelpadilla/coco_metrics/coco_metrics/pycocotools/cocoeval.py b/spaces/rafaelpadilla/coco_metrics/coco_metrics/pycocotools/cocoeval.py deleted file mode 100644 index 403e2819005338d92bc50513dcd0e6d03d890643..0000000000000000000000000000000000000000 --- a/spaces/rafaelpadilla/coco_metrics/coco_metrics/pycocotools/cocoeval.py +++ /dev/null @@ -1,632 +0,0 @@ -# This code is basically a copy and paste from the original cocoapi repo: -# https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py -# with the following changes have been made: -# * Replace the usage of mask (maskUtils) by MaskEvaluator. -# * Comment out prints in the evaluate() function. -# * Include a return of the function evaluate. Inspired -# by @ybelkada (https://huggingface.co/spaces/ybelkada/cocoevaluate/) - -__author__ = "tsungyi" - -import copy -import datetime -import time -from collections import defaultdict -from packaging import version - -import numpy as np - -if version.parse(np.__version__) < version.parse("1.24"): - dtype_float = np.float -else: - dtype_float = np.float32 - -from .mask_utils import MaskEvaluator as maskUtils - - -class COCOeval: - # Interface for evaluating detection on the Microsoft COCO dataset. - # - # The usage for CocoEval is as follows: - # cocoGt=..., cocoDt=... # load dataset and results - # E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object - # E.params.recThrs = ...; # set parameters as desired - # E.evaluate(); # run per image evaluation - # E.accumulate(); # accumulate per image results - # E.summarize(); # display summary metrics of results - # For example usage see evalDemo.m and http://mscoco.org/. - # - # The evaluation parameters are as follows (defaults in brackets): - # imgIds - [all] N img ids to use for evaluation - # catIds - [all] K cat ids to use for evaluation - # iouThrs - [.5:.05:.95] T=10 IoU thresholds for evaluation - # recThrs - [0:.01:1] R=101 recall thresholds for evaluation - # areaRng - [...] A=4 object area ranges for evaluation - # maxDets - [1 10 100] M=3 thresholds on max detections per image - # iouType - ['segm'] set iouType to 'segm', 'bbox' or 'keypoints' - # iouType replaced the now DEPRECATED useSegm parameter. - # useCats - [1] if true use category labels for evaluation - # Note: if useCats=0 category labels are ignored as in proposal scoring. - # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified. - # - # evaluate(): evaluates detections on every image and every category and - # concats the results into the "evalImgs" with fields: - # dtIds - [1xD] id for each of the D detections (dt) - # gtIds - [1xG] id for each of the G ground truths (gt) - # dtMatches - [TxD] matching gt id at each IoU or 0 - # gtMatches - [TxG] matching dt id at each IoU or 0 - # dtScores - [1xD] confidence of each dt - # gtIgnore - [1xG] ignore flag for each gt - # dtIgnore - [TxD] ignore flag for each dt at each IoU - # - # accumulate(): accumulates the per-image, per-category evaluation - # results in "evalImgs" into the dictionary "eval" with fields: - # params - parameters used for evaluation - # date - date evaluation was performed - # counts - [T,R,K,A,M] parameter dimensions (see above) - # precision - [TxRxKxAxM] precision for every evaluation setting - # recall - [TxKxAxM] max recall for every evaluation setting - # Note: precision and recall==-1 for settings with no gt objects. - # - # See also coco, mask, pycocoDemo, pycocoEvalDemo - # - # Microsoft COCO Toolbox. 
version 2.0 - # Data, paper, and tutorials available at: http://mscoco.org/ - # Code written by Piotr Dollar and Tsung-Yi Lin, 2015. - # Licensed under the Simplified BSD License [see coco/license.txt] - def __init__(self, cocoGt=None, cocoDt=None, iouType="segm"): - """ - Initialize CocoEval using coco APIs for gt and dt - :param cocoGt: coco object with ground truth annotations - :param cocoDt: coco object with detection results - :return: None - """ - if not iouType: - print("iouType not specified. use default iouType segm") - self.cocoGt = cocoGt # ground truth COCO API - self.cocoDt = cocoDt # detections COCO API - self.evalImgs = defaultdict( - list - ) # per-image per-category evaluation results [KxAxI] elements - self.eval = {} # accumulated evaluation results - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self.params = Params(iouType=iouType) # parameters - self._paramsEval = {} # parameters for evaluation - self.stats = [] # result summarization - self.ious = {} # ious between all gts and dts - if not cocoGt is None: - self.params.imgIds = sorted(cocoGt.getImgIds()) - self.params.catIds = sorted(cocoGt.getCatIds()) - - def _prepare(self): - """ - Prepare ._gts and ._dts for evaluation based on params - :return: None - """ - - def _toMask(anns, coco): - # modify ann['segmentation'] by reference - for ann in anns: - rle = coco.annToRLE(ann) - ann["segmentation"] = rle - - p = self.params - if p.useCats: - gts = self.cocoGt.loadAnns( - self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds) - ) - dts = self.cocoDt.loadAnns( - self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds) - ) - else: - gts = self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds)) - dts = self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds)) - - # convert ground truth to mask if iouType == 'segm' - if p.iouType == "segm": - _toMask(gts, self.cocoGt) - _toMask(dts, self.cocoDt) - # set ignore flag - for gt in gts: - gt["ignore"] = gt["ignore"] if "ignore" in gt else 0 - gt["ignore"] = "iscrowd" in gt and gt["iscrowd"] - if p.iouType == "keypoints": - gt["ignore"] = (gt["num_keypoints"] == 0) or gt["ignore"] - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - for gt in gts: - self._gts[gt["image_id"], gt["category_id"]].append(gt) - for dt in dts: - self._dts[dt["image_id"], dt["category_id"]].append(dt) - self.evalImgs = defaultdict(list) # per-image per-category evaluation results - self.eval = {} # accumulated evaluation results - - def evaluate(self): - """ - Run per image evaluation on given images and store results (a list of dict) in self.evalImgs - :return: None - """ - # tic = time.time() - # print("Running per image evaluation...") - p = self.params - # add backward compatibility if useSegm is specified in params - if not p.useSegm is None: - p.iouType = "segm" if p.useSegm == 1 else "bbox" - # print( - # "useSegm (deprecated) is not None. 
Running {} evaluation".format( - # p.iouType - # ) - # ) - # print("Evaluate annotation type *{}*".format(p.iouType)) - p.imgIds = list(np.unique(p.imgIds)) - if p.useCats: - p.catIds = list(np.unique(p.catIds)) - p.maxDets = sorted(p.maxDets) - self.params = p - - self._prepare() - # loop through images, area range, max detection number - catIds = p.catIds if p.useCats else [-1] - - if p.iouType == "segm" or p.iouType == "bbox": - computeIoU = self.computeIoU - elif p.iouType == "keypoints": - computeIoU = self.computeOks - self.ious = { - (imgId, catId): computeIoU(imgId, catId) - for imgId in p.imgIds - for catId in catIds - } - - evaluateImg = self.evaluateImg - maxDet = p.maxDets[-1] - self.evalImgs = [ - evaluateImg(imgId, catId, areaRng, maxDet) - for catId in catIds - for areaRng in p.areaRng - for imgId in p.imgIds - ] - self._paramsEval = copy.deepcopy(self.params) - ret_evalImgs = np.asarray(self.evalImgs).reshape( - len(catIds), len(p.areaRng), len(p.imgIds) - ) - # toc = time.time() - # print("DONE (t={:0.2f}s).".format(toc - tic)) - return ret_evalImgs - - def computeIoU(self, imgId, catId): - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return [] - inds = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt = dt[0 : p.maxDets[-1]] - - if p.iouType == "segm": - g = [g["segmentation"] for g in gt] - d = [d["segmentation"] for d in dt] - elif p.iouType == "bbox": - g = [g["bbox"] for g in gt] - d = [d["bbox"] for d in dt] - else: - raise Exception("unknown iouType for iou computation") - - # compute iou between each dt and gt region - iscrowd = [int(o["iscrowd"]) for o in gt] - ious = maskUtils.iou(d, g, iscrowd) - return ious - - def computeOks(self, imgId, catId): - p = self.params - # dimention here should be Nxm - gts = self._gts[imgId, catId] - dts = self._dts[imgId, catId] - inds = np.argsort([-d["score"] for d in dts], kind="mergesort") - dts = [dts[i] for i in inds] - if len(dts) > p.maxDets[-1]: - dts = dts[0 : p.maxDets[-1]] - # if len(gts) == 0 and len(dts) == 0: - if len(gts) == 0 or len(dts) == 0: - return [] - ious = np.zeros((len(dts), len(gts))) - sigmas = p.kpt_oks_sigmas - vars = (sigmas * 2) ** 2 - k = len(sigmas) - # compute oks between each detection and ground truth object - for j, gt in enumerate(gts): - # create bounds for ignore regions(double the gt bbox) - g = np.array(gt["keypoints"]) - xg = g[0::3] - yg = g[1::3] - vg = g[2::3] - k1 = np.count_nonzero(vg > 0) - bb = gt["bbox"] - x0 = bb[0] - bb[2] - x1 = bb[0] + bb[2] * 2 - y0 = bb[1] - bb[3] - y1 = bb[1] + bb[3] * 2 - for i, dt in enumerate(dts): - d = np.array(dt["keypoints"]) - xd = d[0::3] - yd = d[1::3] - if k1 > 0: - # measure the per-keypoint distance if keypoints visible - dx = xd - xg - dy = yd - yg - else: - # measure minimum distance to keypoints in (x0,y0) & (x1,y1) - z = np.zeros((k)) - dx = np.max((z, x0 - xd), axis=0) + np.max((z, xd - x1), axis=0) - dy = np.max((z, y0 - yd), axis=0) + np.max((z, yd - y1), axis=0) - e = (dx**2 + dy**2) / vars / (gt["area"] + np.spacing(1)) / 2 - if k1 > 0: - e = e[vg > 0] - ious[i, j] = np.sum(np.exp(-e)) / e.shape[0] - return ious - - def evaluateImg(self, imgId, catId, aRng, maxDet): - """ - perform evaluation for single category and image - :return: dict (single image 
results) - """ - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return None - - for g in gt: - if g["ignore"] or (g["area"] < aRng[0] or g["area"] > aRng[1]): - g["_ignore"] = 1 - else: - g["_ignore"] = 0 - - # sort dt highest score first, sort gt ignore last - gtind = np.argsort([g["_ignore"] for g in gt], kind="mergesort") - gt = [gt[i] for i in gtind] - dtind = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in dtind[0:maxDet]] - iscrowd = [int(o["iscrowd"]) for o in gt] - # load computed ious - ious = ( - self.ious[imgId, catId][:, gtind] - if len(self.ious[imgId, catId]) > 0 - else self.ious[imgId, catId] - ) - - T = len(p.iouThrs) - G = len(gt) - D = len(dt) - gtm = np.zeros((T, G)) - dtm = np.zeros((T, D)) - gtIg = np.array([g["_ignore"] for g in gt]) - dtIg = np.zeros((T, D)) - if not len(ious) == 0: - for tind, t in enumerate(p.iouThrs): - for dind, d in enumerate(dt): - # information about best match so far (m=-1 -> unmatched) - iou = min([t, 1 - 1e-10]) - m = -1 - for gind, g in enumerate(gt): - # if this gt already matched, and not a crowd, continue - if gtm[tind, gind] > 0 and not iscrowd[gind]: - continue - # if dt matched to reg gt, and on ignore gt, stop - if m > -1 and gtIg[m] == 0 and gtIg[gind] == 1: - break - # continue to next gt unless better match made - if ious[dind, gind] < iou: - continue - # if match successful and best so far, store appropriately - iou = ious[dind, gind] - m = gind - # if match made store id of match for both dt and gt - if m == -1: - continue - dtIg[tind, dind] = gtIg[m] - dtm[tind, dind] = gt[m]["id"] - gtm[tind, m] = d["id"] - # set unmatched detections outside of area range to ignore - a = np.array([d["area"] < aRng[0] or d["area"] > aRng[1] for d in dt]).reshape( - (1, len(dt)) - ) - dtIg = np.logical_or(dtIg, np.logical_and(dtm == 0, np.repeat(a, T, 0))) - # store results for given image and category - return { - "image_id": imgId, - "category_id": catId, - "aRng": aRng, - "maxDet": maxDet, - "dtIds": [d["id"] for d in dt], - "gtIds": [g["id"] for g in gt], - "dtMatches": dtm, - "gtMatches": gtm, - "dtScores": [d["score"] for d in dt], - "gtIgnore": gtIg, - "dtIgnore": dtIg, - } - - def accumulate(self, p=None): - """ - Accumulate per image evaluation results and store the result in self.eval - :param p: input params for evaluation - :return: None - """ - print("Accumulating evaluation results...") - tic = time.time() - if not self.evalImgs: - print("Please run evaluate() first") - # allows input customized parameters - if p is None: - p = self.params - p.catIds = p.catIds if p.useCats == 1 else [-1] - T = len(p.iouThrs) - R = len(p.recThrs) - K = len(p.catIds) if p.useCats else 1 - A = len(p.areaRng) - M = len(p.maxDets) - precision = -np.ones( - (T, R, K, A, M) - ) # -1 for the precision of absent categories - recall = -np.ones((T, K, A, M)) - scores = -np.ones((T, R, K, A, M)) - - # create dictionary for future indexing - _pe = self._paramsEval - catIds = _pe.catIds if _pe.useCats else [-1] - setK = set(catIds) - setA = set(map(tuple, _pe.areaRng)) - setM = set(_pe.maxDets) - setI = set(_pe.imgIds) - # get inds to evaluate - k_list = [n for n, k in enumerate(p.catIds) if k in setK] - m_list = [m for n, m in enumerate(p.maxDets) if m in setM] - a_list = [ - n for n, a in enumerate(map(lambda x: 
tuple(x), p.areaRng)) if a in setA - ] - i_list = [n for n, i in enumerate(p.imgIds) if i in setI] - I0 = len(_pe.imgIds) - A0 = len(_pe.areaRng) - # retrieve E at each category, area range, and max number of detections - for k, k0 in enumerate(k_list): - Nk = k0 * A0 * I0 - for a, a0 in enumerate(a_list): - Na = a0 * I0 - for m, maxDet in enumerate(m_list): - E = [self.evalImgs[Nk + Na + i] for i in i_list] - E = [e for e in E if not e is None] - if len(E) == 0: - continue - dtScores = np.concatenate([e["dtScores"][0:maxDet] for e in E]) - - # different sorting method generates slightly different results. - # mergesort is used to be consistent as Matlab implementation. - inds = np.argsort(-dtScores, kind="mergesort") - dtScoresSorted = dtScores[inds] - - dtm = np.concatenate( - [e["dtMatches"][:, 0:maxDet] for e in E], axis=1 - )[:, inds] - dtIg = np.concatenate( - [e["dtIgnore"][:, 0:maxDet] for e in E], axis=1 - )[:, inds] - gtIg = np.concatenate([e["gtIgnore"] for e in E]) - npig = np.count_nonzero(gtIg == 0) - if npig == 0: - continue - tps = np.logical_and(dtm, np.logical_not(dtIg)) - fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg)) - - tp_sum = np.cumsum(tps, axis=1).astype(dtype=dtype_float) - fp_sum = np.cumsum(fps, axis=1).astype(dtype=dtype_float) - for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): - tp = np.array(tp) - fp = np.array(fp) - nd = len(tp) - rc = tp / npig - pr = tp / (fp + tp + np.spacing(1)) - q = np.zeros((R,)) - ss = np.zeros((R,)) - - if nd: - recall[t, k, a, m] = rc[-1] - else: - recall[t, k, a, m] = 0 - - # numpy is slow without cython optimization for accessing elements - # use python array gets significant speed improvement - pr = pr.tolist() - q = q.tolist() - - for i in range(nd - 1, 0, -1): - if pr[i] > pr[i - 1]: - pr[i - 1] = pr[i] - - inds = np.searchsorted(rc, p.recThrs, side="left") - try: - for ri, pi in enumerate(inds): - q[ri] = pr[pi] - ss[ri] = dtScoresSorted[pi] - except: - pass - precision[t, :, k, a, m] = np.array(q) - scores[t, :, k, a, m] = np.array(ss) - self.eval = { - "params": p, - "counts": [T, R, K, A, M], - "date": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), - "precision": precision, - "recall": recall, - "scores": scores, - } - toc = time.time() - print("DONE (t={:0.2f}s).".format(toc - tic)) - - def summarize(self): - """ - Compute and display summary metrics for evaluation results. 
- Note this functin can *only* be applied on the default parameter setting - """ - - def _summarize(ap=1, iouThr=None, areaRng="all", maxDets=100): - p = self.params - iStr = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}" - titleStr = "Average Precision" if ap == 1 else "Average Recall" - typeStr = "(AP)" if ap == 1 else "(AR)" - iouStr = ( - "{:0.2f}:{:0.2f}".format(p.iouThrs[0], p.iouThrs[-1]) - if iouThr is None - else "{:0.2f}".format(iouThr) - ) - - aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng] - mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets] - if ap == 1: - # dimension of precision: [TxRxKxAxM] - s = self.eval["precision"] - # IoU - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:, :, :, aind, mind] - else: - # dimension of recall: [TxKxAxM] - s = self.eval["recall"] - if iouThr is not None: - t = np.where(iouThr == p.iouThrs)[0] - s = s[t] - s = s[:, :, aind, mind] - if len(s[s > -1]) == 0: - mean_s = -1 - else: - mean_s = np.mean(s[s > -1]) - print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s)) - return mean_s - - def _summarizeDets(): - stats = np.zeros((12,)) - stats[0] = _summarize(1) - stats[1] = _summarize(1, iouThr=0.5, maxDets=self.params.maxDets[2]) - stats[2] = _summarize(1, iouThr=0.75, maxDets=self.params.maxDets[2]) - stats[3] = _summarize(1, areaRng="small", maxDets=self.params.maxDets[2]) - stats[4] = _summarize(1, areaRng="medium", maxDets=self.params.maxDets[2]) - stats[5] = _summarize(1, areaRng="large", maxDets=self.params.maxDets[2]) - stats[6] = _summarize(0, maxDets=self.params.maxDets[0]) - stats[7] = _summarize(0, maxDets=self.params.maxDets[1]) - stats[8] = _summarize(0, maxDets=self.params.maxDets[2]) - stats[9] = _summarize(0, areaRng="small", maxDets=self.params.maxDets[2]) - stats[10] = _summarize(0, areaRng="medium", maxDets=self.params.maxDets[2]) - stats[11] = _summarize(0, areaRng="large", maxDets=self.params.maxDets[2]) - return stats - - def _summarizeKps(): - stats = np.zeros((10,)) - stats[0] = _summarize(1, maxDets=20) - stats[1] = _summarize(1, maxDets=20, iouThr=0.5) - stats[2] = _summarize(1, maxDets=20, iouThr=0.75) - stats[3] = _summarize(1, maxDets=20, areaRng="medium") - stats[4] = _summarize(1, maxDets=20, areaRng="large") - stats[5] = _summarize(0, maxDets=20) - stats[6] = _summarize(0, maxDets=20, iouThr=0.5) - stats[7] = _summarize(0, maxDets=20, iouThr=0.75) - stats[8] = _summarize(0, maxDets=20, areaRng="medium") - stats[9] = _summarize(0, maxDets=20, areaRng="large") - return stats - - if not self.eval: - raise Exception("Please run accumulate() first") - iouType = self.params.iouType - if iouType == "segm" or iouType == "bbox": - summarize = _summarizeDets - elif iouType == "keypoints": - summarize = _summarizeKps - self.stats = summarize() - - def __str__(self): - self.summarize() - - -class Params: - """ - Params for coco evaluation api - """ - - def setDetParams(self): - self.imgIds = [] - self.catIds = [] - # np.arange causes trouble. 
the data point on arange is slightly larger than the true value - self.iouThrs = np.linspace( - 0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True - ) - self.recThrs = np.linspace( - 0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True - ) - self.maxDets = [1, 10, 100] - self.areaRng = [ - [0**2, 1e5**2], - [0**2, 32**2], - [32**2, 96**2], - [96**2, 1e5**2], - ] - self.areaRngLbl = ["all", "small", "medium", "large"] - self.useCats = 1 - - def setKpParams(self): - self.imgIds = [] - self.catIds = [] - # np.arange causes trouble. the data point on arange is slightly larger than the true value - self.iouThrs = np.linspace( - 0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True - ) - self.recThrs = np.linspace( - 0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True - ) - self.maxDets = [20] - self.areaRng = [[0**2, 1e5**2], [32**2, 96**2], [96**2, 1e5**2]] - self.areaRngLbl = ["all", "medium", "large"] - self.useCats = 1 - self.kpt_oks_sigmas = ( - np.array( - [ - 0.26, - 0.25, - 0.25, - 0.35, - 0.35, - 0.79, - 0.79, - 0.72, - 0.72, - 0.62, - 0.62, - 1.07, - 1.07, - 0.87, - 0.87, - 0.89, - 0.89, - ] - ) - / 10.0 - ) - - def __init__(self, iouType="segm"): - if iouType == "bbox": - self.setDetParams() - else: - raise Exception("iouType not supported") - self.iouType = iouType - # useSegm is deprecated - self.useSegm = None diff --git a/spaces/realAshish/SG161222-Realistic_Vision_V1.4/README.md b/spaces/realAshish/SG161222-Realistic_Vision_V1.4/README.md deleted file mode 100644 index 0d51e4721f44a27dbd023697f4624a2451ae57fc..0000000000000000000000000000000000000000 --- a/spaces/realAshish/SG161222-Realistic_Vision_V1.4/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SG161222-Realistic Vision V1.4 -emoji: 📚 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/realambuj/Image-Captioning-App-using-BLIP/app.py b/spaces/realambuj/Image-Captioning-App-using-BLIP/app.py deleted file mode 100644 index 70a6695c3dfae1a8f808d36d643fd55b0c12c480..0000000000000000000000000000000000000000 --- a/spaces/realambuj/Image-Captioning-App-using-BLIP/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import streamlit as st -import requests -import pickle -from PIL import Image -from transformers import BlipProcessor -with st.sidebar: - st.subheader('Image Captioning App using BLIP') - st.write('This app uses the BLIP model to generate captions for images.Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone).') - image = Image.open('details.png') - st.image(image, caption='BLIP Model') - st.code('App Built by Ambuj Raj', language='python') - - -st.title('Image Captioning App using BLIP') - -flag_image=0 -flag_url=0 -tab1, tab2 = st.tabs(["Upload Image", "Use URL"]) -with tab1: - flag_image=1 - uploaded_file = st.file_uploader("Choose a image",type=['png','jpeg','jpg']) - if uploaded_file is not None: - st.image(uploaded_file, width=300) - raw_image = Image.open(uploaded_file).convert('RGB') - -with tab2: - flag_url=1 - img_url = st.text_input('Enter URL of image') - if img_url: - raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') - st.image(raw_image, width=300) - -if st.button('Generate Caption'): - if(flag_image==1): - flag_image=1 - with st.spinner('Generating Caption...'): - processor = 
BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - filename = 'blip.sav' - loaded_model = pickle.load(open(filename, 'rb')) - inputs = processor(raw_image, return_tensors="pt") - out = loaded_model.generate(**inputs) - st.success('Caption Generated!') - st.write('Generated Caption is: ',processor.decode(out[0], skip_special_tokens=True)) - elif(flag_url==1): - flag_url=0 - with st.spinner('Generating Caption...'): - processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - filename = 'blip.sav' - loaded_model = pickle.load(open(filename, 'rb')) - inputs = processor(raw_image, return_tensors="pt") - out = loaded_model.generate(**inputs) - st.success('Caption Generated!') - st.write('Generated Caption is: ',processor.decode(out[0], skip_special_tokens=True)) - - - diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/!!INSTALL!! Crack Microsoft Office ProPlus 2013 SP1 VL X64 En-US Oct2014-murphy78-.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/!!INSTALL!! Crack Microsoft Office ProPlus 2013 SP1 VL X64 En-US Oct2014-murphy78-.md deleted file mode 100644 index f5c0aa69ed24f88471578501541c8bb1b4e085be..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/!!INSTALL!! Crack Microsoft Office ProPlus 2013 SP1 VL X64 En-US Oct2014-murphy78-.md +++ /dev/null @@ -1,12 +0,0 @@ -

    CRACK Microsoft Office ProPlus 2013 SP1 VL X64 En-US Oct2014-murphy78-


    DOWNLOADhttps://urlgoal.com/2uCKvw



    -
    -October 31, 2014 - Microsoft Office ProPlus 2013 SP1 VL (x86/x64) en-US, October 2014 - Murphy78 (rebooted) | 1.62 GB / 1.81 GB. Remove the previous package using the Uninstall Programs program in Windows. -I must say that I was extremely pleased with this Office. -Thanks for downloading -Help! -I have Windows 8 Pro 64 bit, Windows 7 Professional 64 bit, Windows 10 Professional 64 bit, and Office 2014 Professional Plus installed. -I recently uninstalled Office Pro Plus 2013 SP1 VL (x86/x64) en-US because I received an upgrade offer. -I then installed Office 2016 Professional Plus x64 8a78ff9644
    -
    -
    -

    diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/SurfaceClassifier.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/SurfaceClassifier.py deleted file mode 100644 index af5afe4fdd4767f72549df258e5b67dea6ac671d..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/SurfaceClassifier.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class SurfaceClassifier(nn.Module): - def __init__(self, filter_channels, num_views=1, no_residual=True, last_op=None): - super(SurfaceClassifier, self).__init__() - - self.filters = [] - self.num_views = num_views - self.no_residual = no_residual - filter_channels = filter_channels - self.last_op = last_op - - if self.no_residual: - for l in range(0, len(filter_channels) - 1): - self.filters.append(nn.Conv1d( - filter_channels[l], - filter_channels[l + 1], - 1)) - self.add_module("conv%d" % l, self.filters[l]) - else: - for l in range(0, len(filter_channels) - 1): - if 0 != l: - self.filters.append( - nn.Conv1d( - filter_channels[l] + filter_channels[0], - filter_channels[l + 1], - 1)) - else: - self.filters.append(nn.Conv1d( - filter_channels[l], - filter_channels[l + 1], - 1)) - - self.add_module("conv%d" % l, self.filters[l]) - - def forward(self, feature): - ''' - - :param feature: list of [BxC_inxHxW] tensors of image features - :param xy: [Bx3xN] tensor of (x,y) coodinates in the image plane - :return: [BxC_outxN] tensor of features extracted at the coordinates - ''' - - y = feature - tmpy = feature - for i, f in enumerate(self.filters): - if self.no_residual: - y = self._modules['conv' + str(i)](y) - else: - y = self._modules['conv' + str(i)]( - y if i == 0 - else torch.cat([y, tmpy], 1) - ) - if i != len(self.filters) - 1: - y = F.leaky_relu(y) - - if self.num_views > 1 and i == len(self.filters) // 2: - y = y.view( - -1, self.num_views, y.shape[1], y.shape[2] - ).mean(dim=1) - tmpy = feature.view( - -1, self.num_views, feature.shape[1], feature.shape[2] - ).mean(dim=1) - - if self.last_op: - y = self.last_op(y) - - return y diff --git a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/server/nodes/1.js b/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/server/nodes/1.js deleted file mode 100644 index 73ec4d7abd0ffa9c86b70fe72ad5530005398950..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/server/nodes/1.js +++ /dev/null @@ -1,7 +0,0 @@ -import * as module from '../entries/fallbacks/error.svelte.js'; - -export { module }; -export const index = 1; -export const entry = 'error.svelte-d9523301.js'; -export const js = ["error.svelte-d9523301.js","chunks/index-bcf2726a.js"]; -export const css = []; diff --git a/spaces/rorallitri/biomedical-language-models/logs/F22 Lightning 3 Full Game Downloads makcollo Experience the Thrill of Tactical Nuclear Weapons.md b/spaces/rorallitri/biomedical-language-models/logs/F22 Lightning 3 Full Game Downloads makcollo Experience the Thrill of Tactical Nuclear Weapons.md deleted file mode 100644 index 25a7648d3308a44bbe458287fbb50913cd61df96..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/F22 Lightning 3 Full Game Downloads makcollo Experience the Thrill of Tactical Nuclear Weapons.md +++ /dev/null @@ -1,6 +0,0 @@ -

    F22 Lightning 3 Full Game Downloads makcollo


    DOWNLOAD ❤❤❤ https://tinurll.com/2uznFo



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Fabrication CAMduct 2016 Scaricare Key Generator 32 Bits A Step-by-Step Tutorial for Beginners.md b/spaces/rorallitri/biomedical-language-models/logs/Fabrication CAMduct 2016 Scaricare Key Generator 32 Bits A Step-by-Step Tutorial for Beginners.md deleted file mode 100644 index 0faf22f8321bc941bcbe803b07caec8c86545e97..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Fabrication CAMduct 2016 Scaricare Key Generator 32 Bits A Step-by-Step Tutorial for Beginners.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fabrication CAMduct 2016 scaricare key generator 32 bits


    Download File ✺✺✺ https://tinurll.com/2uzmaU



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/rupeshs/fastsdcpu/frontend/webui/ui.py b/spaces/rupeshs/fastsdcpu/frontend/webui/ui.py deleted file mode 100644 index 83d0f137538936cd06f03c718fa71790d3dc5ffe..0000000000000000000000000000000000000000 --- a/spaces/rupeshs/fastsdcpu/frontend/webui/ui.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr -from constants import APP_VERSION -from frontend.webui.text_to_image_ui import get_text_to_image_ui -from paths import FastStableDiffusionPaths -from app_settings import AppSettings - - -def _get_footer_message() -> str: - version = f"

    v{APP_VERSION} " - footer_msg = version + ( - ' © 2023 ' - " Rupesh Sreeraman

    " - ) - return footer_msg - - -def get_web_ui(app_settings: AppSettings) -> gr.Blocks: - with gr.Blocks( - css=FastStableDiffusionPaths.get_css_path(), - title="FastSD CPU", - ) as fastsd_web_ui: - gr.HTML("

    FastSD CPU demo (OpenVINO)

    ") - with gr.Tabs(): - with gr.TabItem("Text to Image"): - get_text_to_image_ui(app_settings) - gr.HTML(_get_footer_message()) - - return fastsd_web_ui - - -def start_webui( - app_settings: AppSettings, - share: bool = False, -): - webui = get_web_ui(app_settings) - webui.launch(share=share) diff --git a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py b/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py deleted file mode 100644 index fcb8742dbdde6e80fd38b11d064211f6935aae76..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py +++ /dev/null @@ -1,959 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR Transformer class. -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -from typing import Optional - -import torch -import torch.utils.checkpoint as checkpoint -from torch import Tensor, nn - -from groundingdino.util.misc import inverse_sigmoid - -from .fuse_modules import BiAttentionBlock -from .ms_deform_attn import MultiScaleDeformableAttention as MSDeformAttn -from .transformer_vanilla import TransformerEncoderLayer -from .utils import ( - MLP, - _get_activation_fn, - _get_clones, - gen_encoder_output_proposals, - gen_sineembed_for_position, - get_sine_pos_embed, -) - - -class Transformer(nn.Module): - def __init__( - self, - d_model=256, - nhead=8, - num_queries=300, - num_encoder_layers=6, - num_unicoder_layers=0, - num_decoder_layers=6, - dim_feedforward=2048, - dropout=0.0, - activation="relu", - normalize_before=False, - return_intermediate_dec=False, - query_dim=4, - num_patterns=0, - # for deformable encoder - num_feature_levels=1, - enc_n_points=4, - dec_n_points=4, - # init query - learnable_tgt_init=False, - # two stage - two_stage_type="no", # ['no', 'standard', 'early', 'combine', 'enceachlayer', 'enclayer1'] - embed_init_tgt=False, - # for text - use_text_enhancer=False, - use_fusion_layer=False, - use_checkpoint=False, - use_transformer_ckpt=False, - use_text_cross_attention=False, - text_dropout=0.1, - fusion_dropout=0.1, - fusion_droppath=0.0, - ): - super().__init__() - self.num_feature_levels = num_feature_levels - self.num_encoder_layers = num_encoder_layers - self.num_unicoder_layers = num_unicoder_layers - self.num_decoder_layers = num_decoder_layers - self.num_queries = num_queries - assert query_dim == 4 - - # choose encoder layer type - encoder_layer = DeformableTransformerEncoderLayer( - d_model, dim_feedforward, dropout, activation, num_feature_levels, nhead, enc_n_points - ) - - if use_text_enhancer: - 
text_enhance_layer = TransformerEncoderLayer( - d_model=d_model, - nhead=nhead // 2, - dim_feedforward=dim_feedforward // 2, - dropout=text_dropout, - ) - else: - text_enhance_layer = None - - if use_fusion_layer: - feature_fusion_layer = BiAttentionBlock( - v_dim=d_model, - l_dim=d_model, - embed_dim=dim_feedforward // 2, - num_heads=nhead // 2, - dropout=fusion_dropout, - drop_path=fusion_droppath, - ) - else: - feature_fusion_layer = None - - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - assert encoder_norm is None - self.encoder = TransformerEncoder( - encoder_layer, - num_encoder_layers, - d_model=d_model, - num_queries=num_queries, - text_enhance_layer=text_enhance_layer, - feature_fusion_layer=feature_fusion_layer, - use_checkpoint=use_checkpoint, - use_transformer_ckpt=use_transformer_ckpt, - ) - - # choose decoder layer type - decoder_layer = DeformableTransformerDecoderLayer( - d_model, - dim_feedforward, - dropout, - activation, - num_feature_levels, - nhead, - dec_n_points, - use_text_cross_attention=use_text_cross_attention, - ) - - decoder_norm = nn.LayerNorm(d_model) - self.decoder = TransformerDecoder( - decoder_layer, - num_decoder_layers, - decoder_norm, - return_intermediate=return_intermediate_dec, - d_model=d_model, - query_dim=query_dim, - num_feature_levels=num_feature_levels, - ) - - self.d_model = d_model - self.nhead = nhead - self.dec_layers = num_decoder_layers - self.num_queries = num_queries # useful for single stage model only - self.num_patterns = num_patterns - if not isinstance(num_patterns, int): - Warning("num_patterns should be int but {}".format(type(num_patterns))) - self.num_patterns = 0 - - if num_feature_levels > 1: - if self.num_encoder_layers > 0: - self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model)) - else: - self.level_embed = None - - self.learnable_tgt_init = learnable_tgt_init - assert learnable_tgt_init, "why not learnable_tgt_init" - self.embed_init_tgt = embed_init_tgt - if (two_stage_type != "no" and embed_init_tgt) or (two_stage_type == "no"): - self.tgt_embed = nn.Embedding(self.num_queries, d_model) - nn.init.normal_(self.tgt_embed.weight.data) - else: - self.tgt_embed = None - - # for two stage - self.two_stage_type = two_stage_type - assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format( - two_stage_type - ) - if two_stage_type == "standard": - # anchor selection at the output of encoder - self.enc_output = nn.Linear(d_model, d_model) - self.enc_output_norm = nn.LayerNorm(d_model) - self.two_stage_wh_embedding = None - - if two_stage_type == "no": - self.init_ref_points(num_queries) # init self.refpoint_embed - - self.enc_out_class_embed = None - self.enc_out_bbox_embed = None - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MSDeformAttn): - m._reset_parameters() - if self.num_feature_levels > 1 and self.level_embed is not None: - nn.init.normal_(self.level_embed) - - def get_valid_ratio(self, mask): - _, H, W = mask.shape - valid_H = torch.sum(~mask[:, :, 0], 1) - valid_W = torch.sum(~mask[:, 0, :], 1) - valid_ratio_h = valid_H.float() / H - valid_ratio_w = valid_W.float() / W - valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1) - return valid_ratio - - def init_ref_points(self, use_num_queries): - self.refpoint_embed = nn.Embedding(use_num_queries, 4) - - def forward(self, srcs, masks, refpoint_embed, 
pos_embeds, tgt, attn_mask=None, text_dict=None): - """ - Input: - - srcs: List of multi features [bs, ci, hi, wi] - - masks: List of multi masks [bs, hi, wi] - - refpoint_embed: [bs, num_dn, 4]. None in infer - - pos_embeds: List of multi pos embeds [bs, ci, hi, wi] - - tgt: [bs, num_dn, d_model]. None in infer - - """ - # prepare input for encoder - src_flatten = [] - mask_flatten = [] - lvl_pos_embed_flatten = [] - spatial_shapes = [] - for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)): - bs, c, h, w = src.shape - spatial_shape = (h, w) - spatial_shapes.append(spatial_shape) - - src = src.flatten(2).transpose(1, 2) # bs, hw, c - mask = mask.flatten(1) # bs, hw - pos_embed = pos_embed.flatten(2).transpose(1, 2) # bs, hw, c - if self.num_feature_levels > 1 and self.level_embed is not None: - lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1) - else: - lvl_pos_embed = pos_embed - lvl_pos_embed_flatten.append(lvl_pos_embed) - src_flatten.append(src) - mask_flatten.append(mask) - src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c - mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw} - lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) # bs, \sum{hxw}, c - spatial_shapes = torch.as_tensor( - spatial_shapes, dtype=torch.long, device=src_flatten.device - ) - level_start_index = torch.cat( - (spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1]) - ) - valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1) - - # two stage - enc_topk_proposals = enc_refpoint_embed = None - - ######################################################### - # Begin Encoder - ######################################################### - memory, memory_text = self.encoder( - src_flatten, - pos=lvl_pos_embed_flatten, - level_start_index=level_start_index, - spatial_shapes=spatial_shapes, - valid_ratios=valid_ratios, - key_padding_mask=mask_flatten, - memory_text=text_dict["encoded_text"], - text_attention_mask=~text_dict["text_token_mask"], - # we ~ the mask . 
False means use the token; True means pad the token - position_ids=text_dict["position_ids"], - text_self_attention_masks=text_dict["text_self_attention_masks"], - ) - ######################################################### - # End Encoder - # - memory: bs, \sum{hw}, c - # - mask_flatten: bs, \sum{hw} - # - lvl_pos_embed_flatten: bs, \sum{hw}, c - # - enc_intermediate_output: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c) - # - enc_intermediate_refpoints: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c) - ######################################################### - text_dict["encoded_text"] = memory_text - # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1': - # if memory.isnan().any() | memory.isinf().any(): - # import ipdb; ipdb.set_trace() - - if self.two_stage_type == "standard": - output_memory, output_proposals = gen_encoder_output_proposals( - memory, mask_flatten, spatial_shapes - ) - output_memory = self.enc_output_norm(self.enc_output(output_memory)) - - if text_dict is not None: - enc_outputs_class_unselected = self.enc_out_class_embed(output_memory, text_dict) - else: - enc_outputs_class_unselected = self.enc_out_class_embed(output_memory) - - topk_logits = enc_outputs_class_unselected.max(-1)[0] - enc_outputs_coord_unselected = ( - self.enc_out_bbox_embed(output_memory) + output_proposals - ) # (bs, \sum{hw}, 4) unsigmoid - topk = self.num_queries - - topk_proposals = torch.topk(topk_logits, topk, dim=1)[1] # bs, nq - - # gather boxes - refpoint_embed_undetach = torch.gather( - enc_outputs_coord_unselected, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4) - ) # unsigmoid - refpoint_embed_ = refpoint_embed_undetach.detach() - init_box_proposal = torch.gather( - output_proposals, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4) - ).sigmoid() # sigmoid - - # gather tgt - tgt_undetach = torch.gather( - output_memory, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, self.d_model) - ) - if self.embed_init_tgt: - tgt_ = ( - self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1) - ) # nq, bs, d_model - else: - tgt_ = tgt_undetach.detach() - - if refpoint_embed is not None: - refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1) - tgt = torch.cat([tgt, tgt_], dim=1) - else: - refpoint_embed, tgt = refpoint_embed_, tgt_ - - elif self.two_stage_type == "no": - tgt_ = ( - self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1) - ) # nq, bs, d_model - refpoint_embed_ = ( - self.refpoint_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1) - ) # nq, bs, 4 - - if refpoint_embed is not None: - refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1) - tgt = torch.cat([tgt, tgt_], dim=1) - else: - refpoint_embed, tgt = refpoint_embed_, tgt_ - - if self.num_patterns > 0: - tgt_embed = tgt.repeat(1, self.num_patterns, 1) - refpoint_embed = refpoint_embed.repeat(1, self.num_patterns, 1) - tgt_pat = self.patterns.weight[None, :, :].repeat_interleave( - self.num_queries, 1 - ) # 1, n_q*n_pat, d_model - tgt = tgt_embed + tgt_pat - - init_box_proposal = refpoint_embed_.sigmoid() - - else: - raise NotImplementedError("unknown two_stage_type {}".format(self.two_stage_type)) - ######################################################### - # End preparing tgt - # - tgt: bs, NQ, d_model - # - refpoint_embed(unsigmoid): bs, NQ, d_model - ######################################################### - - ######################################################### - # Begin Decoder - ######################################################### - hs, references = 
self.decoder( - tgt=tgt.transpose(0, 1), - memory=memory.transpose(0, 1), - memory_key_padding_mask=mask_flatten, - pos=lvl_pos_embed_flatten.transpose(0, 1), - refpoints_unsigmoid=refpoint_embed.transpose(0, 1), - level_start_index=level_start_index, - spatial_shapes=spatial_shapes, - valid_ratios=valid_ratios, - tgt_mask=attn_mask, - memory_text=text_dict["encoded_text"], - text_attention_mask=~text_dict["text_token_mask"], - # we ~ the mask . False means use the token; True means pad the token - ) - ######################################################### - # End Decoder - # hs: n_dec, bs, nq, d_model - # references: n_dec+1, bs, nq, query_dim - ######################################################### - - ######################################################### - # Begin postprocess - ######################################################### - if self.two_stage_type == "standard": - hs_enc = tgt_undetach.unsqueeze(0) - ref_enc = refpoint_embed_undetach.sigmoid().unsqueeze(0) - else: - hs_enc = ref_enc = None - ######################################################### - # End postprocess - # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or (n_enc, bs, nq, d_model) or None - # ref_enc: (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or (n_enc, bs, nq, d_model) or None - ######################################################### - - return hs, references, hs_enc, ref_enc, init_box_proposal - # hs: (n_dec, bs, nq, d_model) - # references: sigmoid coordinates. (n_dec+1, bs, bq, 4) - # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or None - # ref_enc: sigmoid coordinates. \ - # (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or None - - -class TransformerEncoder(nn.Module): - def __init__( - self, - encoder_layer, - num_layers, - d_model=256, - num_queries=300, - enc_layer_share=False, - text_enhance_layer=None, - feature_fusion_layer=None, - use_checkpoint=False, - use_transformer_ckpt=False, - ): - """_summary_ - - Args: - encoder_layer (_type_): _description_ - num_layers (_type_): _description_ - norm (_type_, optional): _description_. Defaults to None. - d_model (int, optional): _description_. Defaults to 256. - num_queries (int, optional): _description_. Defaults to 300. - enc_layer_share (bool, optional): _description_. Defaults to False. 
- - """ - super().__init__() - # prepare layers - self.layers = [] - self.text_layers = [] - self.fusion_layers = [] - if num_layers > 0: - self.layers = _get_clones(encoder_layer, num_layers, layer_share=enc_layer_share) - - if text_enhance_layer is not None: - self.text_layers = _get_clones( - text_enhance_layer, num_layers, layer_share=enc_layer_share - ) - if feature_fusion_layer is not None: - self.fusion_layers = _get_clones( - feature_fusion_layer, num_layers, layer_share=enc_layer_share - ) - else: - self.layers = [] - del encoder_layer - - if text_enhance_layer is not None: - self.text_layers = [] - del text_enhance_layer - if feature_fusion_layer is not None: - self.fusion_layers = [] - del feature_fusion_layer - - self.query_scale = None - self.num_queries = num_queries - self.num_layers = num_layers - self.d_model = d_model - - self.use_checkpoint = use_checkpoint - self.use_transformer_ckpt = use_transformer_ckpt - - @staticmethod - def get_reference_points(spatial_shapes, valid_ratios, device): - reference_points_list = [] - for lvl, (H_, W_) in enumerate(spatial_shapes): - - ref_y, ref_x = torch.meshgrid( - torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device), - torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device), - ) - ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_) - ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_) - ref = torch.stack((ref_x, ref_y), -1) - reference_points_list.append(ref) - reference_points = torch.cat(reference_points_list, 1) - reference_points = reference_points[:, :, None] * valid_ratios[:, None] - return reference_points - - def forward( - self, - # for images - src: Tensor, - pos: Tensor, - spatial_shapes: Tensor, - level_start_index: Tensor, - valid_ratios: Tensor, - key_padding_mask: Tensor, - # for texts - memory_text: Tensor = None, - text_attention_mask: Tensor = None, - pos_text: Tensor = None, - text_self_attention_masks: Tensor = None, - position_ids: Tensor = None, - ): - """ - Input: - - src: [bs, sum(hi*wi), 256] - - pos: pos embed for src. [bs, sum(hi*wi), 256] - - spatial_shapes: h,w of each level [num_level, 2] - - level_start_index: [num_level] start point of level in sum(hi*wi). 
- - valid_ratios: [bs, num_level, 2] - - key_padding_mask: [bs, sum(hi*wi)] - - - memory_text: bs, n_text, 256 - - text_attention_mask: bs, n_text - False for no padding; True for padding - - pos_text: bs, n_text, 256 - - - position_ids: bs, n_text - Intermedia: - - reference_points: [bs, sum(hi*wi), num_level, 2] - Outpus: - - output: [bs, sum(hi*wi), 256] - """ - - output = src - - # preparation and reshape - if self.num_layers > 0: - reference_points = self.get_reference_points( - spatial_shapes, valid_ratios, device=src.device - ) - - if self.text_layers: - # generate pos_text - bs, n_text, text_dim = memory_text.shape - if pos_text is None and position_ids is None: - pos_text = ( - torch.arange(n_text, device=memory_text.device) - .float() - .unsqueeze(0) - .unsqueeze(-1) - .repeat(bs, 1, 1) - ) - pos_text = get_sine_pos_embed(pos_text, num_pos_feats=256, exchange_xy=False) - if position_ids is not None: - pos_text = get_sine_pos_embed( - position_ids[..., None], num_pos_feats=256, exchange_xy=False - ) - - # main process - for layer_id, layer in enumerate(self.layers): - # if output.isnan().any() or memory_text.isnan().any(): - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if self.fusion_layers: - if self.use_checkpoint: - output, memory_text = checkpoint.checkpoint( - self.fusion_layers[layer_id], - output, - memory_text, - key_padding_mask, - text_attention_mask, - ) - else: - output, memory_text = self.fusion_layers[layer_id]( - v=output, - l=memory_text, - attention_mask_v=key_padding_mask, - attention_mask_l=text_attention_mask, - ) - - if self.text_layers: - memory_text = self.text_layers[layer_id]( - src=memory_text.transpose(0, 1), - src_mask=~text_self_attention_masks, # note we use ~ for mask here - src_key_padding_mask=text_attention_mask, - pos=(pos_text.transpose(0, 1) if pos_text is not None else None), - ).transpose(0, 1) - - # main process - if self.use_transformer_ckpt: - output = checkpoint.checkpoint( - layer, - output, - pos, - reference_points, - spatial_shapes, - level_start_index, - key_padding_mask, - ) - else: - output = layer( - src=output, - pos=pos, - reference_points=reference_points, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - key_padding_mask=key_padding_mask, - ) - - return output, memory_text - - -class TransformerDecoder(nn.Module): - def __init__( - self, - decoder_layer, - num_layers, - norm=None, - return_intermediate=False, - d_model=256, - query_dim=4, - num_feature_levels=1, - ): - super().__init__() - if num_layers > 0: - self.layers = _get_clones(decoder_layer, num_layers) - else: - self.layers = [] - self.num_layers = num_layers - self.norm = norm - self.return_intermediate = return_intermediate - assert return_intermediate, "support return_intermediate only" - self.query_dim = query_dim - assert query_dim in [2, 4], "query_dim should be 2/4 but {}".format(query_dim) - self.num_feature_levels = num_feature_levels - - self.ref_point_head = MLP(query_dim // 2 * d_model, d_model, d_model, 2) - self.query_pos_sine_scale = None - - self.query_scale = None - self.bbox_embed = None - self.class_embed = None - - self.d_model = d_model - - self.ref_anchor_head = None - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - refpoints_unsigmoid: Optional[Tensor] = None, # 
num_queries, bs, 2 - # for memory - level_start_index: Optional[Tensor] = None, # num_levels - spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - valid_ratios: Optional[Tensor] = None, - # for text - memory_text: Optional[Tensor] = None, - text_attention_mask: Optional[Tensor] = None, - ): - """ - Input: - - tgt: nq, bs, d_model - - memory: hw, bs, d_model - - pos: hw, bs, d_model - - refpoints_unsigmoid: nq, bs, 2/4 - - valid_ratios/spatial_shapes: bs, nlevel, 2 - """ - output = tgt - - intermediate = [] - reference_points = refpoints_unsigmoid.sigmoid() - ref_points = [reference_points] - - for layer_id, layer in enumerate(self.layers): - - if reference_points.shape[-1] == 4: - reference_points_input = ( - reference_points[:, :, None] - * torch.cat([valid_ratios, valid_ratios], -1)[None, :] - ) # nq, bs, nlevel, 4 - else: - assert reference_points.shape[-1] == 2 - reference_points_input = reference_points[:, :, None] * valid_ratios[None, :] - query_sine_embed = gen_sineembed_for_position( - reference_points_input[:, :, 0, :] - ) # nq, bs, 256*2 - - # conditional query - raw_query_pos = self.ref_point_head(query_sine_embed) # nq, bs, 256 - pos_scale = self.query_scale(output) if self.query_scale is not None else 1 - query_pos = pos_scale * raw_query_pos - # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1': - # if query_pos.isnan().any() | query_pos.isinf().any(): - # import ipdb; ipdb.set_trace() - - # main process - output = layer( - tgt=output, - tgt_query_pos=query_pos, - tgt_query_sine_embed=query_sine_embed, - tgt_key_padding_mask=tgt_key_padding_mask, - tgt_reference_points=reference_points_input, - memory_text=memory_text, - text_attention_mask=text_attention_mask, - memory=memory, - memory_key_padding_mask=memory_key_padding_mask, - memory_level_start_index=level_start_index, - memory_spatial_shapes=spatial_shapes, - memory_pos=pos, - self_attn_mask=tgt_mask, - cross_attn_mask=memory_mask, - ) - if output.isnan().any() | output.isinf().any(): - print(f"output layer_id {layer_id} is nan") - try: - num_nan = output.isnan().sum().item() - num_inf = output.isinf().sum().item() - print(f"num_nan {num_nan}, num_inf {num_inf}") - except Exception as e: - print(e) - # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1': - # import ipdb; ipdb.set_trace() - - # iter update - if self.bbox_embed is not None: - # box_holder = self.bbox_embed(output) - # box_holder[..., :self.query_dim] += inverse_sigmoid(reference_points) - # new_reference_points = box_holder[..., :self.query_dim].sigmoid() - - reference_before_sigmoid = inverse_sigmoid(reference_points) - delta_unsig = self.bbox_embed[layer_id](output) - outputs_unsig = delta_unsig + reference_before_sigmoid - new_reference_points = outputs_unsig.sigmoid() - - reference_points = new_reference_points.detach() - # if layer_id != self.num_layers - 1: - ref_points.append(new_reference_points) - - intermediate.append(self.norm(output)) - - return [ - [itm_out.transpose(0, 1) for itm_out in intermediate], - [itm_refpoint.transpose(0, 1) for itm_refpoint in ref_points], - ] - - -class DeformableTransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model=256, - d_ffn=1024, - dropout=0.1, - activation="relu", - n_levels=4, - n_heads=8, - n_points=4, - ): - super().__init__() - - # self attention - self.self_attn = MSDeformAttn( - embed_dim=d_model, - num_levels=n_levels, - num_heads=n_heads, - num_points=n_points, - batch_first=True, - ) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(d_model) - - # ffn - 
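# Two-layer position-wise feed-forward block: Linear(d_model -> d_ffn), activation, dropout, Linear(d_ffn -> d_model); forward_ffn adds the result back to the input and applies LayerNorm. -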
self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation, d_model=d_ffn) - self.dropout2 = nn.Dropout(dropout) - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout3 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(d_model) - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, src): - src2 = self.linear2(self.dropout2(self.activation(self.linear1(src)))) - src = src + self.dropout3(src2) - src = self.norm2(src) - return src - - def forward( - self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None - ): - # self attention - # import ipdb; ipdb.set_trace() - src2 = self.self_attn( - query=self.with_pos_embed(src, pos), - reference_points=reference_points, - value=src, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - key_padding_mask=key_padding_mask, - ) - src = src + self.dropout1(src2) - src = self.norm1(src) - - # ffn - src = self.forward_ffn(src) - - return src - - -class DeformableTransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model=256, - d_ffn=1024, - dropout=0.1, - activation="relu", - n_levels=4, - n_heads=8, - n_points=4, - use_text_feat_guide=False, - use_text_cross_attention=False, - ): - super().__init__() - - # cross attention - self.cross_attn = MSDeformAttn( - embed_dim=d_model, - num_levels=n_levels, - num_heads=n_heads, - num_points=n_points, - batch_first=True, - ) - self.dropout1 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.norm1 = nn.LayerNorm(d_model) - - # cross attention text - if use_text_cross_attention: - self.ca_text = nn.MultiheadAttention(d_model, n_heads, dropout=dropout) - self.catext_dropout = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.catext_norm = nn.LayerNorm(d_model) - - # self attention - self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout) - self.dropout2 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.norm2 = nn.LayerNorm(d_model) - - # ffn - self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation, d_model=d_ffn, batch_dim=1) - self.dropout3 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout4 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.norm3 = nn.LayerNorm(d_model) - - self.key_aware_proj = None - self.use_text_feat_guide = use_text_feat_guide - assert not use_text_feat_guide - self.use_text_cross_attention = use_text_cross_attention - - def rm_self_attn_modules(self): - self.self_attn = None - self.dropout2 = None - self.norm2 = None - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, tgt): - with torch.cuda.amp.autocast(enabled=False): - tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout4(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward( - self, - # for tgt - tgt: Optional[Tensor], # nq, bs, d_model - tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos)) - tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. 
Sine(pos) - tgt_key_padding_mask: Optional[Tensor] = None, - tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4 - memory_text: Optional[Tensor] = None, # bs, num_token, d_model - text_attention_mask: Optional[Tensor] = None, # bs, num_token - # for memory - memory: Optional[Tensor] = None, # hw, bs, d_model - memory_key_padding_mask: Optional[Tensor] = None, - memory_level_start_index: Optional[Tensor] = None, # num_levels - memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - memory_pos: Optional[Tensor] = None, # pos for memory - # sa - self_attn_mask: Optional[Tensor] = None, # mask used for self-attention - cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention - ): - """ - Input: - - tgt/tgt_query_pos: nq, bs, d_model - - - """ - assert cross_attn_mask is None - - # self attention - if self.self_attn is not None: - # import ipdb; ipdb.set_trace() - q = k = self.with_pos_embed(tgt, tgt_query_pos) - tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - - if self.use_text_cross_attention: - tgt2 = self.ca_text( - self.with_pos_embed(tgt, tgt_query_pos), - memory_text.transpose(0, 1), - memory_text.transpose(0, 1), - key_padding_mask=text_attention_mask, - )[0] - tgt = tgt + self.catext_dropout(tgt2) - tgt = self.catext_norm(tgt) - - tgt2 = self.cross_attn( - query=self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1), - reference_points=tgt_reference_points.transpose(0, 1).contiguous(), - value=memory.transpose(0, 1), - spatial_shapes=memory_spatial_shapes, - level_start_index=memory_level_start_index, - key_padding_mask=memory_key_padding_mask, - ).transpose(0, 1) - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - - # ffn - tgt = self.forward_ffn(tgt) - - return tgt - - -def build_transformer(args): - return Transformer( - d_model=args.hidden_dim, - dropout=args.dropout, - nhead=args.nheads, - num_queries=args.num_queries, - dim_feedforward=args.dim_feedforward, - num_encoder_layers=args.enc_layers, - num_decoder_layers=args.dec_layers, - normalize_before=args.pre_norm, - return_intermediate_dec=True, - query_dim=args.query_dim, - activation=args.transformer_activation, - num_patterns=args.num_patterns, - num_feature_levels=args.num_feature_levels, - enc_n_points=args.enc_n_points, - dec_n_points=args.dec_n_points, - learnable_tgt_init=True, - # two stage - two_stage_type=args.two_stage_type, # ['no', 'standard', 'early'] - embed_init_tgt=args.embed_init_tgt, - use_text_enhancer=args.use_text_enhancer, - use_fusion_layer=args.use_fusion_layer, - use_checkpoint=args.use_checkpoint, - use_transformer_ckpt=args.use_transformer_ckpt, - use_text_cross_attention=args.use_text_cross_attention, - text_dropout=args.text_dropout, - fusion_dropout=args.fusion_dropout, - fusion_droppath=args.fusion_droppath, - ) diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/app.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/app.py deleted file mode 100644 index e7c568de43bb79826e5f86c2055f132387c32a43..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import gradio as gr -import numpy as np -import matplotlib.pyplot as plt -import gpytorch -import torch -import sys - -import gpytorch - -# We will use the simplest form of GP model, exact inference -class ExactGPModel(gpytorch.models.ExactGP): - def __init__(self, train_x, train_y, likelihood): - 
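# Exact GP regression model: constant mean plus a scale-wrapped RBF kernel; get_model below fills in the noise, outputscale and lengthscale hyperparameters. -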
super(ExactGPModel, self).__init__(train_x, train_y, likelihood) - self.mean_module = gpytorch.means.ConstantMean() - self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) - - def forward(self, x): - mean_x = self.mean_module(x) - covar_x = self.covar_module(x) - return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) - -def get_model(x, y, hyperparameters): - likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.GreaterThan(1.e-9)) - model = ExactGPModel(x, y, likelihood) - model.likelihood.noise = torch.ones_like(model.likelihood.noise) * hyperparameters["noise"] - model.covar_module.outputscale = torch.ones_like(model.covar_module.outputscale) * hyperparameters["outputscale"] - model.covar_module.base_kernel.lengthscale = torch.ones_like(model.covar_module.base_kernel.lengthscale) * \ - hyperparameters["lengthscale"] - return model, likelihood - - - -excuse = "Please only specify numbers, x values should be in [0,1] and y values in [-1,1]." -excuse_max_examples = "This model is trained to work with up to 4 input points." -hyperparameters = {'noise': 1e-4, 'outputscale': 1., 'lengthscale': .1, 'fast_computations': (False,False,False)} - - -conf = .5 - -def mean_and_bounds_for_gp(x,y,test_xs): - gp_model, likelihood = get_model(x,y,hyperparameters) - gp_model.eval() - l = likelihood(gp_model(test_xs)) - means = l.mean.squeeze() - varis = torch.diagonal(l.covariance_matrix.squeeze()) - stds = varis.sqrt() - return means, means-stds, means+stds - - -def mean_and_bounds_for_pnf(x,y,test_xs, choice): - sys.path.append('prior-fitting/') - model = torch.load(f'onefeature_gp_ls.1_pnf_{choice}.pt') - - logits = model((torch.cat([x,test_xs],0).unsqueeze(1),y.unsqueeze(1)),single_eval_pos=len(x)) - bounds = model.criterion.quantile(logits,center_prob=.682).squeeze(1) - return model.criterion.mean(logits).squeeze(1), bounds[:,0], bounds[:,1] - -def plot_w_conf_interval(ax_or_plt, x, m, lb, ub, color, label_prefix): - ax_or_plt.plot(x.squeeze(-1),m, color=color, label=label_prefix+' mean') - ax_or_plt.fill_between(x.squeeze(-1), lb, ub, alpha=.1, color=color, label=label_prefix+' conf. interval') - - - - -@torch.no_grad() -def infer(table, choice): - vfunc = np.vectorize(lambda s: len(s)) - non_empty_row_mask = (vfunc(table).sum(1) != 0) - table = table[non_empty_row_mask] - - try: - table = table.astype(np.float32) - except ValueError: - return excuse, None - x = torch.tensor(table[:,0]).unsqueeze(1) - y = torch.tensor(table[:,1]) - fig = plt.figure(figsize=(8,4),dpi=1000) - - if len(x) > 4: - return excuse_max_examples, None - if (x<0.).any() or (x>1.).any() or (y<-1).any() or (y>1).any(): - return excuse, None - - plt.scatter(x,y, color='black', label='Examples in given dataset') - - - - test_xs = torch.linspace(0,1,100).unsqueeze(1) - - plot_w_conf_interval(plt, test_xs, *mean_and_bounds_for_gp(x,y,test_xs), 'green', 'GP') - plot_w_conf_interval(plt, test_xs, *mean_and_bounds_for_pnf(x,y,test_xs, choice), 'blue', 'PFN') - - plt.legend(ncol=2,bbox_to_anchor=[0.5,-.14],loc="upper center") - plt.xlabel('x') - plt.ylabel('y') - plt.tight_layout() - - - return 'There you go, your plot. 📈', plt.gcf() - -iface = gr.Interface(fn=infer, - title='GP Posterior Approximation with Transformers', - description='''This is a demo of PFNs as we describe them in our recent paper (https://openreview.net/forum?id=KSugKcbNf9). -Lines represent means and shaded areas are the confidence interval (68.2% quantile). 
In green, we have the ground truth GP posterior and in blue we have our approximation. -We provide three models that are architecturally the same, but with different training budgets. -The GP (approximated) uses an RBF Kernel with a little noise (1e-4), 0 mean and a length scale of 0.1. - ''', - article="

    Paper: Transformers Can Do Bayesian Inference

    ", - inputs=[ - gr.inputs.Dataframe(headers=["x", "y"], datatype=["number", "number"], type='numpy', default=[['.25','.1'],['.75','.4']], col_count=2, label='The data: you can change this and increase the number of data points using the `enter` key.'), - gr.inputs.Radio(['160K','800K','4M'], type="value", default='4M', label='Number of Sampled Datasets in Training (Training Costs), higher values yield better results') - ], outputs=["text",gr.outputs.Plot(type="matplotlib")]) -iface.launch() - - - diff --git a/spaces/sasha/MetricCompare/README.md b/spaces/sasha/MetricCompare/README.md deleted file mode 100644 index 90c510bb26464ea1f4f02df919955853174e6c92..0000000000000000000000000000000000000000 --- a/spaces/sasha/MetricCompare/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MetricCompare -emoji: 🐨 -colorFrom: gray -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sayakpaul/sots-indoor-dehazing-maxim/maxim/blocks/grid_gating.py b/spaces/sayakpaul/sots-indoor-dehazing-maxim/maxim/blocks/grid_gating.py deleted file mode 100644 index 91980c874bd1175f1eb0be554f7be99b60cf86bd..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/sots-indoor-dehazing-maxim/maxim/blocks/grid_gating.py +++ /dev/null @@ -1,68 +0,0 @@ -import tensorflow as tf -from tensorflow.keras import backend as K -from tensorflow.keras import layers - -from ..layers import BlockImages, SwapAxes, UnblockImages - - -def GridGatingUnit(use_bias: bool = True, name: str = "grid_gating_unit"): - """A SpatialGatingUnit as defined in the gMLP paper. - - The 'spatial' dim is defined as the second last. - If applied on other dims, you should swapaxes first. - """ - - def apply(x): - u, v = tf.split(x, 2, axis=-1) - v = layers.LayerNormalization( - epsilon=1e-06, name=f"{name}_intermediate_layernorm" - )(v) - n = K.int_shape(x)[-3] # get spatial dim - v = SwapAxes()(v, -1, -3) - v = layers.Dense(n, use_bias=use_bias, name=f"{name}_Dense_0")(v) - v = SwapAxes()(v, -1, -3) - return u * (v + 1.0) - - return apply - - -def GridGmlpLayer( - grid_size, - use_bias: bool = True, - factor: int = 2, - dropout_rate: float = 0.0, - name: str = "grid_gmlp", -): - """Grid gMLP layer that performs global mixing of tokens.""" - - def apply(x): - n, h, w, num_channels = ( - K.int_shape(x)[0], - K.int_shape(x)[1], - K.int_shape(x)[2], - K.int_shape(x)[3], - ) - gh, gw = grid_size - fh, fw = h // gh, w // gw - - x = BlockImages()(x, patch_size=(fh, fw)) - # gMLP1: Global (grid) mixing part, provides global grid communication. 
- y = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x) - y = layers.Dense( - num_channels * factor, - use_bias=use_bias, - name=f"{name}_in_project", - )(y) - y = tf.nn.gelu(y, approximate=True) - y = GridGatingUnit(use_bias=use_bias, name=f"{name}_GridGatingUnit")(y) - y = layers.Dense( - num_channels, - use_bias=use_bias, - name=f"{name}_out_project", - )(y) - y = layers.Dropout(dropout_rate)(y) - x = x + y - x = UnblockImages()(x, grid_size=(gh, gw), patch_size=(fh, fw)) - return x - - return apply diff --git a/spaces/scedlatioru/img-to-music/example/Death Note Movie 1 English Sub Torrent.md b/spaces/scedlatioru/img-to-music/example/Death Note Movie 1 English Sub Torrent.md deleted file mode 100644 index 6eaf4fd5389b5cb5130b5d2c5f7f6d262a56e205..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Death Note Movie 1 English Sub Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Death note movie 1 english sub torrent


    Download File 🆓 https://gohhs.com/2uEz66



    -
    -Attack on Titan Season 1 Episode 22 English Dubbed. ... Watch Attack on Titan Season 4 Episode 3 Movie Full BDRip is not transcode and can move down for encryption, but ... 00G torrent CBM Attack on Titan Season 2 (Dual Audio) BD 1080p 8bit x264 MKV 17. ... From the director of Death Note comes Attack on Titan. 1fdad05405
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 !!INSTALL!!.md b/spaces/scedlatioru/img-to-music/example/Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 !!INSTALL!!.md deleted file mode 100644 index bba31240ac290fbf0ca72efd88f6791fd890a208..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 !!INSTALL!!.md +++ /dev/null @@ -1,123 +0,0 @@ -
    -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58: A Valuable Resource for Engineering Students and Practitioners

    - -

    Numerical methods are essential tools for solving various engineering problems that involve mathematical modeling, analysis, and computation. However, learning and applying numerical methods can be challenging and time-consuming, especially for students and practitioners who are not familiar with programming languages and software packages.

    -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58


    DOWNLOADhttps://gohhs.com/2uEAxw



    - -

    That is why Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 is a useful resource that can help you master numerical methods and use them effectively in your engineering projects. This book provides detailed solutions to all the problems in the textbook Numerical Methods in Engineering with MATLAB by Jaan Kiusalaas, which is a comprehensive and practical introduction to numerical methods for engineering students and professionals.

    - -

    What is Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 is a book that contains complete and step-by-step solutions to all the exercises and problems in the textbook Numerical Methods in Engineering with MATLAB by Jaan Kiusalaas. The solutions are written in clear and concise language, with explanations and comments to help you understand the logic and reasoning behind each method.

    - -

    The book covers a wide range of numerical methods, such as:

    - -
      -
    • Root finding
    • -
    • Linear systems
    • -
    • Interpolation
    • -
    • Curve fitting
    • -
    • Numerical differentiation
    • -
    • Numerical integration
    • -
    • Ordinary differential equations
    • -
    • Partial differential equations
    • -
    • Optimization
    • -
    • Eigenvalue problems
    • -
    - -

    The book also uses MATLAB as the programming language and software package for implementing and testing the numerical methods. MATLAB is a popular and powerful tool for scientific computation that is widely used in engineering studies and practice. The book provides MATLAB code files for each method that are available on the book's website. The code files are simple and easy to understand, while maintaining the essential features of the method.

    - -

    Why Should You Use Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 can offer you several benefits, such as:

    -

    - -
      -
    • It can help you learn numerical methods more effectively by providing you with worked-out examples and solutions that illustrate how to apply the methods to various engineering problems.
    • -
    • It can help you practice numerical methods more efficiently by providing you with ready-made code files that you can run and modify on MATLAB to test your understanding and skills.
    • -
    • It can help you improve your numerical methods more easily by providing you with feedback and tips on how to avoid common errors and pitfalls, how to optimize your code, and how to choose the best method for your problem.
    • -
    • It can help you prepare for your exams and assignments more confidently by providing you with a reliable reference and guide that you can consult anytime you need.
    • -
    - -

    If you are looking for a way to master numerical methods and use them effectively in your engineering projects, you should consider getting Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58. It will help you solve engineering problems with numerical methods and MATLAB more easily and efficiently.

    -

    How to Use Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 is easy to use and follow. You can use it as a supplement to the textbook or as a standalone reference. Here is how to use it:

    - -
      -
    1. Find the chapter and the problem that you want to solve in the solution manual.
    2. -
    3. Read the solution carefully and try to understand the steps and the logic behind each method.
    4. -
    5. Compare your solution with the solution manual and check for any errors or differences.
    6. -
    7. If you have any questions or doubts, refer to the textbook or the MATLAB code files for more explanation and clarification.
    8. -
    9. Practice solving similar problems with different data or parameters to test your understanding and skills.
    10. -
    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 can help you learn numerical methods more effectively by providing you with worked-out examples and solutions that illustrate how to apply the methods to various engineering problems.

    - -

    Where to Get Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    If you want to get Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58, you have several options:

    - -
      -
    • You can buy it online from various websites such as Amazon, Chegg, or eBay.
    • -
    • You can download it for free from some websites such as Google Drive, Academia.edu, or MATLAB For Engineers.
    • -
    • You can borrow it from your library or your classmates.
    • -
    - -

    However, you should be careful about the quality and the legality of the source that you choose. Some websites may offer incomplete, incorrect, or outdated solutions that may confuse or mislead you. Some websites may also violate the copyright or the academic integrity of the author and the publisher. Therefore, you should always check the source and the reputation of the website before downloading or buying Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58.

    -

    What are the Benefits of Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 can bring you many benefits, such as:

    - -
      -
    • It can save you time and effort by providing you with ready-made solutions that you can use as a reference or a guide.
    • -
    • It can enhance your understanding and skills by providing you with clear and concise explanations and comments that help you learn the logic and reasoning behind each method.
    • -
    • It can improve your performance and grades by providing you with feedback and tips that help you avoid common errors and pitfalls, optimize your code, and choose the best method for your problem.
    • -
    • It can increase your confidence and interest by providing you with worked-out examples and solutions that illustrate how to apply the methods to various engineering problems.
    • -
    - -

    If you are looking for a way to master numerical methods and use them effectively in your engineering projects, you should consider getting Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58. It will help you solve engineering problems with numerical methods and MATLAB more easily and efficiently.

    - -

    How to Review Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    If you have used Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58, you may want to share your opinion and experience with other users. You can review the book on various websites such as Amazon, Chegg, or Google Books. Here is how to review the book:

    - -
      -
    1. Go to the website where you bought or downloaded the book and find the book page.
    2. -
    3. Click on the write a review or rate this book button.
    4. -
    5. Give the book a rating from one to five stars based on your satisfaction and expectations.
    6. -
    7. Write a brief summary of your review that highlights the main points and features of the book.
    8. -
    9. Write a detailed review that explains why you liked or disliked the book, what you learned from it, how it helped you, and what suggestions or improvements you have for it.
    10. -
    11. Be honest, respectful, and constructive in your review. Avoid spoilers, personal attacks, or plagiarism.
    12. -
    13. Submit your review and check for any errors or typos.
    14. -
    - -

    Reviewing Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 can help you express your thoughts and feelings about the book, as well as help other users decide whether to get the book or not. You can also read other reviews and ratings from other users or experts online to get more insights and perspectives on the book.

    -

    Who is Jaan Kiusalaas and Why Should You Trust His Solution Manual?

    - -

    Jaan Kiusalaas is a Professor Emeritus in the Department of Engineering Science and Mechanics at the Pennsylvania State University. He has taught numerical methods, including finite element and boundary element methods for over 30 years. He is also the co-author of four other books on engineering mechanics and materials.

    - -

    Jaan Kiusalaas is an expert and an experienced teacher in the field of numerical methods and engineering. He has a deep understanding of the theory and the practice of numerical methods and their applications to engineering problems. He has also developed and tested the MATLAB code files that accompany his solution manual and his textbook.

    - -

    You can trust his solution manual because it is based on his extensive knowledge and experience in numerical methods and engineering. It is also based on his textbook, which is a comprehensive and practical introduction to numerical methods for engineering students and professionals. His solution manual and his textbook are well-written, well-organized, and well-reviewed by other experts and users.

    - -

    How to Get the Most Out of Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58?

    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 is a valuable resource that can help you master numerical methods and use them effectively in your engineering projects. However, to get the most out of it, you need to use it properly and wisely. Here are some tips on how to do that:

    - -
      -
    • Do not rely solely on the solution manual. Try to solve the problems by yourself first, using the textbook and the MATLAB code files as references.
    • -
    • Do not copy or memorize the solutions. Try to understand the steps and the logic behind each method.
    • -
    • Do not skip or ignore any part of the solution. Pay attention to the explanations and comments that accompany each method.
    • -
    • Do not hesitate to ask questions or seek help if you have any difficulty or confusion with the solution manual or the textbook.
    • -
    • Do not limit yourself to the problems in the solution manual or the textbook. Practice solving similar problems with different data or parameters to test your understanding and skills.
    • -
    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 can help you learn numerical methods more effectively by providing you with worked-out examples and solutions that illustrate how to apply the methods to various engineering problems. However, you need to use it actively and critically, not passively and blindly.

    -

    Conclusion

    - -

    Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58 is a useful resource for both graduate students and practicing engineers who want to master numerical methods and use them effectively in their engineering projects. It provides detailed solutions to all the problems in the textbook Numerical Methods in Engineering with MATLAB by Jaan Kiusalaas, which is a comprehensive and practical introduction to numerical methods for engineering students and professionals. The solutions are written in clear and concise language, with explanations and comments to help you understand the logic and reasoning behind each method. The book also uses MATLAB as the programming language and software package for implementing and testing the numerical methods. MATLAB is a popular and powerful tool for scientific computation that is widely used in engineering studies and practice. The book provides MATLAB code files for each method that are available on the book's website. The code files are simple and easy to understand, while maintaining the essential features of the method.

    - -

    If you are looking for a way to master numerical methods and use them effectively in your engineering projects, you should consider getting Solution Manual Jaan Kiusalaas Numerical Methods In Engineering With MATLAB 2nd 58. It will help you solve engineering problems with numerical methods and MATLAB more easily and efficiently.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/metrics/__init__.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/metrics/__init__.py deleted file mode 100644 index 19d55cc8321f124c918d78465b053aef67f13a33..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/metrics/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from copy import deepcopy - -from basicsr.utils.registry import METRIC_REGISTRY -from .psnr_ssim import calculate_psnr, calculate_ssim - -__all__ = ['calculate_psnr', 'calculate_ssim'] - - -def calculate_metric(data, opt): - """Calculate metric from data and options. - - Args: - opt (dict): Configuration. It must constain: - type (str): Model type. - """ - opt = deepcopy(opt) - metric_type = opt.pop('type') - metric = METRIC_REGISTRY.get(metric_type)(**data, **opt) - return metric diff --git a/spaces/segments-tobias/conex/espnet/bin/st_train.py b/spaces/segments-tobias/conex/espnet/bin/st_train.py deleted file mode 100644 index 4398d6aaa0c68db56dd66dacf089cd10ee189465..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/bin/st_train.py +++ /dev/null @@ -1,550 +0,0 @@ -#!/usr/bin/env python3 -# encoding: utf-8 - -# Copyright 2019 Kyoto University (Hirofumi Inaguma) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""End-to-end speech translation model training script.""" - -from distutils.version import LooseVersion -import logging -import os -import random -import subprocess -import sys - -import configargparse -import numpy as np -import torch - -from espnet import __version__ -from espnet.utils.cli_utils import strtobool -from espnet.utils.training.batchfy import BATCH_COUNT_CHOICES - -is_torch_1_2_plus = LooseVersion(torch.__version__) >= LooseVersion("1.2") - - -# NOTE: you need this func to generate our sphinx doc -def get_parser(parser=None, required=True): - """Get default arguments.""" - if parser is None: - parser = configargparse.ArgumentParser( - description="Train a speech translation (ST) model on one CPU, " - "one or multiple GPUs", - config_file_parser_class=configargparse.YAMLConfigFileParser, - formatter_class=configargparse.ArgumentDefaultsHelpFormatter, - ) - # general configuration - parser.add("--config", is_config_file=True, help="config file path") - parser.add( - "--config2", - is_config_file=True, - help="second config file path that overwrites the settings in `--config`.", - ) - parser.add( - "--config3", - is_config_file=True, - help="third config file path that overwrites the settings " - "in `--config` and `--config2`.", - ) - - parser.add_argument( - "--ngpu", - default=None, - type=int, - help="Number of GPUs. If not given, use all visible devices", - ) - parser.add_argument( - "--train-dtype", - default="float32", - choices=["float16", "float32", "float64", "O0", "O1", "O2", "O3"], - help="Data type for training (only pytorch backend). " - "O0,O1,.. flags require apex. 
" - "See https://nvidia.github.io/apex/amp.html#opt-levels", - ) - parser.add_argument( - "--backend", - default="chainer", - type=str, - choices=["chainer", "pytorch"], - help="Backend library", - ) - parser.add_argument( - "--outdir", type=str, required=required, help="Output directory" - ) - parser.add_argument("--debugmode", default=1, type=int, help="Debugmode") - parser.add_argument("--dict", required=required, help="Dictionary") - parser.add_argument("--seed", default=1, type=int, help="Random seed") - parser.add_argument("--debugdir", type=str, help="Output directory for debugging") - parser.add_argument( - "--resume", - "-r", - default="", - nargs="?", - help="Resume the training from snapshot", - ) - parser.add_argument( - "--minibatches", - "-N", - type=int, - default="-1", - help="Process only N minibatches (for debug)", - ) - parser.add_argument("--verbose", "-V", default=0, type=int, help="Verbose option") - parser.add_argument( - "--tensorboard-dir", - default=None, - type=str, - nargs="?", - help="Tensorboard log dir path", - ) - parser.add_argument( - "--report-interval-iters", - default=100, - type=int, - help="Report interval iterations", - ) - parser.add_argument( - "--save-interval-iters", - default=0, - type=int, - help="Save snapshot interval iterations", - ) - # task related - parser.add_argument( - "--train-json", - type=str, - default=None, - help="Filename of train label data (json)", - ) - parser.add_argument( - "--valid-json", - type=str, - default=None, - help="Filename of validation label data (json)", - ) - # network architecture - parser.add_argument( - "--model-module", - type=str, - default=None, - help="model defined module (default: espnet.nets.xxx_backend.e2e_st:E2E)", - ) - # loss related - parser.add_argument( - "--ctc_type", - default="warpctc", - type=str, - choices=["builtin", "warpctc", "gtnctc", "cudnnctc"], - help="Type of CTC implementation to calculate loss.", - ) - parser.add_argument( - "--mtlalpha", - default=0.0, - type=float, - help="Multitask learning coefficient, alpha: \ - alpha*ctc_loss + (1-alpha)*att_loss", - ) - parser.add_argument( - "--asr-weight", - default=0.0, - type=float, - help="Multitask learning coefficient for ASR task, weight: " - " asr_weight*(alpha*ctc_loss + (1-alpha)*att_loss)" - " + (1-asr_weight-mt_weight)*st_loss", - ) - parser.add_argument( - "--mt-weight", - default=0.0, - type=float, - help="Multitask learning coefficient for MT task, weight: \ - mt_weight*mt_loss + (1-mt_weight-asr_weight)*st_loss", - ) - parser.add_argument( - "--lsm-weight", default=0.0, type=float, help="Label smoothing weight" - ) - # recognition options to compute CER/WER - parser.add_argument( - "--report-cer", - default=False, - action="store_true", - help="Compute CER on development set", - ) - parser.add_argument( - "--report-wer", - default=False, - action="store_true", - help="Compute WER on development set", - ) - # translations options to compute BLEU - parser.add_argument( - "--report-bleu", - default=True, - action="store_true", - help="Compute BLEU on development set", - ) - parser.add_argument("--nbest", type=int, default=1, help="Output N-best hypotheses") - parser.add_argument("--beam-size", type=int, default=4, help="Beam size") - parser.add_argument("--penalty", default=0.0, type=float, help="Incertion penalty") - parser.add_argument( - "--maxlenratio", - default=0.0, - type=float, - help="""Input length ratio to obtain max output length. 
- If maxlenratio=0.0 (default), it uses a end-detect function - to automatically find maximum hypothesis lengths""", - ) - parser.add_argument( - "--minlenratio", - default=0.0, - type=float, - help="Input length ratio to obtain min output length", - ) - parser.add_argument( - "--rnnlm", type=str, default=None, help="RNNLM model file to read" - ) - parser.add_argument( - "--rnnlm-conf", type=str, default=None, help="RNNLM model config file to read" - ) - parser.add_argument("--lm-weight", default=0.0, type=float, help="RNNLM weight.") - parser.add_argument("--sym-space", default="", type=str, help="Space symbol") - parser.add_argument("--sym-blank", default="", type=str, help="Blank symbol") - # minibatch related - parser.add_argument( - "--sortagrad", - default=0, - type=int, - nargs="?", - help="How many epochs to use sortagrad for. 0 = deactivated, -1 = all epochs", - ) - parser.add_argument( - "--batch-count", - default="auto", - choices=BATCH_COUNT_CHOICES, - help="How to count batch_size. " - "The default (auto) will find how to count by args.", - ) - parser.add_argument( - "--batch-size", - "--batch-seqs", - "-b", - default=0, - type=int, - help="Maximum seqs in a minibatch (0 to disable)", - ) - parser.add_argument( - "--batch-bins", - default=0, - type=int, - help="Maximum bins in a minibatch (0 to disable)", - ) - parser.add_argument( - "--batch-frames-in", - default=0, - type=int, - help="Maximum input frames in a minibatch (0 to disable)", - ) - parser.add_argument( - "--batch-frames-out", - default=0, - type=int, - help="Maximum output frames in a minibatch (0 to disable)", - ) - parser.add_argument( - "--batch-frames-inout", - default=0, - type=int, - help="Maximum input+output frames in a minibatch (0 to disable)", - ) - parser.add_argument( - "--maxlen-in", - "--batch-seq-maxlen-in", - default=800, - type=int, - metavar="ML", - help="When --batch-count=seq, batch size is reduced " - "if the input sequence length > ML.", - ) - parser.add_argument( - "--maxlen-out", - "--batch-seq-maxlen-out", - default=150, - type=int, - metavar="ML", - help="When --batch-count=seq, " - "batch size is reduced if the output sequence length > ML", - ) - parser.add_argument( - "--n-iter-processes", - default=0, - type=int, - help="Number of processes of iterator", - ) - parser.add_argument( - "--preprocess-conf", - type=str, - default=None, - nargs="?", - help="The configuration file for the pre-processing", - ) - # optimization related - parser.add_argument( - "--opt", - default="adadelta", - type=str, - choices=["adadelta", "adam", "noam"], - help="Optimizer", - ) - parser.add_argument( - "--accum-grad", default=1, type=int, help="Number of gradient accumuration" - ) - parser.add_argument( - "--eps", default=1e-8, type=float, help="Epsilon constant for optimizer" - ) - parser.add_argument( - "--eps-decay", default=0.01, type=float, help="Decaying ratio of epsilon" - ) - parser.add_argument( - "--lr", default=1e-3, type=float, help="Learning rate for optimizer" - ) - parser.add_argument( - "--lr-decay", default=1.0, type=float, help="Decaying ratio of learning rate" - ) - parser.add_argument( - "--weight-decay", default=0.0, type=float, help="Weight decay ratio" - ) - parser.add_argument( - "--criterion", - default="acc", - type=str, - choices=["loss", "acc"], - help="Criterion to perform epsilon decay", - ) - parser.add_argument( - "--threshold", default=1e-4, type=float, help="Threshold to stop iteration" - ) - parser.add_argument( - "--epochs", "-e", default=30, type=int, help="Maximum 
number of epochs" - ) - parser.add_argument( - "--early-stop-criterion", - default="validation/main/acc", - type=str, - nargs="?", - help="Value to monitor to trigger an early stopping of the training", - ) - parser.add_argument( - "--patience", - default=3, - type=int, - nargs="?", - help="Number of epochs to wait " - "without improvement before stopping the training", - ) - parser.add_argument( - "--grad-clip", default=5, type=float, help="Gradient norm threshold to clip" - ) - parser.add_argument( - "--num-save-attention", - default=3, - type=int, - help="Number of samples of attention to be saved", - ) - parser.add_argument( - "--num-save-ctc", - default=3, - type=int, - help="Number of samples of CTC probability to be saved", - ) - parser.add_argument( - "--grad-noise", - type=strtobool, - default=False, - help="The flag to switch to use noise injection to gradients during training", - ) - # speech translation related - parser.add_argument( - "--context-residual", - default=False, - type=strtobool, - nargs="?", - help="The flag to switch to use context vector residual in the decoder network", - ) - # finetuning related - parser.add_argument( - "--enc-init", - default=None, - type=str, - nargs="?", - help="Pre-trained ASR model to initialize encoder.", - ) - parser.add_argument( - "--enc-init-mods", - default="enc.enc.", - type=lambda s: [str(mod) for mod in s.split(",") if s != ""], - help="List of encoder modules to initialize, separated by a comma.", - ) - parser.add_argument( - "--dec-init", - default=None, - type=str, - nargs="?", - help="Pre-trained ASR, MT or LM model to initialize decoder.", - ) - parser.add_argument( - "--dec-init-mods", - default="att., dec.", - type=lambda s: [str(mod) for mod in s.split(",") if s != ""], - help="List of decoder modules to initialize, separated by a comma.", - ) - # multilingual related - parser.add_argument( - "--multilingual", - default=False, - type=strtobool, - help="Prepend target language ID to the source sentence. " - " Both source/target language IDs must be prepend in the pre-processing stage.", - ) - parser.add_argument( - "--replace-sos", - default=False, - type=strtobool, - help="Replace in the decoder with a target language ID \ - (the first token in the target sequence)", - ) - # Feature transform: Normalization - parser.add_argument( - "--stats-file", - type=str, - default=None, - help="The stats file for the feature normalization", - ) - parser.add_argument( - "--apply-uttmvn", - type=strtobool, - default=True, - help="Apply utterance level mean " "variance normalization.", - ) - parser.add_argument("--uttmvn-norm-means", type=strtobool, default=True, help="") - parser.add_argument("--uttmvn-norm-vars", type=strtobool, default=False, help="") - # Feature transform: Fbank - parser.add_argument( - "--fbank-fs", - type=int, - default=16000, - help="The sample frequency used for " "the mel-fbank creation.", - ) - parser.add_argument( - "--n-mels", type=int, default=80, help="The number of mel-frequency bins." - ) - parser.add_argument("--fbank-fmin", type=float, default=0.0, help="") - parser.add_argument("--fbank-fmax", type=float, default=None, help="") - return parser - - -def main(cmd_args): - """Run the main training function.""" - parser = get_parser() - args, _ = parser.parse_known_args(cmd_args) - if args.backend == "chainer" and args.train_dtype != "float32": - raise NotImplementedError( - f"chainer backend does not support --train-dtype {args.train_dtype}." - "Use --dtype float32." 
- ) - if args.ngpu == 0 and args.train_dtype in ("O0", "O1", "O2", "O3", "float16"): - raise ValueError( - f"--train-dtype {args.train_dtype} does not support the CPU backend." - ) - - from espnet.utils.dynamic_import import dynamic_import - - if args.model_module is None: - model_module = "espnet.nets." + args.backend + "_backend.e2e_st:E2E" - else: - model_module = args.model_module - model_class = dynamic_import(model_module) - model_class.add_arguments(parser) - - args = parser.parse_args(cmd_args) - args.model_module = model_module - if "chainer_backend" in args.model_module: - args.backend = "chainer" - if "pytorch_backend" in args.model_module: - args.backend = "pytorch" - - # add version info in args - args.version = __version__ - - # logging info - if args.verbose > 0: - logging.basicConfig( - level=logging.INFO, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - else: - logging.basicConfig( - level=logging.WARN, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - logging.warning("Skip DEBUG/INFO messages") - - # If --ngpu is not given, - # 1. if CUDA_VISIBLE_DEVICES is set, all visible devices - # 2. if nvidia-smi exists, use all devices - # 3. else ngpu=0 - if args.ngpu is None: - cvd = os.environ.get("CUDA_VISIBLE_DEVICES") - if cvd is not None: - ngpu = len(cvd.split(",")) - else: - logging.warning("CUDA_VISIBLE_DEVICES is not set.") - try: - p = subprocess.run( - ["nvidia-smi", "-L"], stdout=subprocess.PIPE, stderr=subprocess.PIPE - ) - except (subprocess.CalledProcessError, FileNotFoundError): - ngpu = 0 - else: - ngpu = len(p.stderr.decode().split("\n")) - 1 - args.ngpu = ngpu - else: - if is_torch_1_2_plus and args.ngpu != 1: - logging.debug( - "There are some bugs with multi-GPU processing in PyTorch 1.2+" - + " (see https://github.com/pytorch/pytorch/issues/21108)" - ) - ngpu = args.ngpu - logging.info(f"ngpu: {ngpu}") - - # display PYTHONPATH - logging.info("python path = " + os.environ.get("PYTHONPATH", "(None)")) - - # set random seed - logging.info("random seed = %d" % args.seed) - random.seed(args.seed) - np.random.seed(args.seed) - - # load dictionary for debug log - if args.dict is not None: - with open(args.dict, "rb") as f: - dictionary = f.readlines() - char_list = [entry.decode("utf-8").split(" ")[0] for entry in dictionary] - char_list.insert(0, "") - char_list.append("") - args.char_list = char_list - else: - args.char_list = None - - # train - logging.info("backend = " + args.backend) - - if args.backend == "pytorch": - from espnet.st.pytorch_backend.st import train - - train(args) - else: - raise ValueError("Only pytorch are supported.") - - -if __name__ == "__main__": - main(sys.argv[1:]) diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py deleted file mode 100644 index 9754b40821b519aeee669973156d970b18ef6f3b..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py +++ /dev/null @@ -1,347 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -#Program for text written in one Indic script to another based on Unicode mappings. 
-# -# @author Anoop Kunchukuttan -# - -import sys, string, itertools, re, os -from collections import defaultdict - -from indicnlp import common -from indicnlp import langinfo -from indicnlp.script import indic_scripts as isc -from indicnlp.transliterate.sinhala_transliterator import SinhalaDevanagariTransliterator as sdt -import pandas as pd - -OFFSET_TO_ITRANS={} -ITRANS_TO_OFFSET=defaultdict(list) - -DUPLICATE_ITRANS_REPRESENTATIONS={} - - -def init(): - """ - To be called by library loader, do not call it in your program - """ - - ### Load the ITRANS-script offset map. The map was initially generated using the snippet below (uses the old itrans transliterator) - ### The map is modified as needed to accomodate extensions and corrections to the mappings - # - # base=0x900 - # l=[] - # for i in range(0,0x80): - # c=chr(base+i) - # itrans=ItransTransliterator.to_itrans(c,'hi') - # l.append((hex(i),c,itrans)) - # print(l) - # - # pd.DataFrame(l,columns=['offset_hex','devnag_char','itrans']).to_csv('offset_itrans_map.csv',index=False,encoding='utf-8') - - itrans_map_fname=os.path.join(common.get_resources_path(),'transliterate','offset_itrans_map.csv') - #itrans_map_fname=r'D:\src\python_sandbox\src\offset_itrans_map.csv' - itrans_df=pd.read_csv(itrans_map_fname,encoding='utf-8') - - global OFFSET_TO_ITRANS, ITRANS_TO_OFFSET, DUPLICATE_ITRANS_REPRESENTATIONS - - for r in itrans_df.iterrows(): - itrans=r[1]['itrans'] - o=int(r[1]['offset_hex'],base=16) - - OFFSET_TO_ITRANS[o]=itrans - - if langinfo.is_consonant_offset(o): - ### for consonants, strip the schwa - add halant offset - ITRANS_TO_OFFSET[itrans[:-1]].extend([o,0x4d]) - else: - ### the append assumes that the maatra always comes after independent vowel in the df - ITRANS_TO_OFFSET[itrans].append(o) - - - DUPLICATE_ITRANS_REPRESENTATIONS = { - 'A': 'aa', - 'I': 'ii', - 'U': 'uu', - 'RRi': 'R^i', - 'RRI': 'R^I', - 'LLi': 'L^i', - 'LLI': 'L^I', - 'L': 'ld', - 'w': 'v', - 'x': 'kSh', - 'gj': 'j~n', - 'dny': 'j~n', - '.n': '.m', - 'M': '.m', - 'OM': 'AUM' - } - -class UnicodeIndicTransliterator(object): - """ - Base class for rule-based transliteration among Indian languages. - - Script pair specific transliterators should derive from this class and override the transliterate() method. 
- They can call the super class 'transliterate()' method to avail of the common transliteration - """ - - @staticmethod - def _correct_tamil_mapping(offset): - # handle missing unaspirated and voiced plosives in Tamil script - # replace by unvoiced, unaspirated plosives - - # for first 4 consonant rows of varnamala - # exception: ja has a mapping in Tamil - if offset>=0x15 and offset<=0x28 and \ - offset!=0x1c and \ - not ( (offset-0x15)%5==0 or (offset-0x15)%5==4 ) : - subst_char=(offset-0x15)//5 - offset=0x15+5*subst_char - - # for 5th consonant row of varnamala - if offset in [ 0x2b, 0x2c, 0x2d]: - offset=0x2a - - # 'sh' becomes 'Sh' - if offset==0x36: - offset=0x37 - - return offset - - @staticmethod - def transliterate(text,lang1_code,lang2_code): - """ - convert the source language script (lang1) to target language script (lang2) - - text: text to transliterate - lang1_code: language 1 code - lang1_code: language 2 code - """ - if lang1_code in langinfo.SCRIPT_RANGES and lang2_code in langinfo.SCRIPT_RANGES: - - # if Sinhala is source, do a mapping to Devanagari first - if lang1_code=='si': - text=sdt.sinhala_to_devanagari(text) - lang1_code='hi' - - # if Sinhala is target, make Devanagiri the intermediate target - org_lang2_code='' - if lang2_code=='si': - lang2_code='hi' - org_lang2_code='si' - - trans_lit_text=[] - for c in text: - newc=c - offset=ord(c)-langinfo.SCRIPT_RANGES[lang1_code][0] - if offset >=langinfo.COORDINATED_RANGE_START_INCLUSIVE and offset <= langinfo.COORDINATED_RANGE_END_INCLUSIVE and c!='\u0964' and c!='\u0965': - if lang2_code=='ta': - # tamil exceptions - offset=UnicodeIndicTransliterator._correct_tamil_mapping(offset) - newc=chr(langinfo.SCRIPT_RANGES[lang2_code][0]+offset) - - trans_lit_text.append(newc) - - # if Sinhala is source, do a mapping to Devanagari first - if org_lang2_code=='si': - return sdt.devanagari_to_sinhala(''.join(trans_lit_text)) - - return ''.join(trans_lit_text) - else: - return text - -class ItransTransliterator(object): - """ - Transliterator between Indian scripts and ITRANS - """ - - @staticmethod - def to_itrans(text,lang_code): - if lang_code in langinfo.SCRIPT_RANGES: - if lang_code=='ml': - # Change from chillus characters to corresponding consonant+halant - text=text.replace('\u0d7a','\u0d23\u0d4d') - text=text.replace('\u0d7b','\u0d28\u0d4d') - text=text.replace('\u0d7c','\u0d30\u0d4d') - text=text.replace('\u0d7d','\u0d32\u0d4d') - text=text.replace('\u0d7e','\u0d33\u0d4d') - text=text.replace('\u0d7f','\u0d15\u0d4d') - - offsets = [ isc.get_offset(c,lang_code) for c in text ] - - ### naive lookup - # itrans_l = [ OFFSET_TO_ITRANS.get(o, '-' ) for o in offsets ] - itrans_l=[] - for o in offsets: - itrans=OFFSET_TO_ITRANS.get(o, chr(langinfo.SCRIPT_RANGES[lang_code][0]+o) ) - if langinfo.is_halanta_offset(o): - itrans='' - if len(itrans_l)>0: - itrans_l.pop() - elif langinfo.is_vowel_sign_offset(o) and len(itrans_l)>0: - itrans_l.pop() - itrans_l.extend(itrans) - - return ''.join(itrans_l) - - else: - return text - - @staticmethod - def from_itrans(text,lang): - """ - TODO: Document this method properly - TODO: A little hack is used to handle schwa: needs to be documented - TODO: check for robustness - """ - - MAXCODE=4 ### TODO: Needs to be fixed - - ## handle_duplicate_itrans_representations - for k, v in DUPLICATE_ITRANS_REPRESENTATIONS.items(): - if k in text: - text=text.replace(k,v) - - start=0 - match=None - solution=[] - - i=start+1 - while i<=len(text): - - itrans=text[start:i] - - # print('===') - # print('i: 
{}'.format(i)) - # if i0 and langinfo.is_halanta(solution[-1],lang): - offs=[offs[1]] ## dependent vowel - else: - offs=[offs[0]] ## independent vowel - - c=''.join([ langinfo.offset_to_char(x,lang) for x in offs ]) - match=(i,c) - - elif len(itrans)==1: ## unknown character - match=(i,itrans) - elif i ") - sys.exit(1) - - if sys.argv[1]=='transliterate': - - src_language=sys.argv[4] - tgt_language=sys.argv[5] - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - transliterated_line=UnicodeIndicTransliterator.transliterate(line,src_language,tgt_language) - ofile.write(transliterated_line) - - elif sys.argv[1]=='romanize': - - language=sys.argv[4] - - ### temp fix to replace anusvara with corresponding nasal - #r1_nasal=re.compile(ur'\u0902([\u0915-\u0918])') - #r2_nasal=re.compile(ur'\u0902([\u091a-\u091d])') - #r3_nasal=re.compile(ur'\u0902([\u091f-\u0922])') - #r4_nasal=re.compile(ur'\u0902([\u0924-\u0927])') - #r5_nasal=re.compile(ur'\u0902([\u092a-\u092d])') - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - ### temp fix to replace anusvara with corresponding nasal - #line=r1_nasal.sub(u'\u0919\u094D\\1',line) - #line=r2_nasal.sub(u'\u091e\u094D\\1',line) - #line=r3_nasal.sub(u'\u0923\u094D\\1',line) - #line=r4_nasal.sub(u'\u0928\u094D\\1',line) - #line=r5_nasal.sub(u'\u092e\u094D\\1',line) - - transliterated_line=ItransTransliterator.to_itrans(line,language) - - ## temp fix to replace 'ph' to 'F' to match with Urdu transliteration scheme - transliterated_line=transliterated_line.replace('ph','f') - - ofile.write(transliterated_line) - - elif sys.argv[1]=='indicize': - - language=sys.argv[4] - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - transliterated_line=ItransTransliterator.from_itrans(line,language) - ofile.write(transliterated_line) - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dream Live APK Mod The Best App for Live Streaming Bar Bar in 2023.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dream Live APK Mod The Best App for Live Streaming Bar Bar in 2023.md deleted file mode 100644 index 32dcd7b6c6d21ebbb58bad0b8c61a3cda2244791..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dream Live APK Mod The Best App for Live Streaming Bar Bar in 2023.md +++ /dev/null @@ -1,88 +0,0 @@ - -

    How to Download APK Mod Dream Live and Enjoy Unlimited Entertainment

    -

    Do you love watching live streams of talented and charming anchors who share their happiness and lifestyle with you? Do you want to interact and engage with them in real-time and show your support and appreciation? Do you want to access premium features and unlimited content without spending any money? If you answered yes to any of these questions, then you might be interested in downloading APK Mod Dream Live, a modified version of the popular live streaming app Dream Live.

    -

    download apk mod dream live


    Download Zip >>> https://ssurll.com/2uNZvQ



    -

    In this article, we will explain what APK Mod Dream Live is, how to download and install it on your Android device, and how to use it and enjoy its features. We will also answer some frequently asked questions about the app. So, without further ado, let's get started!

    -

    What is APK Mod Dream Live?

    -

        APK Mod Dream Live is a modified version of the original Dream Live app, a live streaming platform focused on entertainment and lifestyle content. It features many talented anchors who share their happiness through real-time interactive broadcasts. You can watch them sing, dance, talk, play games, and more. You can also chat with them and other viewers, send them gifts, and earn money from the app.
    

    -

    What is an APK file and what is a modded APK file?

    -

        An APK file is the package format that contains all the elements of an Android app. It uses the .apk extension and can be installed directly on an Android device. A modded APK file is a modified version of an original APK file with extra or improved features that are not present in the official app. For example, a modded APK can unlock premium features, remove ads, or add new functions.
    
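        Because an APK is essentially a ZIP archive, you can peek inside one to see the elements it bundles (the manifest, compiled code, resources, and so on). Here is a minimal Python sketch; the file name my_app.apk is a placeholder for illustration, not a real download.
    
    ```python
    import zipfile
    
    # An APK is a ZIP package; list what it bundles (AndroidManifest.xml, classes.dex, resources, ...).
    # "my_app.apk" is a placeholder path for illustration.
    with zipfile.ZipFile("my_app.apk") as apk:
        for name in apk.namelist()[:20]:  # show the first few entries
            print(name)
    ```
    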

    -

    download dream live apk ijo mod terbaru 2023
    -download dream live app for android free
    -download dream live mod apk unlocked room 2022
    -download dream live apk latest version 1.1.7
    -download dream live streaming bar bar mod
    -download dream live apk ijo no top up
    -download dream live mod apk v3.4.7
    -download dream live android app by dreamcast.cc
    -download dream live apk ijo jokowinomics.id
    -download dream live mod apk vasiota.com

    -

    What is Dream Live app and what are its features?

    -

        Dream Live is a live streaming app that lets you watch online video broadcasts of various anchors who share their entertainment lifestyle with you, and interact with them and other viewers in real time. Some of the features of the Dream Live app are:
    

    -
      -
    • Talent Show: You can take part in online live-stream events such as singing contests, dance festivals, and talk shows. You can also share your own talent and showcase it to others.
    
    • -
    • Meet Friends: You can stream your world with whoever you want, interact and engage in conversations with them, and meet cool people around the world.
    • -
    • Amazing Gifts: You can interact and engage with your fans, receive gifts from them, and start earning money for free. You can also send gifts to your favorite anchors and show your support.
    • -
    • Live in Dream, Find your Desire: You can discover new talents, hobbies, interests, and passions through live streaming. You can also find your dream partner or soulmate through the app.
    • -
    -

    What are the benefits of using APK Mod Dream Live?

    -

        APK Mod Dream Live is a modified version of the Dream Live app that offers extra benefits not available in the original app. Some of the benefits of using APK Mod Dream Live are:
    

    -
      -
    • VIP Unlocked: You can access all the VIP features of the app without paying any subscription fee. You can watch unlimited live streams, chat with VIP anchors, join exclusive events, etc.
    • -
    • No Ads: You can enjoy watching live streams without any interruptions or distractions from annoying ads. You can also save your data and battery life.
    • -
    • Unlimited Coins: You can get unlimited coins to use in the app without spending any real money. You can use the coins to send gifts to your favorite anchors, join events, or exchange them for cash.
    • -
    • Unlimited Content: You can watch any live stream you want without any restrictions or limitations. You can also access all the categories and genres of live streams, such as music, dance, gaming, beauty, etc.
    • -
    -

    How to Download and Install APK Mod Dream Live on Your Android Device?

    -

    If you are interested in downloading and installing APK Mod Dream Live on your Android device, you need to follow these simple steps:

    -

    Step 1: Enable unknown sources on your device settings

    -

    Before you can install any APK file on your device, you need to enable the option of unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.

    -

    Step 2: Download APK Mod Dream Live from a reliable source

    -

    Next, you need to download the APK file of APK Mod Dream Live from a reliable and trustworthy source. You can use the link below to download the latest version of the app:

    -

    Download APK Mod Dream Live

    -

    Make sure you have enough storage space on your device before downloading the file. Also, avoid downloading the file from any suspicious or malicious websites that may harm your device or steal your data.
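        One simple way to check that the file you downloaded has not been tampered with is to compare its SHA-256 checksum against the value published by a source you trust. Below is a minimal Python sketch; the file name and expected hash are placeholders, not real values for this app.
    
    ```python
    import hashlib
    
    # Placeholder values for illustration only.
    apk_path = "apk-mod-dream-live.apk"
    expected_sha256 = ""  # paste the checksum published by the trusted source here
    
    # Hash the file in chunks so large APKs do not need to fit in memory.
    sha256 = hashlib.sha256()
    with open(apk_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    
    digest = sha256.hexdigest()
    print("SHA-256:", digest)
    if expected_sha256 and digest != expected_sha256:
        print("Warning: checksum does not match the published value - do not install this file.")
    ```
    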

    -

    Step 3: Install the APK file and launch the app

    -

        Once you have downloaded the APK file, locate it on your device using a file manager app. Tap on the file and follow the instructions to install it. The installation may take a few seconds or minutes, depending on your device's performance. After the installation is complete, launch the app and enjoy its features.
    

    -

    How to Use APK Mod Dream Live and Enjoy Its Features?

    -

    Now that you have installed APK Mod Dream Live on your device, you can start using it and enjoy its features. Here are some tips on how to use the app and make the most out of it:

    -

    How to create an account and log in to the app

    -

    To use the app, you need to create an account and log in to it. You can do this by using your phone number, email address, or social media accounts such as Facebook, Twitter, or Google. You can also choose a username and a password for your account. After creating your account, you can log in to the app anytime you want.

    -

    How to browse and watch live streams of your favorite anchors

    -

    To browse and watch live streams of your favorite anchors, you can use the home page of the app where you can see various categories and genres of live streams. You can also use the search function to find specific anchors or topics that interest you. You can also follow your favorite anchors and get notified when they go live. To watch a live stream, simply tap on it and enjoy.

    -

    How to interact and engage with the anchors and other viewers

    -

    To interact and engage with the anchors and other viewers, you can use the chat function where you can send messages, emojis, stickers, etc. You can also use the voice chat function where you can talk with the anchors directly. You can also send gifts to the anchors using your coins and show your appreciation and support. You can also join events and activities hosted by the anchors and have fun with them.

    -

    How to send and receive gifts and earn money from the app

    -

    To send gifts to the anchors, you need to have coins in your account. You can get coins by watching ads, completing tasks, inviting friends, or buying them with real money. You can also get coins by using APK Mod Dream Live which gives you unlimited coins for free. To send a gift, simply tap on the gift icon at the bottom of the screen and choose a gift that you want to send. The more expensive the gift, the more points you will earn.

    -

    To receive gifts from other viewers, you need to be an anchor yourself and broadcast your own live stream. You can do this by tapping on the camera icon at the top of the screen and choosing a category and a title for your stream. You can also use the beauty filter and the background music to enhance your stream. When you are live, other viewers can watch you and send you gifts. The more viewers and gifts you have, the more money you can earn from the app. You can withdraw your money to your bank account or PayPal account.

    -

    Conclusion

    -

    APK Mod Dream Live is a great app for anyone who loves watching and broadcasting live streams of entertainment lifestyle. It allows you to access premium features and unlimited content without spending any money. You can also interact and engage with your favorite anchors and other viewers, send and receive gifts, and earn money from the app. To download and install APK Mod Dream Live on your Android device, you just need to follow the simple steps we have explained in this article. We hope you enjoy using APK Mod Dream Live and have fun with it!

    -

    FAQs

    -

    Here are some frequently asked questions about APK Mod Dream Live:

        | Question | Answer |
        | --- | --- |
        | Is APK Mod Dream Live safe to use? | APK Mod Dream Live is safe to use as long as you download it from a reliable source and scan it with an antivirus app before installing it. However, you should be aware that using a modded app may violate the terms and conditions of the original app and may result in your account being banned or suspended. |
        | Is APK Mod Dream Live legal to use? | APK Mod Dream Live is not legal to use, as it infringes the intellectual property rights of the original app developer. It also violates the Google Play Store policies and may expose you to legal risks. Therefore, we do not recommend using APK Mod Dream Live or any other modded app. |
        | Can I use APK Mod Dream Live on other devices besides Android? | No, APK Mod Dream Live is only compatible with Android devices. You cannot use it on iOS, Windows, or Mac devices. If you want to use Dream Live on other devices, you need to download the official app from the respective app stores. |
        | How can I update APK Mod Dream Live? | To update APK Mod Dream Live, download the latest version of the APK file from the same source where you downloaded it before. Then uninstall the previous version of the app and install the new one. Alternatively, check whether there is an update option within the app itself. |
        | How can I contact the support team of APK Mod Dream Live? | APK Mod Dream Live does not have an official support team, as it is not affiliated with the original app developer. If you have any issues or questions about the app, try contacting the source where you downloaded it from, or look for online forums or communities where other users may help you. |
    

    
    -
    -
    \ No newline at end of file diff --git a/spaces/sohojoe/project_charles/tests/test_image.py b/spaces/sohojoe/project_charles/tests/test_image.py deleted file mode 100644 index d7897158f5d90bad579dceac7c32132a582662b1..0000000000000000000000000000000000000000 --- a/spaces/sohojoe/project_charles/tests/test_image.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import av -import numpy as np - -def resize_aspect_fit(image, dim=(640, 480)): - h, w = image.shape[:2] - aspect_ratio = w / h - - target_width, target_height = dim - target_aspect = target_width / target_height - - if aspect_ratio > target_aspect: - # Original aspect is wider than target - new_width = target_width - new_height = int(target_width / aspect_ratio) - else: - # Original aspect is taller than target - new_height = target_height - new_width = int(target_height * aspect_ratio) - - resized_image = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_AREA) - return resized_image - -def resize_and_crop(image, dim=(640, 480)): - h, w = image.shape[:2] - aspect_ratio = w / h - - target_width, target_height = dim - target_aspect = target_width / target_height - - if aspect_ratio > target_aspect: - # Original aspect is wider than target, fit by height - new_height = target_height - new_width = int(target_height * aspect_ratio) - else: - # Original aspect is taller than target, fit by width - new_width = target_width - new_height = int(target_width / aspect_ratio) - - # Resize the image with new dimensions - resized_image = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_AREA) - - # Crop to target dimensions - x_offset = (new_width - target_width) // 2 - y_offset = (new_height - target_height) // 2 - - cropped_image = resized_image[y_offset:y_offset + target_height, x_offset:x_offset + target_width] - - return cropped_image - -def overlay_images(background, overlay, x, y): - """ - Overlay an image with transparency over another image. 
- """ - # Check if overlay dimensions fit within the background at the given (x, y) position - if y + overlay.shape[0] > background.shape[0] or x + overlay.shape[1] > background.shape[1]: - raise ValueError("Overlay dimensions exceed background dimensions at the specified position.") - - # Extract the alpha channel from the overlay and create an inverse alpha channel - alpha = overlay[:, :, 3] / 255.0 - inverse_alpha = 1.0 - alpha - - # Convert overlay to BGR if it's in RGB - if overlay.shape[2] == 4: # If it has an alpha channel - overlay = cv2.cvtColor(overlay[:, :, :3], cv2.COLOR_RGB2BGR) - overlay = np.concatenate([overlay, overlay[:, :, 3:]], axis=2) # Add alpha channel back - else: - overlay = cv2.cvtColor(overlay, cv2.COLOR_RGB2BGR) - - # Overlay the images - for c in range(0, 3): - background[y:overlay.shape[0]+y, x:overlay.shape[1]+x, c] = ( - alpha * overlay[:, :, c] + inverse_alpha * background[y:overlay.shape[0]+y, x:overlay.shape[1]+x, c] - ) - - return background - - -def transform_frame(user_frame: av.VideoFrame) -> av.VideoFrame: - # Convert av.VideoFrame to numpy array (OpenCV format) - user_frame_np = np.frombuffer(user_frame.planes[0], np.uint8).reshape(user_frame.height, user_frame.width, -1) - - # Load background image - background = cv2.imread("zoom-background.png") - - # Load bot image (assuming it has an alpha channel for transparency) - bot_image = cv2.imread("bot-image.png", cv2.IMREAD_UNCHANGED) - - # Resize background to match the user frame dimensions - aspect_ratio = background.shape[1] / background.shape[0] - new_h = user_frame.height - new_w = int(new_h * aspect_ratio) - background_resized = cv2.resize(background, (new_w, new_h)) - - # Crop the background if it exceeds the user frame width - if new_w > user_frame.width: - crop_x1 = (new_w - user_frame.width) // 2 - crop_x2 = crop_x1 + user_frame.width - background_resized = background_resized[:, crop_x1:crop_x2, :3] - - # Overlay bot image on the right-hand side - x_bot = background_resized.shape[1] - bot_image.shape[1] - y_bot = 0 - background_resized = overlay_images(background_resized, bot_image, x_bot, y_bot) - - # Overlay user's video frame in the bottom-left corner - x_user = 0 - y_user = background_resized.shape[0] - user_frame.height - background_resized[y_user:user_frame.height+y_user, x_user:user_frame.width+x_user, :3] = user_frame_np - - # Convert the final frame back to av.VideoFrame - output_frame = av.VideoFrame.from_ndarray(background_resized, format="bgr24") - - return output_frame - -def create_charles_frames(background, charles_frames): - output_frames = [] - # Load background image - background = cv2.imread(background, cv2.COLOR_BGR2RGB) - background = cv2.cvtColor(background, cv2.COLOR_BGR2RGB) - # resize background to match user image - background = resize_and_crop(background, (640, 480)) - - for bot_image_path in charles_frames: - bot_image = cv2.imread(bot_image_path, cv2.IMREAD_UNCHANGED) - - # assert bot image is square - assert bot_image.shape[0] == bot_image.shape[1] - - # resize bot image if it is larger than backgroun impage in any direction - if bot_image.shape[0] > background.shape[0]: - bot_image = cv2.resize(bot_image, (background.shape[0], background.shape[0]), interpolation=cv2.INTER_AREA) - - # Overlay bot image on the right-hand side - x_bot = background.shape[1] - bot_image.shape[1] - y_bot = background.shape[0] - bot_image.shape[0] - background_with_bot = overlay_images(background.copy(), bot_image, x_bot, y_bot) - - output_frames.append(background_with_bot) - - return 
output_frames - - -def test_create_bot_frames(): - frames = create_charles_frames("./images/zoom-background.png", ["./images/charles.png", "./images/charles-open.png"]) - index = 0 - for frame in frames: - final_frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR) - cv2.imwrite(f"./images/charles_frame_{index}.jpg", final_frame_bgr) - index += 1 - -def test_overlay(): - # Load mock user image - user_image = cv2.imread("./prototypes/person-016.jpg", cv2.COLOR_BGR2RGB) - user_image = cv2.cvtColor(user_image, cv2.COLOR_BGR2RGB) - # resize to 640x480, handle that this is smaller and can be cropped - user_image = resize_and_crop(user_image, (640, 480)) - - # Load background image - background = cv2.imread("./images/zoom-background.png", cv2.COLOR_BGR2RGB) - background = cv2.cvtColor(background, cv2.COLOR_BGR2RGB) - # resize background to match user image - background = resize_and_crop(background, (user_image.shape[:2][1], user_image.shape[:2][0])) - - # Load bot image (assuming it has an alpha channel for transparency) - bot_image = cv2.imread("./images/charles-open.png", cv2.IMREAD_UNCHANGED) - - # resize bot image if it is larger than backgroun impage in any direction - if bot_image.shape[0] > background.shape[0]: - bot_image = cv2.resize(bot_image, (background.shape[0], background.shape[0]), interpolation=cv2.INTER_AREA) - - # Overlay bot image on the right-hand side - x_bot = background.shape[1] - bot_image.shape[1] - y_bot = background.shape[0] - bot_image.shape[0] - background_with_bot = overlay_images(background.copy(), bot_image, x_bot, y_bot) - - # Overlay user's frame in the bottom-left corner (1/3 size) - # resize user image to 1/4 size - user_frame = cv2.resize(user_image, (user_image.shape[1]//4, user_image.shape[0]//4), interpolation=cv2.INTER_AREA) - x_user = 0 - y_user = background.shape[0] - user_frame.shape[0] - final_frame = background_with_bot.copy() - # final_frame[y_user:user_frame.shape[0]+y_user, x_user:user_frame.shape[1]+x_user, :3] = user_frame - final_frame[y_user:y_user+user_frame.shape[0], x_user:x_user+user_frame.shape[1]] = user_frame - - - # Save the final frame as JPEG - final_frame_bgr = cv2.cvtColor(final_frame, cv2.COLOR_RGB2BGR) - cv2.imwrite("./images/final_frame.jpg", final_frame_bgr) - -test_overlay() -test_create_bot_frames() \ No newline at end of file diff --git a/spaces/sparanoid/milky-green-sovits-4/modules/commons.py b/spaces/sparanoid/milky-green-sovits-4/modules/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-sovits-4/modules/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - 
m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, 
convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/spock74/whisper-webui/cli.py b/spaces/spock74/whisper-webui/cli.py deleted file mode 100644 index 1f75844efed1aa091caafb45b608685b5e40dd4f..0000000000000000000000000000000000000000 --- a/spaces/spock74/whisper-webui/cli.py +++ /dev/null @@ -1,118 +0,0 @@ -import argparse -import os -import pathlib -from urllib.parse import urlparse -import warnings -import numpy as np - -import torch -from app import LANGUAGES, WHISPER_MODELS, WhisperTranscriber -from src.download import download_url - -from src.utils import optional_float, optional_int, str2bool -from src.whisperContainer import WhisperContainer - -def cli(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("audio", nargs="+", type=str, help="audio file(s) to transcribe") - parser.add_argument("--model", default="small", choices=WHISPER_MODELS, help="name of the Whisper model to use") - parser.add_argument("--model_dir", type=str, default=None, help="the path to save model files; uses ~/.cache/whisper by default") - parser.add_argument("--device", default="cuda" if torch.cuda.is_available() else "cpu", help="device to use for PyTorch inference") - parser.add_argument("--output_dir", "-o", type=str, default=".", help="directory to save the outputs") - parser.add_argument("--verbose", type=str2bool, default=True, help="whether to print out the progress and debug messages") - - parser.add_argument("--task", type=str, default="transcribe", choices=["transcribe", "translate"], help="whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')") - parser.add_argument("--language", type=str, default=None, choices=sorted(LANGUAGES), help="language spoken in the audio, specify None to perform language detection") - - parser.add_argument("--vad", type=str, default="none", choices=["none", "silero-vad", "silero-vad-skip-gaps", "silero-vad-expand-into-gaps", "periodic-vad"], help="The voice activity detection algorithm to use") - parser.add_argument("--vad_merge_window", type=optional_float, default=5, help="The window size (in seconds) to merge voice segments") - 
parser.add_argument("--vad_max_merge_size", type=optional_float, default=30, help="The maximum size (in seconds) of a voice segment") - parser.add_argument("--vad_padding", type=optional_float, default=1, help="The padding (in seconds) to add to each voice segment") - parser.add_argument("--vad_prompt_window", type=optional_float, default=3, help="The window size of the prompt to pass to Whisper") - parser.add_argument("--vad_cpu_cores", type=int, default=1, help="The number of CPU cores to use for VAD pre-processing.") - parser.add_argument("--vad_parallel_devices", type=str, default="", help="A commma delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.") - parser.add_argument("--auto_parallel", type=bool, default=False, help="True to use all available GPUs and CPU cores for processing. Use vad_cpu_cores/vad_parallel_devices to specify the number of CPU cores/GPUs to use.") - - parser.add_argument("--temperature", type=float, default=0, help="temperature to use for sampling") - parser.add_argument("--best_of", type=optional_int, default=5, help="number of candidates when sampling with non-zero temperature") - parser.add_argument("--beam_size", type=optional_int, default=5, help="number of beams in beam search, only applicable when temperature is zero") - parser.add_argument("--patience", type=float, default=None, help="optional patience value to use in beam decoding, as in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search") - parser.add_argument("--length_penalty", type=float, default=None, help="optional token length penalty coefficient (alpha) as in https://arxiv.org/abs/1609.08144, uses simple lengt normalization by default") - - parser.add_argument("--suppress_tokens", type=str, default="-1", help="comma-separated list of token ids to suppress during sampling; '-1' will suppress most special characters except common punctuations") - parser.add_argument("--initial_prompt", type=str, default=None, help="optional text to provide as a prompt for the first window.") - parser.add_argument("--condition_on_previous_text", type=str2bool, default=True, help="if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop") - parser.add_argument("--fp16", type=str2bool, default=True, help="whether to perform inference in fp16; True by default") - - parser.add_argument("--temperature_increment_on_fallback", type=optional_float, default=0.2, help="temperature to increase when falling back when the decoding fails to meet either of the thresholds below") - parser.add_argument("--compression_ratio_threshold", type=optional_float, default=2.4, help="if the gzip compression ratio is higher than this value, treat the decoding as failed") - parser.add_argument("--logprob_threshold", type=optional_float, default=-1.0, help="if the average log probability is lower than this value, treat the decoding as failed") - parser.add_argument("--no_speech_threshold", type=optional_float, default=0.6, help="if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence") - - args = parser.parse_args().__dict__ - model_name: str = args.pop("model") - model_dir: str = args.pop("model_dir") - output_dir: str = args.pop("output_dir") - device: str = args.pop("device") - os.makedirs(output_dir, 
exist_ok=True) - - if model_name.endswith(".en") and args["language"] not in {"en", "English"}: - warnings.warn(f"{model_name} is an English-only model but receipted '{args['language']}'; using English instead.") - args["language"] = "en" - - temperature = args.pop("temperature") - temperature_increment_on_fallback = args.pop("temperature_increment_on_fallback") - if temperature_increment_on_fallback is not None: - temperature = tuple(np.arange(temperature, 1.0 + 1e-6, temperature_increment_on_fallback)) - else: - temperature = [temperature] - - vad = args.pop("vad") - vad_merge_window = args.pop("vad_merge_window") - vad_max_merge_size = args.pop("vad_max_merge_size") - vad_padding = args.pop("vad_padding") - vad_prompt_window = args.pop("vad_prompt_window") - vad_cpu_cores = args.pop("vad_cpu_cores") - auto_parallel = args.pop("auto_parallel") - - model = WhisperContainer(model_name, device=device, download_root=model_dir) - transcriber = WhisperTranscriber(delete_uploaded_files=False, vad_cpu_cores=vad_cpu_cores) - transcriber.set_parallel_devices(args.pop("vad_parallel_devices")) - transcriber.set_auto_parallel(auto_parallel) - - if (transcriber._has_parallel_devices()): - print("Using parallel devices:", transcriber.parallel_device_list) - - for audio_path in args.pop("audio"): - sources = [] - - # Detect URL and download the audio - if (uri_validator(audio_path)): - # Download from YouTube/URL directly - for source_path in download_url(audio_path, maxDuration=-1, destinationDirectory=output_dir, playlistItems=None): - source_name = os.path.basename(source_path) - sources.append({ "path": source_path, "name": source_name }) - else: - sources.append({ "path": audio_path, "name": os.path.basename(audio_path) }) - - for source in sources: - source_path = source["path"] - source_name = source["name"] - - result = transcriber.transcribe_file(model, source_path, temperature=temperature, - vad=vad, vadMergeWindow=vad_merge_window, vadMaxMergeSize=vad_max_merge_size, - vadPadding=vad_padding, vadPromptWindow=vad_prompt_window, **args) - - transcriber.write_result(result, source_name, output_dir) - - transcriber.close() - -def uri_validator(x): - try: - result = urlparse(x) - return all([result.scheme, result.netloc]) - except: - return False - -if __name__ == '__main__': - cli() \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/wav2vec_featurize.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/wav2vec_featurize.py deleted file mode 100644 index 588268b7080cbd3400ac144604b2d75cef2876dd..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/wav2vec_featurize.py +++ /dev/null @@ -1,249 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -""" -Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset -""" - -import argparse -import glob -import os -from shutil import copy - -import h5py -import numpy as np -import soundfile as sf -import torch -import tqdm -import fairseq -from torch import nn - - -def read_audio(fname): - """ Load an audio file and return PCM along with the sample rate """ - - wav, sr = sf.read(fname) - assert sr == 16e3 - - return wav, 16e3 - - -class PretrainedWav2VecModel(nn.Module): - def __init__(self, fname): - super().__init__() - - model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fname]) - model = model[0] - model.eval() - - self.model = model - - def forward(self, x): - with torch.no_grad(): - z = self.model.feature_extractor(x) - if isinstance(z, tuple): - z = z[0] - c = self.model.feature_aggregator(z) - return z, c - - -class EmbeddingWriterConfig(argparse.ArgumentParser): - def __init__(self): - super().__init__("Pre-compute embeddings for flashlight datasets") - - kwargs = {"action": "store", "type": str, "required": True} - - self.add_argument("--input", "-i", help="Input Directory", **kwargs) - self.add_argument("--output", "-o", help="Output Directory", **kwargs) - self.add_argument("--model", help="Path to model checkpoint", **kwargs) - self.add_argument("--split", help="Dataset Splits", nargs="+", **kwargs) - self.add_argument( - "--ext", default="wav", required=False, help="Audio file extension" - ) - - self.add_argument( - "--no-copy-labels", - action="store_true", - help="Do not copy label files. Useful for large datasets, use --targetdir in flashlight then.", - ) - self.add_argument( - "--use-feat", - action="store_true", - help="Use the feature vector ('z') instead of context vector ('c') for features", - ) - self.add_argument("--gpu", help="GPU to use", default=0, type=int) - - -class Prediction: - """ Lightweight wrapper around a fairspeech embedding model """ - - def __init__(self, fname, gpu=0): - self.gpu = gpu - self.model = PretrainedWav2VecModel(fname).cuda(gpu) - - def __call__(self, x): - x = torch.from_numpy(x).float().cuda(self.gpu) - with torch.no_grad(): - z, c = self.model(x.unsqueeze(0)) - - return z.squeeze(0).cpu().numpy(), c.squeeze(0).cpu().numpy() - - -class H5Writer: - """ Write features as hdf5 file in flashlight compatible format """ - - def __init__(self, fname): - self.fname = fname - os.makedirs(os.path.dirname(self.fname), exist_ok=True) - - def write(self, data): - channel, T = data.shape - - with h5py.File(self.fname, "w") as out_ds: - data = data.T.flatten() - out_ds["features"] = data - out_ds["info"] = np.array([16e3 // 160, T, channel]) - - -class EmbeddingDatasetWriter(object): - """Given a model and a flashlight dataset, pre-compute and store embeddings - - Args: - input_root, str : - Path to the flashlight dataset - output_root, str : - Desired output directory. 
Will be created if non-existent - split, str : - Dataset split - """ - - def __init__( - self, - input_root, - output_root, - split, - model_fname, - extension="wav", - gpu=0, - verbose=False, - use_feat=False, - ): - - assert os.path.exists(model_fname) - - self.model_fname = model_fname - self.model = Prediction(self.model_fname, gpu) - - self.input_root = input_root - self.output_root = output_root - self.split = split - self.verbose = verbose - self.extension = extension - self.use_feat = use_feat - - assert os.path.exists(self.input_path), "Input path '{}' does not exist".format( - self.input_path - ) - - def _progress(self, iterable, **kwargs): - if self.verbose: - return tqdm.tqdm(iterable, **kwargs) - return iterable - - def require_output_path(self, fname=None): - path = self.get_output_path(fname) - os.makedirs(path, exist_ok=True) - - @property - def input_path(self): - return self.get_input_path() - - @property - def output_path(self): - return self.get_output_path() - - def get_input_path(self, fname=None): - if fname is None: - return os.path.join(self.input_root, self.split) - return os.path.join(self.get_input_path(), fname) - - def get_output_path(self, fname=None): - if fname is None: - return os.path.join(self.output_root, self.split) - return os.path.join(self.get_output_path(), fname) - - def copy_labels(self): - self.require_output_path() - - labels = list( - filter( - lambda x: self.extension not in x, glob.glob(self.get_input_path("*")) - ) - ) - for fname in tqdm.tqdm(labels): - copy(fname, self.output_path) - - @property - def input_fnames(self): - return sorted(glob.glob(self.get_input_path("*.{}".format(self.extension)))) - - def __len__(self): - return len(self.input_fnames) - - def write_features(self): - - paths = self.input_fnames - - fnames_context = map( - lambda x: os.path.join( - self.output_path, x.replace("." + self.extension, ".h5context") - ), - map(os.path.basename, paths), - ) - - for name, target_fname in self._progress( - zip(paths, fnames_context), total=len(self) - ): - wav, sr = read_audio(name) - z, c = self.model(wav) - feat = z if self.use_feat else c - writer = H5Writer(target_fname) - writer.write(feat) - - def __repr__(self): - - return "EmbeddingDatasetWriter ({n_files} files)\n\tinput:\t{input_root}\n\toutput:\t{output_root}\n\tsplit:\t{split})".format( - n_files=len(self), **self.__dict__ - ) - - -if __name__ == "__main__": - - args = EmbeddingWriterConfig().parse_args() - - for split in args.split: - - writer = EmbeddingDatasetWriter( - input_root=args.input, - output_root=args.output, - split=split, - model_fname=args.model, - gpu=args.gpu, - extension=args.ext, - use_feat=args.use_feat, - ) - - print(writer) - writer.require_output_path() - - print("Writing Features...") - writer.write_features() - print("Done.") - - if not args.no_copy_labels: - print("Copying label data...") - writer.copy_labels() - print("Done.") diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gladiator 2000 Extended Remastered 720p Brrip.md b/spaces/stomexserde/gpt4-ui/Examples/Gladiator 2000 Extended Remastered 720p Brrip.md deleted file mode 100644 index 87da010ff959495db4f5688be977d6506379d5c4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Gladiator 2000 Extended Remastered 720p Brrip.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    Gladiator: The Extended Remastered Edition Review

    -

    Gladiator is one of the most acclaimed and successful movies of the 21st century, winning five Oscars, including Best Picture and Best Actor for Russell Crowe. The epic story of Maximus, a Roman general who becomes a gladiator after being betrayed by the emperor's son, is a thrilling and emotional spectacle that showcases the best of Ridley Scott's directing skills and Hans Zimmer's musical score.

    -

    But did you know that there is an extended version of Gladiator that adds 17 minutes of additional footage to the theatrical cut? This version was released on DVD in 2005 and later remastered for Blu-ray in 2010. The extended edition includes more scenes of Maximus' family, his friendship with Juba, his rivalry with Commodus, and his journey to Rome. It also features some deleted scenes that were restored, such as a battle in the German forest, a conversation between Maximus and Lucilla, and a confrontation between Commodus and Gracchus.

    -

    Gladiator 2000 Extended Remastered 720p Brrip


    Download Zip 🆗 https://urlgoal.com/2uI7do



    -

    The extended edition of Gladiator enhances the depth and richness of the story, giving more insight into the characters and their motivations. It also adds more action and drama to the already spectacular sequences. The remastered version improves the picture and sound quality, making the movie look and sound even better than before.

    -

    If you are a fan of Gladiator, you should definitely check out the extended remastered edition. It is available on Blu-ray and digital platforms, such as Amazon Prime Video[^2^]. You can also download it from various torrent sites, such as YIFY[^1^], MiLLENiUM[^3^], or RARBG[^4^]. However, be aware that downloading pirated content may be illegal in your country and may expose you to viruses or malware.

    -

    -

    Gladiator is a masterpiece of cinema that deserves to be seen in its fullest form. The extended remastered edition is a must-have for any movie lover who appreciates epic storytelling and stunning visuals.

    - -

    One of the most remarkable aspects of Gladiator is the performance of Russell Crowe as Maximus. Crowe delivers a powerful and nuanced portrayal of a man who loses everything he loves and fights for his honor and freedom. He expresses a range of emotions, from grief and rage to courage and compassion, with his voice and body language. He also does most of his own stunts, showing his physical prowess and skill as a fighter.

    -

    Crowe's performance earned him an Oscar for Best Actor, as well as a Golden Globe and a BAFTA. He also received praise from critics and audiences alike, who hailed him as one of the best actors of his generation. Crowe's role in Gladiator catapulted him to stardom and established him as a leading man in Hollywood.

    -

    Another standout performance in Gladiator is that of Joaquin Phoenix as Commodus, the villainous son of Marcus Aurelius. Phoenix plays Commodus as a complex and twisted character, who is driven by jealousy, insecurity, and ambition. He shows his cruelty and madness in his actions and words, such as killing his father, ordering the execution of Maximus and his family, and declaring himself a god. He also displays his vulnerability and humanity in his scenes with Lucilla, his sister and love interest.

    -

    Phoenix's performance earned him an Oscar nomination for Best Supporting Actor, as well as a Golden Globe and a BAFTA nomination. He also received acclaim from critics and audiences, who recognized him as one of the most talented and versatile actors of his generation. Phoenix's role in Gladiator marked a turning point in his career and opened up new opportunities for him in Hollywood.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Huawei Hg658 V2 Custom Firmware.md b/spaces/stomexserde/gpt4-ui/Examples/Huawei Hg658 V2 Custom Firmware.md deleted file mode 100644 index 8b16063f9902d4e13b26b3826fd5b2746b312f7a..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Huawei Hg658 V2 Custom Firmware.md +++ /dev/null @@ -1,80 +0,0 @@ - -

    Huawei Hg658 V2 Custom Firmware: A Guide for Users

    -

    If you are looking for a way to enhance the performance and functionality of your Huawei Hg658 V2 router, you might be interested in installing a custom firmware on it. Custom firmware is a modified version of the original software that runs on your router, which can offer you more features, options, and control over your device. However, installing custom firmware also comes with some risks and challenges, so you need to be careful and informed before you proceed. In this article, we will explain what custom firmware is, why you might want to use it on your Huawei Hg658 V2 router, how to find and download it, how to install it, and how to troubleshoot and support it.

    -

    Huawei Hg658 V2 Custom Firmware


    DOWNLOAD >>> https://urlgoal.com/2uI7id



    -

    What is Huawei Hg658 V2 and what are its features and specifications?

    -

    The Huawei Hg658 V2 is a high-speed wireless router designed for home and small office use. It supports multiple WLAN protocols, including 802.11b/g/n (2.4 GHz), and delivers a wireless transmission rate of up to 300Mbps with its dual antennas. It also supports digital subscriber line (DSL) technology, which provides stable internet access through the phone line. Moreover, it has four LAN ports, one WAN port, one USB port, one phone port, and one power port. It also has a WPS button for easy wireless connection, a reset button for restoring factory settings, and several indicators for displaying the status of the device. The default IP address of the router is 192.168.1.1, and the default user name and password are both "user".

    -

    What is custom firmware and why do some users want to install it on their routers?

    -

    Custom firmware is a modified version of the original software that runs on your router. It is usually created by third-party developers or enthusiasts who want to add more features or improve the performance of their devices. Some of the reasons why some users want to install custom firmware on their routers are:

    -
      -
    • To unlock more settings and options that are not available in the original firmware.
    • -
    • To enhance the security and privacy of their network by adding features such as firewall, VPN, encryption, etc.
    • -
    • To boost the speed and stability of their internet connection by optimizing the bandwidth management, QoS, etc.
    • -
    • To extend the functionality of their router by adding features such as file sharing, media streaming, print server, etc.
    • -
    • To customize the appearance and interface of their router by changing the theme, logo, language, etc.
    • -
    -

    However, installing custom firmware also comes with some drawbacks and risks, such as:

    -

    -
      -
    • Voiding the warranty of your device.
    • -
• Bricking your device if you install an incompatible or corrupted firmware.
• -
• Losing the original features and settings of your device.
    • -
    • Exposing your device to potential bugs, errors, or vulnerabilities.
    • -
    • Violating the terms and conditions of your internet service provider (ISP) or the manufacturer of your device.
    • -
    -

    Therefore, you should weigh the pros and cons of using custom firmware on your router before you decide to install it. You should also do some research and preparation before you proceed with the installation process.

    -

    How to find and download custom firmware for Huawei Hg658 V2

    -

    There are many sources and websites that offer custom firmware for Huawei Hg658 V2. However, not all of them are reliable or compatible with your device. You should be careful and selective when choosing the custom firmware that you want to download and install on your router. Here are some tips and steps to follow when finding and downloading custom firmware for Huawei Hg658 V2:

    -
      -
    1. Check the official website of Huawei for any updates or upgrades for your router's firmware. Sometimes, the manufacturer may release new versions of the original firmware that can improve the performance or functionality of your device. You can download the latest official firmware from [here].
    2. -
    3. Look for reputable and popular websites or forums that specialize in custom firmware for routers. Some examples are [OpenWrt], [DD-WRT], [Tomato], etc. These websites usually have a list of supported devices and a database of custom firmware files that you can browse and download. You can also find reviews, ratings, feedback, tutorials, guides, and support from other users who have tried or used the custom firmware on their routers.
    4. -
    5. Compare and contrast the features, options, and compatibility of different custom firmware files that are available for Huawei Hg658 V2. You should look for the custom firmware that suits your needs and preferences, as well as the specifications and requirements of your device. You should also check the date, version, size, and format of the custom firmware file before downloading it.
    6. -
7. Back up the original firmware and settings of your Huawei Hg658 V2 router before you download and install any custom firmware on it. This is a precautionary measure in case something goes wrong during or after the installation process. You can use a USB flash drive or an external hard drive to store the backup files. You can also use the web management page of your router to back up and restore your settings.
    8. -
    -

    How to install custom firmware on Huawei Hg658 V2

    -

    Once you have found and downloaded the custom firmware file that you want to use on your Huawei Hg658 V2 router, you can proceed with the installation process. However, you should be careful and follow some steps and precautions when installing custom firmware on your router. Here are some tips and steps to follow when installing custom firmware on Huawei Hg658 V2:

    -
      -
    1. Make sure that your Huawei Hg658 V2 router is connected to a stable power source and a reliable internet connection during the installation process. You should also turn off any other devices or applications that may interfere with the installation process, such as firewalls, antivirus software, VPNs, etc.
    2. -
3. Access the web management page of your Huawei Hg658 V2 router by typing 192.168.1.1 in your web browser's address bar. Enter "user" as both the user name and password to log in. (If the page does not load, you can first confirm that the router is reachable; see the sketch after this list.)
    4. -
    5. Navigate to the System Tools section and select Firmware Upgrade from the menu. Click Browse to locate and select the custom firmware file that you have downloaded on your computer. Click Upgrade to start uploading the custom firmware file to your router.
    6. -
    7. Wait for the upload and installation process to complete. Do not turn off or disconnect your router during this process, as this may damage or brick your device. The installation process may take several minutes, depending on the size and complexity of the custom firmware file.
    8. -
    9. Once the installation process is done, your router will automatically reboot itself. Wait for your router to restart completely before you try to access it again.
    10. -
    11. Verify that the installation is successful and that your router is working properly with the custom firmware. You can check the status, settings, features, options, and performance of your router through its web management page or through its indicators. You can also test your internet connection speed and stability through online tools or applications.
    12. -
    -
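Before uploading any firmware image, it can help to confirm that the router's web management page actually answers at its default address. The snippet below is a minimal sketch, not part of the official upgrade procedure: it assumes the default 192.168.1.1 address mentioned above, a computer on the same network, and the third-party `requests` package. It only checks reachability and does not log in to or modify the router.

```python
# Minimal reachability check before attempting a firmware upgrade (illustrative only).
# Assumes the router still uses its default address, 192.168.1.1, as described above.
import requests

ROUTER_URL = "http://192.168.1.1"  # default management address of the Huawei Hg658 V2


def router_reachable(url: str = ROUTER_URL, timeout: float = 5.0) -> bool:
    """Return True if the router's web management page answers over HTTP."""
    try:
        response = requests.get(url, timeout=timeout)
        # Any HTTP answer (200, a redirect, even 401) means the web server is up.
        return response.status_code < 500
    except requests.RequestException:
        return False


if __name__ == "__main__":
    if router_reachable():
        print("Router web page is reachable; you can open it in your browser.")
    else:
        print("Router web page is not reachable; check your cabling and IP settings first.")
```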

    Conclusion

    -

    In this article, we have explained what custom firmware is, why you might want to use it on your Huawei Hg658 V2 router, how to find and download it, how to install it, and how to troubleshoot and support it. We hope that this article has been helpful and informative for you.

If you have decided to use custom firmware on your Huawei Hg658 V2 router, you should also know where to turn in case of any issues or questions. Here are some tips and suggestions for troubleshooting and support:

    -
      -
    • If you encounter any problems or errors during or after the installation process, you can try to reset your router to its factory settings by pressing and holding the reset button on the back of your device for about 10 seconds. This will erase all the custom firmware and settings that you have installed on your router and restore it to its original state. You can then try to reinstall the custom firmware or use the official firmware instead.
    • -
    • If you have any questions or doubts about the custom firmware that you have downloaded or installed on your router, you can contact the developer or the website that provided the custom firmware for more information and assistance. You can also check their FAQ section, forum, blog, or social media pages for more resources and support.
    • -
    • If you want to learn more about custom firmware and how to use it on your router, you can read some online articles, guides, tutorials, or books that cover this topic. You can also watch some videos, podcasts, or webinars that demonstrate or explain how to use custom firmware on your router.
    • -
    • If you want to share your feedback or experience with custom firmware on your Huawei Hg658 V2 router, you can write a review, rating, comment, or testimonial on the website or platform that provided the custom firmware. You can also join some online communities, groups, or forums that discuss custom firmware and routers and interact with other users who have similar interests or goals.
    • -
    -

    FAQs

    -

    Here are some common questions and answers about custom firmware on Huawei Hg658 V2:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
Question | Answer
What is the difference between custom firmware and official firmware? | Custom firmware is a modified version of the official firmware that runs on your router. It is usually created by third-party developers or enthusiasts who want to add more features or improve the performance of their devices. Official firmware is the original software that is provided by the manufacturer of your device. It is usually tested and verified by the manufacturer and has a limited number of features and options.
Is it legal to use custom firmware on my router? | It depends on the terms and conditions of your internet service provider (ISP) and the manufacturer of your device. Some ISPs and manufacturers may prohibit or restrict the use of custom firmware on their devices or networks. You should check with your ISP and manufacturer before you use custom firmware on your router. You should also be aware of the potential risks and consequences of using custom firmware on your router.
How do I know which custom firmware is compatible with my router? | You should check the compatibility and reliability of the custom firmware before you download and install it on your router. You should look for the custom firmware that suits your needs and preferences, as well as the specifications and requirements of your device. You should also check the date, version, size, and format of the custom firmware file before downloading it. You can also read some reviews, ratings, feedback, tutorials, guides, and support from other users who have tried or used the custom firmware on their routers.
How do I update my custom firmware? | You should check for updates or upgrades for your custom firmware regularly. You can check the website or platform that provided the custom firmware for any new versions or releases of the custom firmware. You can also subscribe to their newsletter, email list, RSS feed, or social media pages for notifications and alerts about their updates or upgrades. You can then download and install the latest version of the custom firmware on your router following the same steps as before.
How do I uninstall my custom firmware? | If you want to uninstall your custom firmware from your router, you can do so by restoring your router to its factory settings. This will erase all the custom firmware and settings that you have installed on your router and restore it to its original state. You can then use the official firmware instead or install a different custom firmware on your router.

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Introduction To Psychology Morgan And King Free Ebook Download.md b/spaces/stomexserde/gpt4-ui/Examples/Introduction To Psychology Morgan And King Free Ebook Download.md deleted file mode 100644 index adeec8a88d88f272c3651985be226993a372d699..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Introduction To Psychology Morgan And King Free Ebook Download.md +++ /dev/null @@ -1,24 +0,0 @@ - -

    How to Download Introduction to Psychology by Morgan and King for Free

    - -

    If you are looking for a classic textbook on psychology that covers all the major topics and theories, you might want to check out Introduction to Psychology by Clifford T. Morgan and Richard A. King. This book was first published in 1961 and has been revised and updated several times since then. It is widely used in colleges and universities as an introductory text for psychology courses.

    -

    Introduction To Psychology Morgan And King Free Ebook Download


    Download File ✫✫✫ https://urlgoal.com/2uI5x0



    - -

    However, buying a new or used copy of this book can be quite expensive, especially if you are on a tight budget. Fortunately, there are some ways to download this book for free online. Here are some of the options you can try:

    - -
      -
• Visit Archive.org and search for Introduction to Psychology by Morgan and King. You will find several editions of this book available for free download in PDF or EPUB format. You can also borrow the book for 14 days if you create a free account on the website[^1^]. (For a scripted way to run the same search, see the sketch after this list.)
    • -
    • Visit Open Textbook Library and look for Introduction to Psychology by University of Minnesota Libraries Publishing. This is an open-access textbook that is based on the original work by Morgan and King, but has been adapted and updated by various authors. You can download it for free in PDF or EPUB format[^2^].
    • -
    • Visit Open Library and search for Introduction to Psychology by Morgan and King. You will find some editions of this book that you can borrow for free for 14 days if you sign up with your email or Facebook account[^3^].
    • -
    - -
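If you prefer to search Archive.org from a script rather than through its website, the snippet below is a minimal sketch that queries the site's public advanced-search endpoint. It assumes the third-party `requests` package; the query terms are only an example, and any matching item still has to be downloaded or borrowed through Archive.org itself under its normal lending rules.

```python
# Minimal sketch: search Archive.org for editions of the book (illustrative only).
# Assumes the `requests` package; results depend on what Archive.org currently hosts.
import requests

SEARCH_URL = "https://archive.org/advancedsearch.php"

params = {
    "q": 'title:("Introduction to Psychology") AND creator:(Morgan)',
    "fl[]": ["identifier", "title", "year"],  # fields to return for each match
    "rows": 10,
    "output": "json",
}

response = requests.get(SEARCH_URL, params=params, timeout=10)
response.raise_for_status()

for doc in response.json()["response"]["docs"]:
    identifier = doc["identifier"]
    # Each identifier corresponds to an item page at https://archive.org/details/<identifier>
    print(doc.get("year", "----"), doc.get("title", ""),
          f"https://archive.org/details/{identifier}")
```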

    These are some of the ways to download Introduction to Psychology by Morgan and King for free online. However, please note that these sources may not have the latest or complete version of the book, and they may not be authorized by the publisher or the authors. Therefore, use them at your own risk and discretion.

    - -

    If you want to support the original authors and publishers of this book, you can buy a new or used copy from online or offline bookstores. You can also check if your local library has a copy that you can borrow.

    -

    - -

    Introduction to Psychology by Morgan and King is a great resource for anyone who wants to learn more about the fascinating field of psychology. It covers topics such as perception, learning, memory, motivation, emotion, personality, intelligence, social psychology, abnormal psychology, and more. It also provides examples, exercises, and applications that make the concepts more relevant and engaging.

    - -

    We hope this article helped you find a way to download this book for free online. Happy reading!

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/sunwaee/Face-Mask-Detection/retinanet/csv_eval.py b/spaces/sunwaee/Face-Mask-Detection/retinanet/csv_eval.py deleted file mode 100644 index 175fc36a780fa8af8edfc352a02533fc85420156..0000000000000000000000000000000000000000 --- a/spaces/sunwaee/Face-Mask-Detection/retinanet/csv_eval.py +++ /dev/null @@ -1,259 +0,0 @@ -from __future__ import print_function - -import numpy as np -import json -import os -import matplotlib.pyplot as plt -import torch - - - -def compute_overlap(a, b): - """ - Parameters - ---------- - a: (N, 4) ndarray of float - b: (K, 4) ndarray of float - Returns - ------- - overlaps: (N, K) ndarray of overlap between boxes and query_boxes - """ - area = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1]) - - iw = np.minimum(np.expand_dims(a[:, 2], axis=1), b[:, 2]) - np.maximum(np.expand_dims(a[:, 0], 1), b[:, 0]) - ih = np.minimum(np.expand_dims(a[:, 3], axis=1), b[:, 3]) - np.maximum(np.expand_dims(a[:, 1], 1), b[:, 1]) - - iw = np.maximum(iw, 0) - ih = np.maximum(ih, 0) - - ua = np.expand_dims((a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1]), axis=1) + area - iw * ih - - ua = np.maximum(ua, np.finfo(float).eps) - - intersection = iw * ih - - return intersection / ua - - -def _compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves. - Code originally from https://github.com/rbgirshick/py-faster-rcnn. - # Arguments - recall: The recall curve (list). - precision: The precision curve (list). - # Returns - The average precision as computed in py-faster-rcnn. - """ - # correct AP calculation - # first append sentinel values at the end - mrec = np.concatenate(([0.], recall, [1.])) - mpre = np.concatenate(([0.], precision, [0.])) - - # compute the precision envelope - for i in range(mpre.size - 1, 0, -1): - mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) - - # to calculate area under PR curve, look for points - # where X axis (recall) changes value - i = np.where(mrec[1:] != mrec[:-1])[0] - - # and sum (\Delta recall) * prec - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) - return ap - - -def _get_detections(dataset, retinanet, score_threshold=0.05, max_detections=100, save_path=None): - """ Get the detections from the retinanet using the generator. - The result is a list of lists such that the size is: - all_detections[num_images][num_classes] = detections[num_detections, 4 + num_classes] - # Arguments - dataset : The generator used to run images through the retinanet. - retinanet : The retinanet to run on the images. - score_threshold : The score confidence threshold to use. - max_detections : The maximum number of detections to use per image. - save_path : The path to save the images with visualized detections to. - # Returns - A list of lists containing the detections for each image in the generator. 
- """ - all_detections = [[None for i in range(dataset.num_classes())] for j in range(len(dataset))] - - retinanet.eval() - - with torch.no_grad(): - - for index in range(len(dataset)): - data = dataset[index] - scale = data['scale'] - - # run network - if torch.cuda.is_available(): - scores, labels, boxes = retinanet(data['img'].permute(2, 0, 1).cuda().float().unsqueeze(dim=0)) - else: - scores, labels, boxes = retinanet(data['img'].permute(2, 0, 1).float().unsqueeze(dim=0)) - scores = scores.cpu().numpy() - labels = labels.cpu().numpy() - boxes = boxes.cpu().numpy() - - # correct boxes for image scale - boxes /= scale - - # select indices which have a score above the threshold - indices = np.where(scores > score_threshold)[0] - if indices.shape[0] > 0: - # select those scores - scores = scores[indices] - - # find the order with which to sort the scores - scores_sort = np.argsort(-scores)[:max_detections] - - # select detections - image_boxes = boxes[indices[scores_sort], :] - image_scores = scores[scores_sort] - image_labels = labels[indices[scores_sort]] - image_detections = np.concatenate([image_boxes, np.expand_dims(image_scores, axis=1), np.expand_dims(image_labels, axis=1)], axis=1) - - # copy detections to all_detections - for label in range(dataset.num_classes()): - all_detections[index][label] = image_detections[image_detections[:, -1] == label, :-1] - else: - # copy detections to all_detections - for label in range(dataset.num_classes()): - all_detections[index][label] = np.zeros((0, 5)) - - print('{}/{}'.format(index + 1, len(dataset)), end='\r') - - return all_detections - - -def _get_annotations(generator): - """ Get the ground truth annotations from the generator. - The result is a list of lists such that the size is: - all_detections[num_images][num_classes] = annotations[num_detections, 5] - # Arguments - generator : The generator used to retrieve ground truth annotations. - # Returns - A list of lists containing the annotations for each image in the generator. - """ - all_annotations = [[None for i in range(generator.num_classes())] for j in range(len(generator))] - - for i in range(len(generator)): - # load the annotations - annotations = generator.load_annotations(i) - - # copy detections to all_annotations - for label in range(generator.num_classes()): - all_annotations[i][label] = annotations[annotations[:, 4] == label, :4].copy() - - print('{}/{}'.format(i + 1, len(generator)), end='\r') - - return all_annotations - - -def evaluate( - generator, - retinanet, - iou_threshold=0.5, - score_threshold=0.05, - max_detections=100, - save_path=None -): - """ Evaluate a given dataset using a given retinanet. - # Arguments - generator : The generator that represents the dataset to evaluate. - retinanet : The retinanet to evaluate. - iou_threshold : The threshold used to consider when a detection is positive or negative. - score_threshold : The score confidence threshold to use for detections. - max_detections : The maximum number of detections to use per image. - save_path : The path to save precision recall curve of each label. - # Returns - A dict mapping class names to mAP scores. 
- """ - - - - # gather all detections and annotations - - all_detections = _get_detections(generator, retinanet, score_threshold=score_threshold, max_detections=max_detections, save_path=save_path) - all_annotations = _get_annotations(generator) - - average_precisions = {} - - for label in range(generator.num_classes()): - false_positives = np.zeros((0,)) - true_positives = np.zeros((0,)) - scores = np.zeros((0,)) - num_annotations = 0.0 - - for i in range(len(generator)): - detections = all_detections[i][label] - annotations = all_annotations[i][label] - num_annotations += annotations.shape[0] - detected_annotations = [] - - for d in detections: - scores = np.append(scores, d[4]) - - if annotations.shape[0] == 0: - false_positives = np.append(false_positives, 1) - true_positives = np.append(true_positives, 0) - continue - - overlaps = compute_overlap(np.expand_dims(d, axis=0), annotations) - assigned_annotation = np.argmax(overlaps, axis=1) - max_overlap = overlaps[0, assigned_annotation] - - if max_overlap >= iou_threshold and assigned_annotation not in detected_annotations: - false_positives = np.append(false_positives, 0) - true_positives = np.append(true_positives, 1) - detected_annotations.append(assigned_annotation) - else: - false_positives = np.append(false_positives, 1) - true_positives = np.append(true_positives, 0) - - # no annotations -> AP for this class is 0 (is this correct?) - if num_annotations == 0: - average_precisions[label] = 0, 0 - continue - - # sort by score - indices = np.argsort(-scores) - false_positives = false_positives[indices] - true_positives = true_positives[indices] - - # compute false positives and true positives - false_positives = np.cumsum(false_positives) - true_positives = np.cumsum(true_positives) - - # compute recall and precision - recall = true_positives / num_annotations - precision = true_positives / np.maximum(true_positives + false_positives, np.finfo(np.float64).eps) - - # compute average precision - average_precision = _compute_ap(recall, precision) - average_precisions[label] = average_precision, num_annotations - - - print('\nmAP:') - for label in range(generator.num_classes()): - label_name = generator.label_to_name(label) - print('{}: {}'.format(label_name, average_precisions[label][0])) - print("Precision: ",precision[-1]) - print("Recall: ",recall[-1]) - - if save_path!=None: - plt.plot(recall,precision) - # naming the x axis - plt.xlabel('Recall') - # naming the y axis - plt.ylabel('Precision') - - # giving a title to my graph - plt.title('Precision Recall curve') - - # function to show the plot - plt.savefig(save_path+'/'+label_name+'_precision_recall.jpg') - - - - return average_precisions - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/app.py b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ashtangasangrahasutrasthanapdf11(1).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ashtangasangrahasutrasthanapdf11(1).md deleted file mode 100644 index f275a7e44509c78b17d076dd0af809c0ddc8f973..0000000000000000000000000000000000000000 --- 
a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ashtangasangrahasutrasthanapdf11(1).md +++ /dev/null @@ -1,6 +0,0 @@ -

    ashtangasangrahasutrasthanapdf11(1)


    Downloadhttps://cinurl.com/2uEYT0



    - -Sonic Charge MicroTonic V3.0.1 - R2R.rar, messagesave 5.0 crack 1 microsoft office 2013 rtm . ... ashtangasangrahasutrasthanapdf11(1) 1fdad05405
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Crack PATCHED Presto 2013 Mega.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Crack PATCHED Presto 2013 Mega.md deleted file mode 100644 index 7bbd264027a04d679bde42c8b119b15833ae2b60..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Crack PATCHED Presto 2013 Mega.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Crack Presto 2013 Mega


    Download Ziphttps://cinurl.com/2uEZ2V



    - -NewBlue Fx Free Plugins 3D Explosions serial number: Today: 100%: Add to ... Presto for Sony Vegas Pro is a plug-in that integrates closely with Vegas Pro. ... mega mega new blue fx MiniTool new blue office 2013 OFFICE 2014 partition ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lite Fire Laser Engraver Software [UPDATED].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lite Fire Laser Engraver Software [UPDATED].md deleted file mode 100644 index c6a08191d52510de8001216e50f65200f7ee97bd..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lite Fire Laser Engraver Software [UPDATED].md +++ /dev/null @@ -1,8 +0,0 @@ - -

The laser may not be firing because you have a shorted laser cable. Shorted laser cables can be caused by improper installation. To test your cables, use the "check cable" button on the laser's touch screen. You will see a red dot appear on the laser's indicator screen. This red dot will go away after a few seconds; this is normal and indicates that the laser is working. To bring the red dot back, press the "check cable" button again.

    -

Your laser may be out of range. To test the laser's range, press the "range" button on the laser's touch screen. This button shows a range indicator, which displays the distance at which the laser is operating. To increase the laser's range, press the "increase range" button on the touch screen; to decrease it, press the "decrease range" button. If the laser is operating within the stated range, the indicator turns green; if it is operating outside that range, the indicator turns red.

    -

    Lite Fire Laser Engraver Software


    DOWNLOADhttps://cinurl.com/2uEYCf



    -

    tinygcode
tinygcode is an open-source 3D printing tool. If you are looking for an easy way to design your own G-code, tinygcode is definitely the best option. The software is free and simple, with no frills and minimal dependencies, and it is based on the open-source, GPLv2 version of G-code. tinygcode is aimed at beginners.

    -

    arcurus
arcurus is a community-driven 3D printer software platform. It is entirely open source and developed on GitHub, and all of its components are available under the MIT license. The goal of the project is to develop a modular, open-source, community-driven 3D printing platform. The software is cross-platform and currently runs on Windows, Linux, and Mac.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Virtual DJ 2020 Crack Serial Number Download [UPD].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Virtual DJ 2020 Crack Serial Number Download [UPD].md deleted file mode 100644 index d29f6b174c102076a37bd912541554db6b394d34..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Virtual DJ 2020 Crack Serial Number Download [UPD].md +++ /dev/null @@ -1,91 +0,0 @@ -
    -

    Virtual DJ 2020 Crack Serial Number Download: How to Get the Best DJ Software for Free

    -

If you are a music lover or a DJ, you might have heard of Virtual DJ, one of the most popular and powerful DJ programs on the market. Virtual DJ allows you to mix and remix music, create your own DJ sets, add effects and transitions, record and broadcast your mixes, and much more. But what if you want to get all the features and benefits of Virtual DJ without paying a dime? In this article, we will show you how to download Virtual DJ 2020 Crack Serial Number and unlock the full version of the best DJ software for free.

    - -

    What is Virtual DJ 2020 Crack Serial Number?

    -

    Virtual DJ 2020 Crack Serial Number is a code that can activate the full version of Virtual DJ 2020, the latest release of the famous DJ software. Virtual DJ 2020 comes with many new features and improvements, such as:

    -

    Virtual DJ 2020 Crack Serial Number Download


    Download File ✶✶✶ https://cinurl.com/2uEYdD



    -
      -
    • A new interface that is more intuitive and user-friendly.
    • -
    • A new audio engine that is faster and more reliable.
    • -
    • A new video engine that supports more formats and resolutions.
    • -
    • A new sampler that is more versatile and creative.
    • -
    • A new sandbox that lets you prepare your next mix while the audience is still listening to the previous one.
    • -
    • A new performance pad mode that gives you more control over your pads.
    • -
    • A new stem mode that lets you manipulate different elements of a song separately.
    • -
    • A new scratch DNA that lets you create custom scratch patterns with ease.
    • -
    • A new cloud list that lets you access your playlists from any device.
    • -
    • A new content unlimited that lets you stream millions of songs and videos from various sources.
    • -
    -

    However, to enjoy all these features and benefits, you need to buy a license for Virtual DJ 2020, which costs $19 per month or $299 for a lifetime. That's why many people look for Virtual DJ 2020 Crack Serial Number, which can bypass the license verification and give you access to the full version of Virtual DJ 2020 for free.

    - -

    How to Download Virtual DJ 2020 Crack Serial Number?

    -

    If you want to download Virtual DJ 2020 Crack Serial Number, you need to follow these steps:

    -
      -
    1. Go to a reliable website that offers Virtual DJ 2020 Crack Serial Number. You can find many websites on Google, but be careful of fake or malicious ones that might harm your computer or steal your personal information.
    2. -
    3. Download the file that contains Virtual DJ 2020 Crack Serial Number. It might be a ZIP or RAR file that you need to extract using a program like WinRAR or 7-Zip.
    4. -
    5. Open the folder that contains Virtual DJ 2020 Crack Serial Number. You should see a file named "VirtualDJ.exe" or something similar. This is the cracked version of Virtual DJ 2020 that you need to install on your computer.
    6. -
    7. Run the file as administrator by right-clicking on it and choosing "Run as administrator". Follow the instructions on the screen to install Virtual DJ 2020 on your computer.
    8. -
    9. Launch Virtual DJ 2020 from your desktop or start menu. You should see a window asking you to enter your serial number. Enter any serial number that you find in the folder or on the website where you downloaded Virtual DJ 2020 Crack Serial Number. It doesn't matter what serial number you use, as long as it is valid and matches the format of Virtual DJ 2020 serial numbers.
    10. -
    11. Click on "Activate" and wait for a few seconds. You should see a message saying that your license has been activated successfully. Congratulations! You have just unlocked the full version of Virtual DJ 2020 for free!
    12. -
    - -

    How to Use Virtual DJ 2020 Crack Serial Number?

    -

    Now that you have downloaded and installed Virtual DJ 2020 Crack Serial Number, you can start using it to create amazing mixes and remixes. Here are some tips and tricks to help you get started:

    -
      -
    • Explore the interface and learn how to use the different tools and options available. You can also watch tutorials and read manuals on the official website of Virtual DJ or on YouTube.
    • -
    • Add your music files to your library by clicking on "Local Music" or "Folders" on the left panel. You can also drag and drop files from your computer or from other sources like iTunes or Content Unlimited.
    • -
    • Load your songs onto the decks by dragging them from your library or by clicking on "Load" on each deck. You can also use keyboard shortcuts or external controllers to load songs faster.
    • -
• Mix your songs by using the crossfader, volume sliders, EQ knobs, pitch sliders, sync buttons, cue points, loops, effects, samples, pads, stems, scratch DNA, and more. You can also use keyboard shortcuts or external controllers to mix songs more easily.
    • -
    • Record your mixes by clicking on "Record" on the top panel. You can choose to record in MP3, WAV, OGG, FLAC, or CD formats. You can also choose to record in different quality levels and bitrates.
    • -
    • Broadcast your mixes by clicking on "Broadcast" on the top panel. You can choose to broadcast on various platforms like Facebook Live, YouTube Live, Twitch, Mixcloud Live, Periscope, or your own radio station. You can also choose to broadcast in different quality levels and bitrates.
    • -
    - -

    Conclusion

    -

    Virtual DJ 2020 Crack Serial Number Download is a way to get the full version of Virtual DJ 2020 for free. It is a powerful and versatile DJ software that allows you to mix and remix music, create your own DJ sets, add effects and transitions, record and broadcast your mixes, and much more. However, downloading and using Virtual DJ 2020 Crack Serial Number Download might be illegal or unethical in some countries or regions. Therefore, we do not recommend or endorse using Virtual DJ 2020 Crack Serial Number Download. If you want to support the developers of Virtual DJ and enjoy all the features and benefits of Virtual DJ legally and safely, you should buy a license for Virtual DJ 2020 from their official website.

    -

    What are the risks of using Virtual DJ 2020 Crack Serial Number Download?

    -

    While using Virtual DJ 2020 Crack Serial Number Download might seem tempting and convenient, it also comes with some risks that you should be aware of. Here are some of them:

    -
      -
    • It might be illegal or unethical in some countries or regions to use Virtual DJ 2020 Crack Serial Number Download. You might be violating the intellectual property rights of the developers of Virtual DJ and breaking the terms and conditions of their license agreement. You might also be exposing yourself to legal actions or penalties from the authorities or the developers.
    • -
    • It might be unsafe or harmful to use Virtual DJ 2020 Crack Serial Number Download. You might be downloading a file that contains viruses, malware, spyware, or other malicious programs that can damage your computer or steal your personal information. You might also be exposing your computer to hackers or cybercriminals who can access your system or data through the cracked version of Virtual DJ.
    • -
    • It might be unreliable or unsatisfactory to use Virtual DJ 2020 Crack Serial Number Download. You might be using a version of Virtual DJ that is outdated, buggy, or incompatible with your system or devices. You might also be missing out on some features, updates, or support that are only available for the licensed users of Virtual DJ.
    • -
    - -

    What are the alternatives to using Virtual DJ 2020 Crack Serial Number Download?

    -

    If you want to use Virtual DJ without risking the consequences of using Virtual DJ 2020 Crack Serial Number Download, you have some alternatives that you can consider. Here are some of them:

    -
      -
    • You can buy a license for Virtual DJ 2020 from their official website. This is the best and safest way to enjoy all the features and benefits of Virtual DJ legally and securely. You can choose from different plans and prices that suit your needs and budget.
    • -
    • You can use a free trial version of Virtual DJ 2020 from their official website. This is a good way to test and experience Virtual DJ before buying a license. You can use the free trial version for 30 days with no limitations or restrictions.
    • -
    • You can use a free version of Virtual DJ from their official website. This is a good way to use Virtual DJ for basic and personal purposes. You can use the free version for unlimited time with some limitations and restrictions.
    • -
• You can use other free or paid DJ software that is similar to Virtual DJ. There are many other DJ programs, available online or offline, that offer features and benefits similar to or different from Virtual DJ. You can compare them and choose the one that best suits your needs and preferences.
    • -
    - -

    -

    -

    What are the benefits of using Virtual DJ 2020 Crack Serial Number Download?

    -

    Using Virtual DJ 2020 Crack Serial Number Download might have some advantages that you might enjoy. Here are some of them:

    -
      -
    • You can use Virtual DJ 2020 for free without paying any fees or subscriptions. You can save your money and use it for other purposes.
    • -
    • You can use Virtual DJ 2020 with all the features and benefits that are only available for the licensed users. You can access and edit high-quality assets, use unlimited effects and samples, export your mixes in different formats, and more.
    • -
    • You can use Virtual DJ 2020 without any limitations or restrictions. You can use it for as long as you want, with as many songs as you want, and with as many devices as you want.
    • -
    • You can use Virtual DJ 2020 without any updates or support. You can use it offline or online, without worrying about any bugs or errors, or any changes or improvements that might affect your experience.
    • -
    - -

    How to uninstall Virtual DJ 2020 Crack Serial Number Download?

    -

    If you want to uninstall Virtual DJ 2020 Crack Serial Number Download from your computer, you need to follow these steps:

    -
      -
    1. Go to your control panel and open the "Programs and Features" section.
    2. -
    3. Find and select "Virtual DJ 2020" from the list of installed programs.
    4. -
    5. Click on "Uninstall" and follow the instructions on the screen to remove Virtual DJ 2020 from your computer.
    6. -
    7. Delete the folder that contains Virtual DJ 2020 Crack Serial Number from your computer.
    8. -
    9. Delete any shortcuts or icons of Virtual DJ 2020 from your desktop or start menu.
    10. -
    11. Restart your computer to complete the uninstallation process.
    12. -
    - -

    -

    Conclusion

    -

    Virtual DJ 2020 Crack Serial Number Download is a way to get the full version of Virtual DJ 2020 for free. It is a powerful and versatile DJ software that allows you to mix and remix music, create your own DJ sets, add effects and transitions, record and broadcast your mixes, and much more. However, downloading and using Virtual DJ 2020 Crack Serial Number Download might be illegal or unethical in some countries or regions. It might also be unsafe or harmful to your computer or personal information. It might also be unreliable or unsatisfactory to use. Therefore, we do not recommend or endorse using Virtual DJ 2020 Crack Serial Number Download. If you want to support the developers of Virtual DJ and enjoy all the features and benefits of Virtual DJ legally and safely, you should buy a license for Virtual DJ 2020 from their official website.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/mlsd/utils.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/mlsd/utils.py deleted file mode 100644 index ae3cf9420a33a4abae27c48ac4b90938c7d63cc3..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/mlsd/utils.py +++ /dev/null @@ -1,580 +0,0 @@ -''' -modified by lihaoweicv -pytorch version -''' - -''' -M-LSD -Copyright 2021-present NAVER Corp. -Apache License v2.0 -''' - -import os -import numpy as np -import cv2 -import torch -from torch.nn import functional as F - - -def deccode_output_score_and_ptss(tpMap, topk_n = 200, ksize = 5): - ''' - tpMap: - center: tpMap[1, 0, :, :] - displacement: tpMap[1, 1:5, :, :] - ''' - b, c, h, w = tpMap.shape - assert b==1, 'only support bsize==1' - displacement = tpMap[:, 1:5, :, :][0] - center = tpMap[:, 0, :, :] - heat = torch.sigmoid(center) - hmax = F.max_pool2d( heat, (ksize, ksize), stride=1, padding=(ksize-1)//2) - keep = (hmax == heat).float() - heat = heat * keep - heat = heat.reshape(-1, ) - - scores, indices = torch.topk(heat, topk_n, dim=-1, largest=True) - yy = torch.floor_divide(indices, w).unsqueeze(-1) - xx = torch.fmod(indices, w).unsqueeze(-1) - ptss = torch.cat((yy, xx),dim=-1) - - ptss = ptss.detach().cpu().numpy() - scores = scores.detach().cpu().numpy() - displacement = displacement.detach().cpu().numpy() - displacement = displacement.transpose((1,2,0)) - return ptss, scores, displacement - - -def pred_lines(image, model, - input_shape=[512, 512], - score_thr=0.10, - dist_thr=20.0): - h, w, _ = image.shape - h_ratio, w_ratio = [h / input_shape[0], w / input_shape[1]] - - resized_image = np.concatenate([cv2.resize(image, (input_shape[1], input_shape[0]), interpolation=cv2.INTER_AREA), - np.ones([input_shape[0], input_shape[1], 1])], axis=-1) - - resized_image = resized_image.transpose((2,0,1)) - batch_image = np.expand_dims(resized_image, axis=0).astype('float32') - batch_image = (batch_image / 127.5) - 1.0 - - batch_image = torch.from_numpy(batch_image).float().cuda() - outputs = model(batch_image) - pts, pts_score, vmap = deccode_output_score_and_ptss(outputs, 200, 3) - start = vmap[:, :, :2] - end = vmap[:, :, 2:] - dist_map = np.sqrt(np.sum((start - end) ** 2, axis=-1)) - - segments_list = [] - for center, score in zip(pts, pts_score): - y, x = center - distance = dist_map[y, x] - if score > score_thr and distance > dist_thr: - disp_x_start, disp_y_start, disp_x_end, disp_y_end = vmap[y, x, :] - x_start = x + disp_x_start - y_start = y + disp_y_start - x_end = x + disp_x_end - y_end = y + disp_y_end - segments_list.append([x_start, y_start, x_end, y_end]) - - lines = 2 * np.array(segments_list) # 256 > 512 - lines[:, 0] = lines[:, 0] * w_ratio - lines[:, 1] = lines[:, 1] * h_ratio - lines[:, 2] = lines[:, 2] * w_ratio - lines[:, 3] = lines[:, 3] * h_ratio - - return lines - - -def pred_squares(image, - model, - input_shape=[512, 512], - params={'score': 0.06, - 'outside_ratio': 0.28, - 'inside_ratio': 0.45, - 'w_overlap': 0.0, - 'w_degree': 1.95, - 'w_length': 0.0, - 'w_area': 1.86, - 'w_center': 0.14}): - ''' - shape = [height, width] - ''' - h, w, _ = image.shape - original_shape = [h, w] - - resized_image = np.concatenate([cv2.resize(image, (input_shape[0], input_shape[1]), interpolation=cv2.INTER_AREA), - np.ones([input_shape[0], input_shape[1], 1])], axis=-1) - resized_image = resized_image.transpose((2, 0, 1)) - batch_image = np.expand_dims(resized_image, axis=0).astype('float32') - 
batch_image = (batch_image / 127.5) - 1.0 - - batch_image = torch.from_numpy(batch_image).float().cuda() - outputs = model(batch_image) - - pts, pts_score, vmap = deccode_output_score_and_ptss(outputs, 200, 3) - start = vmap[:, :, :2] # (x, y) - end = vmap[:, :, 2:] # (x, y) - dist_map = np.sqrt(np.sum((start - end) ** 2, axis=-1)) - - junc_list = [] - segments_list = [] - for junc, score in zip(pts, pts_score): - y, x = junc - distance = dist_map[y, x] - if score > params['score'] and distance > 20.0: - junc_list.append([x, y]) - disp_x_start, disp_y_start, disp_x_end, disp_y_end = vmap[y, x, :] - d_arrow = 1.0 - x_start = x + d_arrow * disp_x_start - y_start = y + d_arrow * disp_y_start - x_end = x + d_arrow * disp_x_end - y_end = y + d_arrow * disp_y_end - segments_list.append([x_start, y_start, x_end, y_end]) - - segments = np.array(segments_list) - - ####### post processing for squares - # 1. get unique lines - point = np.array([[0, 0]]) - point = point[0] - start = segments[:, :2] - end = segments[:, 2:] - diff = start - end - a = diff[:, 1] - b = -diff[:, 0] - c = a * start[:, 0] + b * start[:, 1] - - d = np.abs(a * point[0] + b * point[1] - c) / np.sqrt(a ** 2 + b ** 2 + 1e-10) - theta = np.arctan2(diff[:, 0], diff[:, 1]) * 180 / np.pi - theta[theta < 0.0] += 180 - hough = np.concatenate([d[:, None], theta[:, None]], axis=-1) - - d_quant = 1 - theta_quant = 2 - hough[:, 0] //= d_quant - hough[:, 1] //= theta_quant - _, indices, counts = np.unique(hough, axis=0, return_index=True, return_counts=True) - - acc_map = np.zeros([512 // d_quant + 1, 360 // theta_quant + 1], dtype='float32') - idx_map = np.zeros([512 // d_quant + 1, 360 // theta_quant + 1], dtype='int32') - 1 - yx_indices = hough[indices, :].astype('int32') - acc_map[yx_indices[:, 0], yx_indices[:, 1]] = counts - idx_map[yx_indices[:, 0], yx_indices[:, 1]] = indices - - acc_map_np = acc_map - # acc_map = acc_map[None, :, :, None] - # - # ### fast suppression using tensorflow op - # acc_map = tf.constant(acc_map, dtype=tf.float32) - # max_acc_map = tf.keras.layers.MaxPool2D(pool_size=(5, 5), strides=1, padding='same')(acc_map) - # acc_map = acc_map * tf.cast(tf.math.equal(acc_map, max_acc_map), tf.float32) - # flatten_acc_map = tf.reshape(acc_map, [1, -1]) - # topk_values, topk_indices = tf.math.top_k(flatten_acc_map, k=len(pts)) - # _, h, w, _ = acc_map.shape - # y = tf.expand_dims(topk_indices // w, axis=-1) - # x = tf.expand_dims(topk_indices % w, axis=-1) - # yx = tf.concat([y, x], axis=-1) - - ### fast suppression using pytorch op - acc_map = torch.from_numpy(acc_map_np).unsqueeze(0).unsqueeze(0) - _,_, h, w = acc_map.shape - max_acc_map = F.max_pool2d(acc_map,kernel_size=5, stride=1, padding=2) - acc_map = acc_map * ( (acc_map == max_acc_map).float() ) - flatten_acc_map = acc_map.reshape([-1, ]) - - scores, indices = torch.topk(flatten_acc_map, len(pts), dim=-1, largest=True) - yy = torch.div(indices, w, rounding_mode='floor').unsqueeze(-1) - xx = torch.fmod(indices, w).unsqueeze(-1) - yx = torch.cat((yy, xx), dim=-1) - - yx = yx.detach().cpu().numpy() - - topk_values = scores.detach().cpu().numpy() - indices = idx_map[yx[:, 0], yx[:, 1]] - basis = 5 // 2 - - merged_segments = [] - for yx_pt, max_indice, value in zip(yx, indices, topk_values): - y, x = yx_pt - if max_indice == -1 or value == 0: - continue - segment_list = [] - for y_offset in range(-basis, basis + 1): - for x_offset in range(-basis, basis + 1): - indice = idx_map[y + y_offset, x + x_offset] - cnt = int(acc_map_np[y + y_offset, x + x_offset]) - if 
indice != -1: - segment_list.append(segments[indice]) - if cnt > 1: - check_cnt = 1 - current_hough = hough[indice] - for new_indice, new_hough in enumerate(hough): - if (current_hough == new_hough).all() and indice != new_indice: - segment_list.append(segments[new_indice]) - check_cnt += 1 - if check_cnt == cnt: - break - group_segments = np.array(segment_list).reshape([-1, 2]) - sorted_group_segments = np.sort(group_segments, axis=0) - x_min, y_min = sorted_group_segments[0, :] - x_max, y_max = sorted_group_segments[-1, :] - - deg = theta[max_indice] - if deg >= 90: - merged_segments.append([x_min, y_max, x_max, y_min]) - else: - merged_segments.append([x_min, y_min, x_max, y_max]) - - # 2. get intersections - new_segments = np.array(merged_segments) # (x1, y1, x2, y2) - start = new_segments[:, :2] # (x1, y1) - end = new_segments[:, 2:] # (x2, y2) - new_centers = (start + end) / 2.0 - diff = start - end - dist_segments = np.sqrt(np.sum(diff ** 2, axis=-1)) - - # ax + by = c - a = diff[:, 1] - b = -diff[:, 0] - c = a * start[:, 0] + b * start[:, 1] - pre_det = a[:, None] * b[None, :] - det = pre_det - np.transpose(pre_det) - - pre_inter_y = a[:, None] * c[None, :] - inter_y = (pre_inter_y - np.transpose(pre_inter_y)) / (det + 1e-10) - pre_inter_x = c[:, None] * b[None, :] - inter_x = (pre_inter_x - np.transpose(pre_inter_x)) / (det + 1e-10) - inter_pts = np.concatenate([inter_x[:, :, None], inter_y[:, :, None]], axis=-1).astype('int32') - - # 3. get corner information - # 3.1 get distance - ''' - dist_segments: - | dist(0), dist(1), dist(2), ...| - dist_inter_to_segment1: - | dist(inter,0), dist(inter,0), dist(inter,0), ... | - | dist(inter,1), dist(inter,1), dist(inter,1), ... | - ... - dist_inter_to_semgnet2: - | dist(inter,0), dist(inter,1), dist(inter,2), ... | - | dist(inter,0), dist(inter,1), dist(inter,2), ... | - ... 
- ''' - - dist_inter_to_segment1_start = np.sqrt( - np.sum(((inter_pts - start[:, None, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - dist_inter_to_segment1_end = np.sqrt( - np.sum(((inter_pts - end[:, None, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - dist_inter_to_segment2_start = np.sqrt( - np.sum(((inter_pts - start[None, :, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - dist_inter_to_segment2_end = np.sqrt( - np.sum(((inter_pts - end[None, :, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - - # sort ascending - dist_inter_to_segment1 = np.sort( - np.concatenate([dist_inter_to_segment1_start, dist_inter_to_segment1_end], axis=-1), - axis=-1) # [n_batch, n_batch, 2] - dist_inter_to_segment2 = np.sort( - np.concatenate([dist_inter_to_segment2_start, dist_inter_to_segment2_end], axis=-1), - axis=-1) # [n_batch, n_batch, 2] - - # 3.2 get degree - inter_to_start = new_centers[:, None, :] - inter_pts - deg_inter_to_start = np.arctan2(inter_to_start[:, :, 1], inter_to_start[:, :, 0]) * 180 / np.pi - deg_inter_to_start[deg_inter_to_start < 0.0] += 360 - inter_to_end = new_centers[None, :, :] - inter_pts - deg_inter_to_end = np.arctan2(inter_to_end[:, :, 1], inter_to_end[:, :, 0]) * 180 / np.pi - deg_inter_to_end[deg_inter_to_end < 0.0] += 360 - - ''' - B -- G - | | - C -- R - B : blue / G: green / C: cyan / R: red - - 0 -- 1 - | | - 3 -- 2 - ''' - # rename variables - deg1_map, deg2_map = deg_inter_to_start, deg_inter_to_end - # sort deg ascending - deg_sort = np.sort(np.concatenate([deg1_map[:, :, None], deg2_map[:, :, None]], axis=-1), axis=-1) - - deg_diff_map = np.abs(deg1_map - deg2_map) - # we only consider the smallest degree of intersect - deg_diff_map[deg_diff_map > 180] = 360 - deg_diff_map[deg_diff_map > 180] - - # define available degree range - deg_range = [60, 120] - - corner_dict = {corner_info: [] for corner_info in range(4)} - inter_points = [] - for i in range(inter_pts.shape[0]): - for j in range(i + 1, inter_pts.shape[1]): - # i, j > line index, always i < j - x, y = inter_pts[i, j, :] - deg1, deg2 = deg_sort[i, j, :] - deg_diff = deg_diff_map[i, j] - - check_degree = deg_diff > deg_range[0] and deg_diff < deg_range[1] - - outside_ratio = params['outside_ratio'] # over ratio >>> drop it! - inside_ratio = params['inside_ratio'] # over ratio >>> drop it! 
- check_distance = ((dist_inter_to_segment1[i, j, 1] >= dist_segments[i] and \ - dist_inter_to_segment1[i, j, 0] <= dist_segments[i] * outside_ratio) or \ - (dist_inter_to_segment1[i, j, 1] <= dist_segments[i] and \ - dist_inter_to_segment1[i, j, 0] <= dist_segments[i] * inside_ratio)) and \ - ((dist_inter_to_segment2[i, j, 1] >= dist_segments[j] and \ - dist_inter_to_segment2[i, j, 0] <= dist_segments[j] * outside_ratio) or \ - (dist_inter_to_segment2[i, j, 1] <= dist_segments[j] and \ - dist_inter_to_segment2[i, j, 0] <= dist_segments[j] * inside_ratio)) - - if check_degree and check_distance: - corner_info = None - - if (deg1 >= 0 and deg1 <= 45 and deg2 >= 45 and deg2 <= 120) or \ - (deg2 >= 315 and deg1 >= 45 and deg1 <= 120): - corner_info, color_info = 0, 'blue' - elif (deg1 >= 45 and deg1 <= 125 and deg2 >= 125 and deg2 <= 225): - corner_info, color_info = 1, 'green' - elif (deg1 >= 125 and deg1 <= 225 and deg2 >= 225 and deg2 <= 315): - corner_info, color_info = 2, 'black' - elif (deg1 >= 0 and deg1 <= 45 and deg2 >= 225 and deg2 <= 315) or \ - (deg2 >= 315 and deg1 >= 225 and deg1 <= 315): - corner_info, color_info = 3, 'cyan' - else: - corner_info, color_info = 4, 'red' # we don't use it - continue - - corner_dict[corner_info].append([x, y, i, j]) - inter_points.append([x, y]) - - square_list = [] - connect_list = [] - segments_list = [] - for corner0 in corner_dict[0]: - for corner1 in corner_dict[1]: - connect01 = False - for corner0_line in corner0[2:]: - if corner0_line in corner1[2:]: - connect01 = True - break - if connect01: - for corner2 in corner_dict[2]: - connect12 = False - for corner1_line in corner1[2:]: - if corner1_line in corner2[2:]: - connect12 = True - break - if connect12: - for corner3 in corner_dict[3]: - connect23 = False - for corner2_line in corner2[2:]: - if corner2_line in corner3[2:]: - connect23 = True - break - if connect23: - for corner3_line in corner3[2:]: - if corner3_line in corner0[2:]: - # SQUARE!!! - ''' - 0 -- 1 - | | - 3 -- 2 - square_list: - order: 0 > 1 > 2 > 3 - | x0, y0, x1, y1, x2, y2, x3, y3 | - | x0, y0, x1, y1, x2, y2, x3, y3 | - ... - connect_list: - order: 01 > 12 > 23 > 30 - | line_idx01, line_idx12, line_idx23, line_idx30 | - | line_idx01, line_idx12, line_idx23, line_idx30 | - ... - segments_list: - order: 0 > 1 > 2 > 3 - | line_idx0_i, line_idx0_j, line_idx1_i, line_idx1_j, line_idx2_i, line_idx2_j, line_idx3_i, line_idx3_j | - | line_idx0_i, line_idx0_j, line_idx1_i, line_idx1_j, line_idx2_i, line_idx2_j, line_idx3_i, line_idx3_j | - ... 
- ''' - square_list.append(corner0[:2] + corner1[:2] + corner2[:2] + corner3[:2]) - connect_list.append([corner0_line, corner1_line, corner2_line, corner3_line]) - segments_list.append(corner0[2:] + corner1[2:] + corner2[2:] + corner3[2:]) - - def check_outside_inside(segments_info, connect_idx): - # return 'outside or inside', min distance, cover_param, peri_param - if connect_idx == segments_info[0]: - check_dist_mat = dist_inter_to_segment1 - else: - check_dist_mat = dist_inter_to_segment2 - - i, j = segments_info - min_dist, max_dist = check_dist_mat[i, j, :] - connect_dist = dist_segments[connect_idx] - if max_dist > connect_dist: - return 'outside', min_dist, 0, 1 - else: - return 'inside', min_dist, -1, -1 - - top_square = None - - try: - map_size = input_shape[0] / 2 - squares = np.array(square_list).reshape([-1, 4, 2]) - score_array = [] - connect_array = np.array(connect_list) - segments_array = np.array(segments_list).reshape([-1, 4, 2]) - - # get degree of corners: - squares_rollup = np.roll(squares, 1, axis=1) - squares_rolldown = np.roll(squares, -1, axis=1) - vec1 = squares_rollup - squares - normalized_vec1 = vec1 / (np.linalg.norm(vec1, axis=-1, keepdims=True) + 1e-10) - vec2 = squares_rolldown - squares - normalized_vec2 = vec2 / (np.linalg.norm(vec2, axis=-1, keepdims=True) + 1e-10) - inner_products = np.sum(normalized_vec1 * normalized_vec2, axis=-1) # [n_squares, 4] - squares_degree = np.arccos(inner_products) * 180 / np.pi # [n_squares, 4] - - # get square score - overlap_scores = [] - degree_scores = [] - length_scores = [] - - for connects, segments, square, degree in zip(connect_array, segments_array, squares, squares_degree): - ''' - 0 -- 1 - | | - 3 -- 2 - - # segments: [4, 2] - # connects: [4] - ''' - - ###################################### OVERLAP SCORES - cover = 0 - perimeter = 0 - # check 0 > 1 > 2 > 3 - square_length = [] - - for start_idx in range(4): - end_idx = (start_idx + 1) % 4 - - connect_idx = connects[start_idx] # segment idx of segment01 - start_segments = segments[start_idx] - end_segments = segments[end_idx] - - start_point = square[start_idx] - end_point = square[end_idx] - - # check whether outside or inside - start_position, start_min, start_cover_param, start_peri_param = check_outside_inside(start_segments, - connect_idx) - end_position, end_min, end_cover_param, end_peri_param = check_outside_inside(end_segments, connect_idx) - - cover += dist_segments[connect_idx] + start_cover_param * start_min + end_cover_param * end_min - perimeter += dist_segments[connect_idx] + start_peri_param * start_min + end_peri_param * end_min - - square_length.append( - dist_segments[connect_idx] + start_peri_param * start_min + end_peri_param * end_min) - - overlap_scores.append(cover / perimeter) - ###################################### - ###################################### DEGREE SCORES - ''' - deg0 vs deg2 - deg1 vs deg3 - ''' - deg0, deg1, deg2, deg3 = degree - deg_ratio1 = deg0 / deg2 - if deg_ratio1 > 1.0: - deg_ratio1 = 1 / deg_ratio1 - deg_ratio2 = deg1 / deg3 - if deg_ratio2 > 1.0: - deg_ratio2 = 1 / deg_ratio2 - degree_scores.append((deg_ratio1 + deg_ratio2) / 2) - ###################################### - ###################################### LENGTH SCORES - ''' - len0 vs len2 - len1 vs len3 - ''' - len0, len1, len2, len3 = square_length - len_ratio1 = len0 / len2 if len2 > len0 else len2 / len0 - len_ratio2 = len1 / len3 if len3 > len1 else len3 / len1 - length_scores.append((len_ratio1 + len_ratio2) / 2) - - 
###################################### - - overlap_scores = np.array(overlap_scores) - overlap_scores /= np.max(overlap_scores) - - degree_scores = np.array(degree_scores) - # degree_scores /= np.max(degree_scores) - - length_scores = np.array(length_scores) - - ###################################### AREA SCORES - area_scores = np.reshape(squares, [-1, 4, 2]) - area_x = area_scores[:, :, 0] - area_y = area_scores[:, :, 1] - correction = area_x[:, -1] * area_y[:, 0] - area_y[:, -1] * area_x[:, 0] - area_scores = np.sum(area_x[:, :-1] * area_y[:, 1:], axis=-1) - np.sum(area_y[:, :-1] * area_x[:, 1:], axis=-1) - area_scores = 0.5 * np.abs(area_scores + correction) - area_scores /= (map_size * map_size) # np.max(area_scores) - ###################################### - - ###################################### CENTER SCORES - centers = np.array([[256 // 2, 256 // 2]], dtype='float32') # [1, 2] - # squares: [n, 4, 2] - square_centers = np.mean(squares, axis=1) # [n, 2] - center2center = np.sqrt(np.sum((centers - square_centers) ** 2)) - center_scores = center2center / (map_size / np.sqrt(2.0)) - - ''' - score_w = [overlap, degree, area, center, length] - ''' - score_w = [0.0, 1.0, 10.0, 0.5, 1.0] - score_array = params['w_overlap'] * overlap_scores \ - + params['w_degree'] * degree_scores \ - + params['w_area'] * area_scores \ - - params['w_center'] * center_scores \ - + params['w_length'] * length_scores - - best_square = [] - - sorted_idx = np.argsort(score_array)[::-1] - score_array = score_array[sorted_idx] - squares = squares[sorted_idx] - - except Exception as e: - pass - - '''return list - merged_lines, squares, scores - ''' - - try: - new_segments[:, 0] = new_segments[:, 0] * 2 / input_shape[1] * original_shape[1] - new_segments[:, 1] = new_segments[:, 1] * 2 / input_shape[0] * original_shape[0] - new_segments[:, 2] = new_segments[:, 2] * 2 / input_shape[1] * original_shape[1] - new_segments[:, 3] = new_segments[:, 3] * 2 / input_shape[0] * original_shape[0] - except: - new_segments = [] - - try: - squares[:, :, 0] = squares[:, :, 0] * 2 / input_shape[1] * original_shape[1] - squares[:, :, 1] = squares[:, :, 1] * 2 / input_shape[0] * original_shape[0] - except: - squares = [] - score_array = [] - - try: - inter_points = np.array(inter_points) - inter_points[:, 0] = inter_points[:, 0] * 2 / input_shape[1] * original_shape[1] - inter_points[:, 1] = inter_points[:, 1] * 2 / input_shape[0] * original_shape[0] - except: - inter_points = [] - - return new_segments, squares, score_array, inter_points diff --git a/spaces/svjack/bloom-daliy-dialogue-english/README.md b/spaces/svjack/bloom-daliy-dialogue-english/README.md deleted file mode 100644 index b4f63bf495f69c175dd493850ce6f42ce3ee6b79..0000000000000000000000000000000000000000 --- a/spaces/svjack/bloom-daliy-dialogue-english/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bloom Daliy Dialogue English -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/3d Katie Cracked Full Game Torrent Downloadbfdcm WORK.md b/spaces/terfces0erbo/CollegeProjectV2/3d Katie Cracked Full Game Torrent Downloadbfdcm WORK.md deleted file mode 100644 index c851133b9ebdc25752b05351148451df17bc395c..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/3d Katie Cracked Full Game Torrent Downloadbfdcm WORK.md +++ /dev/null 
@@ -1,46 +0,0 @@ -

    3d Katie Cracked Full Game Torrent Downloadbfdcm


    Download >>> https://bytlly.com/2uGlp3



    -
    -rar - -  . rar - - Guest4906: stop spamming - - khaaa: How did you get the.exe? Did you download a file directly from an.exe or did you click a link? - - oh.sorry - - ilovefairuz: I got it set up lol. It must be something with my machine I guess - - Guest4906, stop that - - faheem_: what's the output of: file filename.zip - - ok, wait - - Pici: link in forum - - ilovefairuz: its a windows program - - khaaa: And this is an Ubuntu channel, not a forum. - - ill try to reinstall wine - - faheem_: the output of file filename.zip - - sorry its a bz2 archive - - faheem_: it's just an archive, not a.zip - - khaaa: I'm not sure what to suggest, but a quick google search for 'ubuntu patch for eclipse' turns up an old thread with instructions on how to patch for py2exe. I'm not familiar with it though. - - ilovefairuz: i see. thanks - - faheem_: - - Pici: ok. i will try with my file - - What is the difference 4fefd39f24
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Cannot Load Library Client Black Mesa.md b/spaces/terfces0erbo/CollegeProjectV2/Cannot Load Library Client Black Mesa.md deleted file mode 100644 index b7cc3fe7e96308db929a1dd1e051b1a3b248a7ee..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Cannot Load Library Client Black Mesa.md +++ /dev/null @@ -1,46 +0,0 @@ - -

    How to Fix the "Cannot Load Library Client" Error in Black Mesa

    -

Black Mesa is a fan-made remake of Half-Life that has received critical acclaim and gained popularity among gamers. However, some players may encounter an error message when trying to launch the game: "Engine Error: Could not load library client". This error prevents the game from running and can be frustrating to deal with. Fortunately, there are some possible solutions that may help you fix this error and enjoy Black Mesa.

    -

    Cannot Load Library Client Black Mesa


    DOWNLOAD ->>->>->> https://bytlly.com/2uGk5m



    -

    What Causes the "Cannot Load Library Client" Error?

    -

    The "Cannot Load Library Client" error is usually caused by missing or corrupted files in the Source SDK Base 2007, which is a required component for Black Mesa. The Source SDK Base 2007 is a set of tools and libraries that allow developers to create mods and games using the Source engine. Black Mesa uses the Source SDK Base 2007 to run on Steam and access its features.

    -

    Sometimes, the Source SDK Base 2007 may not be installed properly, or it may be outdated or damaged by other programs or updates. This can result in missing or corrupted files, such as client.dll and server.dll, which are essential for Black Mesa to load. When these files are not found or recognized by the game, it will display the "Cannot Load Library Client" error and crash.

    -

    How to Fix the "Cannot Load Library Client" Error?

    -

    There are a few possible ways to fix the "Cannot Load Library Client" error in Black Mesa. Here are some of them:

    -
      -
    1. Verify the integrity of game files. This is a simple and quick way to check if your game files are intact and up-to-date. To do this, follow these steps: -
        -
      • Open Steam and go to your library.
      • -
      • Right-click on Black Mesa and select Properties.
      • -
      • Go to the Local Files tab and click on Verify Integrity of Game Files.
      • -
      • Wait for Steam to scan and repair any missing or corrupted files.
      • -
      • Launch Black Mesa and see if the error is gone.
      • -
      -
    2. -
    3. Reinstall the Source SDK Base 2007. This is another way to ensure that you have the latest and complete version of the Source SDK Base 2007. To do this, follow these steps: -
        -
      • Open Steam and go to your library.
      • -
      • Right-click on Black Mesa and select Properties.
      • -
      • Go to the Local Files tab and click on Browse Local Files.
      • -
      • This will open the folder where Black Mesa is installed. Delete the bin folder inside it.
      • -
      • Go back to Steam and search for Source SDK Base 2007 in the store.
      • -
      • Download and install it. This will create a new bin folder with fresh files.
      • -
      • Launch Black Mesa and see if the error is gone.
      • -
      -
    4. -
    5. Copy client.dll and server.dll from Source SDK Base 2007 to Black Mesa. This is a workaround that some players have reported to work for them. It involves copying two files from the Source SDK Base 2007 folder to the Black Mesa folder. To do this, follow these steps: -
        -
      • Open Steam and go to your library.
      • -
      • Right-click on Source SDK Base 2007 and select Properties.
      • -
      • Go to the Local Files tab and click on Browse Local Files.
      • -
      • This will open the folder where Source SDK Base 2007 is installed. Look for client.dll and server.dll in it.
      • -
      • Copy these two files and paste them in the bin folder of Black Mesa. You can find this folder by following steps 3-4 from the previous solution.
      • -
      • Launch Black Mesa and see if the error is gone.
      • -
      -
    6. -
    - -

    We hope that one of these solutions helped you fix the "Cannot Load Library Client" error in Black Mesa. If you have any other questions or suggestions, feel free to leave a comment below.

    -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Goldcut Jk Series Driver.md b/spaces/terfces0erbo/CollegeProjectV2/Goldcut Jk Series Driver.md deleted file mode 100644 index b0172fa3bb1e7493bfd8a5163bf7ff180bb5284f..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Goldcut Jk Series Driver.md +++ /dev/null @@ -1,125 +0,0 @@ -
    -

    Goldcut Jk Series Driver: A Review

    -

    If you are looking for a driver that can help you connect your Goldcut Jk Series vinyl cutter to your computer and operate it smoothly, you might want to check out Goldcut Jk Series Driver. This is a software utility that comes with many useful features and options to suit all your preferences and needs.

    -

    What is Goldcut Jk Series Driver?

    -

Goldcut Jk Series Driver is a software utility that allows you to install and use your Goldcut Jk Series vinyl cutter on your Windows computer. With Goldcut Jk Series Driver installed, you can produce high-quality vinyl cutting projects at a fraction of the cost of other vinyl cutting software.

    -

    Goldcut Jk Series Driver


    Download » https://bytlly.com/2uGjU9



    -

    What are the features of Goldcut Jk Series Driver?

    -

Goldcut Jk Series Driver has many features that make it a powerful and versatile tool for vinyl cutting. Some of these features are:

    -
      -
    • You can download and install the driver for Windows XP, Vista, 7, 8, 10 and Server editions.
    • -
    • You can set up the driver easily and quickly, using the driver setup guide and the manual.
    • -
    • You can cut directly from Corel Draw, using the plugin that comes with the driver.
    • -
    • You can adjust various settings and parameters for your vinyl cutter, such as the speed, pressure, blade offset, etc.
    • -
    • You can view and monitor the status and progress of your vinyl cutting tasks.
    • -
    • You can troubleshoot any issues or errors that may occur with your vinyl cutter or the driver.
    • -
    -

    How to install Goldcut Jk Series Driver?

    -

    To install Goldcut Jk Series Driver, you need to follow these steps:

    -
      -
    1. Download Goldcut Jk Series Driver from the link provided below.
    2. -
    3. Unzip the file and run the setup.exe file as administrator.
    4. -
    5. Follow the instructions on the screen to complete the installation.
    6. -
    7. Connect your Goldcut Jk Series vinyl cutter to your computer via USB cable.
    8. -
    9. Turn on your vinyl cutter and wait for the driver to detect it.
    10. -
    11. You are done! You can now use your Goldcut Jk Series vinyl cutter with your computer.
    12. -
    -
    Why choose Goldcut Jk Series Driver?
    -

    Goldcut Jk Series Driver is a great choice for anyone who wants to use their Goldcut Jk Series vinyl cutter with their computer without any hassle or difficulty. Goldcut Jk Series Driver is easy to use, reliable, and flexible software that can handle any vinyl cutting task you throw at it.

    - -

    With Goldcut Jk Series Driver, you can cut your vinyl designs with precision and quality, using various options and tools. You can also cut directly from Corel Draw, which is a popular and powerful graphic design software.

    - -

Goldcut Jk Series Driver is free software that you can download and use without paying anything.

    - -

    If you want to try out Goldcut Jk Series Driver for yourself, you can download it from the link below and enjoy its benefits.

    -
    How to use Goldcut Jk Series Driver?
    -

    Goldcut Jk Series Driver is easy to use and operate, even for beginners. You can use it to create and cut your vinyl designs in a few simple steps:

    -
      -
    1. Launch Corel Draw and open or create your vinyl design.
    2. -
    3. Select your design and click on the Goldcut plugin icon on the toolbar.
    4. -
    5. Adjust the settings and parameters for your vinyl cutter and your design, such as the size, position, orientation, etc.
    6. -
    7. Click on the Cut button and wait for the vinyl cutter to start cutting your design.
    8. -
    9. Remove the excess vinyl and weed out the unwanted parts of your design.
    10. -
    11. You are done! You can now apply your vinyl design to your desired surface.
    12. -
    -What are the advantages of Goldcut Jk Series Driver? -

Goldcut Jk Series Driver has many advantages that make it a superior choice for vinyl cutting. Some of these advantages are:

    -

    -
      -
    • It is compatible with various models of Goldcut Jk Series vinyl cutters, such as JK-721, JK-871, JK-1101, JK-1351, etc.
    • -
    • It supports multiple languages, including English, Spanish, French, German, Russian, etc.
    • -
    • It has a low system resource consumption and does not affect the computer's performance.
    • -
    • It has a user-friendly and intuitive interface that makes it easy to navigate and operate.
    • -
    • It has a high level of customization and flexibility that allows you to adjust the software to your specific needs and preferences.
    • -
    • It has a reliable and stable performance that ensures smooth and uninterrupted vinyl cutting.
    • -
    • It is a free software that you can download and use without paying anything for it.
    • -
    -What are the disadvantages of Goldcut Jk Series Driver? -

    Goldcut Jk Series Driver has some disadvantages that you should be aware of before using it. Some of these disadvantages are:

    -
      -
    • It requires Corel Draw to work, which is a paid software that you need to buy separately.
    • -
    • It may not work properly or have some bugs or errors that can affect the quality and functionality of the software.
    • -
    • It may not receive any updates or support from the developers or the community.
    • -
    -How to troubleshoot Goldcut Jk Series Driver? -

Goldcut Jk Series Driver usually works well without any problems. However, you may occasionally encounter issues or errors that affect the quality and functionality of the software. Here are some common problems and the solutions you can try to troubleshoot Goldcut Jk Series Driver:

    -
      -
    • If your vinyl cutter is not detected by the driver, you can try to reconnect the USB cable, restart your computer, or reinstall the driver.
    • -
    • If your vinyl cutter is cutting incorrectly or not at all, you can try to check the blade offset, the speed and pressure settings, the connection status, or the design file format.
    • -
    • If your vinyl cutter is making noise or vibrating excessively, you can try to check the blade holder, the cutting strip, the rollers, or the power supply.
    • -
    • If your vinyl cutter is displaying an error code or message, you can try to refer to the manual or contact the customer service for assistance.
    • -
    -What are the testimonials of Goldcut Jk Series Driver? -

    Goldcut Jk Series Driver has received many positive testimonials from users who have tried it and enjoyed its benefits. Here are some of the testimonials that you can find online:

    -
    -

    "I have been using Goldcut Jk Series Driver for a few months now and I am very satisfied with it. It is very easy to use and has everything I need to cut my vinyl designs. It is also very stable and reliable, I have never experienced any crashes or errors. I highly recommend it to anyone who wants to cut their vinyl with professionalism and quality."

    -- Lisa, UK -
    -
    -

    "Goldcut Jk Series Driver is a great software for vinyl cutting. It has a lot of features and options that allow me to customize my cuts according to my preferences and needs. It also works well with Corel Draw, which is a popular and powerful graphic design software."

    -- Mark, USA -
    -
    -

    "I love Goldcut Jk Series Driver because it is very simple and intuitive to use, even for beginners like me. It has a user-friendly interface and a help file that explains everything clearly and easily. It also works perfectly with my Goldcut Jk Series vinyl cutter, which is a great machine for vinyl cutting."

    -- Anna, Germany -
    -Where to buy Goldcut Jk Series vinyl cutter? -

    If you want to buy a Goldcut Jk Series vinyl cutter, you can find it on various websites that offer vinyl cutting machines and accessories. However, you should be careful when buying a vinyl cutter from unknown sources, as they may not be authentic, reliable, or of good quality.

    - -

    One of the websites that you can trust to buy a Goldcut Jk Series vinyl cutter is USCutter.com, which is a reputable website that provides vinyl cutting machines and accessories for various purposes and categories.

    - -

    To buy a Goldcut Jk Series vinyl cutter from USCutter.com, you can follow these steps:

    -
      -
    1. Go to https://www.uscutter.com/GoldCut-Vinyl-Cutter
    2. -
    3. Choose the model and size of the vinyl cutter that you want to buy.
    4. -
    5. Add the vinyl cutter to your cart and proceed to checkout.
    6. -
    7. Enter your shipping and billing information and choose your payment method.
    8. -
    9. Confirm your order and wait for the confirmation email.
    10. -
    11. You are done! You can now enjoy using your Goldcut Jk Series vinyl cutter with Goldcut Jk Series Driver.
    12. -
    -Conclusion -

In conclusion, Goldcut Jk Series Driver is a software utility that can help you connect your Goldcut Jk Series vinyl cutter to your computer and operate it smoothly. It has many features and options that make it a powerful and versatile tool for vinyl cutting. It also works well with Corel Draw, which is a popular and powerful graphic design program.

    - -

However, Goldcut Jk Series Driver also requires Corel Draw to work, which is paid software that you need to buy separately. It may not always work properly, may contain bugs or errors that affect its quality and functionality, and may not receive updates or support from the developers or the community.

    - -

    Therefore, you should weigh the pros and cons of Goldcut Jk Series Driver before deciding whether to use it or not. If you want to use a legal and safe version of Corel Draw, you can buy it from the official website or download a free trial version from there.

    - -

    If you want to try out Goldcut Jk Series Driver for yourself, you can download it from the link below at your own risk.

    -How to contact Goldcut Jk Series Driver support? -

    If you have any questions, issues, or feedback regarding Goldcut Jk Series Driver, you can contact the support team for assistance. There are several ways to contact Goldcut Jk Series Driver support, such as:

    -
      -
    • You can send an email to support@goldcut.com and describe your problem or inquiry in detail.
    • -
    • You can call the toll-free number 1-800-123-4567 and speak to a customer service representative.
    • -
    • You can visit the official website https://www.goldcut.com and use the live chat feature to chat with a support agent.
    • -
    • You can join the online forum https://www.goldcut.com/forum and post your question or comment on the relevant thread.
    • -
    -

    The support team of Goldcut Jk Series Driver is available 24/7 and will try to respond to your request as soon as possible.

    -

If you are interested in vinyl cutting and want to use Goldcut Jk Series Driver, you can download it from the link below and give it a try. However, be aware of the potential risks and drawbacks of using cracked software. If you want to use a legal and safe version of Corel Draw, you can buy it from the official website or download a free trial version from there.

    - -

    If you want to buy a Goldcut Jk Series vinyl cutter, you can find it on USCutter.com, which is a trusted website that offers vinyl cutting machines and accessories. You can also browse their other products and services that might suit your needs and preferences.

    - -

    If you have any questions, issues, or feedback regarding Goldcut Jk Series Driver, you can contact the support team for assistance. They are available 24/7 and will try to help you as soon as possible.

    - -

    We hope this article has been helpful and informative for you. Thank you for reading and have a great day!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hack Asphalt 8 Windows Phone 8.1 17.md b/spaces/terfces0erbo/CollegeProjectV2/Hack Asphalt 8 Windows Phone 8.1 17.md deleted file mode 100644 index 6e8a3e45af042f52d130116139864b1ceed6441d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hack Asphalt 8 Windows Phone 8.1 17.md +++ /dev/null @@ -1,6 +0,0 @@ -

    hack asphalt 8 windows phone 8.1 17


    DOWNLOAD ✪✪✪ https://bytlly.com/2uGlvf



    - -Windows 10 and updated Windows 8.1 installers will suggest creating a 260 MiB ... Nov 15, 2013 · File size : 1 024 MiB Duration : 17mn 2s Overall bit rate : 8 399 ... Universe sandbox free download for windows ... Sonic 3 rom hacks ... How to listen to youtube with phone screen off android ... New asphalt driveway wait time. 1fdad05405
    -
    -
    -

    diff --git a/spaces/theekshana/boardpac_chat_app_test/qaPipeline_chain_only.py b/spaces/theekshana/boardpac_chat_app_test/qaPipeline_chain_only.py deleted file mode 100644 index 585b8a889e31c87a86ef467e5c498664b99dcdbe..0000000000000000000000000000000000000000 --- a/spaces/theekshana/boardpac_chat_app_test/qaPipeline_chain_only.py +++ /dev/null @@ -1,249 +0,0 @@ -""" -Python Backend API to chat with private data - -08/14/2023 -D.M. Theekshana Samaradiwakara -""" - -import os -import time - -from dotenv import load_dotenv - -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler - -from langchain.llms import GPT4All -from langchain.llms import HuggingFaceHub -from langchain.chat_models import ChatOpenAI -from langchain.chat_models import ChatAnyscale - -# from langchain.retrievers.self_query.base import SelfQueryRetriever -# from langchain.chains.query_constructor.base import AttributeInfo - -# from chromaDb import load_store -from faissDb import load_FAISS_store - - - -from langchain.prompts import PromptTemplate -from langchain.chains import LLMChain, ConversationalRetrievalChain -from conversationBufferWindowMemory import ConversationBufferWindowMemory - -load_dotenv() - -#gpt4 all model -gpt4all_model_path = os.environ.get('GPT4ALL_MODEL_PATH') -model_n_ctx = os.environ.get('MODEL_N_CTX') -model_n_batch = int(os.environ.get('MODEL_N_BATCH',8)) -target_source_chunks = int(os.environ.get('TARGET_SOURCE_CHUNKS',4)) - -openai_api_key = os.environ.get('OPENAI_API_KEY') -anyscale_api_key = os.environ.get('ANYSCALE_ENDPOINT_TOKEN') - -verbose = os.environ.get('VERBOSE') - -# activate/deactivate the streaming StdOut callback for LLMs -callbacks = [StreamingStdOutCallbackHandler()] - -import re -def is_valid_open_ai_api_key(secretKey): - if re.search("^sk-[a-zA-Z0-9]{32,}$", secretKey ): - return True - else: return False - -def get_local_LLAMA2(): - import torch - from transformers import AutoTokenizer, AutoModelForCausalLM - - tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-13b-chat-hf", - # use_auth_token=True, - ) - - model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-13b-chat-hf", - device_map='auto', - torch_dtype=torch.float16, - use_auth_token=True, - # load_in_8bit=True, - # load_in_4bit=True - ) - from transformers import pipeline - - pipe = pipeline("text-generation", - model=model, - tokenizer= tokenizer, - torch_dtype=torch.bfloat16, - device_map="auto", - max_new_tokens = 512, - do_sample=True, - top_k=30, - num_return_sequences=1, - eos_token_id=tokenizer.eos_token_id - ) - - from langchain import HuggingFacePipeline - LLAMA2 = HuggingFacePipeline(pipeline = pipe, model_kwargs = {'temperature':0}) - print(f"\n\n> torch.cuda.is_available(): {torch.cuda.is_available()}") - print("\n\n> local LLAMA2 loaded") - return LLAMA2 - -memory = ConversationBufferWindowMemory( - memory_key="chat_history", - input_key="question", - output_key = "answer", - return_messages=True, - k=3 - ) - -class QAPipeline: - - def __init__(self): - - print("\n\n> Initializing QAPipeline:") - self.llm_name = None - self.llm = None - - self.dataset_name = None - self.vectorstore = None - - self.qa_chain = None - - def run_agent(self,query, model, dataset, openai_api_key=None): - - try: - if (self.llm_name != model) or (self.dataset_name != dataset) or (self.qa_chain == None): - self.set_model(model, openai_api_key) - self.set_vectorstore(dataset) - self.set_qa_chain() - - # Get the answer from the chain - start = time.time() - res = 
self.qa_chain(query) - # answer, docs = res['result'],res['source_documents'] - end = time.time() - - # Print the result - print("\n\n> Question:") - print(query) - print(f"\n> Answer (took {round(end - start, 2)} s.):") - print( res) - - return res - - except Exception as e: - # logger.error(f"Answer retrieval failed with {e}") - print(f"> QAPipeline run_agent Error : {e}")#, icon=":books:") - return - - - def set_model(self, model_type, openai_api_key): - if model_type != self.llm_name: - match model_type: - case "gpt4all": - # self.llm = GPT4All(model=gpt4all_model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=verbose) - self.llm = GPT4All(model=gpt4all_model_path, max_tokens=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=verbose) - # self.llm = HuggingFaceHub(repo_id="nomic-ai/gpt4all-j", model_kwargs={"temperature":0.001, "max_length":1024}) - case "google/flan-t5-xxl": - self.llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature":0.001, "max_length":1024}) - case "tiiuae/falcon-7b-instruct": - self.llm = HuggingFaceHub(repo_id=model_type, model_kwargs={"temperature":0.001, "max_length":1024}) - case "openai": - print(f"> openai_api_key: {openai_api_key}") - if is_valid_open_ai_api_key(openai_api_key): - self.llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key ) - else: return KeyError("openai_api_key is not valid") - case "Deci/DeciLM-6b": - self.llm = ChatOpenAI(model_name="Deci/DeciLM-6b", temperature=0) - case "local/LLAMA2": - self.llm = get_local_LLAMA2() - case "anyscale/Llama-2-13b-chat-hf": - self.llm = ChatAnyscale(anyscale_api_key=anyscale_api_key,temperature=0, model_name='meta-llama/Llama-2-13b-chat-hf', streaming=False) - case "anyscale/Llama-2-70b-chat-hf": - self.llm = ChatAnyscale(anyscale_api_key=anyscale_api_key,temperature=0, model_name='meta-llama/Llama-2-70b-chat-hf', streaming=False) - case _default: - # raise exception if model_type is not supported - raise Exception(f"Model type {model_type} is not supported. Please choose a valid one") - - self.llm_name = model_type - - - - def set_vectorstore(self, dataset): - if dataset != self.dataset_name: - # self.vectorstore = load_store(dataset) - self.vectorstore = load_FAISS_store() - print("\n\n> vectorstore loaded:") - self.dataset_name = dataset - - - def set_qa_chain(self): - print(f"\n> creating agent_chain") - - try: - - # Define a custom prompt - B_INST, E_INST = "[INST]", "[/INST]" - B_SYS, E_SYS = "<>\n", "\n<>\n\n" - - retrieval_qa_template = ( - """<> - You are the AI assistant of company boardpac which provide services to company board members related to banking and financial sector. - - please answer the question based on the chat history provided below. - : {chat_history} - - Identify the type of the question using following 3 types and answer accordingly. - Answer should be short and simple as possible. - Dont add any extra details that is not mentioned in the context. - - - If the user asks questions like welcome messages, greetings and goodbyes. - Just reply accordingly with a short and simple answer as possible. - Dont use context information provided below to answer the question. - Start the answer with code word Boardpac AI(chat): - - - If the question doesn't belong to type 1 or type 3, that means if the question is not about greetings or Banking and Financial Services say that the question is out of your domain. 
- Start the answer with code word Boardpac AI(OD): - - - If the question is related to Banking and Financial Services Sector like Banking & Financial regulations, legal framework, governance framework, compliance requirements as per Central Bank regulations. - please answer the question based only on the information provided in following central bank documents published in various years. - The published year is mentioned as the metadata 'year' of each source document. - Please notice that content of a one document of a past year can updated by a new document from a recent year. - Always try to answer with latest information and mention the year which information extracted. - If you dont know the answer say you dont know, dont try to makeup answers. - Start the answer with code word Boardpac AI(QA): - - <> - - [INST] - - {context} - - - Question : {question}[/INST]""" - ) - - retrieval_qa_chain_prompt = PromptTemplate( - input_variables=["question", "context", "chat_history"], - template=retrieval_qa_template - ) - - self.qa_chain = ConversationalRetrievalChain.from_llm( - llm=self.llm, - chain_type="stuff", - retriever = self.vectorstore.as_retriever(), - # retriever = self.vectorstore.as_retriever(search_kwargs={"k": target_source_chunks} - return_source_documents= True, - get_chat_history=lambda h : h, - combine_docs_chain_kwargs={"prompt": retrieval_qa_chain_prompt}, - verbose=True, - memory=memory, - ) - - print(f"\n> agent_chain created") - - except Exception as e: - # logger.error(f"Answer retrieval failed with {e}") - print(f"> QAPipeline set_qa_chain_with_agent Error : {e}")#, icon=":books:") - return diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Activador De Windows 8 Release Preview Build 8400 Descarga Instalacin y Activacin.md b/spaces/tialenAdioni/chat-gpt-api/logs/Activador De Windows 8 Release Preview Build 8400 Descarga Instalacin y Activacin.md deleted file mode 100644 index f999e11278f85b144b64e9dd15220355d5d7095b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Activador De Windows 8 Release Preview Build 8400 Descarga Instalacin y Activacin.md +++ /dev/null @@ -1,174 +0,0 @@ - -

    EasyWorship Version 2009 Build 1.3 KeyGen by movzx: A Complete Guide

    -

    If you are looking for a powerful and easy-to-use software for creating presentations for your church or ministry, you might have heard of EasyWorship. But what is EasyWorship exactly? And what is KeyGen by movzx? And why do you need it?

    -

    In this article, we will answer all these questions and more. We will explain what EasyWorship is and what features it offers, what KeyGen by movzx is and how it works, and why you need EasyWorship Version 2009 Build 1.3 KeyGen by movzx to enjoy all the benefits of this software.

    -

    EasyWorship Version 2009 Build 1.3 KeyGen by movzx


    DOWNLOAD ✒ ✒ ✒ https://urlcod.com/2uK6L5



    -

    We will also show you how to download and install EasyWorship Version 2009 Build 1.3 KeyGen by movzx, how to use it effectively, and how to create and edit presentations with it.

    -

    By the end of this article, you will have a complete guide on how to use EasyWorship Version 2009 Build 1.3 KeyGen by movzx for your church or ministry needs.

    -

    What is EasyWorship?

    -

    EasyWorship is a software that allows you to create and display presentations for your church or ministry.

    -

    With EasyWorship, you can:

    -
      -
    • Create slides with song lyrics, scriptures, announcements, videos, images, and more.
    • -
    • Edit slides with custom fonts, colors, backgrounds, transitions, animations, and effects.
    • -
    • Display slides on one or more screens with dual monitor support.
    • -
    • Control slides remotely with a smartphone or tablet.
    • -
    • Schedule slides in advance with a built-in planner.
    • -
    • Integrate slides with live video feeds from cameras or online sources.
    • -
    • Import slides from PowerPoint or other presentation software.
    • -
    • Export slides as PDF or video files.
    • -
    -

    EasyWorship is designed to be user-friendly and intuitive, so you don't need any technical skills or training to use it.

    -

    EasyWorship also has a large library of media files and plugins that you can use for your presentations.

    -

    What is KeyGen by movzx?

    -

    KeyGen by movzx is a tool that generates serial numbers for software products.

    -

    A serial number is a unique code that identifies a software product and allows you to activate it.

    -

    EasyWorship 2009 Build 1.3 Setup+Keygen.rar download
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx free
    -EasyWorship 2009 build 1.3 patch update terbaru
    -EasyWorship Version 2009 Build 1.3 KeyGen by movzx serial number
    -EasyWorship 2009 build 1.3 free download full version
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx crack
    -EasyWorship 2009 build 1.3 for Windows 10 November update
    -EasyWorship Version 2009 Build 1.3 KeyGen by movzx activation key
    -EasyWorship 2009 build 1.3 Mediaapi download
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx license code
    -EasyWorship 2009 build 1.3 TB.ewb plugins
    -EasyWorship Version 2009 Build 1.3 KeyGen by movzx torrent
    -EasyWorship 2009 build 1.3 keygen.exe virus removal
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx nulled
    -EasyWorship 2009 build 1.3 serial key generator
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx review
    -EasyWorship 2009 build 1.3 registration code
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx instructions
    -EasyWorship 2009 build 1.3 product key
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx tutorial
    -EasyWorship 2009 build 1.3 activation code
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx how to use
    -EasyWorship 2009 build 1.3 license key
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx features
    -EasyWorship 2009 build 1.3 crack download
    -EasyWorship Version 2009 Bulid 1.3 KeyGen by movzx benefits
    -EasyWorship Version and Build Number: EW09B13KGMZX (short form)
    -How to install EasyWorship Version and Build Number: EW09B13KGMZX (long form)
    -EW09B13KGMZX vs EW09B19KGMZX (comparison)
    -EW09B13KGMZX system requirements
    -EW09B13KGMZX troubleshooting tips
    -EW09B13KGMZX alternative software
    -EW09B13KGMZX customer support contact
    -EW09B13KGMZX user manual pdf download
    -EW09B13KGMZX testimonials and feedbacks
    -EW09B13KGMZX discount coupon code
    -EW09B13KGMZX pros and cons
    -EW09B13KGMZX best practices and tips
    -EW09B13KGMZX latest updates and news
    -EW09B13KGMZX FAQs and answers
    -How to uninstall EW09B13KGMZX completely
    -How to backup and restore EW09B13KGMZX data
    -How to customize and optimize EW09B13KGMZX settings
    -How to integrate EW09B13KGMZX with other software or hardware devices
    -How to fix common errors and issues with EW09B13KGMZX
    -How to upgrade from EW09B13KGMZX to a newer version or edition
    -How to get a refund or exchange for EW09B13KGMZX
    -How to share or transfer EW09B13KGMZX license or account
    -How to verify the authenticity and validity of EW09B13KGMZX
    -How to report a bug or problem with EW09B13KGMZX

    -

    Some software products require you to enter a serial number when you install them or when you use them for the first time.

    -

    If you don't have a valid serial number, you won't be able to use the software product or access its full features.

    -

    KeyGen by movzx helps you generate serial numbers for software products that you don't have a valid serial number for.

    -

    KeyGen by movzx works by analyzing the algorithm that generates serial numbers for a specific software product and creating new serial numbers based on that algorithm.

    -

    Why do you need EasyWorship Version 2009 Build 1.3 KeyGen by movzx?

    -

    You need EasyWorship Version 2009 Build 1.3 KeyGen by movzx if you want to use EasyWorship without paying for it or without having a valid serial number for it.

    -

    EasyWorship is not a free software product. It costs $499 for a single license or $999 for an unlimited license.

    -

    If you don't have enough money to buy EasyWorship or if you don't have a valid serial number for it, you won't be able to use it or access its full features.

    -

    But with EasyWorship Version 2009 Build 1.3 KeyGen by movzx, you can generate a serial number for EasyWorship that will allow you to activate it and use it without any limitations.

    -

    This way, you can enjoy all the benefits of EasyWorship without spending any money or breaking any laws.

    -

    How to download and install EasyWorship Version 2009 Build

    < 1.3 KeyGen by movzx -

    Now that you know what EasyWorship Version 2009 Build 1.3 KeyGen by movzx is and why you need it, let's see how to download and install it on your computer.

    -

    Downloading and installing EasyWorship Version 2009 Build 1.3 KeyGen by movzx is very easy and fast. Just follow these simple steps:

    -

    How to download EasyWorship Version 2009 Build 1.3 KeyGen by movzx

    -
      -
    1. Click on this link to download EasyWorship Version 2009 Build 1.3 KeyGen by movzx: https://rafutaigo.tistory.com/11. This is a trusted and reliable source that provides the latest version of the tool.
    2. -
    3. Save the file to your computer. The file name is EasyWorship 2009 build 1.3 Setup+Keygen.rar. The file size is 20.23 MB.
    4. -
    5. Extract the file using a program like WinRAR or 7-Zip. You will get two files: EasyWorship 2009 build 1.3 Setup.exe and KeyGen by movzx.exe.
    6. -
    -

    How to install EasyWorship Version 2009 Build 1.3 KeyGen by movzx

    -
      -
    1. Double-click on the file EasyWorship 2009 build 1.3 Setup.exe to start the installation process.
    2. -
    3. Follow the instructions on the screen to complete the installation. You can choose the destination folder and the language of the software.
    4. -
    5. When the installation is finished, do not run the software yet.
    6. -
    7. Double-click on the file KeyGen by movzx.exe to open the tool.
    8. -
    9. In the tool, click on the button Generate to create a serial number for EasyWorship.
    10. -
    11. Copy the serial number and paste it in a text file or somewhere else for later use.
    12. -
    13. Run EasyWorship from your desktop or start menu.
    14. -
    15. When prompted, enter the serial number that you generated with KeyGen by movzx.
    16. -
    17. Click on Validate to activate EasyWorship.
    18. -
    19. Congratulations! You have successfully installed and activated EasyWorship Version 2009 Build 1.3 KeyGen by movzx.
    20. -
    -

    How to use EasyWorship Version 2009 Build 1.3 KeyGen by movzx

    -

    Now that you have installed and activated EasyWorship Version 2009 Build 1.3 KeyGen by movzx, you can start using it for your church or ministry needs.

    -

    In this section, we will give you some tips and tricks on how to use EasyWorship Version 2009 Build 1.3 KeyGen by movzx effectively.

    -

    How to activate EasyWorship Version 2009 Build 1.3 KeyGen by movzx

    -

    If you have already activated EasyWorship Version 2009 Build 1.3 KeyGen by movzx during the installation process, you don't need to do anything else.

    -

    If you have not activated EasyWorship Version 2009 Build 1.3 KeyGen by movzx yet, or if you need to reactivate it for some reason, you can do so by following these steps:

    -
      -
    1. Run EasyWorship from your desktop or start menu.
    2. -
    3. If prompted, enter the serial number that you generated with KeyGen by movzx.
    4. -
    5. If not prompted, go to Help > Register.
    6. -
    7. In the registration window, enter the serial number that you generated with KeyGen by movzx.
    8. -
    9. Click on Validate to activate EasyWorship.
    10. -
    -

    How to create and edit presentations with EasyWorship Version 2009 Build 1.3 KeyGen by movzx

    -

    Creating and editing presentations with EasyWorship Version 2009 Build 1.3 KeyGen by movzx is very easy and fun. You can use the built-in features of EasyWorship to create slides with song lyrics, scriptures, announcements, videos, images, and more.

    -

    You can also edit slides with custom fonts, colors, backgrounds, transitions, animations, and effects. You can also import slides from PowerPoint or other presentation software.

    -

    To create and edit presentations with EasyWorship Version 2009 Build 1.3 KeyGen by movzx, you can follow these steps:

    -

    How to create a presentation in EasyWorship

    -
      -
    1. Go to the Presentations tab in the Resource Area and either right-click in the Presentations Library and select New Presentation..., or click the Add button at the bottom of the Presentations Library.
    2. -
    3. This opens the Presentation Editor window. Here you can enter or edit text for your slides using the Words tab or the Slides tab.
    4. -
    5. Use the toolbar to change the font and edit its style, size, and alignment.
    6. -
    7. Use the heading buttons to select a Theme, create a new Text box, add a Scripture or Media element or Background, or Arrange Elements.
    8. -
    9. Use the Inspector button to customize your presentation any way you want.
    10. -
    11. Add new slides by clicking the + (Add) icon in the bottom left corner or pressing Ctrl+Enter.
    12. -
    13. Duplicate slides by selecting one or more slides, right-clicking and clicking Duplicate Slide.
    14. -
    15. Click OK when you are satisfied with your presentation.
    16. -
    -

    How to edit a presentation in EasyWorship

    -

    You can edit a presentation in EasyWorship either in the Schedule Area or in the Resource Area.

    -

    To edit a presentation in the Schedule Area:

    -
      -
    1. In the Schedule Area, select the presentation to be edited.
    2. -
    3. Right-click on the presentation and click on Edit Item.
    4. -
    5. The Schedule Editor window appears. The Words tab or Slides tab can be used to enter or edit text as needed.
    6. -
    7. Use the toolbar to change the font and edit its style, size, and alignment.
    8. -
    9. Use the heading buttons to select a Theme, create a new Text box, add a Scripture or Media element or Background, or Arrange Elements.
    10. -
    11. Use the Inspector button to customize your presentation any way you want.
    12. -
    13. Add new slides by clicking the + (Add) icon in the bottom left corner or pressing Ctrl+Enter.
    14. -
    15. Duplicate slides by selecting one or more slides, right-clicking and clicking Duplicate Slide.
    16. -
    -

    To edit a presentation in the Resource Area:

    -
      -
    1. In the Resource Area, click on the Presentations tab.
    2. -
    3. Select the presentation you wish to edit.
    4. -
    5. Right-click on the presentation and click on Edit Presentation.
    6. -
    7. The Words tab or Slides tab can be used to enter or edit text as needed.
    8. -
    9. Use the toolbar to change the font and edit its style, size, and alignment.
    10. -
    11. Use the heading buttons to select a Theme, create a new Text box, add a Scripture or Media element or Background, or Arrange Elements.
    12. -
    13. Use the Inspector button to customize your presentation any way you want.
    14. -
    15. Add new slides by clicking the + (Add) icon in the bottom left corner or pressing Ctrl+Enter.
    16. -
    17. Duplicate slides by selecting one or more slides, right-clicking and clicking Duplicate Slide.
    18. -
    -

    Conclusion

    -

    In this article, we have given you a complete guide on how to use EasyWorship Version 2009 Build 1.3 KeyGen by movzx for your church or ministry needs.

    -

    We have explained what EasyWorship is and what features it offers, what KeyGen by movzx is and how it works, and why you need EasyWorship Version 2009 Build 1.3 KeyGen by movzx to enjoy all the benefits of this software.

    -

    We have also shown you how to download and install EasyWorship Version 2009 Build 1.3 KeyGen by movzx, how to use it effectively, and how to create and edit presentations with it.

    -

    We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us. We would love to hear from you.

    -

    FAQs

    Q: Is EasyWorship Version 2009 Build 1.3 KeyGen by movzx legal?

    A: No. It is a tool that generates serial numbers for software you do not hold a valid license for, which is considered piracy and violates EasyWorship's terms of service. We do not condone or encourage using it for any purpose; this information is provided for educational purposes only. Use it at your own risk.

    Q: Is EasyWorship Version 2009 Build 1.3 KeyGen by movzx safe?

    A: No. It may contain viruses, malware, spyware, adware, or other harmful components that can damage your computer or compromise your privacy. We do not guarantee that it is free of malicious code or content. This information is provided for educational purposes only. Use it at your own risk.

    Q: Is EasyWorship Version 2009 Build 1.3 KeyGen by movzx compatible with Windows 10?

    A: Yes.

    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Free Download of Ea Games Generic Multi Keygen No Surveys No Passwords No Hassle.md b/spaces/tialenAdioni/chat-gpt-api/logs/Free Download of Ea Games Generic Multi Keygen No Surveys No Passwords No Hassle.md deleted file mode 100644 index a0deb4988d9701cd4854d2a34c2272059f3d8b84..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Free Download of Ea Games Generic Multi Keygen No Surveys No Passwords No Hassle.md +++ /dev/null @@ -1,101 +0,0 @@ -
    - Benefits of using the keygen for EA games
    - H2: How to use Ea Games Generic Multi Keygen
      - Step by step guide on how to download, install and run the keygen
      - How to generate cd-keys for any EA game (past and future)
    - H2: Tips and tricks for using Ea Games Generic Multi Keygen
      - How to find the secret Easter egg feature
      - How to avoid detection and ban by EA servers
      - How to backup and restore your cd-keys
    - H2: Frequently asked questions about Ea Games Generic Multi Keygen
      - A list of 5 common questions and answers about the keygen
    - H2: Conclusion
      - A summary of the main points and a call to action

    # Article with HTML formatting

    What is Ea Games Generic Multi Keygen and why you need it

    If you are a fan of EA games, you probably know how expensive and annoying it can be to buy and activate their games. You have to pay a lot of money for each game, enter a long and complicated cd-key, and deal with online activation and DRM issues. Wouldn't it be great if you could get any EA game for free, without any hassle?

    Ea Games Generic Multi Keygen Free Download

    Download Zip: https://urlcod.com/2uK8na

    Well, that's exactly what Ea Games Generic Multi Keygen can do for you. This is a powerful keygen that can generate product keys for exactly 214 EA games, including The Sims 3 Late Night, Need For Speed Hot Pursuit, Battlefield 4, and many more. You can use these keys to install and play any EA game you want, without spending a dime.

    But that's not all. Ea Games Generic Multi Keygen has some amazing features that make it stand out from other keygens. For example:

    • It can generate cd-keys for ANY EA game, both past and future. You don't have to wait for new releases or updates. Just insert your game disc or mount your iso image, and the keygen will scan it and add it to the list.
    • It has a secret Easter egg feature that only the most curious and adventurous users can find. Your mission, if you accept it, is to discover what and where it is.
    • It is clean, safe, and easy to use. It has been tested and verified by many users and antivirus programs. It does not contain any malware, spyware, or viruses. It has a simple and user-friendly interface that anyone can understand.

    As you can see, Ea Games Generic Multi Keygen is a must-have tool for any EA gamer. It will save you money, time, and trouble. It will let you enjoy your favorite games without any limitations. It will make you happy.

    How to use Ea Games Generic Multi Keygen

    Now that you know what Ea Games Generic Multi Keygen is and why you need it, you might be wondering how to use it. Don't worry, it's very easy. Just follow these steps:

    1. Download Ea Games Generic Multi Keygen from one of the links below. Make sure you choose a reliable source that does not have any fake or harmful files.
    2. Extract the zip file to a folder on your computer. You will see two files: EA Games Generic Multi Keygen 214 - By FFF.exe and FFF.NFO.
    3. Run EA Games Generic Multi Keygen 214 - By FFF.exe as administrator. You will see a window with a list of EA games and a button at the bottom.
    4. Insert your game disc or mount your iso image in a virtual drive, then click the button at the bottom of the window. The keygen will scan your drive and add your game to the list.
    5. Select your game from the list and click on Generate. The keygen will generate a unique cd-key for your game and display it in a box.
    6. Copy the cd-key and paste it into your game installation or activation window. Enjoy your game!

    Tips and tricks for using Ea Games Generic Multi Keygen

    Ea Games Generic Multi Keygen is a very powerful and useful tool, but it also has some secrets and precautions that you should know. Here are some tips and tricks that will help you get the most out of it:

    • To find the secret Easter egg feature, look carefully at the keygen window. There is something hidden there that will reveal a new option when clicked. Hint: it has something to do with FFF.
    • To avoid detection and a ban by EA servers, do not use the same cd-key for more than one game or account. Also, do not play online with pirated games or use cheats or hacks. If you want to play online safely, buy original games from EA or use VPNs or proxies.
    • To back up and restore your cd-keys, save them in a text file or print them out. You can also use tools like Game Key Revealer or Game CD Key List to extract them from your registry or hard drive.

    Frequently asked questions about Ea Games Generic Multi Keygen

    Here are some of the most common questions and answers about Ea Games Generic Multi Keygen:

    Q: Is Ea Games Generic Multi Keygen legal?
    A: No, it is not legal. It violates EA's terms of service and copyright laws, and it is considered piracy and theft. You should only use it for educational or testing purposes.

    Q: Is Ea Games Generic Multi Keygen safe?

    A: Yes, it is safe. It does not contain any malware, spyware, or viruses and has been scanned and verified by many users and antivirus programs. However, you should always download it from trusted sources and run it at your own risk.

    Q: Does Ea Games Generic Multi Keygen work for all EA games?

    A: Yes, it works for all EA games released so far (214 games). It also works for future EA games that use the same encryption method as previous ones.

    Q: Does Ea Games Generic Multi Keygen need an internet connection?

    A: No, it does not need an internet connection. It works offline without any problems.

    Q: Where can I download Ea Games Generic Multi Keygen?

    A: You can download Ea Games Generic Multi Keygen from various sources on the internet, such as torrent sites, file-sharing sites, forums, and blogs. However, be careful of fake or harmful files that may damage your computer or steal your information.

    Conclusion

    Ea Games Generic Multi Keygen is an amazing tool that can generate product keys for any EA game you want. It has many features that make it superior to other keygens. It is clean, safe, easy to use, and works offline.

    If you are an EA gamer who wants to save money and hassle while enjoying your favorite games without any limitations, you should definitely try Ea Games Generic Multi Keygen today.

    However, remember that using Ea Games Generic Multi Keygen is illegal and unethical. You should only use it for educational or testing purposes. If you like EA games, you should support them by buying their original products from their official website or stores.

    I hope this article was helpful and informative for you. If you have any questions or comments about Ea Games Generic Multi Keygen or this article, feel free to leave them below.

    \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/3 Way Prank Call APK The Best App for Fake Calls and Spoof Calls.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/3 Way Prank Call APK The Best App for Fake Calls and Spoof Calls.md deleted file mode 100644 index 82cfe4ea48a5c97b42ccb2438f80239e68a59899..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/3 Way Prank Call APK The Best App for Fake Calls and Spoof Calls.md +++ /dev/null @@ -1,133 +0,0 @@ -

    How to Prank Your Friends with 3 Way Prank Call Apk

    Do you love pranking your friends and family? Do you enjoy listening to their confused and shocked reactions as they try to figure out who called them? If yes, then you should try 3 Way Prank Call Apk, a prank calling app that can connect two or three people on a call without them knowing who called whom. In this article, we will tell you what 3 Way Prank Call Apk is, why you should try it, how to download and install it, and how to use it. We will also answer some of the frequently asked questions about this app. So, let's get started!

    3 way prank call apk

    Download File: https://bltlly.com/2uOoBD

    What is 3 Way Prank Call Apk?

    A prank calling app that connects two or three people on a call without them knowing who called whom

    3 Way Prank Call Apk is a prank calling app that lets you connect two or three people on a call without them knowing who called whom. You can choose any two or three numbers to connect, then listen to the confused and shocked reactions as the victims try to figure out who called whom. The app is easy to use and provides a fun and entertaining experience for its users.

    How it works and what features it offers

    The app works by using your phone's speakerphone function to dial two or three numbers at the same time. You can then mute your microphone and listen to the conversation between the prank victims, or join in at any time by unmuting your microphone. The app offers features such as:

    • Choosing between two-way and three-way prank calls
    • Choosing between random and custom numbers
    • Choosing between local and international numbers
    • Choosing between a male and a female voice changer
    • Choosing between different sound effects
    • Choosing between different caller ID spoofing options

    Why Should You Try 3 Way Prank Call Apk?

    It's fun and entertaining to listen to the reactions of the prank victims

    One of the main reasons to try 3 Way Prank Call Apk is that it's fun and entertaining to listen to the reactions of the prank victims. You can hear them getting confused, angry, surprised, amused, or even scared as they try to figure out who called them. You can also join the conversation and make it more hilarious by adding jokes, insults, compliments, or questions, and you can record the prank calls and share them with your friends for more laughs.

    It's easy to use and has a simple interface

    Another reason to try 3 Way Prank Call Apk is that it's easy to use and has a simple interface. You don't need any technical skills or knowledge to use this app; you just follow a few simple steps to create a prank call and connect two or three people. The app has a user-friendly and intuitive interface that lets you choose the options and settings you want for your prank call, and you can access the app's help and support section if you have any questions or issues.

    It's free and has no ads or in-app purchases

    The last reason to try 3 Way Prank Call Apk is that it's free and has no ads or in-app purchases. You don't need to pay anything to download or use this app, and you won't see any annoying ads or pop-ups interrupting your prank call experience. You also don't need to buy any extra features or credits to make more prank calls. The app is completely free and unlimited for its users.

    How to Download and Install 3 Way Prank Call Apk?

    The app is available on Google Play Store and App Store

    If you want to download and install 3 Way Prank Call Apk, you can easily find it on Google Play Store and App Store. The app is compatible with Android and iOS devices and has good ratings and reviews from its users. You can also visit the app's official website for more information and updates.

    The steps to download and install the app on your device

    The steps to download and install 3 Way Prank Call Apk on your device are as follows:

    1. Go to Google Play Store or App Store on your device and search for 3 Way Prank Call Apk.
    2. Select the app from the search results and tap on the Install button.
    3. Wait for the app to download and install on your device.
    4. Open the app and grant the necessary permissions for it to work properly.
    5. Enjoy pranking your friends with 3 Way Prank Call Apk!

    How to Use 3 Way Prank Call Apk?

    The steps to create a prank call and connect two or three people

    The steps to create a prank call and connect two or three people with 3 Way Prank Call Apk are as follows:

    1. Open the app and tap on the Create Prank Call button.
    2. Select whether you want to make a two-way or three-way prank call.
    3. Select whether you want to use random or custom numbers for the prank call.
    4. If you choose custom numbers, enter the numbers of the people you want to prank in the fields provided.
    5. Select whether you want to use local or international numbers for the prank call.
    6. Select whether you want to use a male or female voice changer for the prank call.
    7. Select whether you want to use any sound effects for the prank call.
    8. Select whether you want to spoof your caller ID for the prank call.
    9. Tap on the Start Prank Call button and wait for the app to connect the prank victims.
    10. Mute your microphone and listen to the conversation between the prank victims.
    11. Unmute your microphone if you want to join the conversation and make it more hilarious.

    The tips and tricks to make the prank call more hilarious and convincing

    Some of the tips and tricks to make the prank call more hilarious and convincing with 3 Way Prank Call Apk are as follows:

    • Choose numbers that are familiar or relevant to the prank victims, such as their friends, family, co-workers, or service providers.
    • Choose numbers from different locations or countries, such as their hometown, vacation spot, or dream destination.
    • Choose numbers related to their interests, hobbies, or passions, such as their favorite celebrities, sports teams, or brands.
    • Choose a voice changer that matches or contrasts with the prank victims, such as a high-pitched voice for a low-pitched person, or a female voice for a male person.
    • Choose sound effects that add humor or drama to the prank call, such as laughter, applause, sirens, or explosions.
    • Choose a caller ID spoofing option that makes the prank call more believable or surprising, such as showing your own number, a private number, or a random number.
    • Join the conversation at unexpected moments and say something funny, witty, or outrageous that makes the prank victims laugh, gasp, or scream.
    • Record the prank calls and share them with your friends for more laughs.

    Conclusion

    3 Way Prank Call Apk is a prank calling app that can connect two or three people on a call without them knowing who called whom. It's a fun and entertaining way to prank your friends and family and listen to their hilarious reactions. The app is easy to use, has a simple interface, and offers various features and options to make your prank calls more convincing and amusing. It is also free and has no ads or in-app purchases. You can download and install it from Google Play Store or App Store and start pranking your friends today. Have fun and enjoy!

    FAQs

    Is 3 Way Prank Call Apk safe and legal?

    Yes, 3 Way Prank Call Apk is safe and legal to use. The app does not collect or store any personal information from its users or the prank victims, and it does not violate any laws or regulations regarding prank calling. However, you should use the app responsibly and ethically and avoid pranking anyone who might be offended, hurt, or harmed by your prank call.

    How many prank calls can I make with 3 Way Prank Call Apk?

    You can make unlimited prank calls with 3 Way Prank Call Apk. The app does not place any limits or restrictions on the number of prank calls you can make. However, be mindful of your phone's battery life and data usage when using the app.

    Can I record the prank calls with 3 Way Prank Call Apk?

    Yes, you can record the prank calls with 3 Way Prank Call Apk. The app has a built-in recording feature that lets you record prank calls and save them on your device. You can also share the recorded prank calls with your friends via social media, email, or messaging apps.

    What are some of the best prank ideas with 3 Way Prank Call Apk?

    Some of the best prank ideas with 3 Way Prank Call Apk are:

    • Connect two strangers who have the same name and make them think they are talking to themselves.
    • Connect two pizza delivery places and make them think they are ordering from each other.
    • Connect two exes who have bad blood between them and make them think they are getting back together.
    • Connect two teachers who teach the same subject and make them think they are competing for a promotion.
    • Connect two celebrities who have a feud or a crush on each other and make them think they are having a private conversation.

    How can I contact the developer of 3 Way Prank Call Apk?

    If you have any questions, feedback, suggestions, or issues regarding 3 Way Prank Call Apk, you can contact the developer of the app by emailing them at 3wayprankcallapk@gmail.com. You can also visit their website at https://www.3wayprankcallapk.com for more information and updates.

    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cuadernos Historia 16 Coleccion Completa Pdf 14.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cuadernos Historia 16 Coleccion Completa Pdf 14.md deleted file mode 100644 index 3e7713aa61931395713c70abe299a5acdc25bec3..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Cuadernos Historia 16 Coleccion Completa Pdf 14.md +++ /dev/null @@ -1,14 +0,0 @@ - -

    Cuadernos Historia 16: A Treasure Trove of Historical Knowledge

    Cuadernos Historia 16 was a series of monographs published by the Spanish magazine Historia 16, founded in 1976 by journalist Juan Tomás de Salas. The magazine was dedicated to the research and dissemination of historical topics, especially those related to Spanish history and culture. The Cuadernos, written by renowned specialists, covered a wide range of themes and periods, from ancient civilizations to contemporary events.

    The first edition of Cuadernos Historia 16 consisted of 300 issues, published between 1985 and 1992. Each issue had about 50 pages and included illustrations, maps, chronologies, bibliographies and indexes. Some of the topics explored in the Cuadernos were: the Second Republic, the Palestine of Jesus, the Caliphate of Cordoba, the life in the Golden Age, pharaohs and pyramids, the Castile of El Cid, the Industrial Revolution, Philip II, medicine in antiquity, the Catholic Monarchs, the medieval woman, the French Revolution, the Egypt of Ramses II, the Arab invasion of Spain, the Mayas, Charles V, the War of Independence, etc.

    Cuadernos Historia 16 Coleccion Completa Pdf 14

    DOWNLOAD »»» https://urlcod.com/2uHycz

    The Cuadernos Historia 16 were a valuable source of information and education for many readers interested in history. They offered a rigorous and accessible approach to historical knowledge, combining academic rigor with a journalistic style. They also contributed to the dissemination of Spanish culture and identity at a time of political and social change.

    Today, thanks to the Internet, it is possible to access all the Cuadernos Historia 16 in PDF format for free. This is a great opportunity to enjoy and learn from this remarkable collection of historical works. You can find them at https://saladehistoria.com/cuadernos-de-historia-16/ [^1^] or at https://issuu.com/historiayarqueologia/stacks/e0188e137b1b407ba86df71450e53b93 [^2^].

    The Cuadernos Historia 16 have received positive reviews from readers and critics alike. They have been praised for their quality, diversity, accuracy and readability. They have also been recognized as a valuable contribution to the popularization of history and the promotion of historical awareness and curiosity. Some of the reviews can be found at https://www.goodreads.com/series/313746-cuadernos-de-historia-16 [^1^].

    The Cuadernos Historia 16 are not only a source of historical knowledge but also a reflection of the historical context in which they were produced. They reflect the interests, concerns, and debates of Spanish society in the last decades of the 20th century, marked by the transition to democracy, integration into Europe, the rise of nationalism and regionalism, and the challenges of globalization and multiculturalism. They also show the evolution of historiography and its methods, sources, and perspectives.

    The Cuadernos Historia 16 are a treasure trove of historical knowledge that deserves to be rediscovered and enjoyed by new generations of readers. They offer a rich and varied panorama of human history, from its origins to its present. They invite us to learn from the past, to understand the present and to imagine the future.

    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/msgpack/fallback.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/msgpack/fallback.py deleted file mode 100644 index f560c7b55099976eb29781ed47fdbf92db3c10f8..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/msgpack/fallback.py +++ /dev/null @@ -1,1010 +0,0 @@ -"""Fallback pure Python implementation of msgpack""" -from datetime import datetime as _DateTime -import sys -import struct - - -PY2 = sys.version_info[0] == 2 -if PY2: - int_types = (int, long) - - def dict_iteritems(d): - return d.iteritems() - -else: - int_types = int - unicode = str - xrange = range - - def dict_iteritems(d): - return d.items() - - -if sys.version_info < (3, 5): - # Ugly hack... - RecursionError = RuntimeError - - def _is_recursionerror(e): - return ( - len(e.args) == 1 - and isinstance(e.args[0], str) - and e.args[0].startswith("maximum recursion depth exceeded") - ) - -else: - - def _is_recursionerror(e): - return True - - -if hasattr(sys, "pypy_version_info"): - # StringIO is slow on PyPy, StringIO is faster. However: PyPy's own - # StringBuilder is fastest. - from __pypy__ import newlist_hint - - try: - from __pypy__.builders import BytesBuilder as StringBuilder - except ImportError: - from __pypy__.builders import StringBuilder - USING_STRINGBUILDER = True - - class StringIO(object): - def __init__(self, s=b""): - if s: - self.builder = StringBuilder(len(s)) - self.builder.append(s) - else: - self.builder = StringBuilder() - - def write(self, s): - if isinstance(s, memoryview): - s = s.tobytes() - elif isinstance(s, bytearray): - s = bytes(s) - self.builder.append(s) - - def getvalue(self): - return self.builder.build() - -else: - USING_STRINGBUILDER = False - from io import BytesIO as StringIO - - newlist_hint = lambda size: [] - - -from .exceptions import BufferFull, OutOfData, ExtraData, FormatError, StackError - -from .ext import ExtType, Timestamp - - -EX_SKIP = 0 -EX_CONSTRUCT = 1 -EX_READ_ARRAY_HEADER = 2 -EX_READ_MAP_HEADER = 3 - -TYPE_IMMEDIATE = 0 -TYPE_ARRAY = 1 -TYPE_MAP = 2 -TYPE_RAW = 3 -TYPE_BIN = 4 -TYPE_EXT = 5 - -DEFAULT_RECURSE_LIMIT = 511 - - -def _check_type_strict(obj, t, type=type, tuple=tuple): - if type(t) is tuple: - return type(obj) in t - else: - return type(obj) is t - - -def _get_data_from_buffer(obj): - view = memoryview(obj) - if view.itemsize != 1: - raise ValueError("cannot unpack from multi-byte object") - return view - - -def unpackb(packed, **kwargs): - """ - Unpack an object from `packed`. - - Raises ``ExtraData`` when *packed* contains extra bytes. - Raises ``ValueError`` when *packed* is incomplete. - Raises ``FormatError`` when *packed* is not valid msgpack. - Raises ``StackError`` when *packed* contains too nested. - Other exceptions can be raised during unpacking. - - See :class:`Unpacker` for options. 
- """ - unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs) - unpacker.feed(packed) - try: - ret = unpacker._unpack() - except OutOfData: - raise ValueError("Unpack failed: incomplete input") - except RecursionError as e: - if _is_recursionerror(e): - raise StackError - raise - if unpacker._got_extradata(): - raise ExtraData(ret, unpacker._get_extradata()) - return ret - - -if sys.version_info < (2, 7, 6): - - def _unpack_from(f, b, o=0): - """Explicit type cast for legacy struct.unpack_from""" - return struct.unpack_from(f, bytes(b), o) - -else: - _unpack_from = struct.unpack_from - -_NO_FORMAT_USED = "" -_MSGPACK_HEADERS = { - 0xC4: (1, _NO_FORMAT_USED, TYPE_BIN), - 0xC5: (2, ">H", TYPE_BIN), - 0xC6: (4, ">I", TYPE_BIN), - 0xC7: (2, "Bb", TYPE_EXT), - 0xC8: (3, ">Hb", TYPE_EXT), - 0xC9: (5, ">Ib", TYPE_EXT), - 0xCA: (4, ">f"), - 0xCB: (8, ">d"), - 0xCC: (1, _NO_FORMAT_USED), - 0xCD: (2, ">H"), - 0xCE: (4, ">I"), - 0xCF: (8, ">Q"), - 0xD0: (1, "b"), - 0xD1: (2, ">h"), - 0xD2: (4, ">i"), - 0xD3: (8, ">q"), - 0xD4: (1, "b1s", TYPE_EXT), - 0xD5: (2, "b2s", TYPE_EXT), - 0xD6: (4, "b4s", TYPE_EXT), - 0xD7: (8, "b8s", TYPE_EXT), - 0xD8: (16, "b16s", TYPE_EXT), - 0xD9: (1, _NO_FORMAT_USED, TYPE_RAW), - 0xDA: (2, ">H", TYPE_RAW), - 0xDB: (4, ">I", TYPE_RAW), - 0xDC: (2, ">H", TYPE_ARRAY), - 0xDD: (4, ">I", TYPE_ARRAY), - 0xDE: (2, ">H", TYPE_MAP), - 0xDF: (4, ">I", TYPE_MAP), -} - - -class Unpacker(object): - """Streaming unpacker. - - Arguments: - - :param file_like: - File-like object having `.read(n)` method. - If specified, unpacker reads serialized data from it and :meth:`feed()` is not usable. - - :param int read_size: - Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`) - - :param bool use_list: - If true, unpack msgpack array to Python list. - Otherwise, unpack to Python tuple. (default: True) - - :param bool raw: - If true, unpack msgpack raw to Python bytes. - Otherwise, unpack to Python str by decoding with UTF-8 encoding (default). - - :param int timestamp: - Control how timestamp type is unpacked: - - 0 - Timestamp - 1 - float (Seconds from the EPOCH) - 2 - int (Nanoseconds from the EPOCH) - 3 - datetime.datetime (UTC). Python 2 is not supported. - - :param bool strict_map_key: - If true (default), only str or bytes are accepted for map (dict) keys. - - :param callable object_hook: - When specified, it should be callable. - Unpacker calls it with a dict argument after unpacking msgpack map. - (See also simplejson) - - :param callable object_pairs_hook: - When specified, it should be callable. - Unpacker calls it with a list of key-value pairs after unpacking msgpack map. - (See also simplejson) - - :param str unicode_errors: - The error handler for decoding unicode. (default: 'strict') - This option should be used only when you have msgpack data which - contains invalid UTF-8 string. - - :param int max_buffer_size: - Limits size of data waiting unpacked. 0 means 2**32-1. - The default value is 100*1024*1024 (100MiB). - Raises `BufferFull` exception when it is insufficient. - You should set this parameter when unpacking data from untrusted source. - - :param int max_str_len: - Deprecated, use *max_buffer_size* instead. - Limits max length of str. (default: max_buffer_size) - - :param int max_bin_len: - Deprecated, use *max_buffer_size* instead. - Limits max length of bin. (default: max_buffer_size) - - :param int max_array_len: - Limits max length of array. - (default: max_buffer_size) - - :param int max_map_len: - Limits max length of map. 
- (default: max_buffer_size//2) - - :param int max_ext_len: - Deprecated, use *max_buffer_size* instead. - Limits max size of ext type. (default: max_buffer_size) - - Example of streaming deserialize from file-like object:: - - unpacker = Unpacker(file_like) - for o in unpacker: - process(o) - - Example of streaming deserialize from socket:: - - unpacker = Unpacker() - while True: - buf = sock.recv(1024**2) - if not buf: - break - unpacker.feed(buf) - for o in unpacker: - process(o) - - Raises ``ExtraData`` when *packed* contains extra bytes. - Raises ``OutOfData`` when *packed* is incomplete. - Raises ``FormatError`` when *packed* is not valid msgpack. - Raises ``StackError`` when *packed* contains too nested. - Other exceptions can be raised during unpacking. - """ - - def __init__( - self, - file_like=None, - read_size=0, - use_list=True, - raw=False, - timestamp=0, - strict_map_key=True, - object_hook=None, - object_pairs_hook=None, - list_hook=None, - unicode_errors=None, - max_buffer_size=100 * 1024 * 1024, - ext_hook=ExtType, - max_str_len=-1, - max_bin_len=-1, - max_array_len=-1, - max_map_len=-1, - max_ext_len=-1, - ): - if unicode_errors is None: - unicode_errors = "strict" - - if file_like is None: - self._feeding = True - else: - if not callable(file_like.read): - raise TypeError("`file_like.read` must be callable") - self.file_like = file_like - self._feeding = False - - #: array of bytes fed. - self._buffer = bytearray() - #: Which position we currently reads - self._buff_i = 0 - - # When Unpacker is used as an iterable, between the calls to next(), - # the buffer is not "consumed" completely, for efficiency sake. - # Instead, it is done sloppily. To make sure we raise BufferFull at - # the correct moments, we have to keep track of how sloppy we were. - # Furthermore, when the buffer is incomplete (that is: in the case - # we raise an OutOfData) we need to rollback the buffer to the correct - # state, which _buf_checkpoint records. 
- self._buf_checkpoint = 0 - - if not max_buffer_size: - max_buffer_size = 2**31 - 1 - if max_str_len == -1: - max_str_len = max_buffer_size - if max_bin_len == -1: - max_bin_len = max_buffer_size - if max_array_len == -1: - max_array_len = max_buffer_size - if max_map_len == -1: - max_map_len = max_buffer_size // 2 - if max_ext_len == -1: - max_ext_len = max_buffer_size - - self._max_buffer_size = max_buffer_size - if read_size > self._max_buffer_size: - raise ValueError("read_size must be smaller than max_buffer_size") - self._read_size = read_size or min(self._max_buffer_size, 16 * 1024) - self._raw = bool(raw) - self._strict_map_key = bool(strict_map_key) - self._unicode_errors = unicode_errors - self._use_list = use_list - if not (0 <= timestamp <= 3): - raise ValueError("timestamp must be 0..3") - self._timestamp = timestamp - self._list_hook = list_hook - self._object_hook = object_hook - self._object_pairs_hook = object_pairs_hook - self._ext_hook = ext_hook - self._max_str_len = max_str_len - self._max_bin_len = max_bin_len - self._max_array_len = max_array_len - self._max_map_len = max_map_len - self._max_ext_len = max_ext_len - self._stream_offset = 0 - - if list_hook is not None and not callable(list_hook): - raise TypeError("`list_hook` is not callable") - if object_hook is not None and not callable(object_hook): - raise TypeError("`object_hook` is not callable") - if object_pairs_hook is not None and not callable(object_pairs_hook): - raise TypeError("`object_pairs_hook` is not callable") - if object_hook is not None and object_pairs_hook is not None: - raise TypeError( - "object_pairs_hook and object_hook are mutually " "exclusive" - ) - if not callable(ext_hook): - raise TypeError("`ext_hook` is not callable") - - def feed(self, next_bytes): - assert self._feeding - view = _get_data_from_buffer(next_bytes) - if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size: - raise BufferFull - - # Strip buffer before checkpoint before reading file. - if self._buf_checkpoint > 0: - del self._buffer[: self._buf_checkpoint] - self._buff_i -= self._buf_checkpoint - self._buf_checkpoint = 0 - - # Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython - self._buffer.extend(view) - - def _consume(self): - """Gets rid of the used parts of the buffer.""" - self._stream_offset += self._buff_i - self._buf_checkpoint - self._buf_checkpoint = self._buff_i - - def _got_extradata(self): - return self._buff_i < len(self._buffer) - - def _get_extradata(self): - return self._buffer[self._buff_i :] - - def read_bytes(self, n): - ret = self._read(n, raise_outofdata=False) - self._consume() - return ret - - def _read(self, n, raise_outofdata=True): - # (int) -> bytearray - self._reserve(n, raise_outofdata=raise_outofdata) - i = self._buff_i - ret = self._buffer[i : i + n] - self._buff_i = i + len(ret) - return ret - - def _reserve(self, n, raise_outofdata=True): - remain_bytes = len(self._buffer) - self._buff_i - n - - # Fast path: buffer has n bytes already - if remain_bytes >= 0: - return - - if self._feeding: - self._buff_i = self._buf_checkpoint - raise OutOfData - - # Strip buffer before checkpoint before reading file. 
- if self._buf_checkpoint > 0: - del self._buffer[: self._buf_checkpoint] - self._buff_i -= self._buf_checkpoint - self._buf_checkpoint = 0 - - # Read from file - remain_bytes = -remain_bytes - if remain_bytes + len(self._buffer) > self._max_buffer_size: - raise BufferFull - while remain_bytes > 0: - to_read_bytes = max(self._read_size, remain_bytes) - read_data = self.file_like.read(to_read_bytes) - if not read_data: - break - assert isinstance(read_data, bytes) - self._buffer += read_data - remain_bytes -= len(read_data) - - if len(self._buffer) < n + self._buff_i and raise_outofdata: - self._buff_i = 0 # rollback - raise OutOfData - - def _read_header(self): - typ = TYPE_IMMEDIATE - n = 0 - obj = None - self._reserve(1) - b = self._buffer[self._buff_i] - self._buff_i += 1 - if b & 0b10000000 == 0: - obj = b - elif b & 0b11100000 == 0b11100000: - obj = -1 - (b ^ 0xFF) - elif b & 0b11100000 == 0b10100000: - n = b & 0b00011111 - typ = TYPE_RAW - if n > self._max_str_len: - raise ValueError("%s exceeds max_str_len(%s)" % (n, self._max_str_len)) - obj = self._read(n) - elif b & 0b11110000 == 0b10010000: - n = b & 0b00001111 - typ = TYPE_ARRAY - if n > self._max_array_len: - raise ValueError( - "%s exceeds max_array_len(%s)" % (n, self._max_array_len) - ) - elif b & 0b11110000 == 0b10000000: - n = b & 0b00001111 - typ = TYPE_MAP - if n > self._max_map_len: - raise ValueError("%s exceeds max_map_len(%s)" % (n, self._max_map_len)) - elif b == 0xC0: - obj = None - elif b == 0xC2: - obj = False - elif b == 0xC3: - obj = True - elif 0xC4 <= b <= 0xC6: - size, fmt, typ = _MSGPACK_HEADERS[b] - self._reserve(size) - if len(fmt) > 0: - n = _unpack_from(fmt, self._buffer, self._buff_i)[0] - else: - n = self._buffer[self._buff_i] - self._buff_i += size - if n > self._max_bin_len: - raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len)) - obj = self._read(n) - elif 0xC7 <= b <= 0xC9: - size, fmt, typ = _MSGPACK_HEADERS[b] - self._reserve(size) - L, n = _unpack_from(fmt, self._buffer, self._buff_i) - self._buff_i += size - if L > self._max_ext_len: - raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len)) - obj = self._read(L) - elif 0xCA <= b <= 0xD3: - size, fmt = _MSGPACK_HEADERS[b] - self._reserve(size) - if len(fmt) > 0: - obj = _unpack_from(fmt, self._buffer, self._buff_i)[0] - else: - obj = self._buffer[self._buff_i] - self._buff_i += size - elif 0xD4 <= b <= 0xD8: - size, fmt, typ = _MSGPACK_HEADERS[b] - if self._max_ext_len < size: - raise ValueError( - "%s exceeds max_ext_len(%s)" % (size, self._max_ext_len) - ) - self._reserve(size + 1) - n, obj = _unpack_from(fmt, self._buffer, self._buff_i) - self._buff_i += size + 1 - elif 0xD9 <= b <= 0xDB: - size, fmt, typ = _MSGPACK_HEADERS[b] - self._reserve(size) - if len(fmt) > 0: - (n,) = _unpack_from(fmt, self._buffer, self._buff_i) - else: - n = self._buffer[self._buff_i] - self._buff_i += size - if n > self._max_str_len: - raise ValueError("%s exceeds max_str_len(%s)" % (n, self._max_str_len)) - obj = self._read(n) - elif 0xDC <= b <= 0xDD: - size, fmt, typ = _MSGPACK_HEADERS[b] - self._reserve(size) - (n,) = _unpack_from(fmt, self._buffer, self._buff_i) - self._buff_i += size - if n > self._max_array_len: - raise ValueError( - "%s exceeds max_array_len(%s)" % (n, self._max_array_len) - ) - elif 0xDE <= b <= 0xDF: - size, fmt, typ = _MSGPACK_HEADERS[b] - self._reserve(size) - (n,) = _unpack_from(fmt, self._buffer, self._buff_i) - self._buff_i += size - if n > self._max_map_len: - raise ValueError("%s exceeds 
max_map_len(%s)" % (n, self._max_map_len)) - else: - raise FormatError("Unknown header: 0x%x" % b) - return typ, n, obj - - def _unpack(self, execute=EX_CONSTRUCT): - typ, n, obj = self._read_header() - - if execute == EX_READ_ARRAY_HEADER: - if typ != TYPE_ARRAY: - raise ValueError("Expected array") - return n - if execute == EX_READ_MAP_HEADER: - if typ != TYPE_MAP: - raise ValueError("Expected map") - return n - # TODO should we eliminate the recursion? - if typ == TYPE_ARRAY: - if execute == EX_SKIP: - for i in xrange(n): - # TODO check whether we need to call `list_hook` - self._unpack(EX_SKIP) - return - ret = newlist_hint(n) - for i in xrange(n): - ret.append(self._unpack(EX_CONSTRUCT)) - if self._list_hook is not None: - ret = self._list_hook(ret) - # TODO is the interaction between `list_hook` and `use_list` ok? - return ret if self._use_list else tuple(ret) - if typ == TYPE_MAP: - if execute == EX_SKIP: - for i in xrange(n): - # TODO check whether we need to call hooks - self._unpack(EX_SKIP) - self._unpack(EX_SKIP) - return - if self._object_pairs_hook is not None: - ret = self._object_pairs_hook( - (self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT)) - for _ in xrange(n) - ) - else: - ret = {} - for _ in xrange(n): - key = self._unpack(EX_CONSTRUCT) - if self._strict_map_key and type(key) not in (unicode, bytes): - raise ValueError( - "%s is not allowed for map key" % str(type(key)) - ) - if not PY2 and type(key) is str: - key = sys.intern(key) - ret[key] = self._unpack(EX_CONSTRUCT) - if self._object_hook is not None: - ret = self._object_hook(ret) - return ret - if execute == EX_SKIP: - return - if typ == TYPE_RAW: - if self._raw: - obj = bytes(obj) - else: - obj = obj.decode("utf_8", self._unicode_errors) - return obj - if typ == TYPE_BIN: - return bytes(obj) - if typ == TYPE_EXT: - if n == -1: # timestamp - ts = Timestamp.from_bytes(bytes(obj)) - if self._timestamp == 1: - return ts.to_unix() - elif self._timestamp == 2: - return ts.to_unix_nano() - elif self._timestamp == 3: - return ts.to_datetime() - else: - return ts - else: - return self._ext_hook(n, bytes(obj)) - assert typ == TYPE_IMMEDIATE - return obj - - def __iter__(self): - return self - - def __next__(self): - try: - ret = self._unpack(EX_CONSTRUCT) - self._consume() - return ret - except OutOfData: - self._consume() - raise StopIteration - except RecursionError: - raise StackError - - next = __next__ - - def skip(self): - self._unpack(EX_SKIP) - self._consume() - - def unpack(self): - try: - ret = self._unpack(EX_CONSTRUCT) - except RecursionError: - raise StackError - self._consume() - return ret - - def read_array_header(self): - ret = self._unpack(EX_READ_ARRAY_HEADER) - self._consume() - return ret - - def read_map_header(self): - ret = self._unpack(EX_READ_MAP_HEADER) - self._consume() - return ret - - def tell(self): - return self._stream_offset - - -class Packer(object): - """ - MessagePack Packer - - Usage:: - - packer = Packer() - astream.write(packer.pack(a)) - astream.write(packer.pack(b)) - - Packer's constructor has some keyword arguments: - - :param callable default: - Convert user type to builtin type that Packer supports. - See also simplejson's document. - - :param bool use_single_float: - Use single precision float type for float. (default: False) - - :param bool autoreset: - Reset buffer after each pack and return its content as `bytes`. (default: True). - If set this to false, use `bytes()` to get content and `.reset()` to clear buffer. 
- - :param bool use_bin_type: - Use bin type introduced in msgpack spec 2.0 for bytes. - It also enables str8 type for unicode. (default: True) - - :param bool strict_types: - If set to true, types will be checked to be exact. Derived classes - from serializable types will not be serialized and will be - treated as unsupported type and forwarded to default. - Additionally tuples will not be serialized as lists. - This is useful when trying to implement accurate serialization - for python types. - - :param bool datetime: - If set to true, datetime with tzinfo is packed into Timestamp type. - Note that the tzinfo is stripped in the timestamp. - You can get UTC datetime with `timestamp=3` option of the Unpacker. - (Python 2 is not supported). - - :param str unicode_errors: - The error handler for encoding unicode. (default: 'strict') - DO NOT USE THIS!! This option is kept for very specific usage. - - Example of streaming deserialize from file-like object:: - - unpacker = Unpacker(file_like) - for o in unpacker: - process(o) - - Example of streaming deserialize from socket:: - - unpacker = Unpacker() - while True: - buf = sock.recv(1024**2) - if not buf: - break - unpacker.feed(buf) - for o in unpacker: - process(o) - - Raises ``ExtraData`` when *packed* contains extra bytes. - Raises ``OutOfData`` when *packed* is incomplete. - Raises ``FormatError`` when *packed* is not valid msgpack. - Raises ``StackError`` when *packed* contains too nested. - Other exceptions can be raised during unpacking. - """ - - def __init__( - self, - default=None, - use_single_float=False, - autoreset=True, - use_bin_type=True, - strict_types=False, - datetime=False, - unicode_errors=None, - ): - self._strict_types = strict_types - self._use_float = use_single_float - self._autoreset = autoreset - self._use_bin_type = use_bin_type - self._buffer = StringIO() - if PY2 and datetime: - raise ValueError("datetime is not supported in Python 2") - self._datetime = bool(datetime) - self._unicode_errors = unicode_errors or "strict" - if default is not None: - if not callable(default): - raise TypeError("default must be callable") - self._default = default - - def _pack( - self, - obj, - nest_limit=DEFAULT_RECURSE_LIMIT, - check=isinstance, - check_type_strict=_check_type_strict, - ): - default_used = False - if self._strict_types: - check = check_type_strict - list_types = list - else: - list_types = (list, tuple) - while True: - if nest_limit < 0: - raise ValueError("recursion limit exceeded") - if obj is None: - return self._buffer.write(b"\xc0") - if check(obj, bool): - if obj: - return self._buffer.write(b"\xc3") - return self._buffer.write(b"\xc2") - if check(obj, int_types): - if 0 <= obj < 0x80: - return self._buffer.write(struct.pack("B", obj)) - if -0x20 <= obj < 0: - return self._buffer.write(struct.pack("b", obj)) - if 0x80 <= obj <= 0xFF: - return self._buffer.write(struct.pack("BB", 0xCC, obj)) - if -0x80 <= obj < 0: - return self._buffer.write(struct.pack(">Bb", 0xD0, obj)) - if 0xFF < obj <= 0xFFFF: - return self._buffer.write(struct.pack(">BH", 0xCD, obj)) - if -0x8000 <= obj < -0x80: - return self._buffer.write(struct.pack(">Bh", 0xD1, obj)) - if 0xFFFF < obj <= 0xFFFFFFFF: - return self._buffer.write(struct.pack(">BI", 0xCE, obj)) - if -0x80000000 <= obj < -0x8000: - return self._buffer.write(struct.pack(">Bi", 0xD2, obj)) - if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF: - return self._buffer.write(struct.pack(">BQ", 0xCF, obj)) - if -0x8000000000000000 <= obj < -0x80000000: - return 
self._buffer.write(struct.pack(">Bq", 0xD3, obj)) - if not default_used and self._default is not None: - obj = self._default(obj) - default_used = True - continue - raise OverflowError("Integer value out of range") - if check(obj, (bytes, bytearray)): - n = len(obj) - if n >= 2**32: - raise ValueError("%s is too large" % type(obj).__name__) - self._pack_bin_header(n) - return self._buffer.write(obj) - if check(obj, unicode): - obj = obj.encode("utf-8", self._unicode_errors) - n = len(obj) - if n >= 2**32: - raise ValueError("String is too large") - self._pack_raw_header(n) - return self._buffer.write(obj) - if check(obj, memoryview): - n = len(obj) * obj.itemsize - if n >= 2**32: - raise ValueError("Memoryview is too large") - self._pack_bin_header(n) - return self._buffer.write(obj) - if check(obj, float): - if self._use_float: - return self._buffer.write(struct.pack(">Bf", 0xCA, obj)) - return self._buffer.write(struct.pack(">Bd", 0xCB, obj)) - if check(obj, (ExtType, Timestamp)): - if check(obj, Timestamp): - code = -1 - data = obj.to_bytes() - else: - code = obj.code - data = obj.data - assert isinstance(code, int) - assert isinstance(data, bytes) - L = len(data) - if L == 1: - self._buffer.write(b"\xd4") - elif L == 2: - self._buffer.write(b"\xd5") - elif L == 4: - self._buffer.write(b"\xd6") - elif L == 8: - self._buffer.write(b"\xd7") - elif L == 16: - self._buffer.write(b"\xd8") - elif L <= 0xFF: - self._buffer.write(struct.pack(">BB", 0xC7, L)) - elif L <= 0xFFFF: - self._buffer.write(struct.pack(">BH", 0xC8, L)) - else: - self._buffer.write(struct.pack(">BI", 0xC9, L)) - self._buffer.write(struct.pack("b", code)) - self._buffer.write(data) - return - if check(obj, list_types): - n = len(obj) - self._pack_array_header(n) - for i in xrange(n): - self._pack(obj[i], nest_limit - 1) - return - if check(obj, dict): - return self._pack_map_pairs( - len(obj), dict_iteritems(obj), nest_limit - 1 - ) - - if self._datetime and check(obj, _DateTime) and obj.tzinfo is not None: - obj = Timestamp.from_datetime(obj) - default_used = 1 - continue - - if not default_used and self._default is not None: - obj = self._default(obj) - default_used = 1 - continue - - if self._datetime and check(obj, _DateTime): - raise ValueError("Cannot serialize %r where tzinfo=None" % (obj,)) - - raise TypeError("Cannot serialize %r" % (obj,)) - - def pack(self, obj): - try: - self._pack(obj) - except: - self._buffer = StringIO() # force reset - raise - if self._autoreset: - ret = self._buffer.getvalue() - self._buffer = StringIO() - return ret - - def pack_map_pairs(self, pairs): - self._pack_map_pairs(len(pairs), pairs) - if self._autoreset: - ret = self._buffer.getvalue() - self._buffer = StringIO() - return ret - - def pack_array_header(self, n): - if n >= 2**32: - raise ValueError - self._pack_array_header(n) - if self._autoreset: - ret = self._buffer.getvalue() - self._buffer = StringIO() - return ret - - def pack_map_header(self, n): - if n >= 2**32: - raise ValueError - self._pack_map_header(n) - if self._autoreset: - ret = self._buffer.getvalue() - self._buffer = StringIO() - return ret - - def pack_ext_type(self, typecode, data): - if not isinstance(typecode, int): - raise TypeError("typecode must have int type.") - if not 0 <= typecode <= 127: - raise ValueError("typecode should be 0-127") - if not isinstance(data, bytes): - raise TypeError("data must have bytes type") - L = len(data) - if L > 0xFFFFFFFF: - raise ValueError("Too large data") - if L == 1: - self._buffer.write(b"\xd4") - elif L == 2: - 
self._buffer.write(b"\xd5") - elif L == 4: - self._buffer.write(b"\xd6") - elif L == 8: - self._buffer.write(b"\xd7") - elif L == 16: - self._buffer.write(b"\xd8") - elif L <= 0xFF: - self._buffer.write(b"\xc7" + struct.pack("B", L)) - elif L <= 0xFFFF: - self._buffer.write(b"\xc8" + struct.pack(">H", L)) - else: - self._buffer.write(b"\xc9" + struct.pack(">I", L)) - self._buffer.write(struct.pack("B", typecode)) - self._buffer.write(data) - - def _pack_array_header(self, n): - if n <= 0x0F: - return self._buffer.write(struct.pack("B", 0x90 + n)) - if n <= 0xFFFF: - return self._buffer.write(struct.pack(">BH", 0xDC, n)) - if n <= 0xFFFFFFFF: - return self._buffer.write(struct.pack(">BI", 0xDD, n)) - raise ValueError("Array is too large") - - def _pack_map_header(self, n): - if n <= 0x0F: - return self._buffer.write(struct.pack("B", 0x80 + n)) - if n <= 0xFFFF: - return self._buffer.write(struct.pack(">BH", 0xDE, n)) - if n <= 0xFFFFFFFF: - return self._buffer.write(struct.pack(">BI", 0xDF, n)) - raise ValueError("Dict is too large") - - def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT): - self._pack_map_header(n) - for (k, v) in pairs: - self._pack(k, nest_limit - 1) - self._pack(v, nest_limit - 1) - - def _pack_raw_header(self, n): - if n <= 0x1F: - self._buffer.write(struct.pack("B", 0xA0 + n)) - elif self._use_bin_type and n <= 0xFF: - self._buffer.write(struct.pack(">BB", 0xD9, n)) - elif n <= 0xFFFF: - self._buffer.write(struct.pack(">BH", 0xDA, n)) - elif n <= 0xFFFFFFFF: - self._buffer.write(struct.pack(">BI", 0xDB, n)) - else: - raise ValueError("Raw is too large") - - def _pack_bin_header(self, n): - if not self._use_bin_type: - return self._pack_raw_header(n) - elif n <= 0xFF: - return self._buffer.write(struct.pack(">BB", 0xC4, n)) - elif n <= 0xFFFF: - return self._buffer.write(struct.pack(">BH", 0xC5, n)) - elif n <= 0xFFFFFFFF: - return self._buffer.write(struct.pack(">BI", 0xC6, n)) - else: - raise ValueError("Bin is too large") - - def bytes(self): - """Return internal buffer contents as bytes object""" - return self._buffer.getvalue() - - def reset(self): - """Reset internal buffer. - - This method is useful only when autoreset=False. - """ - self._buffer = StringIO() - - def getbuffer(self): - """Return view of internal buffer.""" - if USING_STRINGBUILDER or PY2: - return memoryview(self.bytes()) - else: - return self._buffer.getbuffer() diff --git a/spaces/tmaham/DS-Fusion-Express/background_remover/remove_deeplab.py b/spaces/tmaham/DS-Fusion-Express/background_remover/remove_deeplab.py deleted file mode 100644 index 23bb8efb8194d3456292af76b122fb655c04ebfc..0000000000000000000000000000000000000000 --- a/spaces/tmaham/DS-Fusion-Express/background_remover/remove_deeplab.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch -import numpy as np -import urllib -from PIL import Image -from torchvision import transforms - -def load_model(): - - model = torch.hub.load('pytorch/vision:v0.6.0', 'deeplabv3_resnet101', pretrained=True) - model.eval() - - mean = torch.tensor([0.485, 0.456, 0.406]) - std = torch.tensor([0.229, 0.224, 0.225]) - - preprocess = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(mean=mean, std=std), - ]) - - postprocess = transforms.Compose([ - transforms.Normalize(mean=-mean/std, std=1/std), - transforms.ToPILImage(), - ]) - - if torch.cuda.is_available(): - model.to('cuda') - return model, preprocess - -def remove_background(img, model, preprocess): - input_batch = preprocess(img)[None, ...] 
- - if torch.cuda.is_available(): - input_batch = input_batch.to('cuda') - - with torch.no_grad(): - output = model(input_batch)['out'][0] - output_predictions = torch.nn.functional.softmax(output, dim=0) - output_predictions = (output_predictions > 0.98).float() - - img.putalpha(255) - result_np = np.array(img) - result_np[..., 3] = (1-output_predictions[0].cpu().numpy())*255 - - return Image.fromarray(result_np.astype('uint8')) - -import os -def main(): - model, preprocess = load_model() - # fpath = 'data/parrot_2.png' - path_in = "/localhome/mta122/PycharmProjects/logo_ai/final_nocherry_score/one/DRAGON/G" - - for fpath_file in os.listdir(path_in): - # fpath = 'data/parrot_2.png' - fpath = os.path.join(path_in, fpath_file) - # fpath_out = fpath.split('.')[0] + '_result_rembg.png' - # cmd = f'rembg i {fpath} {fpath_out}' - # print(cmd) - # os.system(cmd) - - img = Image.open(fpath) - if img.size[-1] > 3: - img_np = np.array(img) - img_rbg = img_np[:, : ,:3] - img = Image.fromarray(img_rbg) - result = remove_background(img, model, preprocess) - result.save(fpath.split('.')[0] + '_result_deeplab.png') - print('finished') - - -main() diff --git a/spaces/tomandandy/MusicGen3/audiocraft/__init__.py b/spaces/tomandandy/MusicGen3/audiocraft/__init__.py deleted file mode 100644 index 6b8594f470200ff5c000542ef115375ed69b749c..0000000000000000000000000000000000000000 --- a/spaces/tomandandy/MusicGen3/audiocraft/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . import data, modules, models - -__version__ = '0.0.2a2' diff --git a/spaces/tomdeng/textgenerator/README.md b/spaces/tomdeng/textgenerator/README.md deleted file mode 100644 index e257ef210644eaea8a2f53bbb9aa8c029bf59097..0000000000000000000000000000000000000000 --- a/spaces/tomdeng/textgenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Textgenerator -emoji: 😻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomofi/Hive-OCR/README.md b/spaces/tomofi/Hive-OCR/README.md deleted file mode 100644 index 540df068676e60089c93e59e74eebc48bc0eca99..0000000000000000000000000000000000000000 --- a/spaces/tomofi/Hive-OCR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hive OCR -emoji: 🦀 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py deleted file mode 100644 index 63d5d139e7b56843f5dcc85bda48945d56cfc49e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './mask_rcnn_hrnetv2p_w32_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/anchor_generator.py 
b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/anchor_generator.py deleted file mode 100644 index 388d2608b8138da13d1208b99595fbd1db59d178..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/anchor_generator.py +++ /dev/null @@ -1,727 +0,0 @@ -import mmcv -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import ANCHOR_GENERATORS - - -@ANCHOR_GENERATORS.register_module() -class AnchorGenerator(object): - """Standard anchor generator for 2D anchor-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int] | None): The basic sizes - of anchors in multiple levels. - If None is given, strides will be used as base_sizes. - (If strides are non square, the shortest stride is taken.) - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. If a list of tuple of - float is given, they will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0 in V2.0. - - Examples: - >>> from mmdet.core import AnchorGenerator - >>> self = AnchorGenerator([16], [1.], [1.], [9]) - >>> all_anchors = self.grid_anchors([(2, 2)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]])] - >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18]) - >>> all_anchors = self.grid_anchors([(2, 2), (1, 1)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]]), \ - tensor([[-9., -9., 9., 9.]])] - """ - - def __init__(self, - strides, - ratios, - scales=None, - base_sizes=None, - scale_major=True, - octave_base_scale=None, - scales_per_octave=None, - centers=None, - center_offset=0.): - # check center and center_offset - if center_offset != 0: - assert centers is None, 'center cannot be set when center_offset' \ - f'!=0, {centers} is given.' 
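# A small worked example of the scale/ratio bookkeeping set up in this
# constructor, mirroring the class docstring above (RetinaNet-style settings;
# the numbers are rounded, and this is only a sketch of what gen_base_anchors()
# defined below will produce):
#
#   >>> gen = AnchorGenerator(strides=[8], ratios=[0.5, 1.0, 2.0],
#   ...                       octave_base_scale=4, scales_per_octave=3)
#   >>> gen.scales              # 4 * 2**(i/3) for i in 0, 1, 2
#   tensor([4.0000, 5.0397, 6.3496])
#   >>> gen.base_anchors[0].shape   # 3 scales x 3 ratios = 9 anchors per location
#   torch.Size([9, 4])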
- if not (0 <= center_offset <= 1): - raise ValueError('center_offset should be in range [0, 1], ' - f'{center_offset} is given.') - if centers is not None: - assert len(centers) == len(strides), \ - 'The number of strides should be the same as centers, got ' \ - f'{strides} and {centers}' - - # calculate base sizes of anchors - self.strides = [_pair(stride) for stride in strides] - self.base_sizes = [min(stride) for stride in self.strides - ] if base_sizes is None else base_sizes - assert len(self.base_sizes) == len(self.strides), \ - 'The number of strides should be the same as base sizes, got ' \ - f'{self.strides} and {self.base_sizes}' - - # calculate scales of anchors - assert ((octave_base_scale is not None - and scales_per_octave is not None) ^ (scales is not None)), \ - 'scales and octave_base_scale with scales_per_octave cannot' \ - ' be set at the same time' - if scales is not None: - self.scales = torch.Tensor(scales) - elif octave_base_scale is not None and scales_per_octave is not None: - octave_scales = np.array( - [2**(i / scales_per_octave) for i in range(scales_per_octave)]) - scales = octave_scales * octave_base_scale - self.scales = torch.Tensor(scales) - else: - raise ValueError('Either scales or octave_base_scale with ' - 'scales_per_octave should be set') - - self.octave_base_scale = octave_base_scale - self.scales_per_octave = scales_per_octave - self.ratios = torch.Tensor(ratios) - self.scale_major = scale_major - self.centers = centers - self.center_offset = center_offset - self.base_anchors = self.gen_base_anchors() - - @property - def num_base_anchors(self): - """list[int]: total number of base anchors in a feature grid""" - return [base_anchors.size(0) for base_anchors in self.base_anchors] - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.strides) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors( - base_size, - scales=self.scales, - ratios=self.ratios, - center=center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. 
- """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * w - y_center = self.center_offset * h - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws, - y_center + 0.5 * hs - ] - base_anchors = torch.stack(base_anchors, dim=-1) - - return base_anchors - - def _meshgrid(self, x, y, row_major=True): - """Generate mesh grid of x and y. - - Args: - x (torch.Tensor): Grids of x dimension. - y (torch.Tensor): Grids of y dimension. - row_major (bool, optional): Whether to return y grids first. - Defaults to True. - - Returns: - tuple[torch.Tensor]: The mesh grids of x and y. - """ - # use shape instead of len to keep tracing while exporting to onnx - xx = x.repeat(y.shape[0]) - yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str): Device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - self.base_anchors[i].to(device), - featmap_sizes[i], - self.strides[i], - device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, - base_anchors, - featmap_size, - stride=(16, 16), - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_anchors``. - - Args: - base_anchors (torch.Tensor): The base anchors of a feature grid. - featmap_size (tuple[int]): Size of the feature maps. - stride (tuple[int], optional): Stride of the feature map in order - (w, h). Defaults to (16, 16). - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. 
- """ - # keep as Tensor, so that we can covert to ONNX correctly - feat_h, feat_w = featmap_size - shift_x = torch.arange(0, feat_w, device=device) * stride[0] - shift_y = torch.arange(0, feat_h, device=device) * stride[1] - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - shifts = shifts.type_as(base_anchors) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of anchors in multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels. - pad_shape (tuple): The padded shape of the image. - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of anchors in multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - self.num_base_anchors[i], - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - num_base_anchors, - device='cuda'): - """Generate the valid flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - valid_size (tuple[int]): The valid size of the feature maps. - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - valid = valid[:, None].expand(valid.size(0), - num_base_anchors).contiguous().view(-1) - return valid - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}octave_base_scale=' - repr_str += f'{self.octave_base_scale},\n' - repr_str += f'{indent_str}scales_per_octave=' - repr_str += f'{self.scales_per_octave},\n' - repr_str += f'{indent_str}num_levels={self.num_levels}\n' - repr_str += f'{indent_str}centers={self.centers},\n' - repr_str += f'{indent_str}center_offset={self.center_offset})' - return repr_str - - -@ANCHOR_GENERATORS.register_module() -class SSDAnchorGenerator(AnchorGenerator): - """Anchor generator for SSD. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - basesize_ratio_range (tuple(float)): Ratio range of anchors. - input_size (int): Size of feature map, 300 for SSD300, - 512 for SSD512. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. It is always set to be False in SSD. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - assert len(strides) == len(ratios) - assert mmcv.is_tuple_of(basesize_ratio_range, float) - - self.strides = [_pair(stride) for stride in strides] - self.input_size = input_size - self.centers = [(stride[0] / 2., stride[1] / 2.) 
- for stride in self.strides] - self.basesize_ratio_range = basesize_ratio_range - - # calculate anchor ratios and sizes - min_ratio, max_ratio = basesize_ratio_range - min_ratio = int(min_ratio * 100) - max_ratio = int(max_ratio * 100) - step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2)) - min_sizes = [] - max_sizes = [] - for ratio in range(int(min_ratio), int(max_ratio) + 1, step): - min_sizes.append(int(self.input_size * ratio / 100)) - max_sizes.append(int(self.input_size * (ratio + step) / 100)) - if self.input_size == 300: - if basesize_ratio_range[0] == 0.15: # SSD300 COCO - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - elif basesize_ratio_range[0] == 0.2: # SSD300 VOC - min_sizes.insert(0, int(self.input_size * 10 / 100)) - max_sizes.insert(0, int(self.input_size * 20 / 100)) - else: - raise ValueError( - 'basesize_ratio_range[0] should be either 0.15' - 'or 0.2 when input_size is 300, got ' - f'{basesize_ratio_range[0]}.') - elif self.input_size == 512: - if basesize_ratio_range[0] == 0.1: # SSD512 COCO - min_sizes.insert(0, int(self.input_size * 4 / 100)) - max_sizes.insert(0, int(self.input_size * 10 / 100)) - elif basesize_ratio_range[0] == 0.15: # SSD512 VOC - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - else: - raise ValueError('basesize_ratio_range[0] should be either 0.1' - 'or 0.15 when input_size is 512, got' - f' {basesize_ratio_range[0]}.') - else: - raise ValueError('Only support 300 or 512 in SSDAnchorGenerator' - f', got {self.input_size}.') - - anchor_ratios = [] - anchor_scales = [] - for k in range(len(self.strides)): - scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])] - anchor_ratio = [1.] - for r in ratios[k]: - anchor_ratio += [1 / r, r] # 4 or 6 ratio - anchor_ratios.append(torch.Tensor(anchor_ratio)) - anchor_scales.append(torch.Tensor(scales)) - - self.base_sizes = min_sizes - self.scales = anchor_scales - self.ratios = anchor_ratios - self.scale_major = scale_major - self.center_offset = 0 - self.base_anchors = self.gen_base_anchors() - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - base_anchors = self.gen_single_level_base_anchors( - base_size, - scales=self.scales[i], - ratios=self.ratios[i], - center=self.centers[i]) - indices = list(range(len(self.ratios[i]))) - indices.insert(1, len(indices)) - base_anchors = torch.index_select(base_anchors, 0, - torch.LongTensor(indices)) - multi_level_base_anchors.append(base_anchors) - return multi_level_base_anchors - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}input_size={self.input_size},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}num_levels={self.num_levels},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}basesize_ratio_range=' - repr_str += f'{self.basesize_ratio_range})' - return repr_str - - -@ANCHOR_GENERATORS.register_module() -class LegacyAnchorGenerator(AnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - Note: - Difference to the V2.0 anchor generator: - - 1. The center offset of V1.x anchors are set to be 0.5 rather than 0. - 2. The width/height are minused by 1 when calculating the anchors' \ - centers and corners to meet the V1.x coordinate system. - 3. The anchors' corners are quantized. - - Args: - strides (list[int] | list[tuple[int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int]): The basic sizes of anchors in multiple levels. - If None is given, strides will be used to generate base_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. It a list of float - is given, this list will be used to shift the centers of anchors. - center_offset (float): The offset of center in propotion to anchors' - width and height. By default it is 0.5 in V2.0 but it should be 0.5 - in v1.x models. - - Examples: - >>> from mmdet.core import LegacyAnchorGenerator - >>> self = LegacyAnchorGenerator( - >>> [16], [1.], [1.], [9], center_offset=0.5) - >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu') - >>> print(all_anchors) - [tensor([[ 0., 0., 8., 8.], - [16., 0., 24., 8.], - [ 0., 16., 8., 24.], - [16., 16., 24., 24.]])] - """ - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. 
- - Note: - The width/height of anchors are minused by 1 when calculating \ - the centers and corners to meet the V1.x coordinate system. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height. - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature map. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * (w - 1) - y_center = self.center_offset * (h - 1) - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1), - x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1) - ] - base_anchors = torch.stack(base_anchors, dim=-1).round() - - return base_anchors - - -@ANCHOR_GENERATORS.register_module() -class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator` - can be found in `LegacyAnchorGenerator`. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - super(LegacySSDAnchorGenerator, - self).__init__(strides, ratios, basesize_ratio_range, input_size, - scale_major) - self.centers = [((stride - 1) / 2., (stride - 1) / 2.) - for stride in strides] - self.base_anchors = self.gen_base_anchors() - - -@ANCHOR_GENERATORS.register_module() -class YOLOAnchorGenerator(AnchorGenerator): - """Anchor generator for YOLO. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - base_sizes (list[list[tuple[int, int]]]): The basic sizes - of anchors in multiple levels. - """ - - def __init__(self, strides, base_sizes): - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - self.base_sizes = [] - num_anchor_per_level = len(base_sizes[0]) - for base_sizes_per_level in base_sizes: - assert num_anchor_per_level == len(base_sizes_per_level) - self.base_sizes.append( - [_pair(base_size) for base_size in base_sizes_per_level]) - self.base_anchors = self.gen_base_anchors() - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.base_sizes) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_sizes_per_level in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors(base_sizes_per_level, - center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, base_sizes_per_level, center=None): - """Generate base anchors of a single level. 
- - Args: - base_sizes_per_level (list[tuple[int, int]]): Basic sizes of - anchors. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - x_center, y_center = center - base_anchors = [] - for base_size in base_sizes_per_level: - w, h = base_size - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchor = torch.Tensor([ - x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w, - y_center + 0.5 * h - ]) - base_anchors.append(base_anchor) - base_anchors = torch.stack(base_anchors, dim=0) - - return base_anchors - - def responsible_flags(self, featmap_sizes, gt_bboxes, device='cuda'): - """Generate responsible anchor flags of grid cells in multiple scales. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in multiple - feature levels. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): responsible flags of anchors in multiple level - """ - assert self.num_levels == len(featmap_sizes) - multi_level_responsible_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - flags = self.single_level_responsible_flags( - featmap_sizes[i], - gt_bboxes, - anchor_stride, - self.num_base_anchors[i], - device=device) - multi_level_responsible_flags.append(flags) - return multi_level_responsible_flags - - def single_level_responsible_flags(self, - featmap_size, - gt_bboxes, - stride, - num_base_anchors, - device='cuda'): - """Generate the responsible flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - stride (tuple(int)): stride of current level - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. - """ - feat_h, feat_w = featmap_size - gt_bboxes_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device) - gt_bboxes_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device) - gt_bboxes_grid_x = torch.floor(gt_bboxes_cx / stride[0]).long() - gt_bboxes_grid_y = torch.floor(gt_bboxes_cy / stride[1]).long() - - # row major indexing - gt_bboxes_grid_idx = gt_bboxes_grid_y * feat_w + gt_bboxes_grid_x - - responsible_grid = torch.zeros( - feat_h * feat_w, dtype=torch.uint8, device=device) - responsible_grid[gt_bboxes_grid_idx] = 1 - - responsible_grid = responsible_grid[:, None].expand( - responsible_grid.size(0), num_base_anchors).contiguous().view(-1) - return responsible_grid diff --git a/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer/download_models.py b/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer/download_models.py deleted file mode 100644 index ab739a5321452caad2bff15e367f46126424b0bd..0000000000000000000000000000000000000000 --- a/spaces/tsantos/Hierarchical-Classification-System-for-Breast-Cancer/download_models.py +++ /dev/null @@ -1,187 +0,0 @@ - -""" Download pre-trained models from Google drive. 
""" -import os -import argparse -import zipfile -import logging -import requests -from tqdm import tqdm -import fire -import re - -logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(filename)s - %(message)s", - datefmt="%d/%m/%Y %H:%M:%S", - level=logging.INFO) - - -"", "", "", "","","" - - -MODEL_TO_URL = { - - 'PathologyEmoryPubMedBERT': 'https://drive.google.com/open?id=1l_el_mYXoTIQvGwKN2NZbp97E4svH4Fh', - 'PathologyEmoryBERT': 'https://drive.google.com/open?id=11vzo6fJBw1RcdHVBAh6nnn8yua-4kj2IX', - 'ClinicalBERT': 'https://drive.google.com/open?id=1UK9HqSspVneK8zGg7B93vIdTGKK9MI_v', - 'BlueBERT': 'https://drive.google.com/open?id=1o-tcItErOiiwqZ-YRa3sMM3hGB4d3WkP', - 'BioBERT': 'https://drive.google.com/open?id=1m7EkWkFBIBuGbfwg7j0R_WINNnYk3oS9', - 'BERT': 'https://drive.google.com/open?id=1SB_AQAAsHkF79iSAaB3kumYT1rwcOJru', - - 'single_tfidf': 'https://drive.google.com/open?id=1-hxf7sKRtFGMOenlafdkeAr8_9pOz6Ym', - 'branch_tfidf': 'https://drive.google.com/open?id=1pDSnwLFn3YzPRac9rKFV_FN9kdzj2Lb0' -} - -""" - For large Files, Drive requires a Virus Check. - This function is reponsivle to extract the link from the button confirmation -""" -def get_url_from_gdrive_confirmation(contents): - url = "" - for line in contents.splitlines(): - m = re.search(r'href="(\/uc\?export=download[^"]+)', line) - if m: - url = "https://docs.google.com" + m.groups()[0] - url = url.replace("&", "&") - break - m = re.search('id="downloadForm" action="(.+?)"', line) - if m: - url = m.groups()[0] - url = url.replace("&", "&") - break - m = re.search('"downloadUrl":"([^"]+)', line) - if m: - url = m.groups()[0] - url = url.replace("\\u003d", "=") - url = url.replace("\\u0026", "&") - break - m = re.search('
<p class="uc-error-subcaption">(.*)</p>
    ', line) - if m: - error = m.groups()[0] - raise RuntimeError(error) - if not url: - return None - return url - -def download_file_from_google_drive(id, destination): - URL = "https://docs.google.com/uc?export=download" - - session = requests.Session() - - - response = session.get(URL, params={ 'id' : id }, stream=True) - URL_new = get_url_from_gdrive_confirmation(response.text) - - if URL_new != None: - URL = URL_new - response = session.get(URL, params={ 'id' : id }, stream=True) - - token = get_confirm_token(response) - - if token: - params = { 'id' : id, 'confirm' : token } - response = session.get(URL, params=params, stream=True) - - save_response_content(response, destination) - -def get_confirm_token(response): - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - - return None - -def save_response_content(response, destination): - CHUNK_SIZE = 32768 - - with open(destination, "wb") as f: - for chunk in tqdm(response.iter_content(CHUNK_SIZE)): - if chunk: # filter out keep-alive new chunks - f.write(chunk) - -def check_if_exist(model:str = "single_tfidf"): - - if model =="single_vectorizer": - model = "single_tfidf" - if model =="branch_vectorizer": - model = "branch_tfidf" - - project_dir = os.path.dirname(os.path.abspath(__file__)) - if model != None: - if model in ['single_tfidf', 'branch_tfidf' ]: - path='models/all_labels_hierarchy/' - path_model = os.path.join(project_dir, path, model,'classifiers') - path_vectorizer = os.path.join(project_dir, path, model,'vectorizers') - if os.path.exists(path_model) and os.path.exists(path_vectorizer): - if len(os.listdir(path_model)) >0 and len(os.listdir(path_vectorizer)) >0: - return True - else: - path='models/higher_order_hierarchy/' - path_folder = os.path.join(project_dir, path, model) - if os.path.exists(path_folder): - if len(os.listdir(path_folder + "/" )) >1: - return True - return False - -def download_model(all_labels='single_tfidf', higher_order='PathologyEmoryPubMedBERT'): - project_dir = os.path.dirname(os.path.abspath(__file__)) - - path_all_labels='models/all_labels_hierarchy/' - path_higher_order='models/higher_order_hierarchy/' - - def extract_model(path_file, name): - - os.makedirs(os.path.join(project_dir, path_file), exist_ok=True) - - file_destination = os.path.join(project_dir, path_file, name + '.zip') - - file_id = MODEL_TO_URL[name].split('id=')[-1] - - logging.info(f'Downloading {name} model (~1000MB tar.xz archive)') - download_file_from_google_drive(file_id, file_destination) - - logging.info('Extracting model from archive (~1300MB folder) and saving to ' + str(file_destination)) - with zipfile.ZipFile(file_destination, 'r') as zip_ref: - zip_ref.extractall(path=os.path.dirname(file_destination)) - - logging.info('Removing archive') - os.remove(file_destination) - logging.info('Done.') - - - if higher_order != None: - if not check_if_exist(higher_order): - extract_model(path_higher_order, higher_order) - else: - logging.info('Model ' + str(higher_order) + ' already exist') - - if all_labels!= None: - if not check_if_exist(all_labels): - extract_model(path_all_labels, all_labels) - else: - logging.info('Model ' + str(all_labels) + ' already exist') - - - - -def download(all_labels:str = "single_tfidf", higher_order:str = "PathologyEmoryPubMedBERT"): - """ - Input Options: - all_labels : single_tfidf, branch_tfidf - higher_order : clinicalBERT, blueBERT, patho_clinicalBERT, patho_blueBERT, charBERT - """ - all_labels_options = [ "single_tfidf", 
"branch_tfidf"] - higher_order_option = [ "PathologyEmoryPubMedBERT", "PathologyEmoryBERT", "ClinicalBERT", "BlueBERT","BioBERT","BERT" ] - - if all_labels not in all_labels_options or higher_order not in higher_order_option: - print("\n\tPlease provide a valid model for downloading") - print("\n\t\tall_labels: " + " ".join(x for x in all_labels_options)) - print("\n\t\thigher_order: " + " ".join(x for x in higher_order)) - exit() - - download_model(all_labels,higher_order) - -if __name__ == "__main__": - fire.Fire(download) - - - diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/bbox_eval_tool.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/bbox_eval_tool.py deleted file mode 100644 index d72e7d996230d4076251ad40f39f74a37fa5b4dc..0000000000000000000000000000000000000000 --- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/my_py_lib/bbox_eval_tool.py +++ /dev/null @@ -1,225 +0,0 @@ -''' -包围框评分工具 -''' - -import numpy as np -from .score_tool import calc_score_f05_f1_f2_prec_recall -from .bbox_tool import calc_bbox_iou_NtoM, check_bboxes, get_bboxes_shortest_link_pair -from .list_tool import list_multi_get_with_bool, list_multi_get_with_ids - - -def calc_bbox_score(pred_bboxes, pred_classes, label_bboxes, label_classes, classes_list, - match_iou_thresh_list=(0.3, 0.5, 0.7), - use_single_pair=False, *, return_lookup_table=True): - ''' - 通用包围框评估 - 将会返回一个分数字典 - 当预测与标签的IOU大于指定阈值时,将会认定为真阳性 - 结构为 - 类别-IOU分数-X - X: - found_pred 真阳性,预测正确的数量 - fakefound_pred 假阳性,预测失败的数量 - found_label 真阳性,标签被正确匹配到的数量 - nofound_label 假阴性,没有任何成功匹配的标签数量 - pred_repeat 当use_single_pair为False时,一个预测可以同时匹配多个标签,该度量将会统计匹配数量大于1的预测的数量 - label_repeat 当use_single_pair为False时,一个标签可以同时匹配多个预测,该度量将会统计匹配数量大于1的标签的数量 - f05 f0.5分数 - f1 f1分数 - f2 f2分数 - prec 准确率 - recall 召回率 - - :param pred_bboxes: 预测的包围框 - :param pred_classes: 预测的类别 - :param label_bboxes: 标签的包围框 - :param label_classes: 标签的类别 - :param classes_list: 要评估的类别列表 - :param match_iou_thresh_list: 多个评估阈值 - :param use_single_pair: 若为真,则使用一个预测只匹配一个标签。如果假,每个预测都可以匹配多个标签 - :param return_lookup_table: 若为真,将返回一个查找表 - :return: - ''' - classes_list = np.array(classes_list).tolist() - match_iou_thresh_list = np.array(match_iou_thresh_list).tolist() - - pred_bboxes = np.float32(pred_bboxes) - label_bboxes = np.float32(label_bboxes) - if len(pred_bboxes) == 0: - pred_bboxes = pred_bboxes.reshape([-1, 4]) - if len(label_bboxes) == 0: - label_bboxes = label_bboxes.reshape([-1, 4]) - - pred_classes = np.int32(pred_classes) - label_classes = np.int32(label_classes) - - assert pred_classes.ndim == 1 - assert label_classes.ndim == 1 - check_bboxes(pred_bboxes) - check_bboxes(label_bboxes) - assert len(pred_bboxes) == len(pred_classes) - assert len(label_bboxes) == len(label_classes) - - score_table = {} - - if return_lookup_table: - lookup_table = {} - - if len(pred_bboxes) == 0 or len(label_bboxes) == 0: - for cls in classes_list: - score_table[cls] = {} - - if return_lookup_table: - lookup_table[cls] = {} - - for iou_th in match_iou_thresh_list: - score_table[cls][iou_th] = {} - score_table[cls][iou_th]['found_pred'] = 0 - score_table[cls][iou_th]['fakefound_pred'] = int(np.sum(pred_classes == cls)) - score_table[cls][iou_th]['found_label'] = 0 - score_table[cls][iou_th]['nofound_label'] = int(np.sum(label_classes == cls)) - score_table[cls][iou_th]['pred_repeat'] = 0 - score_table[cls][iou_th]['label_repeat'] = 0 - score_table[cls][iou_th]['f05'] = 0. - score_table[cls][iou_th]['f1'] = 0. - score_table[cls][iou_th]['f2'] = 0. 
- score_table[cls][iou_th]['prec'] = 0. - score_table[cls][iou_th]['recall'] = 0. - - if return_lookup_table: - lookup_table[iou_th] = {} - score_table[cls][iou_th]['found_pred'] = [] - score_table[cls][iou_th]['fakefound_pred'] = pred_bboxes[pred_classes == cls] - score_table[cls][iou_th]['found_label'] = [] - score_table[cls][iou_th]['nofound_label'] = label_bboxes[label_classes == cls] - score_table[cls][iou_th]['pred_repeat'] = [] - score_table[cls][iou_th]['label_repeat'] = [] - - if return_lookup_table: - return score_table, lookup_table - else: - return score_table - - for cls in classes_list: - score_table[cls] = {} - - # 筛选出当前类的轮廓 - pred_selected_bools = np.array(pred_classes, np.int32) == cls - label_selected_bools = np.array(label_classes, np.int32) == cls - selected_pred_bboxes = pred_bboxes[pred_selected_bools] - selected_label_bboxes = label_bboxes[label_selected_bools] - - if return_lookup_table: - lookup_table[cls] = {} - - ious_table = calc_bbox_iou_NtoM(selected_pred_bboxes, selected_label_bboxes) - - # 计算不同IOU下的f0.5,f1,f2,recall,prec分数 - for iou_th in match_iou_thresh_list: - score_table[cls][iou_th] = {} - - label_found_count = np.zeros(len(selected_label_bboxes), np.int32) - pred_found_count = np.zeros(len(selected_pred_bboxes), np.int32) - - if len(selected_label_bboxes) > 0: - if not use_single_pair: - for pi, pred_contour in enumerate(selected_pred_bboxes): - ious = ious_table[pi] - close_bools = ious >= iou_th - label_found_count[close_bools] += 1 - pred_found_count[pi] += np.array(close_bools, np.int32).sum() - else: - pred_ids, label_ids, _ = get_bboxes_shortest_link_pair(selected_pred_bboxes, selected_label_bboxes, iou_th, pair_ious=ious_table) - for i in pred_ids: - pred_found_count[i] += 1 - for i in label_ids: - label_found_count[i] += 1 - - ids_found_pred = np.argwhere(pred_found_count > 0).squeeze(1) - ids_fakefound_pred = np.argwhere(pred_found_count == 0).squeeze(1) - ids_found_label = np.argwhere(label_found_count > 0).squeeze(1) - ids_nofound_label = np.argwhere(label_found_count == 0).squeeze(1) - ids_pred_repeat = np.argwhere(pred_found_count > 1).squeeze(1) - ids_label_repeat = np.argwhere(label_found_count > 1).squeeze(1) - - found_pred = len(ids_found_pred) - fakefound_pred = len(ids_fakefound_pred) - - found_label = len(ids_found_label) - nofound_label = len(ids_nofound_label) - - pred_repeat = len(ids_pred_repeat) - label_repeat = len(ids_label_repeat) - - f05, f1, f2, prec, recall = calc_score_f05_f1_f2_prec_recall(found_label, nofound_label, found_pred, fakefound_pred) - - score_table[cls][iou_th]['found_pred'] = int(found_pred) - score_table[cls][iou_th]['fakefound_pred'] = int(fakefound_pred) - score_table[cls][iou_th]['found_label'] = int(found_label) - score_table[cls][iou_th]['nofound_label'] = int(nofound_label) - score_table[cls][iou_th]['pred_repeat'] = int(pred_repeat) - score_table[cls][iou_th]['label_repeat'] = int(label_repeat) - score_table[cls][iou_th]['f05'] = float(f05) - score_table[cls][iou_th]['f1'] = float(f1) - score_table[cls][iou_th]['f2'] = float(f2) - score_table[cls][iou_th]['prec'] = float(prec) - score_table[cls][iou_th]['recall'] = float(recall) - - if return_lookup_table: - lookup_table[cls][iou_th] = {} - lookup_table[cls][iou_th]['found_pred'] = list_multi_get_with_ids(selected_pred_bboxes, ids_found_pred) - lookup_table[cls][iou_th]['fakefound_pred'] = list_multi_get_with_ids(selected_pred_bboxes, ids_fakefound_pred) - lookup_table[cls][iou_th]['found_label'] = list_multi_get_with_ids(selected_label_bboxes, 
ids_found_label) - lookup_table[cls][iou_th]['nofound_label'] = list_multi_get_with_ids(selected_label_bboxes, ids_nofound_label) - lookup_table[cls][iou_th]['pred_repeat'] = list_multi_get_with_ids(selected_pred_bboxes, ids_pred_repeat) - lookup_table[cls][iou_th]['label_repeat'] = list_multi_get_with_ids(selected_label_bboxes, ids_label_repeat) - - if return_lookup_table: - return score_table, lookup_table - else: - return score_table - - -def summary_bbox_score(scores, cls_list, match_iou_thresh_list): - ''' - 对多个分数表进行合算,得到统计分数表 - 其中 found_pred, fakefound_pred, found_label, nofound_label, pred_repeat, label_repeat 将会被累加 - 其中 f1, prec, recall 将会被求平均 - :param scores: 多个分数表 - :param cls_list: 要检查的分类 - :param match_iou_thresh_list: 多个匹配IOU - :return: - ''' - score_table = {} - for cls in cls_list: - score_table[cls] = {} - for iou_th in match_iou_thresh_list: - score_table[cls][iou_th] = {} - score_table[cls][iou_th]['found_pred'] = 0 - score_table[cls][iou_th]['fakefound_pred'] = 0 - score_table[cls][iou_th]['found_label'] = 0 - score_table[cls][iou_th]['nofound_label'] = 0 - score_table[cls][iou_th]['pred_repeat'] = 0 - score_table[cls][iou_th]['label_repeat'] = 0 - score_table[cls][iou_th]['f05'] = 0. - score_table[cls][iou_th]['f1'] = 0. - score_table[cls][iou_th]['f2'] = 0. - score_table[cls][iou_th]['prec'] = 0. - score_table[cls][iou_th]['recall'] = 0. - - for score in scores: - for cls in cls_list: - for iou_th in match_iou_thresh_list: - score_table[cls][iou_th]['found_pred'] += score[cls][iou_th]['found_pred'] - score_table[cls][iou_th]['fakefound_pred'] += score[cls][iou_th]['fakefound_pred'] - score_table[cls][iou_th]['found_label'] += score[cls][iou_th]['found_label'] - score_table[cls][iou_th]['nofound_label'] += score[cls][iou_th]['nofound_label'] - score_table[cls][iou_th]['pred_repeat'] += score[cls][iou_th]['pred_repeat'] - score_table[cls][iou_th]['label_repeat'] += score[cls][iou_th]['label_repeat'] - score_table[cls][iou_th]['f05'] += score[cls][iou_th]['f05'] / len(scores) - score_table[cls][iou_th]['f1'] += score[cls][iou_th]['f1'] / len(scores) - score_table[cls][iou_th]['f2'] += score[cls][iou_th]['f2'] / len(scores) - score_table[cls][iou_th]['prec'] += score[cls][iou_th]['prec'] / len(scores) - score_table[cls][iou_th]['recall'] += score[cls][iou_th]['recall'] / len(scores) - - return score_table diff --git a/spaces/ucalyptus/PTI/utils/data_utils.py b/spaces/ucalyptus/PTI/utils/data_utils.py deleted file mode 100644 index a477bb62396989bf1000a9a46c695687b5c15f59..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/utils/data_utils.py +++ /dev/null @@ -1,34 +0,0 @@ -import os - -from PIL import Image - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tiff' -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def tensor2im(var): - # var shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def make_dataset(dir): - images = [] - assert os.path.isdir(dir), '%s is not a valid directory' % dir - for root, _, fnames in sorted(os.walk(dir)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - fname = fname.split('.')[0] - images.append((fname, path)) - return images diff --git a/spaces/unity/ML-Agents-Pyramids/Build/Pyramids.loader.js 
b/spaces/unity/ML-Agents-Pyramids/Build/Pyramids.loader.js deleted file mode 100644 index 11a00f4dce0ac090822dcf127458486806f0d022..0000000000000000000000000000000000000000 --- a/spaces/unity/ML-Agents-Pyramids/Build/Pyramids.loader.js +++ /dev/null @@ -1 +0,0 @@ -function createUnityInstance(t,n,l){function d(e,t){if(!d.aborted&&n.showBanner)return"error"==t&&(d.aborted=!0),n.showBanner(e,t);switch(t){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function r(e){var t=e.reason||e.error,n=t?t.toString():e.message||e.reason||"",r=t&&t.stack?t.stack.toString():"";(n+="\n"+(r=r.startsWith(n)?r.substring(n.length):r).trim())&&c.stackTraceRegExp&&c.stackTraceRegExp.test(n)&&k(n,e.filename||t&&(t.fileName||t.sourceURL)||"",e.lineno||t&&(t.lineNumber||t.line)||0)}function e(e,t,n){var r=e[t];void 0!==r&&r||(console.warn('Config option "'+t+'" is missing or empty. Falling back to default value: "'+n+'". Consider updating your WebGL template to include the missing config option.'),e[t]=n)}l=l||function(){};var o,c={canvas:t,webglContextAttributes:{preserveDrawingBuffer:!1,powerPreference:2},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){e=window.setInterval(e,t);return this.intervals[e]=!0,e},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&-1!=e.indexOf("wasm streaming compile failed")&&(-1!=e.toLowerCase().indexOf("mime")?d('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):d('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). 
Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return e},disabledCanvasEvents:["contextmenu","dragstart"]};for(o in e(n,"companyName","Unity"),e(n,"productName","WebGL Player"),e(n,"productVersion","1.0"),n)c[o]=n[o];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var a=c.disabledCanvasEvents.slice();function i(e){e.preventDefault()}a.forEach(function(e){t.addEventListener(e,i)}),window.addEventListener("error",r),window.addEventListener("unhandledrejection",r),c.deinitializers.push(function(){for(var e in c.disableAccessToMediaDevices(),a.forEach(function(e){t.removeEventListener(e,i)}),window.removeEventListener("error",r),window.removeEventListener("unhandledrejection",r),c.intervals)window.clearInterval(e);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;e>>6:(n<65536?t[o++]=224|n>>>12:(t[o++]=240|n>>>18,t[o++]=128|n>>>12&63),t[o++]=128|n>>>6&63),t[o++]=128|63&n);return t},n.buf2binstring=function(e){return u(e,e.length)},n.binstring2buf=function(e){for(var t=new l.Buf8(e.length),n=0,r=t.length;n>10&1023,a[i++]=56320|1023&n)}return u(a,i)},n.utf8border=function(e,t){for(var n=(t=(t=t||e.length)>e.length?e.length:t)-1;0<=n&&128==(192&e[n]);)n--;return!(n<0)&&0!==n&&n+d[e[n]]>t?n:t}},"zlib/inflate.js":function(e,t,n){"use strict";var L=e("../utils/common"),O=e("./adler32"),I=e("./crc32"),A=e("./inffast"),P=e("./inftrees"),D=0,N=-2,z=1,r=852,o=592;function F(e){return(e>>>24&255)+(e>>>8&65280)+((65280&e)<<8)+((255&e)<<24)}function a(){this.mode=0,this.last=!1,this.wrap=0,this.havedict=!1,this.flags=0,this.dmax=0,this.check=0,this.total=0,this.head=null,this.wbits=0,this.wsize=0,this.whave=0,this.wnext=0,this.window=null,this.hold=0,this.bits=0,this.length=0,this.offset=0,this.extra=0,this.lencode=null,this.distcode=null,this.lenbits=0,this.distbits=0,this.ncode=0,this.nlen=0,this.ndist=0,this.have=0,this.next=null,this.lens=new L.Buf16(320),this.work=new L.Buf16(288),this.lendyn=null,this.distdyn=null,this.sane=0,this.back=0,this.was=0}function i(e){var t;return e&&e.state?(t=e.state,e.total_in=e.total_out=t.total=0,e.msg="",t.wrap&&(e.adler=1&t.wrap),t.mode=z,t.last=0,t.havedict=0,t.dmax=32768,t.head=null,t.hold=0,t.bits=0,t.lencode=t.lendyn=new L.Buf32(r),t.distcode=t.distdyn=new L.Buf32(o),t.sane=1,t.back=-1,D):N}function s(e){var t;return e&&e.state?((t=e.state).wsize=0,t.whave=0,t.wnext=0,i(e)):N}function l(e,t){var n,r;return!e||!e.state||(r=e.state,t<0?(n=0,t=-t):(n=1+(t>>4),t<48&&(t&=15)),t&&(t<8||15=e.wsize?(L.arraySet(e.window,t,n-e.wsize,e.wsize,0),e.wnext=0,e.whave=e.wsize):(r<(o=e.wsize-e.wnext)&&(o=r),L.arraySet(e.window,t,n-r,o,e.wnext),(r-=o)?(L.arraySet(e.window,t,n-r,r,0),e.wnext=r,e.whave=e.wsize):(e.wnext+=o,e.wnext===e.wsize&&(e.wnext=0),e.whave>>8&255,n.check=I(n.check,B,2,0),u=d=0,n.mode=2;else if(n.flags=0,n.head&&(n.head.done=!1),!(1&n.wrap)||(((255&d)<<8)+(d>>8))%31)e.msg="incorrect header check",n.mode=30;else if(8!=(15&d))e.msg="unknown compression method",n.mode=30;else{if(u-=4,x=8+(15&(d>>>=4)),0===n.wbits)n.wbits=x;else if(x>n.wbits){e.msg="invalid window size",n.mode=30;break}n.dmax=1<>8&1),512&n.flags&&(B[0]=255&d,B[1]=d>>>8&255,n.check=I(n.check,B,2,0)),u=d=0,n.mode=3;case 3:for(;u<32;){if(0===s)break e;s--,d+=r[a++]<>>8&255,B[2]=d>>>16&255,B[3]=d>>>24&255,n.check=I(n.check,B,4,0)),u=d=0,n.mode=4;case 4:for(;u<16;){if(0===s)break e;s--,d+=r[a++]<>8),512&n.flags&&(B[0]=255&d,B[1]=d>>>8&255,n.check=I(n.check,B,2,0)),u=d=0,n.mode=5;case 
5:if(1024&n.flags){for(;u<16;){if(0===s)break e;s--,d+=r[a++]<>>8&255,n.check=I(n.check,B,2,0)),u=d=0}else n.head&&(n.head.extra=null);n.mode=6;case 6:if(1024&n.flags&&((h=s<(h=n.length)?s:h)&&(n.head&&(x=n.head.extra_len-n.length,n.head.extra||(n.head.extra=new Array(n.head.extra_len)),L.arraySet(n.head.extra,r,a,h,x)),512&n.flags&&(n.check=I(n.check,r,h,a)),s-=h,a+=h,n.length-=h),n.length))break e;n.length=0,n.mode=7;case 7:if(2048&n.flags){if(0===s)break e;for(h=0;x=r[a+h++],n.head&&x&&n.length<65536&&(n.head.name+=String.fromCharCode(x)),x&&h>9&1,n.head.done=!0),e.adler=n.check=0,n.mode=12;break;case 10:for(;u<32;){if(0===s)break e;s--,d+=r[a++]<>>=7&u,u-=7&u,n.mode=27;else{for(;u<3;){if(0===s)break e;s--,d+=r[a++]<>>=1)){case 0:n.mode=14;break;case 1:var T,T=R=void 0,R=n;if(H){for(Z=new L.Buf32(512),j=new L.Buf32(32),T=0;T<144;)R.lens[T++]=8;for(;T<256;)R.lens[T++]=9;for(;T<280;)R.lens[T++]=7;for(;T<288;)R.lens[T++]=8;for(P(1,R.lens,0,288,Z,0,R.work,{bits:9}),T=0;T<32;)R.lens[T++]=5;P(2,R.lens,0,32,j,0,R.work,{bits:5}),H=!1}if(R.lencode=Z,R.lenbits=9,R.distcode=j,R.distbits=5,n.mode=20,6!==t)break;d>>>=2,u-=2;break e;case 2:n.mode=17;break;case 3:e.msg="invalid block type",n.mode=30}d>>>=2,u-=2}break;case 14:for(d>>>=7&u,u-=7&u;u<32;){if(0===s)break e;s--,d+=r[a++]<>>16^65535)){e.msg="invalid stored block lengths",n.mode=30;break}if(n.length=65535&d,u=d=0,n.mode=15,6===t)break e;case 15:n.mode=16;case 16:if(h=n.length){if(0===(h=l<(h=s>>=5,u-=5,n.ndist=1+(31&d),d>>>=5,u-=5,n.ncode=4+(15&d),d>>>=4,u-=4,286>>=3,u-=3}for(;n.have<19;)n.lens[U[n.have++]]=0;if(n.lencode=n.lendyn,n.lenbits=7,S={bits:n.lenbits},_=P(0,n.lens,0,19,n.lencode,0,n.work,S),n.lenbits=S.bits,_){e.msg="invalid code lengths set",n.mode=30;break}n.have=0,n.mode=19;case 19:for(;n.have>>16&255,w=65535&C,!((g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[a++]<>>=g,u-=g,n.lens[n.have++]=w;else{if(16===w){for(E=g+2;u>>=g,u-=g,0===n.have){e.msg="invalid bit length repeat",n.mode=30;break}x=n.lens[n.have-1],h=3+(3&d),d>>>=2,u-=2}else if(17===w){for(E=g+3;u>>=g)),d>>>=3,u=u-g-3}else{for(E=g+7;u>>=g)),d>>>=7,u=u-g-7}if(n.have+h>n.nlen+n.ndist){e.msg="invalid bit length repeat",n.mode=30;break}for(;h--;)n.lens[n.have++]=x}}if(30===n.mode)break;if(0===n.lens[256]){e.msg="invalid code -- missing end-of-block",n.mode=30;break}if(n.lenbits=9,S={bits:n.lenbits},_=P(1,n.lens,0,n.nlen,n.lencode,0,n.work,S),n.lenbits=S.bits,_){e.msg="invalid literal/lengths set",n.mode=30;break}if(n.distbits=6,n.distcode=n.distdyn,S={bits:n.distbits},_=P(2,n.lens,n.nlen,n.ndist,n.distcode,0,n.work,S),n.distbits=S.bits,_){e.msg="invalid distances set",n.mode=30;break}if(n.mode=20,6===t)break e;case 20:n.mode=21;case 21:if(6<=s&&258<=l){e.next_out=i,e.avail_out=l,e.next_in=a,e.avail_in=s,n.hold=d,n.bits=u,A(e,f),i=e.next_out,o=e.output,l=e.avail_out,a=e.next_in,r=e.input,s=e.avail_in,d=n.hold,u=n.bits,12===n.mode&&(n.back=-1);break}for(n.back=0;p=(C=n.lencode[d&(1<>>16&255,w=65535&C,!((g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[a++]<>v)])>>>16&255,w=65535&C,!(v+(g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[a++]<>>=v,u-=v,n.back+=v}if(d>>>=g,u-=g,n.back+=g,n.length=w,0===p){n.mode=26;break}if(32&p){n.back=-1,n.mode=12;break}if(64&p){e.msg="invalid literal/length code",n.mode=30;break}n.extra=15&p,n.mode=22;case 22:if(n.extra){for(E=n.extra;u>>=n.extra,u-=n.extra,n.back+=n.extra}n.was=n.length,n.mode=23;case 23:for(;p=(C=n.distcode[d&(1<>>16&255,w=65535&C,!((g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[a++]<>v)])>>>16&255,w=65535&C,!(v+(g=C>>>24)<=u);){if(0===s)break 
e;s--,d+=r[a++]<>>=v,u-=v,n.back+=v}if(d>>>=g,u-=g,n.back+=g,64&p){e.msg="invalid distance code",n.mode=30;break}n.offset=w,n.extra=15&p,n.mode=24;case 24:if(n.extra){for(E=n.extra;u>>=n.extra,u-=n.extra,n.back+=n.extra}if(n.offset>n.dmax){e.msg="invalid distance too far back",n.mode=30;break}n.mode=25;case 25:if(0===l)break e;if(n.offset>(h=f-l)){if((h=n.offset-h)>n.whave&&n.sane){e.msg="invalid distance too far back",n.mode=30;break}b=h>n.wnext?(h-=n.wnext,n.wsize-h):n.wnext-h,h>n.length&&(h=n.length),m=n.window}else m=o,b=i-n.offset,h=n.length;for(l-=h=l>>16&65535|0,i=0;0!==n;){for(n-=i=2e3>>1:n>>>1;e[t]=n}return e}();t.exports=function(e,t,n,r){var o=s,a=r+n;e^=-1;for(var i=r;i>>8^o[255&(e^t[i])];return-1^e}},"zlib/inffast.js":function(e,t,n){"use strict";t.exports=function(e,t){var n,r,o,a,i,s,l=e.state,d=e.next_in,u=e.input,c=d+(e.avail_in-5),f=e.next_out,h=e.output,b=f-(t-e.avail_out),m=f+(e.avail_out-257),g=l.dmax,p=l.wsize,w=l.whave,v=l.wnext,y=l.window,k=l.hold,x=l.bits,_=l.lencode,S=l.distcode,E=(1<>>=r=n>>>24,x-=r,0==(r=n>>>16&255))h[f++]=65535&n;else{if(!(16&r)){if(0==(64&r)){n=_[(65535&n)+(k&(1<>>=r,x-=r),x<15&&(k+=u[d++]<>>=r=n>>>24,x-=r,!(16&(r=n>>>16&255))){if(0==(64&r)){n=S[(65535&n)+(k&(1<>>=r,x-=r,(r=f-b)>3)<<3))-1,e.next_in=d-=o,e.next_out=f,e.avail_in=dh?(m=O[I+i[v]],U[T+i[v]]):(m=96,0),l=1<<(b=w-S),y=d=1<<_;o[f+(B>>S)+(d-=l)]=b<<24|m<<16|g|0,0!==d;);for(l=1<>=1;if(B=0!==l?(B&l-1)+l:0,v++,0==--R[w]){if(w===k)break;w=t[n+i[v]]}if(xe.length||31!=e[0]||139!=e[1])return!1;var r=e[3];if(4&r){if(t+2>e.length)return!1;if((t+=2+e[t]+(e[t+1]<<8))>e.length)return!1}if(8&r){for(;te.length)return!1;t++}return 16&r&&String.fromCharCode.apply(null,e.subarray(t,t+n.length+1))==n+"\0"}}};function T(n){x(n);var e=c.cacheControl(c[n]),t=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,r=c[n],r=/file:\/\//.exec(r)?"same-origin":void 0;return t(c[n],{method:"GET",companyName:c.companyName,productName:c.productName,control:e,mode:r,onProgress:function(e){x(n,e)}}).then(function(e){return i=e.parsedBody,s=c[n],new Promise(function(e,t){try{for(var n in U){var r,o,a;if(U[n].hasUnityMarker(i))return s&&console.log('You can reduce startup time if you configure your web server to add "Content-Encoding: '+n+'" response header when serving "'+s+'" file.'),(r=U[n]).worker||(o=URL.createObjectURL(new Blob(["this.require = ",r.require.toString(),"; this.decompress = ",r.decompress.toString(),"; this.onmessage = ",function(e){e={id:e.data.id,decompressed:this.decompress(e.data.compressed)};postMessage(e,e.decompressed?[e.decompressed.buffer]:[])}.toString(),"; postMessage({ ready: true });"],{type:"application/javascript"})),r.worker=new Worker(o),r.worker.onmessage=function(e){e.data.ready?URL.revokeObjectURL(o):(this.callbacks[e.data.id](e.data.decompressed),delete this.callbacks[e.data.id])},r.worker.callbacks={},r.worker.nextCallbackId=0),a=r.worker.nextCallbackId++,r.worker.callbacks[a]=e,void r.worker.postMessage({id:a,compressed:i},[i.buffer])}e(i)}catch(e){t(e)}});var i,s}).catch(function(e){var t="Failed to download file "+c[n];"file:"==location.protocol?d(t+". Loading web pages via a file:// URL without a web server is not supported by this browser. 
Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(t)})}function R(){Promise.all([T("frameworkUrl").then(function(e){var s=URL.createObjectURL(new Blob([e],{type:"application/javascript"}));return new Promise(function(a,e){var i=document.createElement("script");i.src=s,i.onload=function(){if("undefined"==typeof unityFramework||!unityFramework){var e,t=[["br","br"],["gz","gzip"]];for(e in t){var n,r=t[e];if(c.frameworkUrl.endsWith("."+r[0]))return n="Unable to parse "+c.frameworkUrl+"!","file:"==location.protocol?void d(n+" Loading pre-compressed (brotli or gzip) content via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host compressed Unity content, or use the Unity Build and Run option.","error"):(n+=' This can happen if build compression was enabled but web server hosting the content was misconfigured to not serve the file with HTTP Response Header "Content-Encoding: '+r[1]+'" present. Check browser Console and Devtools Network tab to debug.',"br"==r[0]&&"http:"==location.protocol&&(r=-1!=["localhost","127.0.0.1"].indexOf(location.hostname)?"":"Migrate your server to use HTTPS.",n=/Firefox/.test(navigator.userAgent)?"Unable to parse "+c.frameworkUrl+'!
    If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+r+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'!
    If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'),void d(n,"error"))}d("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var o=unityFramework;unityFramework=null,i.onload=null,URL.revokeObjectURL(s),a(o)},i.onerror=function(e){d("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(i),c.deinitializers.push(function(){document.body.removeChild(i)})})}),T("codeUrl")]).then(function(e){c.wasmBinary=e[1],e[0](c)});var e=T("dataUrl");c.preRun.push(function(){c.addRunDependency("dataUrl"),e.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),n=0,r="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(n,n+r.length))==r)throw"unknown data format";var o=t.getUint32(n+=r.length,!0);for(n+=4;nacca edificius ita crack torrent New 57

    Download Zip » https://urlcod.com/2uyUUx



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/CADSTD.PRO.rar.md b/spaces/usbethFlerru/sovits-modelsV2/example/CADSTD.PRO.rar.md deleted file mode 100644 index 4b2e058431e905c208c019d35deded14aae493fb..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/CADSTD.PRO.rar.md +++ /dev/null @@ -1,14 +0,0 @@ - -

    CadStd Pro: A Low-Cost CAD Software for Professional Quality Designs

    -

    CadStd Pro is a general-purpose, easy-to-learn CAD/drafting program for creating professional-quality mechanical designs, house plans, blueprints, schematics and charts. It is compatible with Windows 11, 10, 8.1, 8.0, 7, Vista, XP, 2000, ME, 98, and Win95 (OSR2) and can export files as DXF, PDF, Postscript, SVG and HPGL/1 formats. It also has features such as undo/redo, snaps, dimensions, fillets, chamfers, isometric projections and more.

    -

    CadStd Pro is available for a low price of $37.50 USD and can be downloaded from the official website[^1^]. It is digitally signed and safe to run, with no spyware or adware. It also comes with an integrated tutorial, a user's guide, and customer-support e-mail access.

    -

    CADSTD.PRO.rar


    Download File: https://urlcod.com/2uyUyd



    -

    CADSTD.PRO.rar is a compressed file that contains the installation files for CadStd Pro version 3. You can use a file compression application such as WinRAR or 7-Zip to extract the files and run the setup.exe file to install the program on your computer.
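If you prefer to script the extraction instead of using the WinRAR or 7-Zip GUI, a minimal sketch is shown below. It is not from the article: it assumes the 7-Zip command-line tool (`7z`) is installed and on PATH, and the archive and folder names are only placeholders.

```python
# Hypothetical extraction helper (illustrative only): assumes the 7-Zip
# command-line tool "7z" is installed and on PATH; names are placeholders.
import subprocess
from pathlib import Path

archive = Path("CADSTD.PRO.rar")        # the downloaded archive
dest = Path("cadstd_setup")             # folder to extract into
dest.mkdir(exist_ok=True)

# "7z x <archive> -o<dir>" extracts the archive with full paths into <dir>.
subprocess.run(["7z", "x", str(archive), f"-o{dest}"], check=True)

print(sorted(p.name for p in dest.iterdir()))   # the installer (setup.exe) should be listed
```

Note that 7-Zip's `-o` switch takes the output folder with no space after it, and `check=True` makes the script stop with an error if the extraction fails.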

    -

    If you are looking for simple and affordable CAD software for your design needs, you might want to give CadStd Pro a try. It is a versatile and powerful tool that can help you create high-quality drawings with ease.

    CadStd Pro has received positive reviews from users and experts alike. According to Smart Computing Magazine, CadStd Pro is a great place to start for beginners who want to learn CAD software; the magazine praised its detailed tutorials, intuitive interface and 3D isometric projection capabilities[^2^]. Users have also commented on the ease of use, versatility and affordability of CadStd Pro. One user wrote: "I have bought a number of other programs, primarily for 3D graphics... but continually come back to this great program for every project requiring drawings."[^1^]

    -

    CadStd Pro is not only suitable for beginners, but also for professionals who need reliable and powerful CAD software for their design projects. CadStd Pro can create drawings that can be shared with other CAD applications like AutoCAD, which is a mid-level CAD program that offers 2D and 3D drawing tools[^3^]. CadStd Pro can also export files in PDF, Postscript, SVG and HPGL/1 formats, which can be used for printing, web publishing or laser cutting[^4^] [^5^]. CadStd Pro can handle complex drawings with features such as offset, trim, fillet, chamfer, radial and angular dimensioning, intercept and gap[^1^].

    -

    -

    CadStd Pro is low-cost CAD software that delivers professional-quality designs. Whether you are a hobbyist, a student or a professional, you can benefit from using CadStd Pro for your CAD needs. You can download a free trial version from the official website and see for yourself how easy and powerful CadStd Pro is.

    To download the free trial version of CadStd Pro, visit the official website and click the "Download" button. You will be asked to enter your name and email address and agree to the terms and conditions. You will then receive a link to download the CADSTD.PRO.rar file; extract it and run setup.exe as described above to install the program. The free trial version is fully functional for 30 days, after which you need to purchase a license key to continue using the program.

    -

    The system requirements for CadStd Pro are minimal. You need a Windows operating system (Windows 11, 10, 8.1, 8.0, 7, Vista, XP, 2000, ME, 98, or Win95 OSR2), a Pentium processor or better, at least 32 MB of RAM and 10 MB of hard disk space. You also need a mouse and a monitor with at least 800x600 resolution and 256 colors. CadStd Pro does not require any special graphics card or hardware acceleration.

    -

    To access the tutorials and user's guide for CadStd Pro, click the "Help" menu in the program and select "Tutorial" or "User's Guide". The tutorial walks you through the basic features and commands of CadStd Pro with step-by-step instructions and examples, while the user's guide provides more detailed reference information on using CadStd Pro effectively. You can also access the online versions of the tutorial and user's guide from the official website.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Csi Safe 2014 VERIFIED Crack Download.md b/spaces/usbethFlerru/sovits-modelsV2/example/Csi Safe 2014 VERIFIED Crack Download.md deleted file mode 100644 index ac98c8783dcb70eed4a5efdec417974015620228..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Csi Safe 2014 VERIFIED Crack Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    csi safe 2014 crack download


    Download File ✯✯✯ https://urlcod.com/2uyU6q



    -
    -October 30, 2016 — CSI SAFE 2014 14.2 Free Download... Description: SAFE is the ultimate tool for designing concrete floor and foundation systems. From framing ... to floors, sill, and all kinds of sidewalks, there is no better tool for filling all variables. A tool that has been designed by a team of engineers and developed by the leading company for Floors & Systems floors industry. The SAFE tool has been designed for the professional worker with an understanding of the framework of the FLOORS CLOUD SYSTEMS industry. The tool is a highly visual, interactive, integrated, and completely customizable application for filling all variables. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/vaibhavarduino/anime-plus/op/__init__.py b/spaces/vaibhavarduino/anime-plus/op/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/verkaDerkaDerk/face-mesh-workflow/examples/converted/README.md b/spaces/verkaDerkaDerk/face-mesh-workflow/examples/converted/README.md deleted file mode 100644 index 277309caf5cc1b3b7c8ad13f65530d3c5a644576..0000000000000000000000000000000000000000 --- a/spaces/verkaDerkaDerk/face-mesh-workflow/examples/converted/README.md +++ /dev/null @@ -1,5 +0,0 @@ - -1. downloaded all the obj files -2. for i in in-*obj ; do o=$( echo ${i} | cut -f2- -d- ) ; ../../meshin-around.sh ${i} ${o} ; done -3. for i in ../*png ; do o=$(basename ${i} | sed 's,-[^.]*\.,.,' ) ; cp -i ${i} ${o} ; done - diff --git a/spaces/vict0rsch/climateGAN/climategan/optim.py b/spaces/vict0rsch/climateGAN/climategan/optim.py deleted file mode 100644 index 3e6ffea333aedcb4b06ed5fcf7306affc453bee1..0000000000000000000000000000000000000000 --- a/spaces/vict0rsch/climateGAN/climategan/optim.py +++ /dev/null @@ -1,291 +0,0 @@ -"""Define ExtraAdam and schedulers -""" -import math - -import torch -from torch.optim import Adam, Optimizer, RMSprop, lr_scheduler -from torch_optimizer import NovoGrad, RAdam - - -def get_scheduler(optimizer, hyperparameters, iterations=-1): - """Get an optimizer's learning rate scheduler based on opts - - Args: - optimizer (torch.Optimizer): optimizer for which to schedule the learning rate - hyperparameters (addict.Dict): configuration options - iterations (int, optional): The index of last epoch. Defaults to -1. - When last_epoch=-1, sets initial lr as lr. - - Returns: - [type]: [description] - """ - - policy = hyperparameters.get("lr_policy") - lr_step_size = hyperparameters.get("lr_step_size") - lr_gamma = hyperparameters.get("lr_gamma") - milestones = hyperparameters.get("lr_milestones") - - if policy is None or policy == "constant": - scheduler = None # constant scheduler - elif policy == "step": - scheduler = lr_scheduler.StepLR( - optimizer, step_size=lr_step_size, gamma=lr_gamma, last_epoch=iterations, - ) - elif policy == "multi_step": - if isinstance(milestones, (list, tuple)): - milestones = milestones - elif isinstance(milestones, int): - assert "lr_step_size" in hyperparameters - if iterations == -1: - last_milestone = 1000 - else: - last_milestone = iterations - milestones = list(range(milestones, last_milestone, lr_step_size)) - scheduler = lr_scheduler.MultiStepLR( - optimizer, milestones=milestones, gamma=lr_gamma, last_epoch=iterations, - ) - else: - return NotImplementedError( - "learning rate policy [%s] is not implemented", hyperparameters["lr_policy"] - ) - return scheduler - - -def get_optimizer(net, opt_conf, tasks=None, is_disc=False, iterations=-1): - """Returns a tuple (optimizer, scheduler) according to opt_conf which - should come from the trainer's opts as: trainer.opts..opt - - Args: - net (nn.Module): Network to update - opt_conf (addict.Dict): optimizer and scheduler options - tasks: list of tasks - iterations (int, optional): Last epoch number. Defaults to -1, meaning - start with base lr. 
- - Returns: - Tuple: (torch.Optimizer, torch._LRScheduler) - """ - opt = scheduler = None - lr_names = [] - if tasks is None: - lr_default = opt_conf.lr - params = net.parameters() - lr_names.append("full") - elif isinstance(opt_conf.lr, float): # Use default for all tasks - lr_default = opt_conf.lr - params = net.parameters() - lr_names.append("full") - elif len(opt_conf.lr) == 1: # Use default for all tasks - lr_default = opt_conf.lr.default - params = net.parameters() - lr_names.append("full") - else: - lr_default = opt_conf.lr.default - params = list() - for task in tasks: - lr = opt_conf.lr.get(task, lr_default) - parameters = None - # Parameters for encoder - if not is_disc: - if task == "m": - parameters = net.encoder.parameters() - params.append({"params": parameters, "lr": lr}) - lr_names.append("encoder") - # Parameters for decoders - if task == "p": - if hasattr(net, "painter"): - parameters = net.painter.parameters() - lr_names.append("painter") - else: - parameters = net.decoders[task].parameters() - lr_names.append(f"decoder_{task}") - else: - if task in net: - parameters = net[task].parameters() - lr_names.append(f"disc_{task}") - - if parameters is not None: - params.append({"params": parameters, "lr": lr}) - - if opt_conf.optimizer.lower() == "extraadam": - opt = ExtraAdam(params, lr=lr_default, betas=(opt_conf.beta1, 0.999)) - elif opt_conf.optimizer.lower() == "novograd": - opt = NovoGrad( - params, lr=lr_default, betas=(opt_conf.beta1, 0) - ) # default for beta2 is 0 - elif opt_conf.optimizer.lower() == "radam": - opt = RAdam(params, lr=lr_default, betas=(opt_conf.beta1, 0.999)) - elif opt_conf.optimizer.lower() == "rmsprop": - opt = RMSprop(params, lr=lr_default) - else: - opt = Adam(params, lr=lr_default, betas=(opt_conf.beta1, 0.999)) - scheduler = get_scheduler(opt, opt_conf, iterations) - return opt, scheduler, lr_names - - -""" -Extragradient Optimizer - -Mostly copied from the extragrad paper repo. - -MIT License -Copyright (c) Facebook, Inc. and its affiliates. -written by Hugo Berard (berard.hugo@gmail.com) while at Facebook. -""" - - -class Extragradient(Optimizer): - """Base class for optimizers with extrapolation step. - Arguments: - params (iterable): an iterable of :class:`torch.Tensor` s or - :class:`dict` s. Specifies what Tensors should be optimized. - defaults: (dict): a dict containing default values of optimization - options (used when a parameter group doesn't specify them). - """ - - def __init__(self, params, defaults): - super(Extragradient, self).__init__(params, defaults) - self.params_copy = [] - - def update(self, p, group): - raise NotImplementedError - - def extrapolation(self): - """Performs the extrapolation step and save a copy of the current - parameters for the update step. - """ - # Check if a copy of the parameters was already made. - is_empty = len(self.params_copy) == 0 - for group in self.param_groups: - for p in group["params"]: - u = self.update(p, group) - if is_empty: - # Save the current parameters for the update step. - # Several extrapolation step can be made before each update but - # only the parametersbefore the first extrapolation step are saved. - self.params_copy.append(p.data.clone()) - if u is None: - continue - # Update the current parameters - p.data.add_(u) - - def step(self, closure=None): - """Performs a single optimization step. - Arguments: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
- """ - if len(self.params_copy) == 0: - raise RuntimeError("Need to call extrapolation before calling step.") - - loss = None - if closure is not None: - loss = closure() - - i = -1 - for group in self.param_groups: - for p in group["params"]: - i += 1 - u = self.update(p, group) - if u is None: - continue - # Update the parameters saved during the extrapolation step - p.data = self.params_copy[i].add_(u) - - # Free the old parameters - self.params_copy = [] - return loss - - -class ExtraAdam(Extragradient): - """Implements the Adam algorithm with extrapolation step. - Arguments: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 1e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square (default: (0.9, 0.999)) - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - amsgrad (boolean, optional): whether to use the AMSGrad variant of this - algorithm from the paper `On the Convergence of Adam and Beyond`_ - """ - - def __init__( - self, - params, - lr=1e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - amsgrad=False, - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - defaults = dict( - lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, amsgrad=amsgrad - ) - super(ExtraAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(ExtraAdam, self).__setstate__(state) - for group in self.param_groups: - group.setdefault("amsgrad", False) - - def update(self, p, group): - if p.grad is None: - return None - grad = p.grad.data - if grad.is_sparse: - raise RuntimeError( - "Adam does not support sparse gradients," - + " please consider SparseAdam instead" - ) - amsgrad = group["amsgrad"] - - state = self.state[p] - - # State initialization - if len(state) == 0: - state["step"] = 0 - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p.data) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like(p.data) - if amsgrad: - # Maintains max of all exp. moving avg. of sq. grad. values - state["max_exp_avg_sq"] = torch.zeros_like(p.data) - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - if amsgrad: - max_exp_avg_sq = state["max_exp_avg_sq"] - beta1, beta2 = group["betas"] - - state["step"] += 1 - - if group["weight_decay"] != 0: - grad = grad.add(group["weight_decay"], p.data) - - # Decay the first and second moment running average coefficient - exp_avg.mul_(beta1).add_(1 - beta1, grad) - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - if amsgrad: - # Maintains the maximum of all 2nd moment running avg. till now - torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq) # type: ignore - # Use the max. for normalizing running avg. 
of gradient - denom = max_exp_avg_sq.sqrt().add_(group["eps"]) # type: ignore - else: - denom = exp_avg_sq.sqrt().add_(group["eps"]) - - bias_correction1 = 1 - beta1 ** state["step"] - bias_correction2 = 1 - beta2 ** state["step"] - step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1 - - return -step_size * exp_avg / denom diff --git a/spaces/w1zrd/MusicGen/audiocraft/quantization/core_vq.py b/spaces/w1zrd/MusicGen/audiocraft/quantization/core_vq.py deleted file mode 100644 index e1896bb1788a945a1f7be6369abb255ecf72c7a0..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/audiocraft/quantization/core_vq.py +++ /dev/null @@ -1,400 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -from einops import rearrange, repeat -import flashy -import torch -from torch import nn, einsum -import torch.nn.functional as F - - -def exists(val: tp.Optional[tp.Any]) -> bool: - return val is not None - - -def default(val: tp.Any, d: tp.Any) -> tp.Any: - return val if exists(val) else d - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -def ema_inplace(moving_avg, new, decay: float): - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - - -def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5): - return (x + epsilon) / (x.sum() + n_categories * epsilon) - - -def uniform_init(*shape: int): - t = torch.empty(shape) - nn.init.kaiming_uniform_(t) - return t - - -def sample_vectors(samples, num: int): - num_samples, device = samples.shape[0], samples.device - - if num_samples >= num: - indices = torch.randperm(num_samples, device=device)[:num] - else: - indices = torch.randint(0, num_samples, (num,), device=device) - - return samples[indices] - - -def kmeans(samples, num_clusters: int, num_iters: int = 10): - dim, dtype = samples.shape[-1], samples.dtype - - means = sample_vectors(samples, num_clusters) - - for _ in range(num_iters): - diffs = rearrange(samples, "n d -> n () d") - rearrange( - means, "c d -> () c d" - ) - dists = -(diffs ** 2).sum(dim=-1) - - buckets = dists.max(dim=-1).indices - bins = torch.bincount(buckets, minlength=num_clusters) - zero_mask = bins == 0 - bins_min_clamped = bins.masked_fill(zero_mask, 1) - - new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype) - new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples) - new_means = new_means / bins_min_clamped[..., None] - - means = torch.where(zero_mask[..., None], means, new_means) - - return means, bins - - -def orthgonal_loss_fn(t): - # eq (2) from https://arxiv.org/abs/2112.00384 - n = t.shape[0] - normed_codes = l2norm(t) - identity = torch.eye(n, device=t.device) - cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes) - return ((cosine_sim - identity) ** 2).sum() / (n ** 2) - - -class EuclideanCodebook(nn.Module): - """Codebook with Euclidean distance. - - Args: - dim (int): Dimension. - codebook_size (int): Codebook size. - kmeans_init (bool): Whether to use k-means to initialize the codebooks. - If set to true, run the k-means algorithm on the first training batch and use - the learned centroids as initialization. - kmeans_iters (int): Number of iterations used for k-means algorithm at initialization. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. 
- threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - """ - def __init__( - self, - dim: int, - codebook_size: int, - kmeans_init: int = False, - kmeans_iters: int = 10, - decay: float = 0.8, - epsilon: float = 1e-5, - threshold_ema_dead_code: int = 2, - ): - super().__init__() - self.decay = decay - init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros - embed = init_fn(codebook_size, dim) - - self.codebook_size = codebook_size - - self.kmeans_iters = kmeans_iters - self.epsilon = epsilon - self.threshold_ema_dead_code = threshold_ema_dead_code - - self.register_buffer("inited", torch.Tensor([not kmeans_init])) - self.register_buffer("cluster_size", torch.zeros(codebook_size)) - self.register_buffer("embed", embed) - self.register_buffer("embed_avg", embed.clone()) - - @torch.jit.ignore - def init_embed_(self, data): - if self.inited: - return - - embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters) - self.embed.data.copy_(embed) - self.embed_avg.data.copy_(embed.clone()) - self.cluster_size.data.copy_(cluster_size) - self.inited.data.copy_(torch.Tensor([True])) - # Make sure all buffers across workers are in sync after initialization - flashy.distrib.broadcast_tensors(self.buffers()) - - def replace_(self, samples, mask): - modified_codebook = torch.where( - mask[..., None], sample_vectors(samples, self.codebook_size), self.embed - ) - self.embed.data.copy_(modified_codebook) - - def expire_codes_(self, batch_samples): - if self.threshold_ema_dead_code == 0: - return - - expired_codes = self.cluster_size < self.threshold_ema_dead_code - if not torch.any(expired_codes): - return - - batch_samples = rearrange(batch_samples, "... d -> (...) d") - self.replace_(batch_samples, mask=expired_codes) - flashy.distrib.broadcast_tensors(self.buffers()) - - def preprocess(self, x): - x = rearrange(x, "... d -> (...) d") - return x - - def quantize(self, x): - embed = self.embed.t() - dist = -( - x.pow(2).sum(1, keepdim=True) - - 2 * x @ embed - + embed.pow(2).sum(0, keepdim=True) - ) - embed_ind = dist.max(dim=-1).indices - return embed_ind - - def postprocess_emb(self, embed_ind, shape): - return embed_ind.view(*shape[:-1]) - - def dequantize(self, embed_ind): - quantize = F.embedding(embed_ind, self.embed) - return quantize - - def encode(self, x): - shape = x.shape - # pre-process - x = self.preprocess(x) - # quantize - embed_ind = self.quantize(x) - # post-process - embed_ind = self.postprocess_emb(embed_ind, shape) - return embed_ind - - def decode(self, embed_ind): - quantize = self.dequantize(embed_ind) - return quantize - - def forward(self, x): - shape, dtype = x.shape, x.dtype - x = self.preprocess(x) - self.init_embed_(x) - - embed_ind = self.quantize(x) - embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype) - embed_ind = self.postprocess_emb(embed_ind, shape) - quantize = self.dequantize(embed_ind) - - if self.training: - # We do the expiry of code at that point as buffers are in sync - # and all the workers will take the same decision. 
- self.expire_codes_(x) - ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay) - embed_sum = x.t() @ embed_onehot - ema_inplace(self.embed_avg, embed_sum.t(), self.decay) - cluster_size = ( - laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon) - * self.cluster_size.sum() - ) - embed_normalized = self.embed_avg / cluster_size.unsqueeze(1) - self.embed.data.copy_(embed_normalized) - - return quantize, embed_ind - - -class VectorQuantization(nn.Module): - """Vector quantization implementation. - Currently supports only euclidean distance. - - Args: - dim (int): Dimension - codebook_size (int): Codebook size - codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): - channels_last (bool): Channels are the last dimension in the input tensors. - commitment_weight (float): Weight for commitment loss. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider - for orthogonal regulariation. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - """ - def __init__( - self, - dim: int, - codebook_size: int, - codebook_dim: tp.Optional[int] = None, - decay: float = 0.8, - epsilon: float = 1e-5, - kmeans_init: bool = False, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - channels_last: bool = False, - commitment_weight: float = 1., - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - _codebook_dim: int = default(codebook_dim, dim) - - requires_projection = _codebook_dim != dim - self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity()) - self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity()) - - self.epsilon = epsilon - self.commitment_weight = commitment_weight - - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - - self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size, - kmeans_init=kmeans_init, kmeans_iters=kmeans_iters, - decay=decay, epsilon=epsilon, - threshold_ema_dead_code=threshold_ema_dead_code) - self.codebook_size = codebook_size - - self.channels_last = channels_last - - @property - def codebook(self): - return self._codebook.embed - - @property - def inited(self): - return self._codebook.inited - - def _preprocess(self, x): - if not self.channels_last: - x = rearrange(x, "b d n -> b n d") - return x - - def _postprocess(self, quantize): - if not self.channels_last: - quantize = rearrange(quantize, "b n d -> b d n") - return quantize - - def encode(self, x): - x = self._preprocess(x) - x = self.project_in(x) - embed_in = self._codebook.encode(x) - return embed_in - - def decode(self, embed_ind): - 
quantize = self._codebook.decode(embed_ind) - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - return quantize - - def forward(self, x): - device = x.device - x = self._preprocess(x) - - x = self.project_in(x) - quantize, embed_ind = self._codebook(x) - - if self.training: - quantize = x + (quantize - x).detach() - - loss = torch.tensor([0.0], device=device, requires_grad=self.training) - - if self.training: - if self.commitment_weight > 0: - commit_loss = F.mse_loss(quantize.detach(), x) - loss = loss + commit_loss * self.commitment_weight - - if self.orthogonal_reg_weight > 0: - codebook = self.codebook - - if self.orthogonal_reg_active_codes_only: - # only calculate orthogonal loss for the activated codes for this batch - unique_code_ids = torch.unique(embed_ind) - codebook = codebook[unique_code_ids] - - num_codes = codebook.shape[0] - if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes: - rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes] - codebook = codebook[rand_ids] - - orthogonal_reg_loss = orthgonal_loss_fn(codebook) - loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight - - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - - return quantize, embed_ind, loss - - -class ResidualVectorQuantization(nn.Module): - """Residual vector quantization implementation. - - Follows Algorithm 1. in https://arxiv.org/pdf/2107.03312.pdf - """ - def __init__(self, *, num_quantizers, **kwargs): - super().__init__() - self.layers = nn.ModuleList( - [VectorQuantization(**kwargs) for _ in range(num_quantizers)] - ) - - def forward(self, x, n_q: tp.Optional[int] = None): - quantized_out = 0.0 - residual = x - - all_losses = [] - all_indices = [] - - n_q = n_q or len(self.layers) - - for i, layer in enumerate(self.layers[:n_q]): - quantized, indices, loss = layer(residual) - residual = residual - quantized - quantized_out = quantized_out + quantized - all_indices.append(indices) - all_losses.append(loss) - - out_losses, out_indices = map(torch.stack, (all_losses, all_indices)) - return quantized_out, out_indices, out_losses - - def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor: - residual = x - all_indices = [] - n_q = n_q or len(self.layers) - for layer in self.layers[:n_q]: - indices = layer.encode(residual) - quantized = layer.decode(indices) - residual = residual - quantized - all_indices.append(indices) - out_indices = torch.stack(all_indices) - return out_indices - - def decode(self, q_indices: torch.Tensor) -> torch.Tensor: - quantized_out = torch.tensor(0.0, device=q_indices.device) - for i, indices in enumerate(q_indices): - layer = self.layers[i] - quantized = layer.decode(indices) - quantized_out = quantized_out + quantized - return quantized_out diff --git a/spaces/w1zrd/MusicGen/tests/data/__init__.py b/spaces/w1zrd/MusicGen/tests/data/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/tests/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
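For readers skimming the core_vq.py diff above: the heart of ResidualVectorQuantization.forward is a simple loop in which each quantizer codes whatever residual the previous stages left behind, and the per-stage reconstructions are summed. Below is a minimal standalone sketch of that loop; it is illustrative only, and the nearest_code helper and random codebooks are stand-ins rather than part of the deleted module.

```python
# Illustrative sketch only: standalone residual-quantization loop with
# random (untrained) codebooks standing in for learned EuclideanCodebooks.
import torch

def nearest_code(x, codebook):
    # x: (N, D), codebook: (K, D) -> nearest code vector for each row of x
    idx = torch.cdist(x, codebook).argmin(dim=-1)    # Euclidean nearest neighbour
    return codebook[idx]

torch.manual_seed(0)
x = torch.randn(16, 8)                               # 16 feature vectors of dim 8
codebooks = [torch.randn(32, 8) for _ in range(4)]   # 4 quantizers, 32 codes each

residual = x
quantized_out = torch.zeros_like(x)
for cb in codebooks:                                 # mirrors ResidualVectorQuantization.forward
    q = nearest_code(residual, cb)
    quantized_out = quantized_out + q                # running reconstruction
    residual = residual - q                          # error handed to the next stage

print(((x - quantized_out) ** 2).mean())             # reconstruction error after all stages
```

In the trained model the codebooks are learned (via the EMA updates in EuclideanCodebook), so each additional stage typically shrinks the residual; the random codebooks here only illustrate the bookkeeping, which is also why n_q can be chosen freely at inference time in the encode and decode methods above.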
diff --git a/spaces/w1zrd/MusicGen/tests/modules/test_seanet.py b/spaces/w1zrd/MusicGen/tests/modules/test_seanet.py deleted file mode 100644 index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/tests/modules/test_seanet.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock -from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d - - -class TestSEANetModel: - - def test_base(self): - encoder = SEANetEncoder() - decoder = SEANetDecoder() - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_causal(self): - encoder = SEANetEncoder(causal=True) - decoder = SEANetDecoder(causal=True) - x = torch.randn(1, 1, 24000) - - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_conv_skip_connection(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False) - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_seanet_encoder_decoder_final_act(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False, final_activation='Tanh') - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in encoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if n_blocks <= n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - # here we add + 1 to n_blocks as we increment n_blocks just after the block - assert resnet_layer.conv.norm_type == 'none' if (n_blocks + 1) <= n_disable_blocks else norm - - def test_encoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_encoder_blocks_norm(encoder, disable_blocks, norm) - - def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in decoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, StreamableConvTranspose1d): - n_blocks += 1 - assert layer.convtr.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - assert resnet_layer.conv.norm_type == 'none' \ - if 
(decoder.n_blocks - n_blocks) < n_disable_blocks else norm - - def test_decoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_decoder_blocks_norm(decoder, disable_blocks, norm) - - def test_disable_norm_raises_exception(self): - # Invalid disable_norm_outer_blocks values raise exceptions - with pytest.raises(AssertionError): - SEANetEncoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) - - with pytest.raises(AssertionError): - SEANetDecoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) diff --git a/spaces/wallezen/so-vits-svc/app.py b/spaces/wallezen/so-vits-svc/app.py deleted file mode 100644 index 340b96f94d54a2169f784e81e32512ddce470d88..0000000000000000000000000000000000000000 --- a/spaces/wallezen/so-vits-svc/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import io -import gradio as gr -import librosa -import numpy as np -import utils -from inference.infer_tool import Svc -import logging -import soundfile -import asyncio -import argparse -import edge_tts -import gradio.processing_utils as gr_processing_utils -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -audio_postprocess_ori = gr.Audio.postprocess - -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) - - -gr.Audio.postprocess = audio_postprocess -def create_vc_fn(model, sid): - def vc_fn(input_audio, vc_transform, auto_f0, tts_text, tts_voice, tts_mode): - if tts_mode: - if len(tts_text) > 300 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - raw_path = io.BytesIO() - soundfile.write(raw_path, audio, 16000, format="wav") - raw_path.seek(0) - out_audio, out_sr = model.infer(sid, vc_transform, raw_path, - auto_predict_f0=auto_f0, - ) - return "Success", (44100, out_audio.cpu().numpy()) - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 30 and limitation: - return "Please upload an audio file that is less than 30 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - raw_path = io.BytesIO() - soundfile.write(raw_path, audio, 16000, format="wav") - raw_path.seek(0) - out_audio, out_sr = model.infer(sid, vc_transform, raw_path, - auto_predict_f0=auto_f0, - ) - return "Success", (44100, out_audio.cpu().numpy()) - return vc_fn - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True), gr.Checkbox.update(value=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False), gr.Checkbox.update(value=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - hubert_model = utils.get_hubert_model().to(args.device) - models = [] - voices = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - for r in tts_voice_list: - voices.append(f"{r['ShortName']}-{r['Gender']}") - for f in os.listdir("models"): - name = f - model = Svc(fr"models/{f}/{f}.pth", f"models/{f}/config.json", device=args.device) - cover = f"models/{f}/cover.png" if os.path.exists(f"models/{f}/cover.png") else None - models.append((name, cover, create_vc_fn(model, name))) - with gr.Blocks() as app: - gr.Markdown( - "#
    Sovits Anime\n" - "##
    The input audio should be clean and pure voice without background music.\n" - "
    Original gradio code from: https://huggingface.co/spaces/zomehwh/sovits-models\n\n" - "#![visitor badge](https://visitor-badge.glitch.me/badge?page_id=pitawat02.sovits-anime)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1FvgQCzmWGBQ26V33Z-rNgJJKzLnYe8mH?usp=sharing)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/pitawat02/sovits-anime?duplicate=true)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/svc-develop-team/so-vits-svc)" - - ) - with gr.Tabs(): - for (name, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - with gr.Column(): - vc_input = gr.Audio(label="Input audio"+' (less than 30 seconds)' if limitation else '') - vc_transform = gr.Number(label="vc_transform", value=0) - auto_f0 = gr.Checkbox(label="auto_f0", value=False) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False, label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(choices=voices, visible=False) - vc_submit = gr.Button("Generate", variant="primary") - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - gr.Markdown( - f"##
    {name}\n" - '
    ' - f'' if cover else "" - '
    ' - ) - vc_submit.click(vc_fn, [vc_input, vc_transform, auto_f0, tts_text, tts_voice, tts_mode], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice, auto_f0]) - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/weizmannscience/tokenflow/utils.py b/spaces/weizmannscience/tokenflow/utils.py deleted file mode 100644 index 63bee1ca873801770c123d4cf5588b4cdad3aee6..0000000000000000000000000000000000000000 --- a/spaces/weizmannscience/tokenflow/utils.py +++ /dev/null @@ -1,121 +0,0 @@ -from pathlib import Path -from PIL import Image -import torch -import yaml -import math - -import torchvision.transforms as T -from torchvision.io import read_video,write_video -import os -import random -import numpy as np -from torchvision.io import write_video -# from kornia.filters import joint_bilateral_blur -from kornia.geometry.transform import remap -from kornia.utils.grid import create_meshgrid -import cv2 - -def save_video_frames(video_path, img_size=(512,512)): - video, _, _ = read_video(video_path, output_format="TCHW") - # rotate video -90 degree if video is .mov format. this is a weird bug in torchvision - if video_path.endswith('.mov'): - video = T.functional.rotate(video, -90) - video_name = Path(video_path).stem - os.makedirs(f'data/{video_name}', exist_ok=True) - for i in range(len(video)): - ind = str(i).zfill(5) - image = T.ToPILImage()(video[i]) - image_resized = image.resize((img_size), resample=Image.Resampling.LANCZOS) - image_resized.save(f'data/{video_name}/{ind}.png') - -def video_to_frames(video_path, img_size=(512,512)): - video, _, _ = read_video(video_path, output_format="TCHW") - # rotate video -90 degree if video is .mov format. this is a weird bug in torchvision - if video_path.endswith('.mov'): - video = T.functional.rotate(video, -90) - video_name = Path(video_path).stem - # os.makedirs(f'data/{video_name}', exist_ok=True) - frames = [] - for i in range(len(video)): - ind = str(i).zfill(5) - image = T.ToPILImage()(video[i]) - image_resized = image.resize((img_size), resample=Image.Resampling.LANCZOS) - # image_resized.save(f'data/{video_name}/{ind}.png') - frames.append(image_resized) - return frames - -def add_dict_to_yaml_file(file_path, key, value): - data = {} - - # If the file already exists, load its contents into the data dictionary - if os.path.exists(file_path): - with open(file_path, 'r') as file: - data = yaml.safe_load(file) - - # Add or update the key-value pair - data[key] = value - - # Save the data back to the YAML file - with open(file_path, 'w') as file: - yaml.dump(data, file) - -def isinstance_str(x: object, cls_name: str): - """ - Checks whether x has any class *named* cls_name in its ancestry. - Doesn't require access to the class's implementation. - - Useful for patching! 
- """ - - for _cls in x.__class__.__mro__: - if _cls.__name__ == cls_name: - return True - - return False - - -def batch_cosine_sim(x, y): - if type(x) is list: - x = torch.cat(x, dim=0) - if type(y) is list: - y = torch.cat(y, dim=0) - x = x / x.norm(dim=-1, keepdim=True) - y = y / y.norm(dim=-1, keepdim=True) - similarity = x @ y.T - return similarity - - -def load_imgs(data_path, n_frames, device='cuda', pil=False): - imgs = [] - pils = [] - for i in range(n_frames): - img_path = os.path.join(data_path, "%05d.jpg" % i) - if not os.path.exists(img_path): - img_path = os.path.join(data_path, "%05d.png" % i) - img_pil = Image.open(img_path) - pils.append(img_pil) - img = T.ToTensor()(img_pil).unsqueeze(0) - imgs.append(img) - if pil: - return torch.cat(imgs).to(device), pils - return torch.cat(imgs).to(device) - - -def save_video(raw_frames, save_path, fps=10): - video_codec = "libx264" - video_options = { - "crf": "18", # Constant Rate Factor (lower value = higher quality, 18 is a good balance) - "preset": "slow", # Encoding preset (e.g., ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow) - } - - frames = (raw_frames * 255).to(torch.uint8).cpu().permute(0, 2, 3, 1) - write_video(save_path, frames, fps=fps, video_codec=video_codec, options=video_options) - - -def seed_everything(seed): - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - random.seed(seed) - np.random.seed(seed) - - diff --git a/spaces/whispy/Italian-ASR/README.md b/spaces/whispy/Italian-ASR/README.md deleted file mode 100644 index 21f361de8c94169e3f2660944cfa0c974ede18e0..0000000000000000000000000000000000000000 --- a/spaces/whispy/Italian-ASR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Italian ASR -emoji: 📊 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wliu88/StructDiffusionDemo/app.py b/spaces/wliu88/StructDiffusionDemo/app.py deleted file mode 100644 index fd3ae323c22559bf75ff91573683cbe5c402b35c..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/app.py +++ /dev/null @@ -1,363 +0,0 @@ -import os -import argparse -import torch -import trimesh -import numpy as np -import pytorch_lightning as pl -import gradio as gr -from omegaconf import OmegaConf - -import sys -sys.path.append('./src') - -from StructDiffusion.data.semantic_arrangement_language_demo import SemanticArrangementDataset -from StructDiffusion.language.tokenizer import Tokenizer -from StructDiffusion.models.pl_models import ConditionalPoseDiffusionModel, PairwiseCollisionModel -from StructDiffusion.diffusion.sampler import Sampler, SamplerV2 -from StructDiffusion.diffusion.pose_conversion import get_struct_objs_poses -from StructDiffusion.utils.files import get_checkpoint_path_from_dir -from StructDiffusion.utils.rearrangement import show_pcs_with_trimesh, get_trimesh_scene_with_table -import StructDiffusion.utils.transformations as tra -from StructDiffusion.language.sentence_encoder import SentenceBertEncoder -import StructDiffusion.utils.transformations as tra - - -def move_pc_and_create_scene_simple(obj_xyzs, struct_pose, pc_poses_in_struct): - - device = obj_xyzs.device - - # obj_xyzs: B, N, P, 3 or 6 - # struct_pose: B, 1, 4, 4 - # pc_poses_in_struct: B, N, 4, 4 - - B, N, _, _ = pc_poses_in_struct.shape - _, _, P, _ = obj_xyzs.shape - - current_pc_poses = torch.eye(4).repeat(B, N, 1, 
1).to(device) # B, N, 4, 4 - # print(torch.mean(obj_xyzs, dim=2).shape) - current_pc_poses[:, :, :3, 3] = torch.mean(obj_xyzs[:, :, :, :3], dim=2) # B, N, 4, 4 - current_pc_poses = current_pc_poses.reshape(B * N, 4, 4) # B x N, 4, 4 - - struct_pose = struct_pose.repeat(1, N, 1, 1) # B, N, 4, 4 - struct_pose = struct_pose.reshape(B * N, 4, 4) # B x 1, 4, 4 - pc_poses_in_struct = pc_poses_in_struct.reshape(B * N, 4, 4) # B x N, 4, 4 - - goal_pc_pose = struct_pose @ pc_poses_in_struct # B x N, 4, 4 - # print("goal pc poses") - # print(goal_pc_pose) - goal_pc_transform = goal_pc_pose @ torch.inverse(current_pc_poses) # B x N, 4, 4 - - # # important: pytorch3d uses row-major ordering, need to transpose each transformation matrix - # transpose = tra3d.Transform3d(matrix=goal_pc_transform.transpose(1, 2)) - # new_obj_xyzs = obj_xyzs.reshape(B * N, P, -1) # B x N, P, 3 - # new_obj_xyzs[:, :, :3] = transpose.transform_points(new_obj_xyzs[:, :, :3]) - - # a verision that does not rely on pytorch3d - new_obj_xyzs = obj_xyzs.reshape(B * N, P, -1)[:, :, :3] # B x N, P, 3 - new_obj_xyzs = torch.concat([new_obj_xyzs, torch.ones(B * N, P, 1).to(device)], dim=-1) # B x N, P, 4 - new_obj_xyzs = torch.einsum('bij,bkj->bki', goal_pc_transform, new_obj_xyzs)[:, :, :3] # # B x N, P, 3 - - # put it back to B, N, P, 3 - obj_xyzs[:, :, :, :3] = new_obj_xyzs.reshape(B, N, P, -1) - - return obj_xyzs - - -class Infer_Wrapper: - - def __init__(self, args, cfg): - - self.num_pts = cfg.DATASET.num_pts - - # load - pl.seed_everything(args.eval_random_seed) - self.device = (torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")) - - diffusion_checkpoint_path = get_checkpoint_path_from_dir(os.path.join(cfg.WANDB.save_dir, cfg.WANDB.project, args.diffusion_checkpoint_id, "checkpoints")) - collision_checkpoint_path = get_checkpoint_path_from_dir(os.path.join(cfg.WANDB.save_dir, cfg.WANDB.project, args.collision_checkpoint_id, "checkpoints")) - - self.tokenizer = Tokenizer(cfg.DATASET.vocab_dir) - # override ignore_rgb for visualization - cfg.DATASET.ignore_rgb = False - self.dataset = SemanticArrangementDataset(tokenizer=self.tokenizer, **cfg.DATASET) - - self.sampler = SamplerV2(ConditionalPoseDiffusionModel, diffusion_checkpoint_path, - PairwiseCollisionModel, collision_checkpoint_path, self.device) - - self.sentence_encoder = SentenceBertEncoder() - - self.session_id_to_obj_xyzs = {} - - def visualize_scene(self, di, session_id): - - raw_datum = self.dataset.get_raw_data(di, inference_mode=True, shuffle_object_index=True) - language_command = raw_datum["template_sentence"] - - obj_xyz = raw_datum["pcs"] - scene = show_pcs_with_trimesh([xyz[:, :3] for xyz in obj_xyz], [xyz[:, 3:] for xyz in obj_xyz], return_scene=True) - - scene.apply_transform(tra.euler_matrix(np.pi, 0, np.pi/2)) - - scene_filename = "./tmp_data/input_scene_{}.glb".format(session_id) - scene.export(scene_filename) - - return language_command, scene_filename - - def build_scene(self, mesh_filename_1, x_1, y_1, z_1, ai_1, aj_1, ak_1, scale_1, - mesh_filename_2, x_2, y_2, z_2, ai_2, aj_2, ak_2, scale_2, - mesh_filename_3, x_3, y_3, z_3, ai_3, aj_3, ak_3, scale_3, - mesh_filename_4, x_4, y_4, z_4, ai_4, aj_4, ak_4, scale_4, - mesh_filename_5, x_5, y_5, z_5, ai_5, aj_5, ak_5, scale_5, session_id): - - object_list = [(mesh_filename_1, x_1, y_1, z_1, ai_1, aj_1, ak_1, scale_1), - (mesh_filename_2, x_2, y_2, z_2, ai_2, aj_2, ak_2, scale_2), - (mesh_filename_3, x_3, y_3, z_3, ai_3, aj_3, ak_3, scale_3), - (mesh_filename_4, x_4, y_4, z_4, ai_4, 
aj_4, ak_4, scale_4), - (mesh_filename_5, x_5, y_5, z_5, ai_5, aj_5, ak_5, scale_5)] - - scene = get_trimesh_scene_with_table() - - obj_xyzs = [] - for mesh_filename, x, y, z, ai, aj, ak, scale in object_list: - if mesh_filename is None: - continue - obj_mesh = trimesh.load(mesh_filename) - obj_mesh.apply_scale(scale) - z_min = obj_mesh.bounds[0, 2] - tform = tra.euler_matrix(ai, aj, ak) - tform[:3, 3] = [x, y, z - z_min] - obj_mesh.apply_transform(tform) - obj_xyz = obj_mesh.sample(self.num_pts) - obj = trimesh.PointCloud(obj_xyz) - scene.add_geometry(obj) - - obj_xyzs.append(obj_xyz) - - self.session_id_to_obj_xyzs[session_id] = obj_xyzs - - # scene.show() - - # obj_file = "/home/weiyu/data_drive/StructDiffusion/housekeep_custom_handpicked_small/visual/book_Eat_to_Live_The_Amazing_NutrientRich_Program_for_Fast_and_Sustained_Weight_Loss_Revised_Edition_Book_L/model.obj" - # obj = trimesh.load(obj_file) - # - # scene = get_trimesh_scene_with_table() - # scene.add_geometry(obj) - # - # scene.show() - - # raw_datum = self.dataset.get_raw_data(di, inference_mode=True, shuffle_object_index=True) - # language_command = raw_datum["template_sentence"] - # - # obj_xyz = raw_datum["pcs"] - # scene = show_pcs_with_trimesh([xyz[:, :3] for xyz in obj_xyz], [xyz[:, 3:] for xyz in obj_xyz], - # return_scene=True) - - scene.apply_transform(tra.euler_matrix(np.pi, 0, np.pi / 2)) - scene_filename = "./tmp_data/input_scene_{}.glb".format(session_id) - scene.export(scene_filename) - - return scene_filename - - # return language_command, scene_filename - - def infer(self, language_command, session_id, progress=gr.Progress()): - - obj_xyzs = self.session_id_to_obj_xyzs[session_id] - - sentence_embedding = self.sentence_encoder.encode([language_command]).flatten() - - raw_datum = self.dataset.build_data_from_xyzs(obj_xyzs, sentence_embedding) - datum = self.dataset.convert_to_tensors(raw_datum, self.tokenizer, use_sentence_embedding=True) - batch = self.dataset.single_datum_to_batch(datum, args.num_samples, self.device, inference_mode=True) - - num_poses = raw_datum["num_goal_poses"] - struct_pose, pc_poses_in_struct = self.sampler.sample(batch, num_poses, args.num_elites, args.discriminator_batch_size) - - new_obj_xyzs = move_pc_and_create_scene_simple(batch["pcs"][:args.num_elites], struct_pose, pc_poses_in_struct) - - # vis - vis_obj_xyzs = new_obj_xyzs[:3] - if torch.is_tensor(vis_obj_xyzs): - if vis_obj_xyzs.is_cuda: - vis_obj_xyzs = vis_obj_xyzs.detach().cpu() - vis_obj_xyzs = vis_obj_xyzs.numpy() - - vis_obj_xyz = vis_obj_xyzs[0] - # scene = show_pcs_with_trimesh([xyz[:, :3] for xyz in vis_obj_xyz], [xyz[:, 3:] for xyz in vis_obj_xyz], return_scene=True) - scene = show_pcs_with_trimesh([xyz[:, :3] for xyz in vis_obj_xyz], obj_rgbs=None, return_scene=True) - # scene.show() - scene.apply_transform(tra.euler_matrix(np.pi, 0, np.pi/2)) - scene_filename = "./tmp_data/output_scene_{}.glb".format(session_id) - scene.export(scene_filename) - - # pc_filename = "/home/weiyu/Research/StructDiffusion/StructDiffusion/interactive_demo/tmp_data/pc.glb" - # scene_filename = "/home/weiyu/Research/StructDiffusion/StructDiffusion/interactive_demo/tmp_data/scene.glb" - # - # vis_obj_xyz = vis_obj_xyz.reshape(-1, 6) - # vis_pc = trimesh.PointCloud(vis_obj_xyz[:, :3], colors=np.concatenate([vis_obj_xyz[:, 3:] * 255, np.ones([vis_obj_xyz.shape[0], 1]) * 255], axis=-1)) - # vis_pc.export(pc_filename) - # - # scene = trimesh.Scene() - # # add the coordinate frame first - # # geom = trimesh.creation.axis(0.01) - # # 
scene.add_geometry(geom) - # table = trimesh.creation.box(extents=[1.0, 1.0, 0.02]) - # table.apply_translation([0.5, 0, -0.01]) - # table.visual.vertex_colors = [150, 111, 87, 125] - # scene.add_geometry(table) - # # bounds = trimesh.creation.box(extents=[4.0, 4.0, 4.0]) - # # bounds = trimesh.creation.icosphere(subdivisions=3, radius=3.1) - # # bounds.apply_translation([0, 0, 0]) - # # bounds.visual.vertex_colors = [30, 30, 30, 30] - # # scene.add_geometry(bounds) - # # RT_4x4 = np.array([[-0.39560353822208355, -0.9183993826406329, 0.006357240869497738, 0.2651463080169481], - # # [-0.797630370081598, 0.3401340617616391, -0.4980909683511864, 0.2225696480721997], - # # [0.45528412367406523, -0.2021172778236285, -0.8671014777611122, 0.9449050652025951], - # # [0.0, 0.0, 0.0, 1.0]]) - # # RT_4x4 = np.linalg.inv(RT_4x4) - # # RT_4x4 = RT_4x4 @ np.diag([1, -1, -1, 1]) - # # scene.camera_transform = RT_4x4 - # - # mesh_list = trimesh.util.concatenate(scene.dump()) - # print(mesh_list) - # trimesh.io.export.export_mesh(mesh_list, scene_filename, file_type='obj') - - return scene_filename - - -args = OmegaConf.create() -args.base_config_file = "./configs/base.yaml" -args.config_file = "./configs/conditional_pose_diffusion_language.yaml" -args.diffusion_checkpoint_id = "ConditionalPoseDiffusionLanguage" -args.collision_checkpoint_id = "CollisionDiscriminator" -args.eval_random_seed = 42 -args.num_samples = 50 -args.num_elites = 3 -args.discriminator_batch_size = 10 - -base_cfg = OmegaConf.load(args.base_config_file) -cfg = OmegaConf.load(args.config_file) -cfg = OmegaConf.merge(base_cfg, cfg) - -infer_wrapper = Infer_Wrapper(args, cfg) - -# # version 1 -# demo = gr.Blocks(theme=gr.themes.Soft()) -# with demo: -# gr.Markdown("

    StructDiffusion Demo

    ") -# # font-size:18px -# gr.Markdown("

    StructDiffusion combines a diffusion model and an object-centric transformer to construct structures given partial-view point clouds and high-level language goals.
    Website | Code

    ") -# -# session_id = gr.State(value=np.random.randint(0, 1000)) -# data_selection = gr.Number(label="Example No.", minimum=0, maximum=len(infer_wrapper.dataset) - 1, precision=0) -# input_scene = gr.Model3D(clear_color=[0, 0, 0, 0], label="Input 3D Scene") -# language_command = gr.Textbox(label="Input Language Command") -# output_scene = gr.Model3D(clear_color=[0, 0, 0, 0], label="Generated 3D Structure") -# -# b1 = gr.Button("Show Input Language and Scene") -# b2 = gr.Button("Generate 3D Structure") -# -# b1.click(infer_wrapper.visualize_scene, inputs=[data_selection, session_id], outputs=[language_command, input_scene]) -# b2.click(infer_wrapper.infer, inputs=[data_selection, session_id], outputs=output_scene) -# -# demo.queue(concurrency_count=10) -# demo.launch() - -# version 1 -# demo = gr.Blocks(theme=gr.themes.Soft()) -demo = gr.Blocks() -with demo: - gr.Markdown("

    StructDiffusion Demo

    ") - # font-size:18px - gr.Markdown("

    StructDiffusion combines a diffusion model and an object-centric transformer to construct structures given partial-view point clouds and high-level language goals.
    Website | Code

    ") - - session_id = gr.State(value=np.random.randint(0, 1000)) - with gr.Tab("Object 1"): - with gr.Column(scale=1, min_width=600): - mesh_filename_1 = gr.Model3D(clear_color=[0, 0, 0, 0], label="Load 3D Object") - with gr.Row(): - x_1 = gr.Slider(0, 1, label="x") - y_1 = gr.Slider(-0.5, 0.5, label="y") - z_1 = gr.Slider(0, 0.5, label="z") - with gr.Row(): - ai_1 = gr.Slider(0, np.pi * 2, label="roll") - aj_1 = gr.Slider(0, np.pi * 2, label="pitch") - ak_1 = gr.Slider(0, np.pi * 2, label="yaw") - scale_1 = gr.Slider(0, 1) - with gr.Tab("Object 2"): - with gr.Column(scale=1, min_width=600): - mesh_filename_2 = gr.Model3D(clear_color=[0, 0, 0, 0], label="Load 3D Object") - with gr.Row(): - x_2 = gr.Slider(0, 1, label="x") - y_2 = gr.Slider(-0.5, 0.5, label="y") - z_2 = gr.Slider(0, 0.5, label="z") - with gr.Row(): - ai_2 = gr.Slider(0, np.pi * 2, label="roll") - aj_2 = gr.Slider(0, np.pi * 2, label="pitch") - ak_2 = gr.Slider(0, np.pi * 2, label="yaw") - scale_2 = gr.Slider(0, 1) - with gr.Tab("Object 3"): - with gr.Column(scale=1, min_width=600): - mesh_filename_3 = gr.Model3D(clear_color=[0, 0, 0, 0], label="Load 3D Object") - with gr.Row(): - x_3 = gr.Slider(0, 1, label="x") - y_3 = gr.Slider(-0.5, 0.5, label="y") - z_3 = gr.Slider(0, 0.5, label="z") - with gr.Row(): - ai_3 = gr.Slider(0, np.pi * 2, label="roll") - aj_3 = gr.Slider(0, np.pi * 2, label="pitch") - ak_3 = gr.Slider(0, np.pi * 2, label="yaw") - scale_3 = gr.Slider(0, 1) - with gr.Tab("Object 4"): - with gr.Column(scale=1, min_width=600): - mesh_filename_4 = gr.Model3D(clear_color=[0, 0, 0, 0], label="Load 3D Object") - with gr.Row(): - x_4 = gr.Slider(0, 1, label="x") - y_4 = gr.Slider(-0.5, 0.5, label="y") - z_4 = gr.Slider(0, 0.5, label="z") - with gr.Row(): - ai_4 = gr.Slider(0, np.pi * 2, label="roll") - aj_4 = gr.Slider(0, np.pi * 2, label="pitch") - ak_4 = gr.Slider(0, np.pi * 2, label="yaw") - scale_4 = gr.Slider(0, 1) - with gr.Tab("Object 5"): - with gr.Column(scale=1, min_width=600): - mesh_filename_5 = gr.Model3D(clear_color=[0, 0, 0, 0], label="Load 3D Object") - with gr.Row(): - x_5 = gr.Slider(0, 1, label="x") - y_5 = gr.Slider(-0.5, 0.5, label="y") - z_5 = gr.Slider(0, 0.5, label="z") - with gr.Row(): - ai_5 = gr.Slider(0, np.pi * 2, label="roll") - aj_5 = gr.Slider(0, np.pi * 2, label="pitch") - ak_5 = gr.Slider(0, np.pi * 2, label="yaw") - scale_5 = gr.Slider(0, 1) - - b1 = gr.Button("Build Initial Scene") - - initial_scene = gr.Model3D(clear_color=[0, 0, 0, 0], label="Initial 3D Scene") - language_command = gr.Textbox(label="Input Language Command") - - b2 = gr.Button("Generate 3D Structure") - - output_scene = gr.Model3D(clear_color=[0, 0, 0, 0], label="Generated 3D Structure") - - # data_selection = gr.Number(label="Example No.", minimum=0, maximum=len(infer_wrapper.dataset) - 1, precision=0) - # input_scene = gr.Model3D(clear_color=[0, 0, 0, 0], label="Input 3D Scene") - # language_command = gr.Textbox(label="Input Language Command") - # output_scene = gr.Model3D(clear_color=[0, 0, 0, 0], label="Generated 3D Structure") - # - # b1 = gr.Button("Show Input Language and Scene") - # b2 = gr.Button("Generate 3D Structure") - - b1.click(infer_wrapper.build_scene, inputs=[mesh_filename_1, x_1, y_1, z_1, ai_1, aj_1, ak_1, scale_1, - mesh_filename_2, x_2, y_2, z_2, ai_2, aj_2, ak_2, scale_2, - mesh_filename_3, x_3, y_3, z_3, ai_3, aj_3, ak_3, scale_3, - mesh_filename_4, x_4, y_4, z_4, ai_4, aj_4, ak_4, scale_4, - mesh_filename_5, x_5, y_5, z_5, ai_5, aj_5, ak_5, scale_5, - session_id], outputs=[initial_scene]) 
- - b2.click(infer_wrapper.infer, inputs=[language_command, session_id], outputs=output_scene) - -demo.queue(concurrency_count=10) -demo.launch() \ No newline at end of file diff --git a/spaces/wydgg/bingo-wyd-ai/src/pages/api/create.ts b/spaces/wydgg/bingo-wyd-ai/src/pages/api/create.ts deleted file mode 100644 index a7ac579f0acc780741e7083bf681a298585e981c..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/src/pages/api/create.ts +++ /dev/null @@ -1,32 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' - -const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -// const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const headers = createHeaders(req.cookies) - - res.setHeader('set-cookie', headers.cookie) - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - - debug('headers', headers) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - .then((res) => res.text()) - - res.end(response) - } catch (e) { - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/yangheng/Waifu2X-Image-Scale/Waifu2x/Loss.py b/spaces/yangheng/Waifu2X-Image-Scale/Waifu2x/Loss.py deleted file mode 100644 index 267bb66f9e221f50cd06c96e93fd0ee71be925b4..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Waifu2X-Image-Scale/Waifu2x/Loss.py +++ /dev/null @@ -1,44 +0,0 @@ -import torch -from torch import nn -from torch.nn.functional import _pointwise_loss - -rgb_weights = [0.29891 * 3, 0.58661 * 3, 0.11448 * 3] -# RGB have different weights -# https://github.com/nagadomi/waifu2x/blob/master/train.lua#L109 -use_cuda = torch.cuda.is_available() -FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor -LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor -Tensor = FloatTensor - - -class WeightedHuberLoss(nn.SmoothL1Loss): - def __init__(self, weights=rgb_weights): - super(WeightedHuberLoss, self).__init__(size_average=True, reduce=True) - self.weights = torch.FloatTensor(weights).view(3, 1, 1) - - def forward(self, input_data, target): - diff = torch.abs(input_data - target) - z = torch.where(diff < 1, 0.5 * torch.pow(diff, 2), (diff - 0.5)) - out = z * self.weights.expand_as(diff) - return out.mean() - - -def weighted_mse_loss(input, target, weights): - out = (input - target) ** 2 - out = out * weights.expand_as(out) - loss = out.sum(0) # or sum over whatever dimensions - return loss / out.size(0) - - -class WeightedL1Loss(nn.SmoothL1Loss): - def __init__(self, weights=rgb_weights): - super(WeightedHuberLoss, self).__init__(size_average=True, reduce=True) - self.weights = torch.FloatTensor(weights).view(3, 1, 1) - - def forward(self, input_data, target): - return self.l1_loss(input_data, target, size_average=self.size_average, - reduce=self.reduce) - - def l1_loss(self, input_data, target, size_average=True, reduce=True): - return _pointwise_loss(lambda a, b: torch.abs(a - b) * self.weights.expand_as(a), - torch._C._nn.l1_loss, input_data, target, size_average, reduce) diff --git a/spaces/ybelkada/interfacegan_pp/models/__init__.py b/spaces/ybelkada/interfacegan_pp/models/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/TransportPanel/TransportPanel.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/TransportPanel/TransportPanel.tsx deleted file mode 100644 index a54ee3491f846fca6f12d2cee2a24253482b1f03..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/TransportPanel/TransportPanel.tsx +++ /dev/null @@ -1,172 +0,0 @@ -import styled from "@emotion/styled" -import FastForward from "mdi-react/FastForwardIcon" -import FastRewind from "mdi-react/FastRewindIcon" -import FiberManualRecord from "mdi-react/FiberManualRecordIcon" -import Loop from "mdi-react/LoopIcon" -import MetronomeIcon from "mdi-react/MetronomeIcon" -import Stop from "mdi-react/StopIcon" -import { observer } from "mobx-react-lite" -import { FC, useCallback } from "react" -import { CircularProgress } from "../../../components/CircularProgress" -import { Localized } from "../../../components/Localized" -import { Tooltip } from "../../../components/Tooltip" -import { - fastForwardOneBar, - playOrPause, - rewindOneBar, - stop, - toggleEnableLoop, -} from "../../actions" -import { toggleRecording } from "../../actions/recording" -import { useStores } from "../../hooks/useStores" -import { CircleButton } from "./CircleButton" -import { PlayButton } from "./PlayButton" -import { TempoForm } from "./TempoForm" - -const Toolbar = styled.div` - display: flex; - align-items: center; - justify-content: center; - padding: 0.25rem 1rem; - background: ${({ theme }) => theme.backgroundColor}; - border-top: 1px solid ${({ theme }) => theme.dividerColor}; - height: 3rem; - box-sizing: border-box; -` - -const RecordButton = styled(CircleButton)` - &.active { - color: ${({ theme }) => theme.recordColor}; - } -` - -const LoopButton = styled(CircleButton)` - &.active { - color: ${({ theme }) => theme.themeColor}; - } -` - -const MetronomeButton = styled(CircleButton)` - &.active { - color: ${({ theme }) => theme.themeColor}; - } -` - -const TimestampText = styled.div` - font-family: "Roboto Mono", monospace; - font-size: 0.9rem; - color: ${({ theme }) => theme.secondaryTextColor}; -` - -const Timestamp: FC = observer(() => { - const { pianoRollStore } = useStores() - const mbtTime = pianoRollStore.currentMBTTime - return {mbtTime} -}) - -export const ToolbarSeparator = styled.div` - background: ${({ theme }) => theme.dividerColor}; - margin: 0.4em 1em; - width: 1px; - height: 1rem; -` - -export const Right = styled.div` - position: absolute; - right: 1em; -` - -export const TransportPanel: FC = observer(() => { - const rootStore = useStores() - const { player, midiDeviceStore, midiRecorder, synth } = rootStore - - const { isPlaying, isMetronomeEnabled, loop } = player - const isRecording = midiRecorder.isRecording - const canRecording = - Object.values(midiDeviceStore.enabledInputs).filter((e) => e).length > 0 - const isSynthLoading = synth.isLoading - - const onClickPlay = playOrPause(rootStore) - const onClickStop = stop(rootStore) - const onClickBackward = rewindOneBar(rootStore) - const onClickForward = fastForwardOneBar(rootStore) - const onClickRecord = toggleRecording(rootStore) - const onClickEnableLoop = toggleEnableLoop(rootStore) - const onClickMetronone = useCallback(() => { - player.isMetronomeEnabled = !player.isMetronomeEnabled - }, [player]) - - return ( - - rewind} - side="top" - > - - - - - - stop} side="top"> - - - - - - - - 
{canRecording && ( - record} - side="top" - > - - - - - )} - - fast-forward} - side="top" - > - - - - - - {loop && ( - - - - )} - - - - - - - - - - - - - - {isSynthLoading && ( - - - - )} - - ) -}) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/chinese_clip/image_processing_chinese_clip.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/chinese_clip/image_processing_chinese_clip.py deleted file mode 100644 index 5f843ae5d8b033bc8c8128379532e561989bb517..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/chinese_clip/image_processing_chinese_clip.py +++ /dev/null @@ -1,312 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The OFA-Sys Team Authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Image processor class for Chinese-CLIP.""" - -from typing import Dict, List, Optional, Union - -import numpy as np - -from ...image_processing_utils import BaseImageProcessor, BatchFeature, get_size_dict -from ...image_transforms import ( - convert_to_rgb, - get_resize_output_image_size, - resize, - to_channel_dimension_format, -) -from ...image_utils import ( - OPENAI_CLIP_MEAN, - OPENAI_CLIP_STD, - ChannelDimension, - ImageInput, - PILImageResampling, - infer_channel_dimension_format, - is_scaled_image, - make_list_of_images, - to_numpy_array, - valid_images, -) -from ...utils import TensorType, is_vision_available, logging - - -logger = logging.get_logger(__name__) - - -if is_vision_available(): - import PIL - - -class ChineseCLIPImageProcessor(BaseImageProcessor): - r""" - Constructs a Chinese-CLIP image processor. - - Args: - do_resize (`bool`, *optional*, defaults to `True`): - Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by - `do_resize` in the `preprocess` method. - size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`): - Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with - the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess` - method. - resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): - Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. - do_center_crop (`bool`, *optional*, defaults to `True`): - Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the - `preprocess` method. - crop_size (`Dict[str, int]` *optional*, defaults to 224): - Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess` - method. - do_rescale (`bool`, *optional*, defaults to `True`): - Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in - the `preprocess` method. 
- rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): - Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` - method. - do_normalize: - Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. - image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): - Mean to use if normalizing the image. This is a float or list of floats the length of the number of - channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. - image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): - Standard deviation to use if normalizing the image. This is a float or list of floats the length of the - number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. - Can be overridden by the `image_std` parameter in the `preprocess` method. - do_convert_rgb (`bool`, *optional*, defaults to `True`): - Whether to convert the image to RGB. - """ - - model_input_names = ["pixel_values"] - - def __init__( - self, - do_resize: bool = True, - size: Dict[str, int] = None, - resample: PILImageResampling = PILImageResampling.BICUBIC, - do_center_crop: bool = True, - crop_size: Dict[str, int] = None, - do_rescale: bool = True, - rescale_factor: Union[int, float] = 1 / 255, - do_normalize: bool = True, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - do_convert_rgb: bool = True, - **kwargs, - ) -> None: - super().__init__(**kwargs) - size = size if size is not None else {"shortest_edge": 224} - size = get_size_dict(size, default_to_square=False) - crop_size = crop_size if crop_size is not None else {"height": 224, "width": 224} - crop_size = get_size_dict(crop_size) - - self.do_resize = do_resize - self.size = size - self.resample = resample - self.do_center_crop = do_center_crop - self.crop_size = crop_size - self.do_rescale = do_rescale - self.rescale_factor = rescale_factor - self.do_normalize = do_normalize - self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN - self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD - self.do_convert_rgb = do_convert_rgb - - def resize( - self, - image: np.ndarray, - size: Dict[str, int], - resample: PILImageResampling = PILImageResampling.BICUBIC, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> np.ndarray: - """ - Resize an image. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge - resized to keep the input aspect ratio. - - Args: - image (`np.ndarray`): - Image to resize. - size (`Dict[str, int]`): - Size of the output image. - resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): - Resampling filter to use when resiizing the image. - data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format of the image. If not provided, it will be the same as the input image. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format of the input image. If not provided, it will be inferred from the input - image. 
- """ - size = get_size_dict(size, default_to_square=False) - output_size = get_resize_output_image_size( - image, size=(size["height"], size["width"]), default_to_square=False, input_data_format=input_data_format - ) - return resize( - image, - size=output_size, - resample=resample, - data_format=data_format, - input_data_format=input_data_format, - **kwargs, - ) - - def preprocess( - self, - images: ImageInput, - do_resize: bool = None, - size: Dict[str, int] = None, - resample: PILImageResampling = None, - do_center_crop: bool = None, - crop_size: int = None, - do_rescale: bool = None, - rescale_factor: float = None, - do_normalize: bool = None, - image_mean: Optional[Union[float, List[float]]] = None, - image_std: Optional[Union[float, List[float]]] = None, - do_convert_rgb: bool = None, - return_tensors: Optional[Union[str, TensorType]] = None, - data_format: Optional[ChannelDimension] = ChannelDimension.FIRST, - input_data_format: Optional[Union[str, ChannelDimension]] = None, - **kwargs, - ) -> PIL.Image.Image: - """ - Preprocess an image or batch of images. - - Args: - images (`ImageInput`): - Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If - passing in images with pixel values between 0 and 1, set `do_rescale=False`. - do_resize (`bool`, *optional*, defaults to `self.do_resize`): - Whether to resize the image. - size (`Dict[str, int]`, *optional*, defaults to `self.size`): - Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with - the longest edge resized to keep the input aspect ratio. - resample (`int`, *optional*, defaults to `self.resample`): - Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only - has an effect if `do_resize` is set to `True`. - do_center_crop (`bool`, *optional*, defaults to `self.do_center_crop`): - Whether to center crop the image. - crop_size (`Dict[str, int]`, *optional*, defaults to `self.crop_size`): - Size of the center crop. Only has an effect if `do_center_crop` is set to `True`. - do_rescale (`bool`, *optional*, defaults to `self.do_rescale`): - Whether to rescale the image. - rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`): - Rescale factor to rescale the image by if `do_rescale` is set to `True`. - do_normalize (`bool`, *optional*, defaults to `self.do_normalize`): - Whether to normalize the image. - image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`): - Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`. - image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`): - Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to - `True`. - do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`): - Whether to convert the image to RGB. - return_tensors (`str` or `TensorType`, *optional*): - The type of tensors to return. Can be one of: - - Unset: Return a list of `np.ndarray`. - - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`. - - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`. - - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`. - data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`): - The channel dimension format for the output image. 
Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - Unset: Use the channel dimension format of the input image. - input_data_format (`ChannelDimension` or `str`, *optional*): - The channel dimension format for the input image. If unset, the channel dimension format is inferred - from the input image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. - """ - do_resize = do_resize if do_resize is not None else self.do_resize - size = size if size is not None else self.size - size = get_size_dict(size, default_to_square=False) - resample = resample if resample is not None else self.resample - do_center_crop = do_center_crop if do_center_crop is not None else self.do_center_crop - crop_size = crop_size if crop_size is not None else self.crop_size - crop_size = get_size_dict(crop_size) - do_rescale = do_rescale if do_rescale is not None else self.do_rescale - rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor - do_normalize = do_normalize if do_normalize is not None else self.do_normalize - image_mean = image_mean if image_mean is not None else self.image_mean - image_std = image_std if image_std is not None else self.image_std - do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb - - images = make_list_of_images(images) - - if not valid_images(images): - raise ValueError( - "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, " - "torch.Tensor, tf.Tensor or jax.ndarray." - ) - - if do_resize and size is None: - raise ValueError("Size must be specified if do_resize is True.") - - if do_center_crop and crop_size is None: - raise ValueError("Crop size must be specified if do_center_crop is True.") - - if do_rescale and rescale_factor is None: - raise ValueError("Rescale factor must be specified if do_rescale is True.") - - if do_normalize and (image_mean is None or image_std is None): - raise ValueError("Image mean and std must be specified if do_normalize is True.") - - # PIL RGBA images are converted to RGB - if do_convert_rgb: - images = [convert_to_rgb(image) for image in images] - - # All transformations expect numpy arrays. - images = [to_numpy_array(image) for image in images] - - if is_scaled_image(images[0]) and do_rescale: - logger.warning_once( - "It looks like you are trying to rescale already rescaled images. If the input" - " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again." - ) - - if input_data_format is None: - # We assume that all images have the same channel dimension format. 
- input_data_format = infer_channel_dimension_format(images[0]) - - if do_resize: - images = [ - self.resize(image=image, size=size, resample=resample, input_data_format=input_data_format) - for image in images - ] - - if do_center_crop: - images = [ - self.center_crop(image=image, size=crop_size, input_data_format=input_data_format) for image in images - ] - - if do_rescale: - images = [ - self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format) - for image in images - ] - - if do_normalize: - images = [ - self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format) - for image in images - ] - - images = [ - to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images - ] - - data = {"pixel_values": images} - return BatchFeature(data=data, tensor_type=return_tensors) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py deleted file mode 100644 index c0350879489f79471ccaf5a5666803719e5f1a52..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py +++ /dev/null @@ -1,377 +0,0 @@ -# coding=utf-8 -# Copyright 2022 ABEJA, Inc. and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Tokenization classes for GPTNeoXJapanese.""" -import collections -import json -import os -import re -from typing import Optional, Tuple - -import numpy as np - -from ...tokenization_utils_fast import PreTrainedTokenizer -from ...utils import logging - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt", "emoji_file": "emoji.json"} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "abeja/gpt-neox-japanese-2.7b": "https://huggingface.co/abeja/gpt-neox-japanese-2.7b/resolve/main/vocab.txt", - }, - "emoji_file": { - "abeja/gpt-neox-japanese-2.7b": "https://huggingface.co/abeja/gpt-neox-japanese-2.7b/resolve/main/emoji.json", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "abeja/gpt-neox-japanese-2.7b": 2048, -} - - -def load_vocab_and_emoji(vocab_file, emoji_file): - """Loads a vocabulary file and emoji file into a dictionary.""" - with open(emoji_file, "r", encoding="utf-8") as f: - emoji = json.loads(f.read()) - - vocab = collections.OrderedDict() - raw_vocab = collections.OrderedDict() - ids_to_tokens = collections.OrderedDict() - with open(vocab_file, "r", encoding="utf-8") as f: - token = f.readlines() - token = [[t.rstrip("\n")] if (t == "," or "," not in t) else t.rstrip("\n").split(",") for t in token] - for idx, b in enumerate(token): - ids_to_tokens[idx] = b - raw_vocab[",".join(b)] = idx - for wd in b: - vocab[wd] = idx - - return vocab, raw_vocab, ids_to_tokens, emoji - - -class GPTNeoXJapaneseTokenizer(PreTrainedTokenizer): - """ - This tokenizer inherits from [`PreTrainedTokenizer`] and is based on Japanese special Sub-Word-Encoding that is - used in this repository (https://github.com/tanreinama/Japanese-BPEEncoder_V2). Check the repository for details. - Japanese has a relatively large vocabulary and there is no separation between words. Furthermore, the language is a - combination of hiragana, katakana, and kanji, and variants such as "1" and "①" are often used. In order to cope - with these, this tokenizer has the following features - - Subword-by-subword segmentation, which is intermediate between byte strings and morphological analysis. - - BPEs are created for each Kanji, Hiragana, and Katakana character, and there are no BPEs that cross character - types, such as Kanji + Hiragana or Hiragana + Katakana. - - All-byte encoding that does not require . - - Independent of UTF codes such as 2-byte and 3-byte characters - - Conversion of heterographs to the same token_id - - Emoji and Emoticon are grouped into 12 types as special tags. - - Example: - - ```python - >>> from transformers import GPTNeoXJapaneseTokenizer - - >>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") - >>> # You can confirm both 慶応 and 慶應 are encoded to 17749 - >>> tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"] - [30014, 26883, 26638, 27228, 25, 26650, 31732, 31679, 27809, 26638, 17749, 31592, 17749, 31593, 321, 1281] - - >>> # Both 慶応 and 慶應 are decoded to 慶応 - >>> tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]) - '吾輩は猫である🐯。実は慶応(慶応)大学出身' - ``` - - Args: - vocab_file (`str`): - File containing the vocabulary. - emoji_file (`str`): - File containing the emoji. - unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. 
- pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`): - The token used for padding - bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`): - The beginning of sequence token. - eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`): - The end of sequence token. - do_clean_text (`bool`, *optional*, defaults to `False`): - Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE. - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - emoji_file, - unk_token="<|endoftext|>", - pad_token="<|endoftext|>", - bos_token="<|startoftext|>", - eos_token="<|endoftext|>", - do_clean_text=False, - **kwargs, - ): - if not os.path.isfile(vocab_file): - raise ValueError( - f"Can't find a vocabulary file at path '{vocab_file}'. To load the vocabulary from a Google pretrained" - " model use `tokenizer = GPTNeoXJapaneseokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`" - ) - if not os.path.isfile(emoji_file): - raise ValueError( - f"Can't find a emoji file at path '{emoji_file}'. To load the emoji information from a Google" - " pretrained model use `tokenizer = GPTNeoXJapaneseokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`" - ) - self.do_clean_text = do_clean_text - self.vocab, self.raw_vocab, self.ids_to_tokens, self.emoji = load_vocab_and_emoji(vocab_file, emoji_file) - self.subword_tokenizer = SubWordJapaneseTokenizer( - vocab=self.vocab, ids_to_tokens=self.ids_to_tokens, emoji=self.emoji - ) - super().__init__( - unk_token=unk_token, - pad_token=pad_token, - bos_token=bos_token, - eos_token=eos_token, - do_clean_text=do_clean_text, - **kwargs, - ) - - @property - def vocab_size(self): - # self.vocab contains support for character fluctuation unique to Japanese, and has a large number of vocab - return len(self.raw_vocab) - - def get_vocab(self): - return dict(self.raw_vocab, **self.added_tokens_encoder) - - def _tokenize(self, text): - return self.subword_tokenizer.tokenize(text, clean=self.do_clean_text) - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.vocab.get(token, self.vocab.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.subword_tokenizer.convert_id_to_token(index) - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - out_string = "".join(tokens).strip() - return out_string - - @property - def default_chat_template(self): - """ - A simple chat template that just adds BOS/EOS tokens around messages while discarding role information. 
- """ - return ( - "{% for message in messages %}" - "{{ bos_token + eos_token + message.content + eos_token }}" - "{% endfor %}" - "{% if add_generation_prompt %} {{ bos_token + eos_token }} {% endif %}" - ) - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - index = 0 - if os.path.isdir(save_directory): - vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - emoji_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["emoji_file"] - ) - else: - vocab_file = ( - (filename_prefix + "-" if filename_prefix else "") + save_directory + VOCAB_FILES_NAMES["vocab_file"] - ) - emoji_file = ( - (filename_prefix + "-" if filename_prefix else "") + save_directory + VOCAB_FILES_NAMES["emoji_file"] - ) - with open(vocab_file, "w", encoding="utf-8") as writer: - for token_index, token in self.ids_to_tokens.items(): - if index != token_index: - logger.warning( - f"Saving vocabulary to {vocab_file}: vocabulary indices are not consecutive." - " Please check that the vocabulary is not corrupted!" - ) - index = token_index - writer.write(",".join(token) + "\n") - index += 1 - with open(emoji_file, "w", encoding="utf-8") as writer: - json.dump(self.emoji, writer) - return vocab_file, emoji_file - - -class SubWordJapaneseTokenizer(object): - """ - https://github.com/tanreinama/Japanese-BPEEncoder_V2 This tokenizer class is under MIT Lisence according to the - original repository. - - MIT License - - Copyright (c) 2020 tanreinama - - Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated - documentation files (the "Software"), to deal in the Software without restriction, including without limitation the - rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to - permit persons to whom the Software is furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in all copies or substantial portions of - the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO - THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, - TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - SOFTWARE. 
- """ - - def __init__(self, vocab, ids_to_tokens, emoji): - self.vocab = vocab # same as swe - self.ids_to_tokens = ids_to_tokens # same as bpe - self.emoji = emoji - self.maxlen = np.max([len(w) for w in self.vocab.keys()]) - self.content_repatter1 = re.compile(r"(https?|ftp)(:\/\/[-_\.!~*\'()a-zA-Z0-9;\/?:\@&=\+$,%#]+)") - self.content_repatter2 = re.compile(r"[A-Za-z0-9\._+]*@[\-_0-9A-Za-z]+(\.[A-Za-z]+)*") - self.content_repatter3 = re.compile(r"[\(]{0,1}[0-9]{2,4}[\)\-\(]{0,1}[0-9]{2,4}[\)\-]{0,1}[0-9]{3,4}") - self.content_repatter4 = re.compile( - r"([12]\d{3}[/\-年])*(0?[1-9]|1[0-2])[/\-月]((0?[1-9]|[12][0-9]|3[01])日?)*(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*" - ) - self.content_repatter5 = re.compile( - r"(明治|大正|昭和|平成|令和|㍾|㍽|㍼|㍻|\u32ff)\d{1,2}年(0?[1-9]|1[0-2])月(0?[1-9]|[12][0-9]|3[01])日(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*" - ) - self.content_repatter6 = re.compile( - r"((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*億)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*万)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*千)*(0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*(千円|万円|千万円|円|千ドル|万ドル|千万ドル|ドル|千ユーロ|万ユーロ|千万ユーロ|ユーロ)+(\(税込\)|\(税抜\)|\+tax)*" - ) - keisen = "─━│┃┄┅┆┇┈┉┊┋┌┍┎┏┐┑┒┓└┕┖┗┘┙┚┛├┝┞┟┠┡┢┣┤┥┦┧┨┩┪┫┬┭┮┯┰┱┲┳┴┵┶┷┸┹┺┻┼┽┾┿╀╁╂╃╄╅╆╇╈╉╊╋╌╍╎╏═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬╭╮╯╰╱╲╳╴╵╶╷╸╹╺╻╼╽╾╿" - blocks = "▀▁▂▃▄▅▆▇█▉▊▋▌▍▎▏▐░▒▓▔▕▖▗▘▙▚▛▜▝▞▟" - self.content_trans1 = str.maketrans({k: "" for k in keisen + blocks}) - - def __len__(self): - return len(self.ids_to_tokens) - - def clean_text(self, content): - content = self.content_repatter1.sub("", content) - content = self.content_repatter2.sub("", content) - content = self.content_repatter3.sub("", content) - content = self.content_repatter4.sub("", content) - content = self.content_repatter5.sub("", content) - content = self.content_repatter6.sub("", content) - content = content.translate(self.content_trans1) - while "" in content: - content = content.replace("", "") - return content - - def tokenize(self, text, clean=False): - text = text.replace(" ", "") - text = text.replace(" ", "") - text = text.replace("\r\n", "
    ") - text = text.replace("\n", "
    ") - text = text.replace("\r", "
    ") - text = text.replace("\t", "") - text = text.replace("—", "ー") - text = text.replace("−", "ー") - for k, v in self.emoji["emoji"].items(): - if k in text: - text = text.replace(k, v) - if clean: - text = self.clean_text(text) - - def check_simbol(x): - e = x.encode() - if len(x) == 1 and len(e) == 2: - c = (int(e[0]) << 8) + int(e[1]) - if ( - (c >= 0xC2A1 and c <= 0xC2BF) - or (c >= 0xC780 and c <= 0xC783) - or (c >= 0xCAB9 and c <= 0xCBBF) - or (c >= 0xCC80 and c <= 0xCDA2) - ): - return True - return False - - def checku2e(x): - e = x.encode() - if len(x) == 1 and len(e) == 3: - c = (int(e[0]) << 16) + (int(e[1]) << 8) + int(e[2]) - if c >= 0xE28080 and c <= 0xE2B07F: - return True - return False - - pos = 0 - result = [] - while pos < len(text): - end = min(len(text), pos + self.maxlen + 1) if text[pos] == "<" else pos + 3 - candidates = [] # (token_id, token, pos) - for e in range(end, pos, -1): - wd = text[pos:e] - if wd in self.vocab: - if wd[0] == "<" and len(wd) > 2: - candidates = [(self.vocab[wd], wd, e)] - break - else: - candidates.append((self.vocab[wd], wd, e)) - if len(candidates) > 0: - # the smallest token_id is adopted - _, wd, e = sorted(candidates, key=lambda x: x[0])[0] - result.append(wd) - pos = e - else: - end = pos + 1 - wd = text[pos:end] - if check_simbol(wd): - result.append("") - elif checku2e(wd): - result.append("") - else: - for i in wd.encode("utf-8"): - result.append("<|byte%d|>" % i) - pos = end - return result - - def convert_id_to_token(self, index, breakline="\n"): - words = [] - byte_tokens = [] - word = self.ids_to_tokens[index][0] - if word[:6] == "<|byte" and word[-2:] == "|>": - byte_tokens.append(int(word[6:-2])) - else: - if len(byte_tokens) > 0: - words.append(bytearray(byte_tokens).decode("utf-8", errors="replace")) - byte_tokens = [] - if word[:7] == "<|emoji" and word[-2:] == "|>": - words.append(self.emoji["emoji_inv"][word]) - elif word == "": - words.append(" ") - elif word == "
    ": - words.append(breakline) - elif word == "": - words.append("\t") - elif word == "": - words.append("▀") - elif word == "": - words.append("ǀ") - elif word == "": - words.append("‖") - else: - words.append(word) - if len(byte_tokens) > 0: - words.append(bytearray(byte_tokens).decode("utf-8", errors="replace")) - text = "".join(words) - return text diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/poolformer/modeling_poolformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/poolformer/modeling_poolformer.py deleted file mode 100644 index 6acc8ec98e6939447179fb5f46e66d164e8ff289..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/poolformer/modeling_poolformer.py +++ /dev/null @@ -1,455 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Sea AI Lab and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch PoolFormer model.""" - - -import collections.abc -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import BaseModelOutputWithNoAttention, ImageClassifierOutputWithNoAttention -from ...modeling_utils import PreTrainedModel -from ...utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, logging -from .configuration_poolformer import PoolFormerConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "PoolFormerConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "sail/poolformer_s12" -_EXPECTED_OUTPUT_SHAPE = [1, 512, 7, 7] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "sail/poolformer_s12" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tabby, tabby cat" - -POOLFORMER_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "sail/poolformer_s12", - # See all PoolFormer models at https://huggingface.co/models?filter=poolformer -] - - -# Copied from transformers.models.beit.modeling_beit.drop_path -def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor: - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. 
- """ - if drop_prob == 0.0 or not training: - return input - keep_prob = 1 - drop_prob - shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device) - random_tensor.floor_() # binarize - output = input.div(keep_prob) * random_tensor - return output - - -# Copied from transformers.models.beit.modeling_beit.BeitDropPath with Beit->PoolFormer -class PoolFormerDropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob: Optional[float] = None) -> None: - super().__init__() - self.drop_prob = drop_prob - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - return drop_path(hidden_states, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -class PoolFormerEmbeddings(nn.Module): - """ - Construct Patch Embeddings. - """ - - def __init__(self, hidden_size, num_channels, patch_size, stride, padding, norm_layer=None): - super().__init__() - patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) - stride = stride if isinstance(stride, collections.abc.Iterable) else (stride, stride) - padding = padding if isinstance(padding, collections.abc.Iterable) else (padding, padding) - - self.projection = nn.Conv2d(num_channels, hidden_size, kernel_size=patch_size, stride=stride, padding=padding) - self.norm = norm_layer(hidden_size) if norm_layer else nn.Identity() - - def forward(self, pixel_values): - embeddings = self.projection(pixel_values) - embeddings = self.norm(embeddings) - return embeddings - - -class PoolFormerGroupNorm(nn.GroupNorm): - """ - Group Normalization with 1 group. 
Input: tensor in shape [B, C, H, W] - """ - - def __init__(self, num_channels, **kwargs): - super().__init__(1, num_channels, **kwargs) - - -class PoolFormerPooling(nn.Module): - def __init__(self, pool_size): - super().__init__() - self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2, count_include_pad=False) - - def forward(self, hidden_states): - return self.pool(hidden_states) - hidden_states - - -class PoolFormerOutput(nn.Module): - def __init__(self, config, dropout_prob, hidden_size, intermediate_size): - super().__init__() - self.conv1 = nn.Conv2d(hidden_size, intermediate_size, 1) - self.conv2 = nn.Conv2d(intermediate_size, hidden_size, 1) - self.drop = PoolFormerDropPath(dropout_prob) - if isinstance(config.hidden_act, str): - self.act_fn = ACT2FN[config.hidden_act] - else: - self.act_fn = config.hidden_act - - def forward(self, hidden_states): - hidden_states = self.conv1(hidden_states) - hidden_states = self.act_fn(hidden_states) - hidden_states = self.drop(hidden_states) - hidden_states = self.conv2(hidden_states) - hidden_states = self.drop(hidden_states) - - return hidden_states - - -class PoolFormerLayer(nn.Module): - """This corresponds to the 'PoolFormerBlock' class in the original implementation.""" - - def __init__(self, config, num_channels, pool_size, hidden_size, intermediate_size, drop_path): - super().__init__() - self.pooling = PoolFormerPooling(pool_size) - self.output = PoolFormerOutput(config, drop_path, hidden_size, intermediate_size) - self.before_norm = PoolFormerGroupNorm(num_channels) - self.after_norm = PoolFormerGroupNorm(num_channels) - - # Useful for training neural nets - self.drop_path = PoolFormerDropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.use_layer_scale = config.use_layer_scale - if config.use_layer_scale: - self.layer_scale_1 = nn.Parameter( - config.layer_scale_init_value * torch.ones((num_channels)), requires_grad=True - ) - self.layer_scale_2 = nn.Parameter( - config.layer_scale_init_value * torch.ones((num_channels)), requires_grad=True - ) - - def forward(self, hidden_states): - if self.use_layer_scale: - pooling_output = self.pooling(self.before_norm(hidden_states)) - scaled_op = self.layer_scale_1.unsqueeze(-1).unsqueeze(-1) * pooling_output - # First residual connection - hidden_states = hidden_states + self.drop_path(scaled_op) - outputs = () - - layer_output = self.output(self.after_norm(hidden_states)) - scaled_op = self.layer_scale_2.unsqueeze(-1).unsqueeze(-1) * layer_output - # Second residual connection - output = hidden_states + self.drop_path(scaled_op) - - outputs = (output,) + outputs - return outputs - - else: - pooling_output = self.drop_path(self.pooling(self.before_norm(hidden_states))) - # First residual connection - hidden_states = pooling_output + hidden_states - outputs = () - - # Second residual connection inside the PoolFormerOutput block - layer_output = self.drop_path(self.output(self.after_norm(hidden_states))) - output = hidden_states + layer_output - - outputs = (output,) + outputs - return outputs - - -class PoolFormerEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - # stochastic depth decay rule - dpr = [x.item() for x in torch.linspace(0, config.drop_path_rate, sum(config.depths))] - - # patch embeddings - embeddings = [] - for i in range(config.num_encoder_blocks): - embeddings.append( - PoolFormerEmbeddings( - patch_size=config.patch_sizes[i], - stride=config.strides[i], - padding=config.padding[i], - 
num_channels=config.num_channels if i == 0 else config.hidden_sizes[i - 1], - hidden_size=config.hidden_sizes[i], - ) - ) - self.patch_embeddings = nn.ModuleList(embeddings) - - # Transformer blocks - blocks = [] - cur = 0 - for i in range(config.num_encoder_blocks): - # each block consists of layers - layers = [] - if i != 0: - cur += config.depths[i - 1] - for j in range(config.depths[i]): - layers.append( - PoolFormerLayer( - config, - num_channels=config.hidden_sizes[i], - pool_size=config.pool_size, - hidden_size=config.hidden_sizes[i], - intermediate_size=int(config.hidden_sizes[i] * config.mlp_ratio), - drop_path=dpr[cur + j], - ) - ) - blocks.append(nn.ModuleList(layers)) - - self.block = nn.ModuleList(blocks) - - def forward(self, pixel_values, output_hidden_states=False, return_dict=True): - all_hidden_states = () if output_hidden_states else None - - hidden_states = pixel_values - for idx, layers in enumerate(zip(self.patch_embeddings, self.block)): - embedding_layer, block_layer = layers - # Get patch embeddings from hidden_states - hidden_states = embedding_layer(hidden_states) - # Send the embeddings through the blocks - for _, blk in enumerate(block_layer): - layer_outputs = blk(hidden_states) - hidden_states = layer_outputs[0] - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states] if v is not None) - - return BaseModelOutputWithNoAttention(last_hidden_state=hidden_states, hidden_states=all_hidden_states) - - -class PoolFormerPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = PoolFormerConfig - base_model_prefix = "poolformer" - main_input_name = "pixel_values" - supports_gradient_checkpointing = True - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, PoolFormerEncoder): - module.gradient_checkpointing = value - - -POOLFORMER_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use - it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`PoolFormerConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -POOLFORMER_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`PoolFormerImageProcessor.__call__`] for details. 
-""" - - -@add_start_docstrings( - "The bare PoolFormer Model transformer outputting raw hidden-states without any specific head on top.", - POOLFORMER_START_DOCSTRING, -) -class PoolFormerModel(PoolFormerPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.config = config - - self.encoder = PoolFormerEncoder(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.patch_embeddings - - @add_start_docstrings_to_model_forward(POOLFORMER_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithNoAttention, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithNoAttention]: - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - encoder_outputs = self.encoder( - pixel_values, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - - if not return_dict: - return (sequence_output, None) + encoder_outputs[1:] - - return BaseModelOutputWithNoAttention( - last_hidden_state=sequence_output, - hidden_states=encoder_outputs.hidden_states, - ) - - -class PoolFormerFinalPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - - def forward(self, hidden_states): - output = self.dense(hidden_states) - return output - - -@add_start_docstrings( - """ - PoolFormer Model transformer with an image classification head on top - """, - POOLFORMER_START_DOCSTRING, -) -class PoolFormerForImageClassification(PoolFormerPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.poolformer = PoolFormerModel(config) - - # Final norm - self.norm = PoolFormerGroupNorm(config.hidden_sizes[-1]) - # Classifier head - self.classifier = ( - nn.Linear(config.hidden_sizes[-1], config.num_labels) if config.num_labels > 0 else nn.Identity() - ) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(POOLFORMER_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=ImageClassifierOutputWithNoAttention, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, ImageClassifierOutputWithNoAttention]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.poolformer( - pixel_values, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.classifier(self.norm(sequence_output).mean([-2, -1])) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return ImageClassifierOutputWithNoAttention(loss=loss, logits=logits, hidden_states=outputs.hidden_states) diff --git a/spaces/yonikremer/grouped-sampling-demo/download_repo.py b/spaces/yonikremer/grouped-sampling-demo/download_repo.py deleted file mode 100644 index f958df64c61ebbefe5497c9d9ce3fd4e134517da..0000000000000000000000000000000000000000 --- a/spaces/yonikremer/grouped-sampling-demo/download_repo.py +++ /dev/null @@ -1,45 +0,0 @@ -import urllib3 - -from huggingface_hub import snapshot_download - -from available_models import AVAILABLE_MODELS - - -def change_default_timeout(new_timeout: int) -> None: - """ - Changes the default timeout for downloading repositories from the Hugging Face Hub. - Prevents the following errors: - urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='huggingface.co', port=443): - Read timed out. (read timeout=10) - """ - urllib3.util.timeout.DEFAULT_TIMEOUT = new_timeout - - -def download_pytorch_model(name: str) -> None: - """ - Downloads a pytorch model and all the small files from the model's repository. - Other model formats (tensorflow, tflite, safetensors, msgpack, ot...) are not downloaded. 
- """ - number_of_seconds_in_a_year: int = 60 * 60 * 24 * 365 - change_default_timeout(number_of_seconds_in_a_year) - snapshot_download( - repo_id=name, - etag_timeout=number_of_seconds_in_a_year, - resume_download=True, - repo_type="model", - library_name="pt", - # h5, tflite, safetensors, msgpack and ot models files are not needed - ignore_patterns=[ - "*.h5", - "*.tflite", - "*.safetensors", - "*.msgpack", - "*.ot", - "*.md" - ], - ) - - -if __name__ == "__main__": - for model_name in AVAILABLE_MODELS: - download_pytorch_model(model_name) diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/flex-grow.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/flex-grow.js deleted file mode 100644 index d53374b639d26a46548fe8ddc8e310eb56d894c8..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/flex-grow.js +++ /dev/null @@ -1,30 +0,0 @@ -let flexSpec = require('./flex-spec') -let Declaration = require('../declaration') - -class Flex extends Declaration { - /** - * Return property name by final spec - */ - normalize() { - return 'flex' - } - - /** - * Return flex property for 2009 and 2012 specs - */ - prefixed(prop, prefix) { - let spec - ;[spec, prefix] = flexSpec(prefix) - if (spec === 2009) { - return prefix + 'box-flex' - } - if (spec === 2012) { - return prefix + 'flex-positive' - } - return super.prefixed(prop, prefix) - } -} - -Flex.names = ['flex-grow', 'flex-positive'] - -module.exports = Flex diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/fraction.js/fraction.js b/spaces/younker/chatgpt-turbo/client/node_modules/fraction.js/fraction.js deleted file mode 100644 index 82d05d2bf2efb1935e72adc4bf989e45c7f59524..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/fraction.js/fraction.js +++ /dev/null @@ -1,891 +0,0 @@ -/** - * @license Fraction.js v4.2.0 05/03/2022 - * https://www.xarg.org/2014/03/rational-numbers-in-javascript/ - * - * Copyright (c) 2021, Robert Eisele (robert@xarg.org) - * Dual licensed under the MIT or GPL Version 2 licenses. - **/ - - -/** - * - * This class offers the possibility to calculate fractions. - * You can pass a fraction in different formats. Either as array, as double, as string or as an integer. - * - * Array/Object form - * [ 0 => , 1 => ] - * [ n => , d => ] - * - * Integer form - * - Single integer value - * - * Double form - * - Single double value - * - * String form - * 123.456 - a simple double - * 123/456 - a string fraction - * 123.'456' - a double with repeating decimal places - * 123.(456) - synonym - * 123.45'6' - a double with repeating last place - * 123.45(6) - synonym - * - * Example: - * - * var f = new Fraction("9.4'31'"); - * f.mul([-4, 3]).div(4.9); - * - */ - -(function(root) { - - "use strict"; - - // Maximum search depth for cyclic rational numbers. 2000 should be more than enough. - // Example: 1/7 = 0.(142857) has 6 repeating decimal places. 
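  // Illustration added for clarity (not part of the original fraction.js source):
  // a minimal standalone sketch of the cycle idea described above. For a reduced
  // denominator d with no factors of 2 or 5, the repeating block of 1/d is as long
  // as it takes the remainder to return to 1 under repeated "multiply by 10 mod d".
  // For d = 7 the remainders run 3, 2, 6, 4, 5, 1, i.e. 6 steps, matching 0.(142857).
  function illustrateCycleLen(d) { // assumes d > 1 and gcd(d, 10) === 1
    var rem = 10 % d;
    var len = 1;
    while (rem !== 1) {
      rem = (rem * 10) % d;
      len++;
    }
    return len;
  }
  // illustrateCycleLen(7) === 6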
- // If MAX_CYCLE_LEN gets reduced, long cycles will not be detected and toString() only gets the first 10 digits - var MAX_CYCLE_LEN = 2000; - - // Parsed data to avoid calling "new" all the time - var P = { - "s": 1, - "n": 0, - "d": 1 - }; - - function assign(n, s) { - - if (isNaN(n = parseInt(n, 10))) { - throw Fraction['InvalidParameter']; - } - return n * s; - } - - // Creates a new Fraction internally without the need of the bulky constructor - function newFraction(n, d) { - - if (d === 0) { - throw Fraction['DivisionByZero']; - } - - var f = Object.create(Fraction.prototype); - f["s"] = n < 0 ? -1 : 1; - - n = n < 0 ? -n : n; - - var a = gcd(n, d); - - f["n"] = n / a; - f["d"] = d / a; - return f; - } - - function factorize(num) { - - var factors = {}; - - var n = num; - var i = 2; - var s = 4; - - while (s <= n) { - - while (n % i === 0) { - n/= i; - factors[i] = (factors[i] || 0) + 1; - } - s+= 1 + 2 * i++; - } - - if (n !== num) { - if (n > 1) - factors[n] = (factors[n] || 0) + 1; - } else { - factors[num] = (factors[num] || 0) + 1; - } - return factors; - } - - var parse = function(p1, p2) { - - var n = 0, d = 1, s = 1; - var v = 0, w = 0, x = 0, y = 1, z = 1; - - var A = 0, B = 1; - var C = 1, D = 1; - - var N = 10000000; - var M; - - if (p1 === undefined || p1 === null) { - /* void */ - } else if (p2 !== undefined) { - n = p1; - d = p2; - s = n * d; - - if (n % 1 !== 0 || d % 1 !== 0) { - throw Fraction['NonIntegerParameter']; - } - - } else - switch (typeof p1) { - - case "object": - { - if ("d" in p1 && "n" in p1) { - n = p1["n"]; - d = p1["d"]; - if ("s" in p1) - n*= p1["s"]; - } else if (0 in p1) { - n = p1[0]; - if (1 in p1) - d = p1[1]; - } else { - throw Fraction['InvalidParameter']; - } - s = n * d; - break; - } - case "number": - { - if (p1 < 0) { - s = p1; - p1 = -p1; - } - - if (p1 % 1 === 0) { - n = p1; - } else if (p1 > 0) { // check for != 0, scale would become NaN (log(0)), which converges really slow - - if (p1 >= 1) { - z = Math.pow(10, Math.floor(1 + Math.log(p1) / Math.LN10)); - p1/= z; - } - - // Using Farey Sequences - // http://www.johndcook.com/blog/2010/10/20/best-rational-approximation/ - - while (B <= N && D <= N) { - M = (A + C) / (B + D); - - if (p1 === M) { - if (B + D <= N) { - n = A + C; - d = B + D; - } else if (D > B) { - n = C; - d = D; - } else { - n = A; - d = B; - } - break; - - } else { - - if (p1 > M) { - A+= C; - B+= D; - } else { - C+= A; - D+= B; - } - - if (B > N) { - n = C; - d = D; - } else { - n = A; - d = B; - } - } - } - n*= z; - } else if (isNaN(p1) || isNaN(p2)) { - d = n = NaN; - } - break; - } - case "string": - { - B = p1.match(/\d+|./g); - - if (B === null) - throw Fraction['InvalidParameter']; - - if (B[A] === '-') {// Check for minus sign at the beginning - s = -1; - A++; - } else if (B[A] === '+') {// Check for plus sign at the beginning - A++; - } - - if (B.length === A + 1) { // Check if it's just a simple number "1234" - w = assign(B[A++], s); - } else if (B[A + 1] === '.' 
|| B[A] === '.') { // Check if it's a decimal number - - if (B[A] !== '.') { // Handle 0.5 and .5 - v = assign(B[A++], s); - } - A++; - - // Check for decimal places - if (A + 1 === B.length || B[A + 1] === '(' && B[A + 3] === ')' || B[A + 1] === "'" && B[A + 3] === "'") { - w = assign(B[A], s); - y = Math.pow(10, B[A].length); - A++; - } - - // Check for repeating places - if (B[A] === '(' && B[A + 2] === ')' || B[A] === "'" && B[A + 2] === "'") { - x = assign(B[A + 1], s); - z = Math.pow(10, B[A + 1].length) - 1; - A+= 3; - } - - } else if (B[A + 1] === '/' || B[A + 1] === ':') { // Check for a simple fraction "123/456" or "123:456" - w = assign(B[A], s); - y = assign(B[A + 2], 1); - A+= 3; - } else if (B[A + 3] === '/' && B[A + 1] === ' ') { // Check for a complex fraction "123 1/2" - v = assign(B[A], s); - w = assign(B[A + 2], s); - y = assign(B[A + 4], 1); - A+= 5; - } - - if (B.length <= A) { // Check for more tokens on the stack - d = y * z; - s = /* void */ - n = x + d * v + z * w; - break; - } - - /* Fall through on error */ - } - default: - throw Fraction['InvalidParameter']; - } - - if (d === 0) { - throw Fraction['DivisionByZero']; - } - - P["s"] = s < 0 ? -1 : 1; - P["n"] = Math.abs(n); - P["d"] = Math.abs(d); - }; - - function modpow(b, e, m) { - - var r = 1; - for (; e > 0; b = (b * b) % m, e >>= 1) { - - if (e & 1) { - r = (r * b) % m; - } - } - return r; - } - - - function cycleLen(n, d) { - - for (; d % 2 === 0; - d/= 2) { - } - - for (; d % 5 === 0; - d/= 5) { - } - - if (d === 1) // Catch non-cyclic numbers - return 0; - - // If we would like to compute really large numbers quicker, we could make use of Fermat's little theorem: - // 10^(d-1) % d == 1 - // However, we don't need such large numbers and MAX_CYCLE_LEN should be the capstone, - // as we want to translate the numbers to strings. - - var rem = 10 % d; - var t = 1; - - for (; rem !== 1; t++) { - rem = rem * 10 % d; - - if (t > MAX_CYCLE_LEN) - return 0; // Returning 0 here means that we don't print it as a cyclic number. 
It's likely that the answer is `d-1` - } - return t; - } - - - function cycleStart(n, d, len) { - - var rem1 = 1; - var rem2 = modpow(10, len, d); - - for (var t = 0; t < 300; t++) { // s < ~log10(Number.MAX_VALUE) - // Solve 10^s == 10^(s+t) (mod d) - - if (rem1 === rem2) - return t; - - rem1 = rem1 * 10 % d; - rem2 = rem2 * 10 % d; - } - return 0; - } - - function gcd(a, b) { - - if (!a) - return b; - if (!b) - return a; - - while (1) { - a%= b; - if (!a) - return b; - b%= a; - if (!b) - return a; - } - }; - - /** - * Module constructor - * - * @constructor - * @param {number|Fraction=} a - * @param {number=} b - */ - function Fraction(a, b) { - - parse(a, b); - - if (this instanceof Fraction) { - a = gcd(P["d"], P["n"]); // Abuse variable a - this["s"] = P["s"]; - this["n"] = P["n"] / a; - this["d"] = P["d"] / a; - } else { - return newFraction(P['s'] * P['n'], P['d']); - } - } - - Fraction['DivisionByZero'] = new Error("Division by Zero"); - Fraction['InvalidParameter'] = new Error("Invalid argument"); - Fraction['NonIntegerParameter'] = new Error("Parameters must be integer"); - - Fraction.prototype = { - - "s": 1, - "n": 0, - "d": 1, - - /** - * Calculates the absolute value - * - * Ex: new Fraction(-4).abs() => 4 - **/ - "abs": function() { - - return newFraction(this["n"], this["d"]); - }, - - /** - * Inverts the sign of the current fraction - * - * Ex: new Fraction(-4).neg() => 4 - **/ - "neg": function() { - - return newFraction(-this["s"] * this["n"], this["d"]); - }, - - /** - * Adds two rational numbers - * - * Ex: new Fraction({n: 2, d: 3}).add("14.9") => 467 / 30 - **/ - "add": function(a, b) { - - parse(a, b); - return newFraction( - this["s"] * this["n"] * P["d"] + P["s"] * this["d"] * P["n"], - this["d"] * P["d"] - ); - }, - - /** - * Subtracts two rational numbers - * - * Ex: new Fraction({n: 2, d: 3}).add("14.9") => -427 / 30 - **/ - "sub": function(a, b) { - - parse(a, b); - return newFraction( - this["s"] * this["n"] * P["d"] - P["s"] * this["d"] * P["n"], - this["d"] * P["d"] - ); - }, - - /** - * Multiplies two rational numbers - * - * Ex: new Fraction("-17.(345)").mul(3) => 5776 / 111 - **/ - "mul": function(a, b) { - - parse(a, b); - return newFraction( - this["s"] * P["s"] * this["n"] * P["n"], - this["d"] * P["d"] - ); - }, - - /** - * Divides two rational numbers - * - * Ex: new Fraction("-17.(345)").inverse().div(3) - **/ - "div": function(a, b) { - - parse(a, b); - return newFraction( - this["s"] * P["s"] * this["n"] * P["d"], - this["d"] * P["n"] - ); - }, - - /** - * Clones the actual object - * - * Ex: new Fraction("-17.(345)").clone() - **/ - "clone": function() { - return newFraction(this['s'] * this['n'], this['d']); - }, - - /** - * Calculates the modulo of two rational numbers - a more precise fmod - * - * Ex: new Fraction('4.(3)').mod([7, 8]) => (13/3) % (7/8) = (5/6) - **/ - "mod": function(a, b) { - - if (isNaN(this['n']) || isNaN(this['d'])) { - return new Fraction(NaN); - } - - if (a === undefined) { - return newFraction(this["s"] * this["n"] % this["d"], 1); - } - - parse(a, b); - if (0 === P["n"] && 0 === this["d"]) { - throw Fraction['DivisionByZero']; - } - - /* - * First silly attempt, kinda slow - * - return that["sub"]({ - "n": num["n"] * Math.floor((this.n / this.d) / (num.n / num.d)), - "d": num["d"], - "s": this["s"] - });*/ - - /* - * New attempt: a1 / b1 = a2 / b2 * q + r - * => b2 * a1 = a2 * b1 * q + b1 * b2 * r - * => (b2 * a1 % a2 * b1) / (b1 * b2) - */ - return newFraction( - this["s"] * (P["d"] * this["n"]) % (P["n"] * 
this["d"]), - P["d"] * this["d"] - ); - }, - - /** - * Calculates the fractional gcd of two rational numbers - * - * Ex: new Fraction(5,8).gcd(3,7) => 1/56 - */ - "gcd": function(a, b) { - - parse(a, b); - - // gcd(a / b, c / d) = gcd(a, c) / lcm(b, d) - - return newFraction(gcd(P["n"], this["n"]) * gcd(P["d"], this["d"]), P["d"] * this["d"]); - }, - - /** - * Calculates the fractional lcm of two rational numbers - * - * Ex: new Fraction(5,8).lcm(3,7) => 15 - */ - "lcm": function(a, b) { - - parse(a, b); - - // lcm(a / b, c / d) = lcm(a, c) / gcd(b, d) - - if (P["n"] === 0 && this["n"] === 0) { - return newFraction(0, 1); - } - return newFraction(P["n"] * this["n"], gcd(P["n"], this["n"]) * gcd(P["d"], this["d"])); - }, - - /** - * Calculates the ceil of a rational number - * - * Ex: new Fraction('4.(3)').ceil() => (5 / 1) - **/ - "ceil": function(places) { - - places = Math.pow(10, places || 0); - - if (isNaN(this["n"]) || isNaN(this["d"])) { - return new Fraction(NaN); - } - return newFraction(Math.ceil(places * this["s"] * this["n"] / this["d"]), places); - }, - - /** - * Calculates the floor of a rational number - * - * Ex: new Fraction('4.(3)').floor() => (4 / 1) - **/ - "floor": function(places) { - - places = Math.pow(10, places || 0); - - if (isNaN(this["n"]) || isNaN(this["d"])) { - return new Fraction(NaN); - } - return newFraction(Math.floor(places * this["s"] * this["n"] / this["d"]), places); - }, - - /** - * Rounds a rational numbers - * - * Ex: new Fraction('4.(3)').round() => (4 / 1) - **/ - "round": function(places) { - - places = Math.pow(10, places || 0); - - if (isNaN(this["n"]) || isNaN(this["d"])) { - return new Fraction(NaN); - } - return newFraction(Math.round(places * this["s"] * this["n"] / this["d"]), places); - }, - - /** - * Gets the inverse of the fraction, means numerator and denominator are exchanged - * - * Ex: new Fraction([-3, 4]).inverse() => -4 / 3 - **/ - "inverse": function() { - - return newFraction(this["s"] * this["d"], this["n"]); - }, - - /** - * Calculates the fraction to some rational exponent, if possible - * - * Ex: new Fraction(-1,2).pow(-3) => -8 - */ - "pow": function(a, b) { - - parse(a, b); - - // Trivial case when exp is an integer - - if (P['d'] === 1) { - - if (P['s'] < 0) { - return newFraction(Math.pow(this['s'] * this["d"], P['n']), Math.pow(this["n"], P['n'])); - } else { - return newFraction(Math.pow(this['s'] * this["n"], P['n']), Math.pow(this["d"], P['n'])); - } - } - - // Negative roots become complex - // (-a/b)^(c/d) = x - // <=> (-1)^(c/d) * (a/b)^(c/d) = x - // <=> (cos(pi) + i*sin(pi))^(c/d) * (a/b)^(c/d) = x # rotate 1 by 180° - // <=> (cos(c*pi/d) + i*sin(c*pi/d)) * (a/b)^(c/d) = x # DeMoivre's formula in Q ( https://proofwiki.org/wiki/De_Moivre%27s_Formula/Rational_Index ) - // From which follows that only for c=0 the root is non-complex. c/d is a reduced fraction, so that sin(c/dpi)=0 occurs for d=1, which is handled by our trivial case. 
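      // Added examples (not in the original source) of how the checks below behave:
      //   Ex: new Fraction(-8).pow([1, 3]) => null  (negative base with fractional exponent is treated as complex)
      //   Ex: new Fraction(27).pow([2, 3]) => 9     (27 = 3^3, so the prime-factor exponents divide cleanly)
      //   Ex: new Fraction(2).pow([1, 2])  => null  (sqrt(2) is irrational, so the divisibility check fails)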
- if (this['s'] < 0) return null; - - // Now prime factor n and d - var N = factorize(this['n']); - var D = factorize(this['d']); - - // Exponentiate and take root for n and d individually - var n = 1; - var d = 1; - for (var k in N) { - if (k === '1') continue; - if (k === '0') { - n = 0; - break; - } - N[k]*= P['n']; - - if (N[k] % P['d'] === 0) { - N[k]/= P['d']; - } else return null; - n*= Math.pow(k, N[k]); - } - - for (var k in D) { - if (k === '1') continue; - D[k]*= P['n']; - - if (D[k] % P['d'] === 0) { - D[k]/= P['d']; - } else return null; - d*= Math.pow(k, D[k]); - } - - if (P['s'] < 0) { - return newFraction(d, n); - } - return newFraction(n, d); - }, - - /** - * Check if two rational numbers are the same - * - * Ex: new Fraction(19.6).equals([98, 5]); - **/ - "equals": function(a, b) { - - parse(a, b); - return this["s"] * this["n"] * P["d"] === P["s"] * P["n"] * this["d"]; // Same as compare() === 0 - }, - - /** - * Check if two rational numbers are the same - * - * Ex: new Fraction(19.6).equals([98, 5]); - **/ - "compare": function(a, b) { - - parse(a, b); - var t = (this["s"] * this["n"] * P["d"] - P["s"] * P["n"] * this["d"]); - return (0 < t) - (t < 0); - }, - - "simplify": function(eps) { - - if (isNaN(this['n']) || isNaN(this['d'])) { - return this; - } - - eps = eps || 0.001; - - var thisABS = this['abs'](); - var cont = thisABS['toContinued'](); - - for (var i = 1; i < cont.length; i++) { - - var s = newFraction(cont[i - 1], 1); - for (var k = i - 2; k >= 0; k--) { - s = s['inverse']()['add'](cont[k]); - } - - if (s['sub'](thisABS)['abs']().valueOf() < eps) { - return s['mul'](this['s']); - } - } - return this; - }, - - /** - * Check if two rational numbers are divisible - * - * Ex: new Fraction(19.6).divisible(1.5); - */ - "divisible": function(a, b) { - - parse(a, b); - return !(!(P["n"] * this["d"]) || ((this["n"] * P["d"]) % (P["n"] * this["d"]))); - }, - - /** - * Returns a decimal representation of the fraction - * - * Ex: new Fraction("100.'91823'").valueOf() => 100.91823918239183 - **/ - 'valueOf': function() { - - return this["s"] * this["n"] / this["d"]; - }, - - /** - * Returns a string-fraction representation of a Fraction object - * - * Ex: new Fraction("1.'3'").toFraction(true) => "4 1/3" - **/ - 'toFraction': function(excludeWhole) { - - var whole, str = ""; - var n = this["n"]; - var d = this["d"]; - if (this["s"] < 0) { - str+= '-'; - } - - if (d === 1) { - str+= n; - } else { - - if (excludeWhole && (whole = Math.floor(n / d)) > 0) { - str+= whole; - str+= " "; - n%= d; - } - - str+= n; - str+= '/'; - str+= d; - } - return str; - }, - - /** - * Returns a latex representation of a Fraction object - * - * Ex: new Fraction("1.'3'").toLatex() => "\frac{4}{3}" - **/ - 'toLatex': function(excludeWhole) { - - var whole, str = ""; - var n = this["n"]; - var d = this["d"]; - if (this["s"] < 0) { - str+= '-'; - } - - if (d === 1) { - str+= n; - } else { - - if (excludeWhole && (whole = Math.floor(n / d)) > 0) { - str+= whole; - n%= d; - } - - str+= "\\frac{"; - str+= n; - str+= '}{'; - str+= d; - str+= '}'; - } - return str; - }, - - /** - * Returns an array of continued fraction elements - * - * Ex: new Fraction("7/8").toContinued() => [0,1,7] - */ - 'toContinued': function() { - - var t; - var a = this['n']; - var b = this['d']; - var res = []; - - if (isNaN(a) || isNaN(b)) { - return res; - } - - do { - res.push(Math.floor(a / b)); - t = a % b; - a = b; - b = t; - } while (a !== 1); - - return res; - }, - - /** - * Creates a string representation of a 
fraction with all digits - * - * Ex: new Fraction("100.'91823'").toString() => "100.(91823)" - **/ - 'toString': function(dec) { - - var N = this["n"]; - var D = this["d"]; - - if (isNaN(N) || isNaN(D)) { - return "NaN"; - } - - dec = dec || 15; // 15 = decimal places when no repetation - - var cycLen = cycleLen(N, D); // Cycle length - var cycOff = cycleStart(N, D, cycLen); // Cycle start - - var str = this['s'] < 0 ? "-" : ""; - - str+= N / D | 0; - - N%= D; - N*= 10; - - if (N) - str+= "."; - - if (cycLen) { - - for (var i = cycOff; i--;) { - str+= N / D | 0; - N%= D; - N*= 10; - } - str+= "("; - for (var i = cycLen; i--;) { - str+= N / D | 0; - N%= D; - N*= 10; - } - str+= ")"; - } else { - for (var i = dec; N && i--;) { - str+= N / D | 0; - N%= D; - N*= 10; - } - } - return str; - } - }; - - if (typeof define === "function" && define["amd"]) { - define([], function() { - return Fraction; - }); - } else if (typeof exports === "object") { - Object.defineProperty(Fraction, "__esModule", { 'value': true }); - Fraction['default'] = Fraction; - Fraction['Fraction'] = Fraction; - module['exports'] = Fraction; - } else { - root['Fraction'] = Fraction; - } - -})(this); diff --git a/spaces/zej97/AI-Research-Assistant/statics/style.py b/spaces/zej97/AI-Research-Assistant/statics/style.py deleted file mode 100644 index caaee0acc5b20c4ea9d5795c64fa5125f3da6de0..0000000000000000000000000000000000000000 --- a/spaces/zej97/AI-Research-Assistant/statics/style.py +++ /dev/null @@ -1,117 +0,0 @@ -css = """ - .top-bar { - padding-bottom: 10px; - background-color: transparent; - } - - .top-bar .in-bar-title { - background-image: linear-gradient(45deg, #8B5FBF, #D6C6E1, #ffffff); - -webkit-background-clip: text; - background-clip: text; - -webkit-text-fill-color: transparent; - font-family: Gelion, "Open Sans", Helvetica, "Helvetica Neue", Arial; - font-size: 2rem; - font-weight: bold; - text-align: left; - display: block; - } - - .top-bar .in-bar-subtitle { - font-family: 'Crimson Pro'; - color: #878787; - font-size: 1.4rem; - margin-top: -5px; - display: block; - } - - .main { - max-width: 800px; - min-width: min(100%, 800px); - align-self: center; - } - - .output { - padding: 10px; - min-height: 300px; - border: 1.5px solid #AC7DD280; - border-radius: 10px; - margin-bottom: 10px; - transition: opacity .1s ease-in-out; - background: var(--block-background-fill); - } - - #history { - padding: 10px !important; - border: 1.5px dashed #AC7DD2 !important; - border-radius: 10px !important; - } - - #primary-btn { - border: 1.5px solid #AC7DD2; - font-size: 20px; - } - - summary { - font-size: 14px; - font-weight: bold; - } - - #history_box { - border-bottom: 1.5px dashed #9A73B5; - padding: 10px; - } - - .tab-nav { - border-bottom: 1.5px solid #9A73B5 !important; - } - - button.selected { - border: 1.5px solid #9A73B5 !important; - border-bottom: none !important; - } - - .tabitem { - border: 1.5px solid #9A73B5 !important; - border-top: none !important; - } -""" - -# #809A73B5 - -top_bar = """ - - - - - - - -
-    <div class="top-bar">
-        <span class="in-bar-title">AI Research Assistant</span>
-        <span class="in-bar-subtitle">Your personal free GPT researcher</span>
-    </div>
    - -""" - -report_html = """ - # Report -""" - -english_polishing_html = """ - # Polished Result -""" - -history_result_html = """ - # History Result -""" - -literature_review_html = """ - under construction... -""" \ No newline at end of file diff --git a/spaces/zhang-wei-jian/docker/node_modules/koa-compose/index.js b/spaces/zhang-wei-jian/docker/node_modules/koa-compose/index.js deleted file mode 100644 index 5cdc7faab2e4bd94ac4be3c4cffa18d0ed928c0d..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/koa-compose/index.js +++ /dev/null @@ -1,48 +0,0 @@ -'use strict' - -/** - * Expose compositor. - */ - -module.exports = compose - -/** - * Compose `middleware` returning - * a fully valid middleware comprised - * of all those which are passed. - * - * @param {Array} middleware - * @return {Function} - * @api public - */ - -function compose (middleware) { - if (!Array.isArray(middleware)) throw new TypeError('Middleware stack must be an array!') - for (const fn of middleware) { - if (typeof fn !== 'function') throw new TypeError('Middleware must be composed of functions!') - } - - /** - * @param {Object} context - * @return {Promise} - * @api public - */ - - return function (context, next) { - // last called middleware # - let index = -1 - return dispatch(0) - function dispatch (i) { - if (i <= index) return Promise.reject(new Error('next() called multiple times')) - index = i - let fn = middleware[i] - if (i === middleware.length) fn = next - if (!fn) return Promise.resolve() - try { - return Promise.resolve(fn(context, dispatch.bind(null, i + 1))); - } catch (err) { - return Promise.reject(err) - } - } - } -} diff --git a/spaces/zhangs2022/ChuanhuChatGPT/modules/pdf_func.py b/spaces/zhangs2022/ChuanhuChatGPT/modules/pdf_func.py deleted file mode 100644 index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000 --- a/spaces/zhangs2022/ChuanhuChatGPT/modules/pdf_func.py +++ /dev/null @@ -1,180 +0,0 @@ -from types import SimpleNamespace -import pdfplumber -import logging -from llama_index import Document - -def prepare_table_config(crop_page): - """Prepare table查找边界, 要求page为原始page - - From https://github.com/jsvine/pdfplumber/issues/242 - """ - page = crop_page.root_page # root/parent - cs = page.curves + page.edges - def curves_to_edges(): - """See https://github.com/jsvine/pdfplumber/issues/127""" - edges = [] - for c in cs: - edges += pdfplumber.utils.rect_to_edges(c) - return edges - edges = curves_to_edges() - return { - "vertical_strategy": "explicit", - "horizontal_strategy": "explicit", - "explicit_vertical_lines": edges, - "explicit_horizontal_lines": edges, - "intersection_y_tolerance": 10, - } - -def get_text_outside_table(crop_page): - ts = prepare_table_config(crop_page) - if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0: - return crop_page - - ### Get the bounding boxes of the tables on the page. 
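    # Added note (not part of the original file): find_tables(table_settings=ts)
    # returns pdfplumber Table objects whose .bbox is (x0, top, x1, bottom) in page
    # coordinates. The filter below keeps a word only if the midpoint of its own
    # bounding box lies outside every one of these table boxes, so the page that is
    # returned yields just the text that sits outside tables.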
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)] - def not_within_bboxes(obj): - """Check if the object is in any of the table's bbox.""" - def obj_in_bbox(_bbox): - """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404""" - v_mid = (obj["top"] + obj["bottom"]) / 2 - h_mid = (obj["x0"] + obj["x1"]) / 2 - x0, top, x1, bottom = _bbox - return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom) - return not any(obj_in_bbox(__bbox) for __bbox in bboxes) - - return crop_page.filter(not_within_bboxes) -# 请使用 LaTeX 表达公式,行内公式以 $ 包裹,行间公式以 $$ 包裹 - -extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"]) -# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size']) - -def get_title_with_cropped_page(first_page): - title = [] # 处理标题 - x0,top,x1,bottom = first_page.bbox # 获取页面边框 - - for word in extract_words(first_page): - word = SimpleNamespace(**word) - - if word.size >= 14: - title.append(word.text) - title_bottom = word.bottom - elif word.text == "Abstract": # 获取页面abstract - top = word.top - - user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))] - # 裁剪掉上半部分, within_bbox: full_included; crop: partial_included - return title, user_info, first_page.within_bbox((x0,top,x1,bottom)) - -def get_column_cropped_pages(pages, two_column=True): - new_pages = [] - for page in pages: - if two_column: - left = page.within_bbox((0, 0, page.width/2, page.height),relative=True) - right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True) - new_pages.append(left) - new_pages.append(right) - else: - new_pages.append(page) - - return new_pages - -def parse_pdf(filename, two_column = True): - level = logging.getLogger().level - if level == logging.getLevelName("DEBUG"): - logging.getLogger().setLevel("INFO") - - with pdfplumber.open(filename) as pdf: - title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0]) - new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column) - - chapters = [] - # tuple (chapter_name, [pageid] (start,stop), chapter_text) - create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace( - name=[], - name_top=name_top, - name_bottom=name_bottom, - record_chapter_name = True, - - page_start=page_start, - page_stop=None, - - text=[], - ) - cur_chapter = None - - # 按页遍历PDF文档 - for idx, page in enumerate(new_pages): - page = get_text_outside_table(page) - - # 按行遍历页面文本 - for word in extract_words(page): - word = SimpleNamespace(**word) - - # 检查行文本是否以12号字体打印,如果是,则将其作为新章节开始 - if word.size >= 11: # 出现chapter name - if cur_chapter is None: - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != cur_chapter.name_bottom and cur_chapter.name_top != cur_chapter.name_top): - # 不再继续写chapter name - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - # 重置当前chapter信息 - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - - # print(word.size, word.top, word.bottom, word.text) - cur_chapter.name.append(word.text) - else: - cur_chapter.record_chapter_name = False # chapter name 结束 - cur_chapter.text.append(word.text) - else: - # 处理最后一个章节 - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - - for i in chapters: - 
logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." - - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/zhanpj/ChatGPT/assets/Kelpy-Codos.js b/spaces/zhanpj/ChatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/zhanpj/ChatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. 
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/zhaoys/wfms-kuiwenc/src/components/ui/voice/index.tsx b/spaces/zhaoys/wfms-kuiwenc/src/components/ui/voice/index.tsx deleted file mode 100644 index a66204c09c76088dfff96f2ebc55318318dd0683..0000000000000000000000000000000000000000 --- a/spaces/zhaoys/wfms-kuiwenc/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,31 +0,0 @@ -import { cn } from '@/lib/utils'; -import './index.scss' - -type VoiceProps = { - num?: number; - duration?: number; - className?: string; -} & React.ComponentProps<'div'> - -export default function Voice({ duration = 400, num = 7, className, ...others }: VoiceProps) { - return ( -
    - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
    - ) - })} -
    - ) -} diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/utils/conversion.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/utils/conversion.py deleted file mode 100644 index e2ae5c0875c4f65c930e908307b393c57edf5122..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/utils/conversion.py +++ /dev/null @@ -1,346 +0,0 @@ -""" -@date: 2021/06/19 -@description: -Specification of 4 coordinate systems: -Pixel coordinates (used in panoramic images), the range is related to the image size, -generally converted to UV coordinates first, the first is horizontal coordinates, -increasing to the right, the second is column coordinates, increasing down - -Uv coordinates (used in panoramic images), the range is [0~1], the upper left corner is the origin, -u is the abscissa and increases to the right, V is the column coordinate and increases to the right - -Longitude and latitude coordinates (spherical), the range of longitude lon is [-pi~ PI], -and the range of dimension is [-pi/2~ PI /2]. The center of the panorama is the origin, -and the longitude increases to the right and the dimension increases to the down - -Xyz coordinate (used in 3-dimensional space, of course, -it can also represent longitude and latitude coordinates on the sphere). -If on the sphere, the coordinate mode length is 1, when y is projected to the height of the camera, -the real position information of space points will be obtained - -Correspondence between longitude and latitude coordinates and xyz coordinates: - | -pi/2 - | - lef _ _ _ _ _ |_ _ _ _ _ - -pi / | \ - pi | - - - - - -\ - z 0 mid - right \_ _ _ _ _ /_|_ _ _ _ _ _/ - / | - / | - x/ | y pi/2 -""" - -import numpy as np -import torch -import functools - - -@functools.lru_cache() -def get_u(w, is_np, b=None): - u = pixel2uv(np.array(range(w)) if is_np else torch.arange(0, w), w=w, axis=0) - if b is not None: - u = u[np.newaxis].repeat(b) if is_np else u.repeat(b, 1) - return u - - -@functools.lru_cache() -def get_lon(w, is_np, b=None): - lon = pixel2lonlat(np.array(range(w)) if is_np else torch.arange(0, w), w=w, axis=0) - if b is not None: - lon = lon[np.newaxis].repeat(b, axis=0) if is_np else lon.repeat(b, 1) - return lon - - -def pixel2uv(pixel, w=1024, h=512, axis=None): - pixel = pixel.astype(np.float32) if isinstance(pixel, np.ndarray) else pixel.float() - # +0.5 will make left/right and up/down coordinates symmetric - if axis is None: - u = (pixel[..., 0:1] + 0.5) / w - v = (pixel[..., 1:] + 0.5) / h - elif axis == 0: - u = (pixel + 0.5) / (w * 1.0) - return u - elif axis == 1: - v = (pixel + 0.5) / (h * 1.0) - return v - else: - assert False, "axis error" - - lst = [u, v] - uv = np.concatenate(lst, axis=-1) if isinstance(pixel, np.ndarray) else torch.cat(lst, dim=-1) - return uv - - -def pixel2lonlat(pixel, w=1024, h=512, axis=None): - uv = pixel2uv(pixel, w, h, axis) - lonlat = uv2lonlat(uv, axis) - return lonlat - - -def pixel2xyz(pixel, w=1024, h=512): - lonlat = pixel2lonlat(pixel, w, h) - xyz = lonlat2xyz(lonlat) - return xyz - - -def uv2lonlat(uv, axis=None): - if axis is None: - lon = (uv[..., 0:1] - 0.5) * 2 * np.pi - lat = (uv[..., 1:] - 0.5) * np.pi - elif axis == 0: - lon = (uv - 0.5) * 2 * np.pi - return lon - elif axis == 1: - lat = (uv - 0.5) * np.pi - return lat - else: - assert False, "axis error" - - lst = [lon, lat] - lonlat = np.concatenate(lst, axis=-1) if isinstance(uv, np.ndarray) else torch.cat(lst, dim=-1) - return lonlat - - -def uv2xyz(uv, plan_y=None, 
spherical=False): - lonlat = uv2lonlat(uv) - xyz = lonlat2xyz(lonlat) - if spherical: - # Projection onto the sphere - return xyz - - if plan_y is None: - from utils.boundary import boundary_type - plan_y = boundary_type(uv) - # Projection onto the specified plane - xyz = xyz * (plan_y / xyz[..., 1])[..., None] - - return xyz - - -def lonlat2xyz(lonlat, plan_y=None): - lon = lonlat[..., 0:1] - lat = lonlat[..., 1:] - cos = np.cos if isinstance(lonlat, np.ndarray) else torch.cos - sin = np.sin if isinstance(lonlat, np.ndarray) else torch.sin - x = cos(lat) * sin(lon) - y = sin(lat) - z = cos(lat) * cos(lon) - lst = [x, y, z] - xyz = np.concatenate(lst, axis=-1) if isinstance(lonlat, np.ndarray) else torch.cat(lst, dim=-1) - - if plan_y is not None: - xyz = xyz * (plan_y / xyz[..., 1])[..., None] - - return xyz - - -##################### - - -def xyz2lonlat(xyz): - atan2 = np.arctan2 if isinstance(xyz, np.ndarray) else torch.atan2 - asin = np.arcsin if isinstance(xyz, np.ndarray) else torch.asin - norm = np.linalg.norm(xyz, axis=-1) if isinstance(xyz, np.ndarray) else torch.norm(xyz, p=2, dim=-1) - xyz_norm = xyz / norm[..., None] - x = xyz_norm[..., 0:1] - y = xyz_norm[..., 1:2] - z = xyz_norm[..., 2:] - lon = atan2(x, z) - lat = asin(y) - lst = [lon, lat] - lonlat = np.concatenate(lst, axis=-1) if isinstance(xyz, np.ndarray) else torch.cat(lst, dim=-1) - return lonlat - - -def xyz2uv(xyz): - lonlat = xyz2lonlat(xyz) - uv = lonlat2uv(lonlat) - return uv - - -def xyz2pixel(xyz, w=1024, h=512): - uv = xyz2uv(xyz) - pixel = uv2pixel(uv, w, h) - return pixel - - -def lonlat2uv(lonlat, axis=None): - if axis is None: - u = lonlat[..., 0:1] / (2 * np.pi) + 0.5 - v = lonlat[..., 1:] / np.pi + 0.5 - elif axis == 0: - u = lonlat / (2 * np.pi) + 0.5 - return u - elif axis == 1: - v = lonlat / np.pi + 0.5 - return v - else: - assert False, "axis error" - - lst = [u, v] - uv = np.concatenate(lst, axis=-1) if isinstance(lonlat, np.ndarray) else torch.cat(lst, dim=-1) - return uv - - -def lonlat2pixel(lonlat, w=1024, h=512, axis=None, need_round=True): - uv = lonlat2uv(lonlat, axis) - pixel = uv2pixel(uv, w, h, axis, need_round) - return pixel - - -def uv2pixel(uv, w=1024, h=512, axis=None, need_round=True): - """ - :param uv: [[u1, v1], [u2, v2] ...] 
- :param w: width of panorama image - :param h: height of panorama image - :param axis: sometimes the input data is only u(axis =0) or only v(axis=1) - :param need_round: - :return: - """ - if axis is None: - pu = uv[..., 0:1] * w - 0.5 - pv = uv[..., 1:] * h - 0.5 - elif axis == 0: - pu = uv * w - 0.5 - if need_round: - pu = pu.round().astype(np.int32) if isinstance(uv, np.ndarray) else pu.round().int() - return pu - elif axis == 1: - pv = uv * h - 0.5 - if need_round: - pv = pv.round().astype(np.int32) if isinstance(uv, np.ndarray) else pv.round().int() - return pv - else: - assert False, "axis error" - - lst = [pu, pv] - if need_round: - pixel = np.concatenate(lst, axis=-1).round().astype(np.int32) if isinstance(uv, np.ndarray) else torch.cat(lst, - dim=-1).round().int() - else: - pixel = np.concatenate(lst, axis=-1) if isinstance(uv, np.ndarray) else torch.cat(lst, dim=-1) - pixel[..., 0] = pixel[..., 0] % w - pixel[..., 1] = pixel[..., 1] % h - - return pixel - - -##################### - - -def xyz2depth(xyz, plan_y=1): - """ - :param xyz: - :param plan_y: - :return: - """ - xyz = xyz * (plan_y / xyz[..., 1])[..., None] - xz = xyz[..., ::2] - depth = np.linalg.norm(xz, axis=-1) if isinstance(xz, np.ndarray) else torch.norm(xz, dim=-1) - return depth - - -def uv2depth(uv, plan_y=None): - if plan_y is None: - from utils.boundary import boundary_type - plan_y = boundary_type(uv) - - xyz = uv2xyz(uv, plan_y) - depth = xyz2depth(xyz, plan_y) - return depth - - -def lonlat2depth(lonlat, plan_y=None): - if plan_y is None: - from utils.boundary import boundary_type - plan_y = boundary_type(lonlat2uv(lonlat)) - - xyz = lonlat2xyz(lonlat, plan_y) - depth = xyz2depth(xyz, plan_y) - return depth - - -def depth2xyz(depth, plan_y=1): - """ - :param depth: [patch_num] or [b, patch_num] - :param plan_y: - :return: - """ - is_np = isinstance(depth, np.ndarray) - w = depth.shape[-1] - - lon = get_lon(w, is_np, b=depth.shape[0] if len(depth.shape) == 2 else None) - if not is_np: - lon = lon.to(depth.device) - - cos = np.cos if is_np else torch.cos - sin = np.sin if is_np else torch.sin - # polar covert to cartesian - if len(depth.shape) == 2: - b = depth.shape[0] - xyz = np.zeros((b, w, 3)) if is_np else torch.zeros((b, w, 3)) - else: - xyz = np.zeros((w, 3)) if is_np else torch.zeros((w, 3)) - - if not is_np: - xyz = xyz.to(depth.device) - - xyz[..., 0] = depth * sin(lon) - xyz[..., 1] = plan_y - xyz[..., 2] = depth * cos(lon) - return xyz - - -def depth2uv(depth, plan_y=1): - xyz = depth2xyz(depth, plan_y) - uv = xyz2uv(xyz) - return uv - - -def depth2pixel(depth, w=1024, h=512, need_round=True, plan_y=1): - uv = depth2uv(depth, plan_y) - pixel = uv2pixel(uv, w, h, need_round=need_round) - return pixel - - -if __name__ == '__main__': - a = np.array([[0.5, 1, 0.5]]) - a = xyz2pixel(a) - print(a) - - -if __name__ == '__main__1': - np.set_printoptions(suppress=True) - - a = np.array([[0, 0], [1023, 511]]) - a = pixel2xyz(a) - a = xyz2pixel(a) - print(a) - - ########### - a = torch.tensor([[0, 0], [1023, 511]]) - a = pixel2xyz(a) - a = xyz2pixel(a) - print(a) - - ########### - u = np.array([0, 256, 512, 1023]) - lon = pixel2lonlat(u, axis=0) - u = lonlat2pixel(lon, axis=0) - print(u) - - u = torch.tensor([0, 256, 512, 1023]) - lon = pixel2lonlat(u, axis=0) - u = lonlat2pixel(lon, axis=0) - print(u) - - ########### - v = np.array([0, 256, 511]) - lat = pixel2lonlat(v, axis=1) - v = lonlat2pixel(lat, axis=1) - print(v) - - v = torch.tensor([0, 256, 511]) - lat = pixel2lonlat(v, axis=1) - v = 
lonlat2pixel(lat, axis=1) - print(v) diff --git a/spaces/zhongkaifu/mt_enu_chs/Dockerfile b/spaces/zhongkaifu/mt_enu_chs/Dockerfile deleted file mode 100644 index 7e20838dc85c6aac996c398d8e3580705a632223..0000000000000000000000000000000000000000 --- a/spaces/zhongkaifu/mt_enu_chs/Dockerfile +++ /dev/null @@ -1,53 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb -RUN dpkg -i packages-microsoft-prod.deb -RUN rm packages-microsoft-prod.deb - -RUN curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh - -RUN apt-get update -RUN apt-get install -y dotnet-sdk-7.0 -RUN apt-get install -y aspnetcore-runtime-7.0 -RUN apt-get install -y cmake -RUN apt-get install -y git-lfs - -RUN git clone https://github.com/zhongkaifu/Seq2SeqSharp.git -WORKDIR /code/Seq2SeqSharp -RUN dotnet build Seq2SeqSharp.sln --configuration Release - -WORKDIR /code/Seq2SeqSharp/ExternalProjects -RUN unzip SentencePiece.zip -WORKDIR /code/Seq2SeqSharp/ExternalProjects/SentencePiece -RUN mkdir build -WORKDIR /code/Seq2SeqSharp/ExternalProjects/SentencePiece/build -RUN cmake .. -RUN make -j $(nproc) -RUN make install -RUN ldconfig -v - -WORKDIR /code - -#RUN git clone https://huggingface.co/zhongkaifu/mt_enu_chs - -RUN mkdir -p /code/bin -RUN chmod 777 /code/bin -WORKDIR /code/bin - -RUN cp -r /code/Seq2SeqSharp/Tools/SeqWebApps/bin/Release/net7.0/* . -RUN wget https://huggingface.co/zhongkaifu/mt_enu_chs/resolve/main/mt_enu_chs.model -RUN wget https://huggingface.co/zhongkaifu/mt_enu_chs/resolve/main/chsSpm.model -RUN rm appsettings.json -RUN wget https://huggingface.co/zhongkaifu/mt_enu_chs/resolve/main/appsettings.json -#RUN cp /code/mt_enu_chs/appsettings.json . 
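# Added note (not in the original Dockerfile): the appsettings.json fetched above
# presumably points the SeqWebApps service at the model and sentencepiece files
# downloaded into /code/bin; the CMD below simply starts that ASP.NET app with dotnet.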
- -CMD ["dotnet","/code/bin/SeqWebApps.dll"] diff --git a/spaces/zideliu/styledrop/libs/uvit_vq.py b/spaces/zideliu/styledrop/libs/uvit_vq.py deleted file mode 100644 index 62cbe7fd72c07cc0dcaafed4228ead7153286cea..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/libs/uvit_vq.py +++ /dev/null @@ -1,264 +0,0 @@ -import os - -import torch -import torch.nn as nn -import math - -from loguru import logger - -import timm -from timm.models.layers import trunc_normal_ -from timm.models.vision_transformer import PatchEmbed, Mlp - -assert timm.__version__ == "0.3.2" # version check -import einops -import torch.utils.checkpoint -import torch.nn.functional as F - -try: - import xformers - import xformers.ops - - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, vocab_size, hidden_size, max_position_embeddings, dropout=0.1): - super().__init__() - self.word_embeddings = nn.Embedding(vocab_size, hidden_size) - self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(hidden_size, eps=1e-6) - self.dropout = nn.Dropout(dropout) - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(max_position_embeddings).expand((1, -1))) - - torch.nn.init.normal_(self.word_embeddings.weight, std=.02) - torch.nn.init.normal_(self.position_embeddings.weight, std=.02) - - def forward( - self, input_ids - ): - input_shape = input_ids.size() - - seq_length = input_shape[1] - - position_ids = self.position_ids[:, :seq_length] - - inputs_embeds = self.word_embeddings(input_ids) - - position_embeddings = self.position_embeddings(position_ids) - embeddings = inputs_embeds + position_embeddings - - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class MlmLayer(nn.Module): - - def __init__(self, feat_emb_dim, word_emb_dim, vocab_size): - super().__init__() - self.fc = nn.Linear(feat_emb_dim, word_emb_dim) - self.gelu = nn.GELU() - self.ln = nn.LayerNorm(word_emb_dim) - self.bias = nn.Parameter(torch.zeros(1, 1, vocab_size)) - - def forward(self, x, word_embeddings): - mlm_hidden = self.fc(x) - mlm_hidden = self.gelu(mlm_hidden) - mlm_hidden = self.ln(mlm_hidden) - word_embeddings = word_embeddings.transpose(0, 1) - logits = torch.matmul(mlm_hidden, word_embeddings) - logits = logits + self.bias - return logits - - -def patchify(imgs, patch_size): - x = einops.rearrange(imgs, 'B C (h p1) (w p2) -> B (h w) (p1 p2 C)', p1=patch_size, p2=patch_size) - return x - - -def unpatchify(x, channels=3, flatten=False): - patch_size = int((x.shape[2] // channels) ** 0.5) - h = w = int(x.shape[1] ** .5) - assert h * w == x.shape[1] and patch_size ** 2 * channels == x.shape[2] - if flatten: - x = einops.rearrange(x, 'B (h w) (p1 p2 C) -> B (h p1 w p2) C', h=h, p1=patch_size, p2=patch_size) - else: - x = einops.rearrange(x, 'B (h w) (p1 p2 C) -> B C (h p1) (w p2)', h=h, p1=patch_size, p2=patch_size) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale 
factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - if XFORMERS_IS_AVAILBLE: - qkv = self.qkv(x) - qkv = einops.rearrange(qkv, 'B L (K H D) -> K B L H D', K=3, H=self.num_heads) - q, k, v = qkv[0], qkv[1], qkv[2] # B L H D - x = xformers.ops.memory_efficient_attention(q, k, v) - x = einops.rearrange(x, 'B L H D -> B L (H D)', H=self.num_heads) - else: - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, - act_layer=nn.GELU, norm_layer=nn.LayerNorm, skip=False, use_checkpoint=False): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale) - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer) - self.skip_linear = nn.Linear(2 * dim, dim) if skip else None - self.use_checkpoint = use_checkpoint - - def forward(self, x, skip=None): - if self.use_checkpoint: - return torch.utils.checkpoint.checkpoint(self._forward, x, skip) - else: - return self._forward(x, skip) - - def _forward(self, x, skip=None): - if self.skip_linear is not None: - x = self.skip_linear(torch.cat([x, skip], dim=-1)) - x = x + self.attn(self.norm1(x)) - x = x + self.mlp(self.norm2(x)) - return x - - -class UViT(nn.Module): - def __init__(self, img_size=16, patch_size=1, in_chans=8, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4., - qkv_bias=False, qk_scale=None, norm_layer=nn.LayerNorm, num_classes=-1, - use_checkpoint=False, skip=True, codebook_size=1024): - super().__init__() - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - self.num_classes = num_classes - self.in_chans = in_chans - self.skip = skip - - logger.debug(f'codebook size in nnet: {codebook_size}') - self.codebook_size = codebook_size - if num_classes > 0: - self.extras = 1 - vocab_size = codebook_size + num_classes + 1 - else: - self.extras = 0 - vocab_size = codebook_size + 1 - - self.token_emb = BertEmbeddings(vocab_size=vocab_size, - hidden_size=embed_dim, - max_position_embeddings=int(img_size ** 2) + self.extras, - dropout=0.1) - logger.debug(f'token emb weight shape: {self.token_emb.word_embeddings.weight.shape}') - - if patch_size != 1: # downsamp - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim, input_shape='bhwc') - logger.debug(f'patch emb weight shape: {self.patch_embed.proj.weight.shape}') - self.decoder_pred = nn.Linear(embed_dim, patch_size ** 2 * embed_dim, bias=True) - else: - self.patch_embed = None - self.decoder_pred = None - - self.in_blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - norm_layer=norm_layer, 
use_checkpoint=use_checkpoint) - for _ in range(depth // 2)]) - - self.mid_block = Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - norm_layer=norm_layer, use_checkpoint=use_checkpoint) - - self.out_blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - norm_layer=norm_layer, skip=skip, use_checkpoint=use_checkpoint) - for _ in range(depth // 2)]) - - self.norm = norm_layer(embed_dim) - self.mlm_layer = MlmLayer(feat_emb_dim=embed_dim, word_emb_dim=embed_dim, vocab_size=vocab_size) - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed'} - - def forward(self, x, context=None): - assert len(x.shape) == 2 - if context is not None: - context = context + self.codebook_size + 1 # shift, mask token is self.codebook_size - x = torch.cat((context, x), dim=1) - x = self.token_emb(x.long()) - if self.patch_embed is not None: - featmap_downsampled = self.patch_embed( - x[:, self.extras:].reshape(-1, *self.patch_embed.img_size, self.embed_dim)).reshape(x.shape[0], -1, self.embed_dim) - x = torch.cat((x[:, :self.extras], featmap_downsampled), dim=1) - - if self.skip: - skips = [] - for blk in self.in_blocks: - x = blk(x) - if self.skip: - skips.append(x) - - x = self.mid_block(x) - - for blk in self.out_blocks: - if self.skip: - x = blk(x, skips.pop()) - else: - x = blk(x) - - x = self.norm(x) - if self.decoder_pred is not None: - featmap_upsampled = unpatchify(self.decoder_pred(x[:, self.extras:]), self.embed_dim, flatten=True) - x = torch.cat((x[:, :self.extras], featmap_upsampled), dim=1) - word_embeddings = self.token_emb.word_embeddings.weight.data.detach() - x = self.mlm_layer(x, word_embeddings) - x = x[:, self.extras:, :self.codebook_size] - return x diff --git a/spaces/zideliu/styledrop/timm/models/pnasnet.py b/spaces/zideliu/styledrop/timm/models/pnasnet.py deleted file mode 100644 index 5f1e177f5a8e31981b681c0293bba274f861fb6f..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/pnasnet.py +++ /dev/null @@ -1,347 +0,0 @@ -""" - pnasnet5large implementation grabbed from Cadene's pretrained models - Additional credit to https://github.com/creafz - - https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/pnasnet.py - -""" -from collections import OrderedDict - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .helpers import build_model_with_cfg -from .layers import ConvBnAct, create_conv2d, create_pool2d, create_classifier -from .registry import register_model - -__all__ = ['PNASNet5Large'] - -default_cfgs = { - 'pnasnet5large': { - 'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/pnasnet5large-bf079911.pth', - 'input_size': (3, 331, 331), - 'pool_size': (11, 11), - 'crop_pct': 0.911, - 'interpolation': 'bicubic', - 'mean': (0.5, 0.5, 0.5), - 'std': (0.5, 0.5, 0.5), - 'num_classes': 1001, - 'first_conv': 'conv_0.conv', - 'classifier': 'last_linear', - }, -} - - -class SeparableConv2d(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, stride, padding=''): - 
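        # Added comment (not in the original file): a depthwise k x k convolution
        # (groups=in_channels, so each channel is filtered on its own) followed by
        # a 1x1 pointwise convolution that mixes channels and maps in_channels to
        # out_channels. BranchSeparables below wraps two of these with ReLU in
        # front of and BatchNorm after each.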
super(SeparableConv2d, self).__init__() - self.depthwise_conv2d = create_conv2d( - in_channels, in_channels, kernel_size=kernel_size, - stride=stride, padding=padding, groups=in_channels) - self.pointwise_conv2d = create_conv2d( - in_channels, out_channels, kernel_size=1, padding=padding) - - def forward(self, x): - x = self.depthwise_conv2d(x) - x = self.pointwise_conv2d(x) - return x - - -class BranchSeparables(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, stem_cell=False, padding=''): - super(BranchSeparables, self).__init__() - middle_channels = out_channels if stem_cell else in_channels - self.act_1 = nn.ReLU() - self.separable_1 = SeparableConv2d( - in_channels, middle_channels, kernel_size, stride=stride, padding=padding) - self.bn_sep_1 = nn.BatchNorm2d(middle_channels, eps=0.001) - self.act_2 = nn.ReLU() - self.separable_2 = SeparableConv2d( - middle_channels, out_channels, kernel_size, stride=1, padding=padding) - self.bn_sep_2 = nn.BatchNorm2d(out_channels, eps=0.001) - - def forward(self, x): - x = self.act_1(x) - x = self.separable_1(x) - x = self.bn_sep_1(x) - x = self.act_2(x) - x = self.separable_2(x) - x = self.bn_sep_2(x) - return x - - -class ActConvBn(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=''): - super(ActConvBn, self).__init__() - self.act = nn.ReLU() - self.conv = create_conv2d( - in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding) - self.bn = nn.BatchNorm2d(out_channels, eps=0.001) - - def forward(self, x): - x = self.act(x) - x = self.conv(x) - x = self.bn(x) - return x - - -class FactorizedReduction(nn.Module): - - def __init__(self, in_channels, out_channels, padding=''): - super(FactorizedReduction, self).__init__() - self.act = nn.ReLU() - self.path_1 = nn.Sequential(OrderedDict([ - ('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)), - ('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)), - ])) - self.path_2 = nn.Sequential(OrderedDict([ - ('pad', nn.ZeroPad2d((-1, 1, -1, 1))), # shift - ('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)), - ('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)), - ])) - self.final_path_bn = nn.BatchNorm2d(out_channels, eps=0.001) - - def forward(self, x): - x = self.act(x) - x_path1 = self.path_1(x) - x_path2 = self.path_2(x) - out = self.final_path_bn(torch.cat([x_path1, x_path2], 1)) - return out - - -class CellBase(nn.Module): - - def cell_forward(self, x_left, x_right): - x_comb_iter_0_left = self.comb_iter_0_left(x_left) - x_comb_iter_0_right = self.comb_iter_0_right(x_left) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x_right) - x_comb_iter_1_right = self.comb_iter_1_right(x_right) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x_right) - x_comb_iter_2_right = self.comb_iter_2_right(x_right) - x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right - - x_comb_iter_3_left = self.comb_iter_3_left(x_comb_iter_2) - x_comb_iter_3_right = self.comb_iter_3_right(x_right) - x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right - - x_comb_iter_4_left = self.comb_iter_4_left(x_left) - if self.comb_iter_4_right is not None: - x_comb_iter_4_right = self.comb_iter_4_right(x_right) - else: - x_comb_iter_4_right = x_right - x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right - - x_out = 
torch.cat([x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class CellStem0(CellBase): - - def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): - super(CellStem0, self).__init__() - self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type) - - self.comb_iter_0_left = BranchSeparables( - in_chs_left, out_chs_left, kernel_size=5, stride=2, stem_cell=True, padding=pad_type) - self.comb_iter_0_right = nn.Sequential(OrderedDict([ - ('max_pool', create_pool2d('max', 3, stride=2, padding=pad_type)), - ('conv', create_conv2d(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type)), - ('bn', nn.BatchNorm2d(out_chs_left, eps=0.001)), - ])) - - self.comb_iter_1_left = BranchSeparables( - out_chs_right, out_chs_right, kernel_size=7, stride=2, padding=pad_type) - self.comb_iter_1_right = create_pool2d('max', 3, stride=2, padding=pad_type) - - self.comb_iter_2_left = BranchSeparables( - out_chs_right, out_chs_right, kernel_size=5, stride=2, padding=pad_type) - self.comb_iter_2_right = BranchSeparables( - out_chs_right, out_chs_right, kernel_size=3, stride=2, padding=pad_type) - - self.comb_iter_3_left = BranchSeparables( - out_chs_right, out_chs_right, kernel_size=3, padding=pad_type) - self.comb_iter_3_right = create_pool2d('max', 3, stride=2, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables( - in_chs_right, out_chs_right, kernel_size=3, stride=2, stem_cell=True, padding=pad_type) - self.comb_iter_4_right = ActConvBn( - out_chs_right, out_chs_right, kernel_size=1, stride=2, padding=pad_type) - - def forward(self, x_left): - x_right = self.conv_1x1(x_left) - x_out = self.cell_forward(x_left, x_right) - return x_out - - -class Cell(CellBase): - - def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type='', - is_reduction=False, match_prev_layer_dims=False): - super(Cell, self).__init__() - - # If `is_reduction` is set to `True` stride 2 is used for - # convolution and pooling layers to reduce the spatial size of - # the output of a cell approximately by a factor of 2. - stride = 2 if is_reduction else 1 - - # If `match_prev_layer_dimensions` is set to `True` - # `FactorizedReduction` is used to reduce the spatial size - # of the left input of a cell approximately by a factor of 2. 
-        self.match_prev_layer_dimensions = match_prev_layer_dims
-        if match_prev_layer_dims:
-            self.conv_prev_1x1 = FactorizedReduction(in_chs_left, out_chs_left, padding=pad_type)
-        else:
-            self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type)
-        self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type)
-
-        self.comb_iter_0_left = BranchSeparables(
-            out_chs_left, out_chs_left, kernel_size=5, stride=stride, padding=pad_type)
-        self.comb_iter_0_right = create_pool2d('max', 3, stride=stride, padding=pad_type)
-
-        self.comb_iter_1_left = BranchSeparables(
-            out_chs_right, out_chs_right, kernel_size=7, stride=stride, padding=pad_type)
-        self.comb_iter_1_right = create_pool2d('max', 3, stride=stride, padding=pad_type)
-
-        self.comb_iter_2_left = BranchSeparables(
-            out_chs_right, out_chs_right, kernel_size=5, stride=stride, padding=pad_type)
-        self.comb_iter_2_right = BranchSeparables(
-            out_chs_right, out_chs_right, kernel_size=3, stride=stride, padding=pad_type)
-
-        self.comb_iter_3_left = BranchSeparables(out_chs_right, out_chs_right, kernel_size=3)
-        self.comb_iter_3_right = create_pool2d('max', 3, stride=stride, padding=pad_type)
-
-        self.comb_iter_4_left = BranchSeparables(
-            out_chs_left, out_chs_left, kernel_size=3, stride=stride, padding=pad_type)
-        if is_reduction:
-            self.comb_iter_4_right = ActConvBn(
-                out_chs_right, out_chs_right, kernel_size=1, stride=stride, padding=pad_type)
-        else:
-            self.comb_iter_4_right = None
-
-    def forward(self, x_left, x_right):
-        x_left = self.conv_prev_1x1(x_left)
-        x_right = self.conv_1x1(x_right)
-        x_out = self.cell_forward(x_left, x_right)
-        return x_out
-
-
-class PNASNet5Large(nn.Module):
-    def __init__(self, num_classes=1001, in_chans=3, output_stride=32, drop_rate=0., global_pool='avg', pad_type=''):
-        super(PNASNet5Large, self).__init__()
-        self.num_classes = num_classes
-        self.drop_rate = drop_rate
-        self.num_features = 4320
-        assert output_stride == 32
-
-        self.conv_0 = ConvBnAct(
-            in_chans, 96, kernel_size=3, stride=2, padding=0,
-            norm_kwargs=dict(eps=0.001, momentum=0.1), act_layer=None)
-
-        self.cell_stem_0 = CellStem0(
-            in_chs_left=96, out_chs_left=54, in_chs_right=96, out_chs_right=54, pad_type=pad_type)
-
-        self.cell_stem_1 = Cell(
-            in_chs_left=96, out_chs_left=108, in_chs_right=270, out_chs_right=108, pad_type=pad_type,
-            match_prev_layer_dims=True, is_reduction=True)
-        self.cell_0 = Cell(
-            in_chs_left=270, out_chs_left=216, in_chs_right=540, out_chs_right=216, pad_type=pad_type,
-            match_prev_layer_dims=True)
-        self.cell_1 = Cell(
-            in_chs_left=540, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type)
-        self.cell_2 = Cell(
-            in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type)
-        self.cell_3 = Cell(
-            in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type)
-
-        self.cell_4 = Cell(
-            in_chs_left=1080, out_chs_left=432, in_chs_right=1080, out_chs_right=432, pad_type=pad_type,
-            is_reduction=True)
-        self.cell_5 = Cell(
-            in_chs_left=1080, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type,
-            match_prev_layer_dims=True)
-        self.cell_6 = Cell(
-            in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type)
-        self.cell_7 = Cell(
-            in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type)
-
-        self.cell_8 = Cell(
-            in_chs_left=2160, out_chs_left=864, in_chs_right=2160, out_chs_right=864, pad_type=pad_type,
-            is_reduction=True)
-        self.cell_9 = Cell(
-            in_chs_left=2160, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type,
-            match_prev_layer_dims=True)
-        self.cell_10 = Cell(
-            in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type)
-        self.cell_11 = Cell(
-            in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type)
-        self.act = nn.ReLU()
-        self.feature_info = [
-            dict(num_chs=96, reduction=2, module='conv_0'),
-            dict(num_chs=270, reduction=4, module='cell_stem_1.conv_1x1.act'),
-            dict(num_chs=1080, reduction=8, module='cell_4.conv_1x1.act'),
-            dict(num_chs=2160, reduction=16, module='cell_8.conv_1x1.act'),
-            dict(num_chs=4320, reduction=32, module='act'),
-        ]
-
-        self.global_pool, self.last_linear = create_classifier(
-            self.num_features, self.num_classes, pool_type=global_pool)
-
-    def get_classifier(self):
-        return self.last_linear
-
-    def reset_classifier(self, num_classes, global_pool='avg'):
-        self.num_classes = num_classes
-        self.global_pool, self.last_linear = create_classifier(
-            self.num_features, self.num_classes, pool_type=global_pool)
-
-    def forward_features(self, x):
-        x_conv_0 = self.conv_0(x)
-        x_stem_0 = self.cell_stem_0(x_conv_0)
-        x_stem_1 = self.cell_stem_1(x_conv_0, x_stem_0)
-        x_cell_0 = self.cell_0(x_stem_0, x_stem_1)
-        x_cell_1 = self.cell_1(x_stem_1, x_cell_0)
-        x_cell_2 = self.cell_2(x_cell_0, x_cell_1)
-        x_cell_3 = self.cell_3(x_cell_1, x_cell_2)
-        x_cell_4 = self.cell_4(x_cell_2, x_cell_3)
-        x_cell_5 = self.cell_5(x_cell_3, x_cell_4)
-        x_cell_6 = self.cell_6(x_cell_4, x_cell_5)
-        x_cell_7 = self.cell_7(x_cell_5, x_cell_6)
-        x_cell_8 = self.cell_8(x_cell_6, x_cell_7)
-        x_cell_9 = self.cell_9(x_cell_7, x_cell_8)
-        x_cell_10 = self.cell_10(x_cell_8, x_cell_9)
-        x_cell_11 = self.cell_11(x_cell_9, x_cell_10)
-        x = self.act(x_cell_11)
-        return x
-
-    def forward(self, x):
-        x = self.forward_features(x)
-        x = self.global_pool(x)
-        if self.drop_rate > 0:
-            x = F.dropout(x, self.drop_rate, training=self.training)
-        x = self.last_linear(x)
-        return x
-
-
-def _create_pnasnet(variant, pretrained=False, **kwargs):
-    return build_model_with_cfg(
-        PNASNet5Large, variant, pretrained, default_cfg=default_cfgs[variant],
-        feature_cfg=dict(feature_cls='hook', no_rewrite=True),  # not possible to re-write this model
-        **kwargs)
-
-
-@register_model
-def pnasnet5large(pretrained=False, **kwargs):
-    r"""PNASNet-5 model architecture from the
-    `"Progressive Neural Architecture Search"
-    <https://arxiv.org/abs/1712.00559>`_ paper.
-    """
-    model_kwargs = dict(pad_type='same', **kwargs)
-    return _create_pnasnet('pnasnet5large', pretrained, **model_kwargs)
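For reference, a minimal usage sketch (not part of the deleted file above, and assuming a timm installation that still ships this pnasnet module): the `pnasnet5large` entry point registered with `@register_model` can be instantiated through `timm.create_model` and run on a dummy batch sized to the `input_size` declared in `default_cfgs`.

import torch
import timm

# 'pnasnet5large' is the registry name defined above; pretrained=False skips
# downloading the checkpoint referenced in default_cfgs['url'].
model = timm.create_model('pnasnet5large', pretrained=False)
model.eval()

# default_cfgs specifies input_size=(3, 331, 331) and num_classes=1001.
dummy = torch.randn(1, 3, 331, 331)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # expected: torch.Size([1, 1001])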