diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Aquachem Software Crack Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Aquachem Software Crack Free Download.md deleted file mode 100644 index 3f05b976cf69fb8bbcb80e5f84d4af3eb2a30736..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Aquachem Software Crack Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

aquachem software crack free download


Download Zip > https://imgfil.com/2uxXtJ



- -Schlumberger AquaChem is a groundwater software package for anyone working ... وارد پوشه Crack شده و فایل LicenseManager.dll را کپی کرده و در محل نصب نرم ... Full Licensed, Free License, Cracked, Repacked, Direct Download Link, DDL, ... 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Edius Free VERIFIED Download Full Version For Windows 7 32-bit Software.md b/spaces/1gistliPinn/ChatGPT4/Examples/Edius Free VERIFIED Download Full Version For Windows 7 32-bit Software.md deleted file mode 100644 index ed3cae2a4a5e0b184f2e85fbad8400cda285da94..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Edius Free VERIFIED Download Full Version For Windows 7 32-bit Software.md +++ /dev/null @@ -1,20 +0,0 @@ -
-

How to Download and Install EDIUS Pro for Windows 7 32-bit

-

EDIUS Pro is a powerful video editing software that supports various formats and resolutions. It allows you to create professional-looking projects with ease and speed. EDIUS Pro is compatible with Windows 7 32-bit, but you need to have an EDIUS ID and QuickTime installed on your computer before you can use it. In this article, we will show you how to download and install EDIUS Pro for Windows 7 32-bit in a few simple steps.

-

Step 1: Download EDIUS Pro from the official website

-

The first thing you need to do is to download EDIUS Pro from the official website. You can choose between the latest version (EDIUS X) or the previous versions (EDIUS 9 or EDIUS 8). The latest version has more features and a redesigned core engine, but it also requires more system resources. The previous versions are still supported and updated, but they have less functionality. You can compare the different versions here: https://www.edius.net/compare.html

-

edius free download full version for windows 7 32-bit software


Download Zip ---> https://imgfil.com/2uy0Ey



-

To download EDIUS Pro, go to https://www.edius.net/edius_download.html and select the version you want. You will see a zip file with a size of about 1 GB (for EDIUS X) or 800 MB (for EDIUS 9 or EDIUS 8). Click on the download button and save the file to your computer.

-

Step 2: Extract the zip file and run the setup

-

After you have downloaded the zip file, you need to extract it to a folder on your computer. You can use any file compression software, such as WinRAR or 7-Zip, to do this. Right-click on the zip file and choose "Extract here" or "Extract to" and select a destination folder.

-

Once you have extracted the zip file, you will see a folder with several files and subfolders. Double-click on the "Setup.exe" file to start the installation process. You will see a welcome screen with the End-User License Agreement. Read it carefully and click on "Accept" if you agree with the terms. Then click on "Install" to begin the installation.

-

Step 3: Follow the instructions and complete the installation

-

The installation process is quite simple and straightforward. You don't need to choose any options or settings, as everything is done automatically for you. The installer will copy the necessary files and create a shortcut icon on your desktop. The installation may take several minutes, depending on your system speed and performance.

-

When the installation is finished, you will see a message saying "Installation completed successfully". Click on "Finish" to exit the installer. You can now launch EDIUS Pro from your desktop or start menu.

-

-

Step 4: Activate EDIUS Pro with your EDIUS ID

-

The last step you need to do is to activate EDIUS Pro with your EDIUS ID. An EDIUS ID is a unique identifier that allows you to use EDIUS Pro and access its online services. If you don't have an EDIUS ID, you can create one for free at https://ediusid1.grassvalley.com/. You will need to provide your name, email address, country, and password.

-

Once you have an EDIUS ID, you can activate EDIUS Pro by entering it in the activation window that appears when you start EDIUS Pro for the first time. You will also need to enter your serial number, which is provided when you purchase EDIUS Pro from an authorized reseller or request a free 30-day trial version.

-

After you enter your EDIUS ID and serial number, click on "Activate" and wait for a few seconds. You will see a message saying "Activation completed successfully". Click on "OK" to close the activation window, and you can start using EDIUS Pro.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eriyum Panikadu Book Free 16 !!TOP!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eriyum Panikadu Book Free 16 !!TOP!!.md deleted file mode 100644 index 160dc1495d5f1319d3c325af33bc123461082be7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Eriyum Panikadu Book Free 16 !!TOP!!.md +++ /dev/null @@ -1,14 +0,0 @@ -
-

Eriyum Panikadu: A Novel About the Plight of Tea Plantation Workers

-

Eriyum Panikadu is a Tamil novel by P.H. Daniel, translated by Ra. Murugavel. It was first published in 1969 and is considered a classic of Tamil literature. The novel depicts the life and struggles of the tea plantation workers in the Nilgiris during the colonial era. The novel is based on the author's personal experience as a doctor and a trade union leader among the workers.

-

Eriyum Panikadu Book Free 16


Download File »»» https://imgfil.com/2uxYiF



-

The novel follows the story of Selvan, a young worker who dreams of a better life for himself and his people. He falls in love with Valli, a beautiful girl from another plantation, and hopes to marry her someday. However, he faces many obstacles and challenges from the oppressive system of the planters, who exploit and abuse the workers mercilessly. The novel also portrays the social and cultural aspects of the workers, such as their festivals, rituals, beliefs, customs, and language.

-

Eriyum Panikadu is a powerful and realistic novel that exposes the harsh realities of the tea plantation industry and its impact on the workers. It also highlights the importance of education, organization, and resistance among the oppressed classes. The novel has been praised for its vivid narration, rich characterization, and historical accuracy. It has been adapted into a film by Bala in 2013, titled Paradesi.

-

Eriyum Panikadu is a novel that deserves to be read by everyone who wants to learn more about the history and culture of Tamil Nadu and its people. It is a novel that will move you, inspire you, and make you think. You can download Eriyum Panikadu book for free from SoundCloud[^2^] [^3^] or buy it from Amazon[^1^].

-

- -

The novel Eriyum Panikadu is not only historical fiction, but also a social commentary on the contemporary issues of caste, class, and gender. It exposes the discrimination and violence that the workers, who belong to the Dalit community, face at the hands of the upper-caste planters and the British officials. It also shows the plight of the women workers, who are subjected to sexual harassment, rape, and forced sterilization. The novel challenges the stereotypes and prejudices that are prevalent in society and calls for social justice and equality.

-

The novel Eriyum Panikadu is also a literary masterpiece that showcases the beauty and richness of the Tamil language and culture. It uses a variety of dialects and registers to capture the authentic voice of the workers and their environment, and it incorporates many folk songs, proverbs, idioms, and metaphors that reflect their wisdom and creativity. The novel is a tribute to the resilience and courage of the workers, who, despite their hardships, manage to find joy and hope in their lives.

-

The novel Eriyum Panikadu is a must-read for anyone who loves literature and history. It is a novel that will make you laugh, cry, and feel both anger and hope. It is a novel that will teach you about the past and inspire you for the future. It is a novel that will stay with you forever.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/1line/AutoGPT/data_ingestion.py b/spaces/1line/AutoGPT/data_ingestion.py deleted file mode 100644 index b89a33dafd15c2e7bded0445a741a4a1c47ed417..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/data_ingestion.py +++ /dev/null @@ -1,96 +0,0 @@ -import argparse -import logging - -from autogpt.commands.file_operations import ingest_file, search_files -from autogpt.config import Config -from autogpt.memory import get_memory - -cfg = Config() - - -def configure_logging(): - logging.basicConfig( - filename="log-ingestion.txt", - filemode="a", - format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s", - datefmt="%H:%M:%S", - level=logging.DEBUG, - ) - return logging.getLogger("AutoGPT-Ingestion") - - -def ingest_directory(directory, memory, args): - """ - Ingest all files in a directory by calling the ingest_file function for each file. - - :param directory: The directory containing the files to ingest - :param memory: An object with an add() method to store the chunks in memory - """ - try: - files = search_files(directory) - for file in files: - ingest_file(file, memory, args.max_length, args.overlap) - except Exception as e: - print(f"Error while ingesting directory '{directory}': {str(e)}") - - -def main() -> None: - logger = configure_logging() - - parser = argparse.ArgumentParser( - description="Ingest a file or a directory with multiple files into memory. " - "Make sure to set your .env before running this script." - ) - group = parser.add_mutually_exclusive_group(required=True) - group.add_argument("--file", type=str, help="The file to ingest.") - group.add_argument( - "--dir", type=str, help="The directory containing the files to ingest." - ) - parser.add_argument( - "--init", - action="store_true", - help="Init the memory and wipe its content (default: False)", - default=False, - ) - parser.add_argument( - "--overlap", - type=int, - help="The overlap size between chunks when ingesting files (default: 200)", - default=200, - ) - parser.add_argument( - "--max_length", - type=int, - help="The max_length of each chunk when ingesting files (default: 4000)", - default=4000, - ) - - args = parser.parse_args() - - # Initialize memory - memory = get_memory(cfg, init=args.init) - print("Using memory of type: " + memory.__class__.__name__) - - if args.file: - try: - ingest_file(args.file, memory, args.max_length, args.overlap) - print(f"File '{args.file}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting file '{args.file}': {str(e)}") - print(f"Error while ingesting file '{args.file}': {str(e)}") - elif args.dir: - try: - ingest_directory(args.dir, memory, args) - print(f"Directory '{args.dir}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting directory '{args.dir}': {str(e)}") - print(f"Error while ingesting directory '{args.dir}': {str(e)}") - else: - print( - "Please provide either a file path (--file) or a directory name (--dir)" - " inside the auto_gpt_workspace directory as input." 
- ) - - -if __name__ == "__main__": - main() diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool APK 5.9.0 - The Best Way to Experience the Worlds 1 Pool Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool APK 5.9.0 - The Best Way to Experience the Worlds 1 Pool Game.md deleted file mode 100644 index 90d61e37433b5c5ab671225ba3d87b13e139b34a..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool APK 5.9.0 - The Best Way to Experience the Worlds 1 Pool Game.md +++ /dev/null @@ -1,128 +0,0 @@ - -

8 Ball Pool 5.9.0 APK Download: Everything You Need to Know

-

If you are a fan of pool games, you have probably heard of 8 Ball Pool, the most popular and addictive pool game for Android devices. In this article, we will tell you everything you need to know about the latest version of this game, 8 Ball Pool 5.9.0, and how to download and install it on your device.

-

What is 8 Ball Pool?

-

8 Ball Pool is a pool game developed by Miniclip.com, a leading online gaming company. It allows you to play online with millions of players from around the world, or challenge your friends in one-on-one matches. You can also participate in tournaments, win trophies, and collect coins and cues.

-

8 ball pool 5.9.0 apk download


Download Filehttps://urlin.us/2uT1z6



-

Features of 8 Ball Pool

-

Some of the features that make 8 Ball Pool stand out from other pool games are:

- -

How to play 8 Ball Pool

-

The rules of 8 Ball Pool are simple and similar to the real pool game. You have to pot all your balls (solid or striped) before your opponent does, and then pot the black ball (the 8 ball) to win the game. You can use the cue stick to aim and adjust the power of your shot, and use the spin button to add spin to the cue ball. You have to be careful not to pot the cue ball or the wrong balls, or you will lose your turn or the game.

-

What is new in 8 Ball Pool 5.9.0?

-

The latest version of 8 Ball Pool, released on June 14, 2023, brings some exciting new features and improvements to the game. Here are some of them:

-

New game mode: Power Pots

-

Power Pots is a new game mode that challenges you to pot as many balls as possible in a limited time. The more balls you pot, the more points you get. You can also use power-ups to boost your score and get extra time. Power Pots is available for a limited time only, so don't miss it!

-

New rewards and events

-

8 Ball Pool 5.9.0 also introduces new rewards and events for you to enjoy. You can earn free coins, cues, chat packs, and more by completing daily missions, watching videos, spinning the wheel, and playing mini-games. You can also join seasonal events and win exclusive prizes by ranking high on the leaderboards.

-

Bug fixes and improvements

-

As always, 8 Ball Pool 5.9.0 also fixes some bugs and improves the performance and stability of the game. Some of the issues that have been resolved are:

-

8 ball pool 5.9.0 arm64 apk download
-8 ball pool 5.9.0 apk download for windows pc
-8 ball pool 5.9.0 apk download softpedia
-8 ball pool 5.9.0 apk download apkpure
-8 ball pool 5.9.0 apk download miniclip
-8 ball pool 5.9.0 apk download android
-8 ball pool 5.9.0 apk download latest version
-8 ball pool 5.9.0 apk download free
-8 ball pool 5.9.0 apk download mod
-8 ball pool 5.9.0 apk download unlimited coins
-8 ball pool 5.9.0 apk download hack
-8 ball pool 5.9.0 apk download offline
-8 ball pool 5.9.0 apk download no root
-8 ball pool 5.9.0 apk download emulator
-8 ball pool 5.9.0 apk download with mouse and keyboard
-8 ball pool 5.9.0 apk download for pc windows 10
-8 ball pool 5.9.0 apk download for pc windows 7
-8 ball pool 5.9.0 apk download for mac
-8 ball pool 5.9.0 apk download for laptop
-8 ball pool 5.9.0 apk download for chromebook
-8 ball pool 5.9.0 apk download for tablet
-8 ball pool 5.9.0 apk download for firestick
-8 ball pool 5.9.0 apk download for smart tv
-8 ball pool 5.9.0 apk download for ios
-8 ball pool 5.9.0 apk download for iphone
-8 ball pool 5.9.0 apk download for ipad
-8 ball pool 5.9.0 apk download for ipod touch
-8 ball pool 5.9.0 apk download from google play store
-8 ball pool 5.9.0 apk download from uptodown
-8 ball pool 5.9.0 apk download from apkmirror
-how to install and play the game of the famous miniclip.com adapted for the android platform with the help of the app description of the softpedia website[^1^]
-how to install and play the game on windows with an emulator and adapt its controls to a mouse and keyboard as explained by the apkpure website[^2^]
-how to update the game to the latest version of the app with the help of the softpedia website[^1^]
-how to uninstall the game from your device with the help of the apkpure website[^2^]
-how to play the game online with other players from around the world with the help of the miniclip.com website[^1^]
-how to play the game offline with your friends or against the computer with the help of the apkpure website[^2^]
-how to customize your cue and table in the game with the help of the miniclip.com website[^1^]
-how to earn coins and cash in the game with the help of the apkpure website[^2^]
-how to join clubs and compete in tournaments in the game with the help of the miniclip.com website[^1^]
-how to chat and send gifts to other players in the game with the help of the apkpure website[^2^]

- -

How to download and install 8 Ball Pool 5.9.0 APK?

-

If you want to enjoy the new features and improvements of 8 Ball Pool 5.9.0, you have to download and install the APK file on your Android device. There are two ways to do this:

-

Download from official sources

-

The easiest and safest way to download 8 Ball Pool 5.9.0 APK is from the official sources, such as Google Play Store or Miniclip.com website. You just have to follow these steps:

-
    -
  1. Open the Google Play Store app on your device, or visit the Miniclip.com website on your browser.
  2. -
  3. Search for 8 Ball Pool and tap on the game icon.
  4. -
  5. Tap on the Update button and wait for the download to finish.
  6. -
  7. Tap on the Open button and enjoy the game.
  8. -
-

Download from third-party sources

-

Another way to download 8 Ball Pool 5.9.0 APK is from third-party sources, such as APKPure, APKMirror, or other websites that offer APK files. However, this method is not recommended, as it may expose your device to malware or viruses. If you still want to try this method, you have to follow these steps:

-
    -
  1. Visit a website that offers 8 Ball Pool 5.9.0 APK, such as APKPure or APKMirror.
  2. -
  3. Search for 8 Ball Pool and tap on the download button.
  4. -
  5. Wait for the download to finish and locate the APK file on your device.
  6. -
-
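If you do go the third-party route, it is worth confirming that the file you received matches the checksum published on the download page before you install it. The short Python sketch below is one way to run that check on a computer; the file name and expected checksum are placeholder values you would replace with your own, and a matching checksum only shows the file was not corrupted or altered in transit, not that the APK itself is trustworthy.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to keep memory use low."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values - substitute the real file name and the checksum
# published on the page you downloaded from.
apk_path = "8-ball-pool-5.9.0.apk"
expected = "paste-the-published-sha256-checksum-here"

if sha256_of(apk_path) == expected.lower():
    print("Checksum matches: the file arrived intact.")
else:
    print("Checksum mismatch: do not install this file.")
```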

Install the APK file

-

Before you can install the APK file on your device, you have to enable the Unknown Sources option in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, follow these steps:

-
    -
  1. Go to Settings > Security > Unknown Sources and toggle it on.
  2. -
  3. A warning message will pop up, asking you to confirm your action. Tap on OK.
  4. -
-

Now, you can install the APK file by following these steps:

-
    -
  1. Locate the APK file on your device and tap on it.
  2. -
  3. A prompt will appear, asking you to install the app. Tap on Install.
  4. -
  5. Wait for the installation to finish and tap on Open.
  6. -
  7. Enjoy the game.
  8. -
-

Conclusion

-

8 Ball Pool is a fun and addictive pool game that lets you play online with millions of players from around the world, or challenge your friends in one-on-one matches. The latest version of the game, 8 Ball Pool 5.9.0, brings some exciting new features and improvements, such as a new game mode, new rewards and events, and bug fixes and performance enhancements. You can download and install 8 Ball Pool 5.9.0 APK on your Android device by following the steps we have provided in this article. We hope you enjoy playing 8 Ball Pool 5.9.0 and have a great time!

-

FAQs

-

Here are some frequently asked questions about 8 Ball Pool 5.9.0:

-
    -
  1. Is 8 Ball Pool 5.9.0 free?
  2. -

    Yes, 8 Ball Pool 5.9.0 is free to download and play, but it contains in-app purchases that allow you to buy coins, cues, chat packs, and more.

    -
  3. Is 8 Ball Pool 5.9.0 safe?
  4. -

    If you download 8 Ball Pool 5.9.0 from official sources, such as Google Play Store or Miniclip.com website, it is safe and secure. However, if you download it from third-party sources, it may contain malware or viruses that can harm your device.

    -
  5. Is 8 Ball Pool 5.9.0 compatible with my device?
  6. -

    8 Ball Pool 5.9.0 requires Android version 4.4 or higher to run smoothly on your device. You can check your device's Android version by going to Settings > About Phone > Android Version.

    -
  7. How can I contact the developers of 8 Ball Pool?
  8. -

    If you have any questions, feedback, or issues regarding 8 Ball Pool, you can contact the developers by visiting their website (https://www.miniclip.com), their Facebook page (https ://www.facebook.com/8ballpoolfans), or their Twitter account (https://twitter.com/8ballpool).

    -
  9. How can I improve my skills in 8 Ball Pool?
  10. -

    Some tips and tricks that can help you improve your skills in 8 Ball Pool are:

    - -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Defense Zone 3 APK The Best Tower Defense Game for Android.md b/spaces/1phancelerku/anime-remove-background/Defense Zone 3 APK The Best Tower Defense Game for Android.md deleted file mode 100644 index 904daacddbad3c515b63a4d11822c065cb898175..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Defense Zone 3 APK The Best Tower Defense Game for Android.md +++ /dev/null @@ -1,154 +0,0 @@ -
-

Defense Zone 3 APK: A Review of the Popular Action/Strategy Game

-

If you are a fan of action and strategy games, you might have heard of Defense Zone 3, a long-awaited sequel to the popular game series. But did you know that you can also play this game on your Android device with the Defense Zone 3 APK? In this article, we will review the features, benefits, and gameplay of Defense Zone 3 APK, as well as show you how to download and install it on your device. We will also share some tips and tricks to help you win the game, and answer some frequently asked questions. So, let's get started!

-

What is Defense Zone 3 APK?

-

Defense Zone 3 APK is an Android application package that allows you to play the game Defense Zone 3 on your Android device. Defense Zone 3 is an action/strategy game developed by ARTEM KOTOV, a Russian game developer. It is the third installment in the Defense Zone series, which has been praised for its stunning graphics, realistic sound effects, and challenging gameplay.

-

defense zone 3 apk


DOWNLOADhttps://jinyurl.com/2uNPON



-

The features of Defense Zone 3 APK

-

Defense Zone 3 APK has many features that make it an exciting and enjoyable game to play. Some of these features are:

- -

The benefits of Defense Zone 3 APK

-

Defense Zone 3 APK has many benefits that make it a worthwhile game to download and play. Some of these benefits are:

- -

How to download and install Defense Zone 3 APK?

-

If you want to play Defense Zone 3 APK on your Android device, you will need to download and install it first. Here are the steps to do so:

-

The steps to download and install Defense Zone 3 APK

-
    -
  1. Go to Google Play Store and search for Defense Zone 3. Alternatively, you can use this link: [Defense Zone 3 - Apps on Google Play].
  2. -
  3. Tap on the Install button and wait for the download to finish.
  4. -
  5. Once the download is complete, tap on the Open button and enjoy the game.
  6. -
-

If you cannot access Google Play Store or prefer to download the game from another source, you can follow these steps:

-
    -
  1. Go to a trusted website that offers Defense Zone 3 APK, such as [Defense Zone 3 APK Download - Free Strategy GAME for Android | APKPure.com].
  2. -
  3. Tap on the Download APK button and wait for the download to finish.
  4. -
  5. Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
  6. -
  7. Locate the downloaded APK file and tap on it to install it.
  8. -
  9. Once the installation is complete, tap on the Open button and enjoy the game.
  10. -
-

The requirements and compatibility of Defense Zone 3 APK

-

Before you download and install Defense Zone 3 APK, you should check if your device meets the minimum requirements and is compatible with the game. Here are the requirements and compatibility of Defense Zone 3 APK:

- -

How to play Defense Zone 3 APK?

-

Now that you have downloaded and installed Defense Zone 3 APK, you are ready to play the game. Here are some basic instructions on how to play Defense Zone 3 APK:

-

The gameplay and controls of Defense Zone 3 APK

-

The gameplay of Defense Zone 3 APK is simple and intuitive. Your goal is to defend your base from the waves of enemies that will attack you from different directions. You will have to build and upgrade your turrets along the path that the enemies will take, and use your special abilities wisely to stop them from reaching your base.

-

defense zone 3 hd apk download
-defense zone 3 ultra hd apk
-defense zone 3 mod apk unlimited money
-defense zone 3 apk free download
-defense zone 3 apk + obb
-defense zone 3 hack apk
-defense zone 3 full apk
-defense zone 3 latest version apk
-defense zone 3 premium apk
-defense zone 3 apk mod menu
-defense zone 3 offline apk
-defense zone 3 android apk
-defense zone 3 apk pure
-defense zone 3 apk revdl
-defense zone 3 apk uptodown
-defense zone 3 cheats apk
-defense zone 3 cracked apk
-defense zone 3 game apk
-defense zone 3 pro apk
-defense zone 3 unlimited coins apk
-defense zone 3 apk for pc
-defense zone 3 mod apk android 1
-defense zone 3 mod apk rexdl
-defense zone 3 mod apk latest version
-defense zone 3 mod apk happymod
-defense zone 3 mod apk all levels unlocked
-defense zone 3 mod apk no ads
-defense zone 3 mod apk unlimited everything
-defense zone 3 mod apk android republic
-defense zone 3 mod apk an1.com
-defense zone 3 ultra hd mod apk
-defense zone 3 ultra hd full apk
-defense zone 3 ultra hd hack apk
-defense zone 3 ultra hd premium apk
-defense zone 3 ultra hd cracked apk
-defense zone 3 ultra hd latest version apk
-defense zone 3 ultra hd mod menu apk
-defense zone 3 ultra hd mod unlimited money and coins apk download free for android devices.

-

The controls of Defense Zone 3 APK are also easy and convenient. You can use your finger to drag and drop your turrets on the map, tap on them to upgrade or sell them, and swipe on the screen to move the camera. You can also use the buttons on the bottom of the screen to pause, resume, speed up, or slow down the game, as well as access the menu, settings, and abilities.

-

The tips and tricks for Defense Zone 3 APK

-

If you want to master Defense Zone 3 APK and win every level, you will need some tips and tricks to help you out. Here are some of them:

- -

Why should you play Defense Zone 3 APK?

-

You might be wondering why you should play Defense Zone 3 APK, when there are so many other games available on the market. Well, here are some reasons why you should give Defense Zone 3 APK a try:

-

The pros and cons of Defense Zone 3 APK

-

Like any other game, Defense Zone 3 APK has its pros and cons. Here are some of them:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Pros | Cons |
| --- | --- |
| It is free to download and play. | It can be addictive and time-consuming. |
| It has stunning graphics and sound effects. | It can drain your battery and data quickly. |
| It has dynamic and amazing game sessions. | It can be frustrating and difficult at times. |
| It has flexible difficulty settings and game modes. | It can be repetitive and boring after a while. |
| It is educational and informative. | It can be inaccurate and misleading in some aspects. |
-

The ratings and reviews of Defense Zone 3 APK

-

If you are still not convinced, you can check out the ratings and reviews of Defense Zone 3 APK from other players. Here are some of them:

- -

Conclusion

-

In conclusion, Defense Zone 3 APK is a popular action/strategy game that you can play on your Android device. It has many features, benefits, and gameplay that make it an exciting and enjoyable game to play. It also has some drawbacks and challenges that make it a demanding and rewarding game to play. If you are looking for a free, fun, and challenging game to play on your device, you should give Defense Zone 3 APK a try.

-

FAQs

-

Here are some frequently asked questions about Defense Zone 3 APK:

-
    -
  1. Is Defense Zone 3 APK safe to download and install?
  2. -

Yes, Defense Zone 3 APK is safe to download and install, as long as you download it from a trusted source, such as the Google Play Store or a reputable website. You should also scan the APK file with antivirus software before installing it.

    -
  3. Is Defense Zone 3 APK available for iOS devices?
  4. -

No, Defense Zone 3 APK is not available for iOS devices. However, you can play Defense Zone 3 on your iOS device by downloading it from the App Store or by using emulator software.

    -
  5. Is Defense Zone 3 APK online or offline?
  6. -

    Defense Zone 3 APK is both online and offline. You can play the game without an internet connection, but you will need an internet connection to access some features, such as leaderboards, achievements, updates, etc.

    -
  7. How many levels are there in Defense Zone 3 APK?
  8. -

    There are 21 levels in Defense Zone 3 APK, each with different landscapes, enemies, and difficulties. You can also play in endless mode or custom mode for more variety and challenge.

    -
  9. How can I contact the developers of Defense Zone 3 APK?
  10. -

    You can contact the developers of Defense Zone 3 APK by sending an email to support@defensezone.net or visiting their website at [Defense Zone 3 HD]. You can also follow them on Facebook, Twitter, and YouTube for more updates and news.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Fast Orange VPN The Best Free Proxy App for Android.md b/spaces/1phancelerku/anime-remove-background/Fast Orange VPN The Best Free Proxy App for Android.md deleted file mode 100644 index fcaa47617ca12eebfedd1331afa1da375b5c4477..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Fast Orange VPN The Best Free Proxy App for Android.md +++ /dev/null @@ -1,94 +0,0 @@ - -

    Fast Orange APK: What Is It and How to Use It?

    -

    If you are looking for a fast, secure, and free VPN app for your Android device, you might want to check out Fast Orange APK. This app is a lightweight and powerful VPN tool that allows you to access any website or app without any limitations or censorship. In this article, we will explain what Fast Orange APK is, what are its benefits, and how to download, install, and use it on your device.

    -

    fast orange apk


    Download File > https://jinyurl.com/2uNPeZ



    -

    Introduction

    -

    Before we dive into the details of Fast Orange APK, let's first understand what an APK file is and why you might need it.

    -

    What is an APK file?

    -

    An APK file is an Android Package Kit file that contains all the files and code needed to run an app on an Android device. It is similar to an EXE file on Windows or a DMG file on Mac. You can download APK files from various sources online, such as official websites, app stores, or third-party platforms. However, not all APK files are safe and reliable, so you should always be careful about where you get them from.
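Because an APK is really just a ZIP archive with a particular layout, you can peek inside one yourself. The small Python sketch below lists the first few entries using only the standard library; the file name is a placeholder for whatever APK you have on disk.

```python
import zipfile

# Placeholder path - point this at any APK file you have downloaded.
apk_path = "fast-orange.apk"

# An APK is an ordinary ZIP archive, so the standard zipfile module can read it.
with zipfile.ZipFile(apk_path) as apk:
    for name in apk.namelist()[:20]:  # show only the first 20 entries
        print(name)

# Typical entries include AndroidManifest.xml (app metadata and permissions,
# stored in a binary format), classes.dex (the compiled code), and res/ (resources).
```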

    -

    What is Fast Orange APK?

    -

    Fast Orange APK is an APK file that contains the Fast Orange app, which is a VPN app developed by Orange CH. VPN stands for Virtual Private Network, which is a technology that creates a secure and encrypted connection between your device and a remote server. By using a VPN, you can hide your IP address, protect your online privacy, and bypass geo-restrictions or firewalls that block certain websites or apps.

    -

    VPN Orange app free download
    -Fast Orange unblock any website
    -Fire Orange booster for gaming
    -VPN Orange secure and stable
    -Fast Orange one-click connection
    -Fire Orange app multiplatform
    -VPN Orange unlimited proxy
    -Fast Orange high-speed ladder
    -Fire Orange app private
    -VPN Orange works with Wi-Fi
    -Fast Orange no registration required
    -Fire Orange app fast access
    -VPN Orange well-designed UI
    -Fast Orange no data limitation
    -Fire Orange app ultra-efficient
    -VPN Orange super-fast speed
    -Fast Orange hide your IP address
    -Fire Orange app iOS Android Mac PC
    -VPN Orange 3G 4G compatible
    -Fast Orange unlock sites and games
    -Fire Orange app 100% protected
    -VPN Orange free unlimited VPN
    -Fast Orange APK download link
    -Fire Orange booster APK download
    -VPN Orange APK latest version
    -Fast Orange APK for Android
    -Fire Orange booster APK for iOS
    -VPN Orange APK for PC
    -Fast Orange APK for Mac
    -Fire Orange booster APK for Android
    -VPN Orange APK free proxy server
    -Fast Orange APK high anonymity
    -Fire Orange booster APK gaming optimization
    -VPN Orange APK no ads
    -Fast Orange APK easy to use
    -Fire Orange booster APK no data hackers
    -VPN Orange APK best reviews
    -Fast Orange APK fast server speed
    -Fire Orange booster APK play any games
    -VPN Orange APK secure your data
    -Fast Orange APK unlimited bandwidth
    -Fire Orange booster APK coming soon

    -

    Benefits of Fast Orange APK

    -

    There are many reasons why you might want to use Fast Orange APK on your device. Here are some of the main benefits of this app:

    -

    Fast and secure VPN service

    -

    Fast Orange APK provides a fast and stable VPN connection that can handle high-speed data transfer and streaming. It also uses advanced encryption protocols and techniques to ensure that your data is safe from hackers, snoopers, or government agencies. You can trust that your online activities are private and secure with Fast Orange APK.

    -

    Free and unlimited proxy access

    -

    Unlike some other VPN apps that charge you money or limit your bandwidth, Fast Orange APK offers free and unlimited proxy access to any website or app you want. You can access popular platforms like Netflix, YouTube, Facebook, Twitter, Instagram, WhatsApp, Skype, and more without any restrictions or censorship. You can also switch between different server locations around the world to enjoy different content or services.

    -

    Easy and simple user interface

    -

    Fast Orange APK has a user-friendly and intuitive user interface that makes it easy for anyone to use. You don't need to register or log in to use the app. You just need to tap one button and you can connect to the VPN server of your choice. You can also customize the settings according to your preferences and needs.

    -

    How to download and install Fast Orange APK

    -

    If you want to use Fast Orange APK on your device, you need to download and install it first. Here are the steps you need to follow:

    -

    Step 1: Enable unknown sources on your device

    -

Since Fast Orange APK is not available on the Google Play Store, you need to enable unknown sources on your device to allow it to install apps from other sources.

    To enable unknown sources on your device, you need to access the settings app and look for the security or privacy option. Depending on your device, you may need to tap on the lock screen and security tab or the install unknown apps switch. Then, you need to turn on the unknown sources switch or check the box next to it. You may see a warning message against enabling this option, but you can ignore it if you trust the source of the APK file.

    -

    Step 2: Download the APK file from a trusted source

    -

    Once you have enabled unknown sources on your device, you can download the APK file from a trusted source. You can use your web browser or a file manager app to do this. For example, you can visit the official website of Fast Orange VPN and download the APK file from there. Alternatively, you can use a third-party platform that offers verified and safe APK files, such as APKPure or APKMirror. However, be careful not to download any fake or malicious APK files that may harm your device or compromise your data.

    -

    Step 3: Install the APK file and launch the app

    -

    After you have downloaded the APK file, you can install it by tapping on it or opening it with a file manager app. You may need to grant some permissions to the app before it can be installed. Once the installation is complete, you can launch the app by tapping on its icon in the app drawer or on the home screen. You are now ready to use Fast Orange APK on your device.

    -

    How to use Fast Orange APK

    -

    Using Fast Orange APK is very easy and simple. You just need to follow these steps:

    -

    Step 1: Choose a server location from the list

    -

    When you open the app, you will see a list of server locations that you can connect to. You can scroll through the list and select the one that suits your needs. For example, if you want to access a website or app that is only available in a certain country, you can choose a server location in that country. Alternatively, you can let the app choose the best server for you automatically by tapping on the smart connect button.

    -

    Step 2: Tap the connect button to start the VPN connection

    -

    After you have selected a server location, you can tap on the connect button at the bottom of the screen to start the VPN connection. You will see a green circle around the button when the connection is established. You will also see a key icon in the status bar of your device, indicating that you are using a VPN service.

    -

    Step 3: Enjoy the Internet without any restrictions or censorship

    -

    Now that you are connected to Fast Orange VPN, you can enjoy the Internet without any limitations or censorship. You can access any website or app that you want, regardless of your location or network. You can also browse the web anonymously and securely, without worrying about your online privacy or security.

    -

    Conclusion

    -

    Fast Orange APK is a great VPN app that offers fast, secure, and free proxy access to any website or app. It has an easy and simple user interface that anyone can use. It also has many server locations around the world that you can choose from. If you want to download and use Fast Orange APK on your device, you just need to enable unknown sources, download the APK file from a trusted source, install it, and launch it. Then, you can enjoy the Internet without any restrictions or censorship.

    -

    I hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading!

    -

    Frequently Asked Questions

    -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Final Cut Pro for Windows - Is It Possible? Heres the Answer.md b/spaces/1phancelerku/anime-remove-background/Final Cut Pro for Windows - Is It Possible? Heres the Answer.md deleted file mode 100644 index abf1cfb4ea1704aaefc18ca243d70b214068b43b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Final Cut Pro for Windows - Is It Possible? Heres the Answer.md +++ /dev/null @@ -1,154 +0,0 @@ -
    -

    How to Download Final Cut Pro for Windows

    -

    Final Cut Pro is one of the most popular and powerful video editing software for Mac users. It offers a range of features and tools that can help you create stunning videos with ease. But what if you are a Windows user and want to use Final Cut Pro on your PC? Is it possible to download Final Cut Pro for Windows? And if so, how can you do it?

    -

    download final cut pro for windows


    Download >>> https://jinyurl.com/2uNPqk



    -

In this article, we will answer these questions and more. We will explain what Final Cut Pro is, why it is not available for Windows, how to run it on Windows, and what the best alternatives to Final Cut Pro for Windows are. By the end of this article, you will have a clear idea of how to edit videos on your PC with or without Final Cut Pro.

    -

    What is Final Cut Pro?

    -

    Final Cut Pro is a video editing software that was developed by Apple in 1999. It is designed for professional and advanced users who need a high level of control and customization over their video projects. It supports multiple video formats, resolutions, frame rates, and codecs, as well as multi-camera editing, 360-degree video editing, VR headset playback, and advanced color grading. It also has a range of video transitions and filters, such as keying tools, mattes, and vocal de-poppers. It uses a magnetic timeline that allows non-destructive editing of clips without collisions or syncing problems. It is optimized for Mac computers with Apple silicon, especially the new Mac Studio, and can tap into superfast unified memory and the Apple Neural Engine for faster playback and rendering.

    -

    Features and benefits of Final Cut Pro

    -

    Some of the features and benefits of Final Cut Pro are:

    - -

    Limitations and drawbacks of Final Cut Pro

    -

    Some of the limitations and drawbacks of Final Cut Pro are:

    - -

    Why Final Cut Pro is not available for Windows?

    -

    Final Cut Pro is not available for Windows because it is a proprietary software that belongs to Apple. Apple has a history of creating exclusive products and services that only work on its own devices and platforms. This is part of its business strategy to create a loyal customer base and a competitive edge over other brands. Apple also wants to maintain the quality and performance of its software by optimizing it for its own hardware and software specifications.

    -

    The reason behind Apple's exclusivity

    -

    Apple's exclusivity is based on its philosophy of creating a seamless and integrated user experience across its products and services. Apple believes that by controlling both the hardware and the software aspects of its devices, it can offer better functionality, reliability, security, and design. Apple also wants to protect its intellectual property and prevent piracy and plagiarism of its software. Apple's exclusivity also helps it generate more revenue by encouraging users to buy more of its products and services.

    -

    The challenges of running Final Cut Pro on Windows

    -

    Running Final Cut Pro on Windows is not easy or advisable. There are several challenges and risks involved in trying to do so. Some of them are:

    - -

    How to run Final Cut Pro on Windows?

    -

    Despite the challenges and risks mentioned above, some people still want to run Final Cut Pro on Windows for various reasons. If you are one of them, you should know that there are two possible methods of installing Final Cut Pro on Windows: using a virtual machine or using a hackintosh.

    -

    How to install Final Cut Pro X on Windows 10
    -Final Cut Pro for Windows PC free download
    -Best video editing software for Windows like Final Cut Pro
    -Final Cut Pro alternatives for Windows 11
    -Download Final Cut Pro X trial version for Windows
    -Final Cut Pro vs Adobe Premiere Pro for Windows
    -Final Cut Pro compatible video editor for Windows
    -How to run Final Cut Pro on Windows with VirtualBox
    -Final Cut Pro for Windows crack download
    -Free online video editor similar to Final Cut Pro
    -How to edit Final Cut Pro projects on Windows
    -Final Cut Pro features and benefits for Windows users
    -Download Final Cut Pro effects and transitions for Windows
    -How to get Final Cut Pro for free on Windows
    -Best Final Cut Pro tutorials and courses for Windows
    -How to export Final Cut Pro videos to Windows
    -Final Cut Pro system requirements and specifications for Windows
    -Download Final Cut Pro plugins and extensions for Windows
    -How to use Final Cut Pro keyboard shortcuts on Windows
    -Final Cut Pro reviews and ratings for Windows
    -How to import and export media files in Final Cut Pro for Windows
    -Download Final Cut Pro templates and themes for Windows
    -How to create and edit 360° videos in Final Cut Pro for Windows
    -How to add subtitles and captions in Final Cut Pro for Windows
    -How to fix common errors and issues in Final Cut Pro for Windows
    -How to optimize performance and speed in Final Cut Pro for Windows
    -How to customize the interface and layout of Final Cut Pro for Windows
    -How to use the magnetic timeline and clip connections in Final Cut Pro for Windows
    -How to apply filters and color grading in Final Cut Pro for Windows
    -How to use the multicam editing feature in Final Cut Pro for Windows
    -How to sync audio and video in Final Cut Pro for Windows
    -How to use the audio roles and mixer in Final Cut Pro for Windows
    -How to use the smart conform and crop tools in Final Cut Pro for Windows
    -How to use the motion tracking and stabilization features in Final Cut Pro for Windows
    -How to use the keyframe animation and motion graphics features in Final Cut Pro for Windows
    -How to use the chroma key and green screen effects in Final Cut Pro for Windows
    -How to use the advanced compositing and masking features in Final Cut Pro for Windows
    -How to use the media browser and library management features in Final Cut Pro for Windows
    -How to use the proxy workflows and cloud collaboration features in Final Cut Pro for Windows
    -How to use the machine learning and artificial intelligence features in Final Cut Pro for Windows

    -

    The possible methods of installing Final Cut Pro on Windows

    -

A virtual machine is software that simulates a different operating system within your current operating system. For example, you can use a virtual machine to run macOS on your Windows PC. A hackintosh is a computer that runs macOS on non-Apple hardware. For example, you can install macOS on your custom-built PC.

    -

To use a virtual machine to run Final Cut Pro on Windows, you will need to download and install virtual machine software such as VMware Workstation Player or VirtualBox. Then you will need to download and install macOS on the virtual machine. Finally, you will need to download and install Final Cut Pro on the macOS virtual machine.

    -

To use a hackintosh to run Final Cut Pro on Windows, you will need to have a compatible PC that meets the minimum requirements for running macOS. Then you will need to download and create a bootable USB drive with the macOS installer. Finally, you will need to boot from the USB drive and install macOS on your PC.

    -

    The risks and disadvantages of using Final Cut Pro on Windows

    -

    Using either method to run Final Cut Pro on Windows has some risks and disadvantages that you should be aware of before trying them. Some of them are:

    - -

What are the best alternatives to Final Cut Pro for Windows?

    -

If you are looking for video editing software that can match or surpass Final Cut Pro in terms of features, performance, and quality, but also works on Windows, you have plenty of options to choose from. There are many video editing programs for Windows that cater to different skill levels, budgets, and needs. Here are some of the criteria for choosing a good video editor:

    - -

    The top three alternatives to Final Cut Pro for Windows

    -

    Based on the criteria above, we have selected the top three alternatives to Final Cut Pro for Windows that we think are worth considering. They are:

    -

    Adobe Premiere Pro

    -

    Adobe Premiere Pro is our top pick for the best video editing software for Windows. It is the industry-standard tool that is used by many professional videographers and editors around the world. It has all the features and tools you need to create stunning videos with ease. It is compatible with other Adobe products such as Photoshop and After Effects, which makes it ideal for cross-platform collaboration and integration. It also has a cloud-based service called Premiere Rush that lets you edit videos on your mobile devices and sync them with your desktop. It costs $20.99 per month or $239.88 per year as a standalone app, or $52.99 per month or $599.88 per year as part of the Adobe Creative Cloud suite.

    -

    CyberLink PowerDirector

    -

    CyberLink PowerDirector is another excellent video editing software for Windows that offers tons of tools and features for both beginners and experts. It has a user-friendly interface that is intuitive and customizable. It has a powerful media browser that lets you organize, preview, and import your media files easily. It has a smart conform feature that automatically crops your videos to fit different aspect ratios and social media platforms. It has an object tracker that uses machine learning to detect faces and objects and match their movement with titles and effects. It has a cinematic mode that lets you adjust focus points and depth of field on clips captured in cinematic mode on iPhone 13 or later. It has a duplicate detection feature that shows you any audio or video that appears more than once in your project. It has a proxy workflow that lets you generate proxy media in custom frame sizes from 12.5% to 100% of the original in ProRes Proxy or H.264. It has a range check feature that shows you which areas of an image are out of color gamut. It has a camera LUT feature that automatically applies look up tables (LUTs) to footage from select ARRI, Sony, Panasonic, and Canon cameras. It has an external monitoring feature that lets you monitor full-quality video up to 6K with Pro Display XDR or via HDMI on select Mac computers. It costs $69.99 for the lifetime license or $51.99 per year for the subscription.

    -

    Davinci Resolve

    -

    Davinci Resolve is a free video editing software for Windows that offers pro-level tools and features for advanced users. It has a modular interface that consists of different pages for different tasks, such as media management, editing, color grading, audio mixing, and delivery. It supports multiple video formats, resolutions, frame rates, and codecs, as well as 4K video, HDR video, and VR video. It offers a variety of video transitions and filters, as well as advanced tools such as multi-camera editing, motion tracking, keying, stabilization, noise reduction, and 3D editing. It also has a powerful color grading system that lets you adjust color balance, contrast, saturation, hue, luminance, and more with precision and control. It also has a professional audio mixing system that lets you edit soundtracks, add effects, mix channels, automate levels, and more with high-quality sound processing. The free version of Davinci Resolve has most of the features you need to create amazing videos, but if you want more advanced features such as collaboration tools, neural engine effects, facial recognition tracking, lens distortion correction, HDR grading tools and more, you can upgrade to the Studio version for $299.

    -

    Conclusion

    -

    Final Cut Pro is a great video editing software for Mac users, but it is not available for Windows users. If you want to use Final Cut Pro on Windows, you can try using a virtual machine or a hackintosh, but you will face many challenges and risks. A better option is to use one of the best alternatives to Final Cut Pro for Windows, such as Adobe Premiere Pro, CyberLink PowerDirector, or Davinci Resolve. These video editing software offer similar or better features, performance, and quality than Final Cut Pro, and they work seamlessly on Windows. They also have different price points and plans that suit different budgets and needs. You can download and try them for free and see which one works best for you.

    -

    Call to action and recommendations

    -

    If you are ready to start editing videos on your Windows PC, we recommend you to check out the following links:

    - -

    We hope this article has helped you learn how to download Final Cut Pro for Windows and what are the best alternatives to Final Cut Pro for Windows. If you have any questions or feedback, please leave a comment below. And if you liked this article, please share it with your friends and colleagues who might find it useful. Thank you for reading!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Final Cut Pro and Windows:

    -

    Can I get Final Cut Pro for free?

    -

    No, Final Cut Pro is not free. It costs $299.99 to buy from the App Store. However, you can get a free 90-day trial of Final Cut Pro from Apple's website. You can use this trial to test the software and see if it meets your needs.

    -

    Is Final Cut Pro better than Adobe Premiere Pro?

    -

    Final Cut Pro and Adobe Premiere Pro are both excellent video editing software that have their own strengths and weaknesses. Some of the factors that may influence your choice are:

    - -

    The best way to decide which one is better for you is to try them both and compare their features, performance, and quality.

    -

    How long does it take to learn Final Cut Pro?

    -

    The time it takes to learn Final Cut Pro depends on your previous experience with video editing software, your learning style, and your goals. Generally speaking, Final Cut Pro has a user-friendly interface that is intuitive and customizable, which makes it easy to learn for beginners. However, it also has many advanced features and tools that require more practice and skill to master. A good way to learn Final Cut Pro is to follow online tutorials, courses, or books that teach you the basics and the best practices of video editing with Final Cut Pro. You can also join online communities or forums where you can ask questions, get feedback, and share tips with other users.

    -

    What are the system requirements for Final Cut Pro?

    -

    The minimum system requirements for running Final Cut Pro are:

    - -

    Can I use Final Cut Pro on my iPad or iPhone?

    -

    No, Final Cut Pro is not available for iPad or iPhone. However, you can use iMovie, which is a free video editing app that is similar to Final Cut Pro but simpler and more streamlined. iMovie lets you edit videos on your iPad or iPhone with ease and fun. You can add titles, transitions, filters, music, sound effects, and more to your videos. You can also use the green-screen effect, the picture-in-picture effect, the split-screen effect, and the cinematic mode to create amazing videos. You can also export your videos to different formats and resolutions, and share them with your friends and family via social media, email, or AirDrop. You can download iMovie from the App Store.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/A00001/bingothoo/tailwind.config.js b/spaces/A00001/bingothoo/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/AI-Dashboards/Graph.NLP.Sentence.Similarity.Heatmap.KMeansCluster/app.py b/spaces/AI-Dashboards/Graph.NLP.Sentence.Similarity.Heatmap.KMeansCluster/app.py deleted file mode 100644 index 8b53f979d9f3ac86b100b5f19647e5ac4a7fa8ea..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/Graph.NLP.Sentence.Similarity.Heatmap.KMeansCluster/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import streamlit as st -import nltk -from transformers import pipeline -from sentence_transformers import SentenceTransformer -from scipy.spatial.distance import cosine -import numpy as np -import seaborn as sns -import matplotlib.pyplot as plt -from sklearn.cluster import KMeans -import tensorflow as tf -import tensorflow_hub as hub - - -def cluster_examples(messages, embed, nc=3): - km = KMeans( - n_clusters=nc, init='random', - n_init=10, max_iter=300, - tol=1e-04, random_state=0 - ) - km = km.fit_predict(embed) - for n in range(nc): - idxs = [i for i in range(len(km)) if km[i] == n] - ms = [messages[i] for i in idxs] - st.markdown ("CLUSTER : %d"%n) - for m in ms: - st.markdown (m) - - -def plot_heatmap(labels, heatmap, rotation=90): - sns.set(font_scale=1.2) - fig, ax = plt.subplots() - g = sns.heatmap( - heatmap, - xticklabels=labels, - yticklabels=labels, - vmin=-1, - vmax=1, - cmap="coolwarm") - g.set_xticklabels(labels, rotation=rotation) - g.set_title("Textual Similarity") - - st.pyplot(fig) - #plt.show() - -#st.header("Sentence Similarity Demo") - -# Streamlit text boxes -text = st.text_area('Enter sentences:', value="Self confidence in outcomes helps us 
win and to make us successful.\nShe has a seriously impressive intellect and mind.\nStimulating and deep conversation helps us develop and grow.\nFrom basic quantum particles we get aerodynamics, friction, surface tension, weather, electromagnetism.\nIf she actively engages and comments positively, her anger disappears adapting into win-win's favor.\nI love interesting topics of conversation and the understanding and exploration of thoughts.\nThere is the ability to manipulate things the way you want in your mind to go how you want when you are self confident, that we don’t understand yet.") - -nc = st.slider('Select a number of clusters:', min_value=1, max_value=15, value=3) - -model_type = st.radio("Choose model:", ('Sentence Transformer', 'Universal Sentence Encoder'), index=0) - -# Model setup -if model_type == "Sentence Transformer": - model = SentenceTransformer('paraphrase-distilroberta-base-v1') -elif model_type == "Universal Sentence Encoder": - model_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5" - model = hub.load(model_url) - -nltk.download('punkt') - -# Run model -if text: - sentences = nltk.tokenize.sent_tokenize(text) - if model_type == "Sentence Transformer": - embed = model.encode(sentences) - elif model_type == "Universal Sentence Encoder": - embed = model(sentences).numpy() - sim = np.zeros([len(embed), len(embed)]) - for i,em in enumerate(embed): - for j,ea in enumerate(embed): - sim[i][j] = 1.0-cosine(em,ea) - st.subheader("Similarity Heatmap") - plot_heatmap(sentences, sim) - st.subheader("Results from K-Means Clustering") - cluster_examples(sentences, embed, nc) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/text_norm.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/text_norm.py deleted file mode 100644 index d0973cebc91e0525aeb6657e70012a1d37b5e6ff..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/text_norm.py +++ /dev/null @@ -1,790 +0,0 @@ -# coding=utf-8 -# Authors: -# 2019.5 Zhiyang Zhou (https://github.com/Joee1995/chn_text_norm.git) -# 2019.9 Jiayu DU -# -# requirements: -# - python 3.X -# notes: python 2.X WILL fail or produce misleading results - -import sys, os, argparse, codecs, string, re - -# ================================================================================ # -# basic constant -# ================================================================================ # -CHINESE_DIGIS = u'零一二三四五六七八九' -BIG_CHINESE_DIGIS_SIMPLIFIED = u'零壹贰叁肆伍陆柒捌玖' -BIG_CHINESE_DIGIS_TRADITIONAL = u'零壹貳參肆伍陸柒捌玖' -SMALLER_BIG_CHINESE_UNITS_SIMPLIFIED = u'十百千万' -SMALLER_BIG_CHINESE_UNITS_TRADITIONAL = u'拾佰仟萬' -LARGER_CHINESE_NUMERING_UNITS_SIMPLIFIED = u'亿兆京垓秭穰沟涧正载' -LARGER_CHINESE_NUMERING_UNITS_TRADITIONAL = u'億兆京垓秭穰溝澗正載' -SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED = u'十百千万' -SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL = u'拾佰仟萬' - -ZERO_ALT = u'〇' -ONE_ALT = u'幺' -TWO_ALTS = [u'两', u'兩'] - -POSITIVE = [u'正', u'正'] -NEGATIVE = [u'负', u'負'] -POINT = [u'点', u'點'] -# PLUS = [u'加', u'加'] -# SIL = [u'杠', u'槓'] - -# 中文数字系统类型 -NUMBERING_TYPES = ['low', 'mid', 'high'] - -CURRENCY_NAMES = '(人民币|美元|日元|英镑|欧元|马克|法郎|加拿大元|澳元|港币|先令|芬兰马克|爱尔兰镑|' \ - '里拉|荷兰盾|埃斯库多|比塞塔|印尼盾|林吉特|新西兰元|比索|卢布|新加坡元|韩元|泰铢)' -CURRENCY_UNITS = '((亿|千万|百万|万|千|百)|(亿|千万|百万|万|千|百|)元|(亿|千万|百万|万|千|百|)块|角|毛|分)' -COM_QUANTIFIERS = '(匹|张|座|回|场|尾|条|个|首|阙|阵|网|炮|顶|丘|棵|只|支|袭|辆|挑|担|颗|壳|窠|曲|墙|群|腔|' \ - '砣|座|客|贯|扎|捆|刀|令|打|手|罗|坡|山|岭|江|溪|钟|队|单|双|对|出|口|头|脚|板|跳|枝|件|贴|' \ - '针|线|管|名|位|身|堂|课|本|页|家|户|层|丝|毫|厘|分|钱|两|斤|担|铢|石|钧|锱|忽|(千|毫|微)克|' \ - 
'毫|厘|分|寸|尺|丈|里|寻|常|铺|程|(千|分|厘|毫|微)米|撮|勺|合|升|斗|石|盘|碗|碟|叠|桶|笼|盆|' \ - '盒|杯|钟|斛|锅|簋|篮|盘|桶|罐|瓶|壶|卮|盏|箩|箱|煲|啖|袋|钵|年|月|日|季|刻|时|周|天|秒|分|旬|' \ - '纪|岁|世|更|夜|春|夏|秋|冬|代|伏|辈|丸|泡|粒|颗|幢|堆|条|根|支|道|面|片|张|颗|块)' - -# punctuation information are based on Zhon project (https://github.com/tsroten/zhon.git) -CHINESE_PUNC_STOP = '!?。。' -CHINESE_PUNC_NON_STOP = '"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏' -CHINESE_PUNC_LIST = CHINESE_PUNC_STOP + CHINESE_PUNC_NON_STOP - - -# ================================================================================ # -# basic class -# ================================================================================ # -class ChineseChar(object): - """ - 中文字符 - 每个字符对应简体和繁体, - e.g. 简体 = '负', 繁体 = '負' - 转换时可转换为简体或繁体 - """ - - def __init__(self, simplified, traditional): - self.simplified = simplified - self.traditional = traditional - # self.__repr__ = self.__str__ - - def __str__(self): - return self.simplified or self.traditional or None - - def __repr__(self): - return self.__str__() - - -class ChineseNumberUnit(ChineseChar): - """ - 中文数字/数位字符 - 每个字符除繁简体外还有一个额外的大写字符 - e.g. '陆' 和 '陸' - """ - - def __init__(self, power, simplified, traditional, big_s, big_t): - super(ChineseNumberUnit, self).__init__(simplified, traditional) - self.power = power - self.big_s = big_s - self.big_t = big_t - - def __str__(self): - return '10^{}'.format(self.power) - - @classmethod - def create(cls, index, value, numbering_type=NUMBERING_TYPES[1], small_unit=False): - - if small_unit: - return ChineseNumberUnit(power=index + 1, - simplified=value[0], traditional=value[1], big_s=value[1], big_t=value[1]) - elif numbering_type == NUMBERING_TYPES[0]: - return ChineseNumberUnit(power=index + 8, - simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1]) - elif numbering_type == NUMBERING_TYPES[1]: - return ChineseNumberUnit(power=(index + 2) * 4, - simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1]) - elif numbering_type == NUMBERING_TYPES[2]: - return ChineseNumberUnit(power=pow(2, index + 3), - simplified=value[0], traditional=value[1], big_s=value[0], big_t=value[1]) - else: - raise ValueError( - 'Counting type should be in {0} ({1} provided).'.format(NUMBERING_TYPES, numbering_type)) - - -class ChineseNumberDigit(ChineseChar): - """ - 中文数字字符 - """ - - def __init__(self, value, simplified, traditional, big_s, big_t, alt_s=None, alt_t=None): - super(ChineseNumberDigit, self).__init__(simplified, traditional) - self.value = value - self.big_s = big_s - self.big_t = big_t - self.alt_s = alt_s - self.alt_t = alt_t - - def __str__(self): - return str(self.value) - - @classmethod - def create(cls, i, v): - return ChineseNumberDigit(i, v[0], v[1], v[2], v[3]) - - -class ChineseMath(ChineseChar): - """ - 中文数位字符 - """ - - def __init__(self, simplified, traditional, symbol, expression=None): - super(ChineseMath, self).__init__(simplified, traditional) - self.symbol = symbol - self.expression = expression - self.big_s = simplified - self.big_t = traditional - - -CC, CNU, CND, CM = ChineseChar, ChineseNumberUnit, ChineseNumberDigit, ChineseMath - - -class NumberSystem(object): - """ - 中文数字系统 - """ - pass - - -class MathSymbol(object): - """ - 用于中文数字系统的数学符号 (繁/简体), e.g. 
- positive = ['正', '正'] - negative = ['负', '負'] - point = ['点', '點'] - """ - - def __init__(self, positive, negative, point): - self.positive = positive - self.negative = negative - self.point = point - - def __iter__(self): - for v in self.__dict__.values(): - yield v - - -# class OtherSymbol(object): -# """ -# 其他符号 -# """ -# -# def __init__(self, sil): -# self.sil = sil -# -# def __iter__(self): -# for v in self.__dict__.values(): -# yield v - - -# ================================================================================ # -# basic utils -# ================================================================================ # -def create_system(numbering_type=NUMBERING_TYPES[1]): - """ - 根据数字系统类型返回创建相应的数字系统,默认为 mid - NUMBERING_TYPES = ['low', 'mid', 'high']: 中文数字系统类型 - low: '兆' = '亿' * '十' = $10^{9}$, '京' = '兆' * '十', etc. - mid: '兆' = '亿' * '万' = $10^{12}$, '京' = '兆' * '万', etc. - high: '兆' = '亿' * '亿' = $10^{16}$, '京' = '兆' * '兆', etc. - 返回对应的数字系统 - """ - - # chinese number units of '亿' and larger - all_larger_units = zip( - LARGER_CHINESE_NUMERING_UNITS_SIMPLIFIED, LARGER_CHINESE_NUMERING_UNITS_TRADITIONAL) - larger_units = [CNU.create(i, v, numbering_type, False) - for i, v in enumerate(all_larger_units)] - # chinese number units of '十, 百, 千, 万' - all_smaller_units = zip( - SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED, SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL) - smaller_units = [CNU.create(i, v, small_unit=True) - for i, v in enumerate(all_smaller_units)] - # digis - chinese_digis = zip(CHINESE_DIGIS, CHINESE_DIGIS, - BIG_CHINESE_DIGIS_SIMPLIFIED, BIG_CHINESE_DIGIS_TRADITIONAL) - digits = [CND.create(i, v) for i, v in enumerate(chinese_digis)] - digits[0].alt_s, digits[0].alt_t = ZERO_ALT, ZERO_ALT - digits[1].alt_s, digits[1].alt_t = ONE_ALT, ONE_ALT - digits[2].alt_s, digits[2].alt_t = TWO_ALTS[0], TWO_ALTS[1] - - # symbols - positive_cn = CM(POSITIVE[0], POSITIVE[1], '+', lambda x: x) - negative_cn = CM(NEGATIVE[0], NEGATIVE[1], '-', lambda x: -x) - point_cn = CM(POINT[0], POINT[1], '.', lambda x, - y: float(str(x) + '.' 
+ str(y))) - # sil_cn = CM(SIL[0], SIL[1], '-', lambda x, y: float(str(x) + '-' + str(y))) - system = NumberSystem() - system.units = smaller_units + larger_units - system.digits = digits - system.math = MathSymbol(positive_cn, negative_cn, point_cn) - # system.symbols = OtherSymbol(sil_cn) - return system - - -def chn2num(chinese_string, numbering_type=NUMBERING_TYPES[1]): - def get_symbol(char, system): - for u in system.units: - if char in [u.traditional, u.simplified, u.big_s, u.big_t]: - return u - for d in system.digits: - if char in [d.traditional, d.simplified, d.big_s, d.big_t, d.alt_s, d.alt_t]: - return d - for m in system.math: - if char in [m.traditional, m.simplified]: - return m - - def string2symbols(chinese_string, system): - int_string, dec_string = chinese_string, '' - for p in [system.math.point.simplified, system.math.point.traditional]: - if p in chinese_string: - int_string, dec_string = chinese_string.split(p) - break - return [get_symbol(c, system) for c in int_string], \ - [get_symbol(c, system) for c in dec_string] - - def correct_symbols(integer_symbols, system): - """ - 一百八 to 一百八十 - 一亿一千三百万 to 一亿 一千万 三百万 - """ - - if integer_symbols and isinstance(integer_symbols[0], CNU): - if integer_symbols[0].power == 1: - integer_symbols = [system.digits[1]] + integer_symbols - - if len(integer_symbols) > 1: - if isinstance(integer_symbols[-1], CND) and isinstance(integer_symbols[-2], CNU): - integer_symbols.append( - CNU(integer_symbols[-2].power - 1, None, None, None, None)) - - result = [] - unit_count = 0 - for s in integer_symbols: - if isinstance(s, CND): - result.append(s) - unit_count = 0 - elif isinstance(s, CNU): - current_unit = CNU(s.power, None, None, None, None) - unit_count += 1 - - if unit_count == 1: - result.append(current_unit) - elif unit_count > 1: - for i in range(len(result)): - if isinstance(result[-i - 1], CNU) and result[-i - 1].power < current_unit.power: - result[-i - 1] = CNU(result[-i - 1].power + - current_unit.power, None, None, None, None) - return result - - def compute_value(integer_symbols): - """ - Compute the value. - When current unit is larger than previous unit, current unit * all previous units will be used as all previous units. - e.g. 
'两千万' = 2000 * 10000 not 2000 + 10000 - """ - value = [0] - last_power = 0 - for s in integer_symbols: - if isinstance(s, CND): - value[-1] = s.value - elif isinstance(s, CNU): - value[-1] *= pow(10, s.power) - if s.power > last_power: - value[:-1] = list(map(lambda v: v * - pow(10, s.power), value[:-1])) - last_power = s.power - value.append(0) - return sum(value) - - system = create_system(numbering_type) - int_part, dec_part = string2symbols(chinese_string, system) - int_part = correct_symbols(int_part, system) - int_str = str(compute_value(int_part)) - dec_str = ''.join([str(d.value) for d in dec_part]) - if dec_part: - return '{0}.{1}'.format(int_str, dec_str) - else: - return int_str - - -def num2chn(number_string, numbering_type=NUMBERING_TYPES[1], big=False, - traditional=False, alt_zero=False, alt_one=False, alt_two=True, - use_zeros=True, use_units=True): - def get_value(value_string, use_zeros=True): - - striped_string = value_string.lstrip('0') - - # record nothing if all zeros - if not striped_string: - return [] - - # record one digits - elif len(striped_string) == 1: - if use_zeros and len(value_string) != len(striped_string): - return [system.digits[0], system.digits[int(striped_string)]] - else: - return [system.digits[int(striped_string)]] - - # recursively record multiple digits - else: - result_unit = next(u for u in reversed( - system.units) if u.power < len(striped_string)) - result_string = value_string[:-result_unit.power] - return get_value(result_string) + [result_unit] + get_value(striped_string[-result_unit.power:]) - - system = create_system(numbering_type) - - int_dec = number_string.split('.') - if len(int_dec) == 1: - int_string = int_dec[0] - dec_string = "" - elif len(int_dec) == 2: - int_string = int_dec[0] - dec_string = int_dec[1] - else: - raise ValueError( - "invalid input num string with more than one dot: {}".format(number_string)) - - if use_units and len(int_string) > 1: - result_symbols = get_value(int_string) - else: - result_symbols = [system.digits[int(c)] for c in int_string] - dec_symbols = [system.digits[int(c)] for c in dec_string] - if dec_string: - result_symbols += [system.math.point] + dec_symbols - - if alt_two: - liang = CND(2, system.digits[2].alt_s, system.digits[2].alt_t, - system.digits[2].big_s, system.digits[2].big_t) - for i, v in enumerate(result_symbols): - if isinstance(v, CND) and v.value == 2: - next_symbol = result_symbols[i + - 1] if i < len(result_symbols) - 1 else None - previous_symbol = result_symbols[i - 1] if i > 0 else None - if isinstance(next_symbol, CNU) and isinstance(previous_symbol, (CNU, type(None))): - if next_symbol.power != 1 and ((previous_symbol is None) or (previous_symbol.power != 1)): - result_symbols[i] = liang - - # if big is True, '两' will not be used and `alt_two` has no impact on output - if big: - attr_name = 'big_' - if traditional: - attr_name += 't' - else: - attr_name += 's' - else: - if traditional: - attr_name = 'traditional' - else: - attr_name = 'simplified' - - result = ''.join([getattr(s, attr_name) for s in result_symbols]) - - # if not use_zeros: - # result = result.strip(getattr(system.digits[0], attr_name)) - - if alt_zero: - result = result.replace( - getattr(system.digits[0], attr_name), system.digits[0].alt_s) - - if alt_one: - result = result.replace( - getattr(system.digits[1], attr_name), system.digits[1].alt_s) - - for i, p in enumerate(POINT): - if result.startswith(p): - return CHINESE_DIGIS[0] + result - - # ^10, 11, .., 19 - if len(result) >= 2 and result[1] in 
[SMALLER_CHINESE_NUMERING_UNITS_SIMPLIFIED[0], - SMALLER_CHINESE_NUMERING_UNITS_TRADITIONAL[0]] and \ - result[0] in [CHINESE_DIGIS[1], BIG_CHINESE_DIGIS_SIMPLIFIED[1], BIG_CHINESE_DIGIS_TRADITIONAL[1]]: - result = result[1:] - - return result - - -# ================================================================================ # -# different types of rewriters -# ================================================================================ # -class Cardinal: - """ - CARDINAL类 - """ - - def __init__(self, cardinal=None, chntext=None): - self.cardinal = cardinal - self.chntext = chntext - - def chntext2cardinal(self): - return chn2num(self.chntext) - - def cardinal2chntext(self): - return num2chn(self.cardinal) - - -class Digit: - """ - DIGIT类 - """ - - def __init__(self, digit=None, chntext=None): - self.digit = digit - self.chntext = chntext - - # def chntext2digit(self): - # return chn2num(self.chntext) - - def digit2chntext(self): - return num2chn(self.digit, alt_two=False, use_units=False) - - -class TelePhone: - """ - TELEPHONE类 - """ - - def __init__(self, telephone=None, raw_chntext=None, chntext=None): - self.telephone = telephone - self.raw_chntext = raw_chntext - self.chntext = chntext - - # def chntext2telephone(self): - # sil_parts = self.raw_chntext.split('') - # self.telephone = '-'.join([ - # str(chn2num(p)) for p in sil_parts - # ]) - # return self.telephone - - def telephone2chntext(self, fixed=False): - - if fixed: - sil_parts = self.telephone.split('-') - self.raw_chntext = ''.join([ - num2chn(part, alt_two=False, use_units=False) for part in sil_parts - ]) - self.chntext = self.raw_chntext.replace('', '') - else: - sp_parts = self.telephone.strip('+').split() - self.raw_chntext = ''.join([ - num2chn(part, alt_two=False, use_units=False) for part in sp_parts - ]) - self.chntext = self.raw_chntext.replace('', '') - return self.chntext - - -class Fraction: - """ - FRACTION类 - """ - - def __init__(self, fraction=None, chntext=None): - self.fraction = fraction - self.chntext = chntext - - def chntext2fraction(self): - denominator, numerator = self.chntext.split('分之') - return chn2num(numerator) + '/' + chn2num(denominator) - - def fraction2chntext(self): - numerator, denominator = self.fraction.split('/') - return num2chn(denominator) + '分之' + num2chn(numerator) - - -class Date: - """ - DATE类 - """ - - def __init__(self, date=None, chntext=None): - self.date = date - self.chntext = chntext - - # def chntext2date(self): - # chntext = self.chntext - # try: - # year, other = chntext.strip().split('年', maxsplit=1) - # year = Digit(chntext=year).digit2chntext() + '年' - # except ValueError: - # other = chntext - # year = '' - # if other: - # try: - # month, day = other.strip().split('月', maxsplit=1) - # month = Cardinal(chntext=month).chntext2cardinal() + '月' - # except ValueError: - # day = chntext - # month = '' - # if day: - # day = Cardinal(chntext=day[:-1]).chntext2cardinal() + day[-1] - # else: - # month = '' - # day = '' - # date = year + month + day - # self.date = date - # return self.date - - def date2chntext(self): - date = self.date - try: - year, other = date.strip().split('年', 1) - year = Digit(digit=year).digit2chntext() + '年' - except ValueError: - other = date - year = '' - if other: - try: - month, day = other.strip().split('月', 1) - month = Cardinal(cardinal=month).cardinal2chntext() + '月' - except ValueError: - day = date - month = '' - if day: - day = Cardinal(cardinal=day[:-1]).cardinal2chntext() + day[-1] - else: - month = '' - day = '' - chntext = year 
+ month + day - self.chntext = chntext - return self.chntext - - -class Money: - """ - MONEY类 - """ - - def __init__(self, money=None, chntext=None): - self.money = money - self.chntext = chntext - - # def chntext2money(self): - # return self.money - - def money2chntext(self): - money = self.money - pattern = re.compile(r'(\d+(\.\d+)?)') - matchers = pattern.findall(money) - if matchers: - for matcher in matchers: - money = money.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext()) - self.chntext = money - return self.chntext - - -class Percentage: - """ - PERCENTAGE类 - """ - - def __init__(self, percentage=None, chntext=None): - self.percentage = percentage - self.chntext = chntext - - def chntext2percentage(self): - return chn2num(self.chntext.strip().strip('百分之')) + '%' - - def percentage2chntext(self): - return '百分之' + num2chn(self.percentage.strip().strip('%')) - - -# ================================================================================ # -# NSW Normalizer -# ================================================================================ # -class NSWNormalizer: - def __init__(self, raw_text): - self.raw_text = '^' + raw_text + '$' - self.norm_text = '' - - def _particular(self): - text = self.norm_text - pattern = re.compile(r"(([a-zA-Z]+)二([a-zA-Z]+))") - matchers = pattern.findall(text) - if matchers: - # print('particular') - for matcher in matchers: - text = text.replace(matcher[0], matcher[1] + '2' + matcher[2], 1) - self.norm_text = text - return self.norm_text - - def normalize(self, remove_punc=True): - text = self.raw_text - - # 规范化日期 - pattern = re.compile(r"\D+((([089]\d|(19|20)\d{2})年)?(\d{1,2}月(\d{1,2}[日号])?)?)") - matchers = pattern.findall(text) - if matchers: - # print('date') - for matcher in matchers: - text = text.replace(matcher[0], Date(date=matcher[0]).date2chntext(), 1) - - # 规范化金钱 - pattern = re.compile(r"\D+((\d+(\.\d+)?)[多余几]?" + CURRENCY_UNITS + r"(\d" + CURRENCY_UNITS + r"?)?)") - matchers = pattern.findall(text) - if matchers: - # print('money') - for matcher in matchers: - text = text.replace(matcher[0], Money(money=matcher[0]).money2chntext(), 1) - - # 规范化固话/手机号码 - # 手机 - # http://www.jihaoba.com/news/show/13680 - # 移动:139、138、137、136、135、134、159、158、157、150、151、152、188、187、182、183、184、178、198 - # 联通:130、131、132、156、155、186、185、176 - # 电信:133、153、189、180、181、177 - pattern = re.compile(r"\D((\+?86 ?)?1([38]\d|5[0-35-9]|7[678]|9[89])\d{8})\D") - matchers = pattern.findall(text) - if matchers: - # print('telephone') - for matcher in matchers: - text = text.replace(matcher[0], TelePhone(telephone=matcher[0]).telephone2chntext(), 1) - # 固话 - pattern = re.compile(r"\D((0(10|2[1-3]|[3-9]\d{2})-?)?[1-9]\d{6,7})\D") - matchers = pattern.findall(text) - if matchers: - # print('fixed telephone') - for matcher in matchers: - text = text.replace(matcher[0], TelePhone(telephone=matcher[0]).telephone2chntext(fixed=True), 1) - - # 规范化分数 - pattern = re.compile(r"(\d+/\d+)") - matchers = pattern.findall(text) - if matchers: - # print('fraction') - for matcher in matchers: - text = text.replace(matcher, Fraction(fraction=matcher).fraction2chntext(), 1) - - # 规范化百分数 - text = text.replace('%', '%') - pattern = re.compile(r"(\d+(\.\d+)?%)") - matchers = pattern.findall(text) - if matchers: - # print('percentage') - for matcher in matchers: - text = text.replace(matcher[0], Percentage(percentage=matcher[0]).percentage2chntext(), 1) - - # 规范化纯数+量词 - pattern = re.compile(r"(\d+(\.\d+)?)[多余几]?" 
+ COM_QUANTIFIERS) - matchers = pattern.findall(text) - if matchers: - # print('cardinal+quantifier') - for matcher in matchers: - text = text.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext(), 1) - - # 规范化数字编号 - pattern = re.compile(r"(\d{4,32})") - matchers = pattern.findall(text) - if matchers: - # print('digit') - for matcher in matchers: - text = text.replace(matcher, Digit(digit=matcher).digit2chntext(), 1) - - # 规范化纯数 - pattern = re.compile(r"(\d+(\.\d+)?)") - matchers = pattern.findall(text) - if matchers: - # print('cardinal') - for matcher in matchers: - text = text.replace(matcher[0], Cardinal(cardinal=matcher[0]).cardinal2chntext(), 1) - - self.norm_text = text - self._particular() - - text = self.norm_text.lstrip('^').rstrip('$') - if remove_punc: - # Punctuations removal - old_chars = CHINESE_PUNC_LIST + string.punctuation # includes all CN and EN punctuations - new_chars = ' ' * len(old_chars) - del_chars = '' - text = text.translate(str.maketrans(old_chars, new_chars, del_chars)) - return text - - -def nsw_test_case(raw_text): - print('I:' + raw_text) - print('O:' + NSWNormalizer(raw_text).normalize()) - print('') - - -def nsw_test(): - nsw_test_case('固话:0595-23865596或23880880。') - nsw_test_case('固话:0595-23865596或23880880。') - nsw_test_case('手机:+86 19859213959或15659451527。') - nsw_test_case('分数:32477/76391。') - nsw_test_case('百分数:80.03%。') - nsw_test_case('编号:31520181154418。') - nsw_test_case('纯数:2983.07克或12345.60米。') - nsw_test_case('日期:1999年2月20日或09年3月15号。') - nsw_test_case('金钱:12块5,34.5元,20.1万') - nsw_test_case('特殊:O2O或B2C。') - nsw_test_case('3456万吨') - nsw_test_case('2938个') - nsw_test_case('938') - nsw_test_case('今天吃了115个小笼包231个馒头') - nsw_test_case('有62%的概率') - - -if __name__ == '__main__': - # nsw_test() - - p = argparse.ArgumentParser() - p.add_argument('ifile', help='input filename, assume utf-8 encoding') - p.add_argument('ofile', help='output filename') - p.add_argument('--to_upper', action='store_true', help='convert to upper case') - p.add_argument('--to_lower', action='store_true', help='convert to lower case') - p.add_argument('--has_key', action='store_true', help="input text has Kaldi's key as first field.") - p.add_argument('--log_interval', type=int, default=10000, help='log interval in number of processed lines') - args = p.parse_args() - - ifile = codecs.open(args.ifile, 'r', 'utf8') - ofile = codecs.open(args.ofile, 'w+', 'utf8') - - n = 0 - for l in ifile: - key = '' - text = '' - if args.has_key: - cols = l.split(maxsplit=1) - key = cols[0] - if len(cols) == 2: - text = cols[1] - else: - text = '' - else: - text = l - - # cases - if args.to_upper and args.to_lower: - sys.stderr.write('text norm: to_upper OR to_lower?') - exit(1) - if args.to_upper: - text = text.upper() - if args.to_lower: - text = text.lower() - - # NSW(Non-Standard-Word) normalization - text = NSWNormalizer(text).normalize() - - # - if args.has_key: - ofile.write(key + '\t' + text) - else: - ofile.write(text) - - n += 1 - if n % args.log_interval == 0: - sys.stderr.write("text norm: {} lines done.\n".format(n)) - - sys.stderr.write("text norm: {} lines done in total.\n".format(n)) - - ifile.close() - ofile.close() diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/dpm_solver.py deleted file mode 100644 index 095e5ba3ce0b1aa7f4b3f1e2e5d8fff7cfe6dc8c..0000000000000000000000000000000000000000 --- 
a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/dpm_solver.py +++ /dev/null @@ -1,1154 +0,0 @@ -import torch -import torch.nn.functional as F -import math -from tqdm import tqdm - - -class NoiseScheduleVP: - def __init__( - self, - schedule='discrete', - betas=None, - alphas_cumprod=None, - continuous_beta_0=0.1, - continuous_beta_1=20., - ): - """Create a wrapper class for the forward SDE (VP type). - *** - Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t. - We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images. - *** - The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ). - We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper). - Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have: - log_alpha_t = self.marginal_log_mean_coeff(t) - sigma_t = self.marginal_std(t) - lambda_t = self.marginal_lambda(t) - Moreover, as lambda(t) is an invertible function, we also support its inverse function: - t = self.inverse_lambda(lambda_t) - =============================================================== - We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]). - 1. For discrete-time DPMs: - For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by: - t_i = (i + 1) / N - e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1. - We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3. - Args: - betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details) - alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details) - Note that we always have alphas_cumprod = cumprod(betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`. - **Important**: Please pay special attention for the args for `alphas_cumprod`: - The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that - q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ). - Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have - alpha_{t_n} = \sqrt{\hat{alpha_n}}, - and - log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}). - 2. For continuous-time DPMs: - We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise - schedule are the default settings in DDPM and improved-DDPM: - Args: - beta_min: A `float` number. The smallest beta for the linear schedule. - beta_max: A `float` number. The largest beta for the linear schedule. - cosine_s: A `float` number. The hyperparameter in the cosine schedule. - cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule. - T: A `float` number. The ending time of the forward process. - =============================================================== - Args: - schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs, - 'linear' or 'cosine' for continuous-time DPMs. - Returns: - A wrapper object of the forward SDE (VP type). 
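        Note: `alphas_cumprod = cumprod(betas)` above is shorthand for the DDPM convention
        \hat{alpha_n} = prod_{i<=n} (1 - beta_i), which is what the `log_alphas` computation in
        `__init__` below assumes. A minimal sketch, assuming `betas` is a 1-D torch tensor:
            alphas_cumprod = torch.cumprod(1. - betas, dim=0)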
- - =============================================================== - Example: - # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', betas=betas) - # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod) - # For continuous-time DPMs (VPSDE), linear schedule: - >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.) - """ - - if schedule not in ['discrete', 'linear', 'cosine']: - raise ValueError( - "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format( - schedule)) - - self.schedule = schedule - if schedule == 'discrete': - if betas is not None: - log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0) - else: - assert alphas_cumprod is not None - log_alphas = 0.5 * torch.log(alphas_cumprod) - self.total_N = len(log_alphas) - self.T = 1. - self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)) - self.log_alpha_array = log_alphas.reshape((1, -1,)) - else: - self.total_N = 1000 - self.beta_0 = continuous_beta_0 - self.beta_1 = continuous_beta_1 - self.cosine_s = 0.008 - self.cosine_beta_max = 999. - self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.)) - self.schedule = schedule - if schedule == 'cosine': - # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T. - # Note that T = 0.9946 may be not the optimal setting. However, we find it works well. - self.T = 0.9946 - else: - self.T = 1. - - def marginal_log_mean_coeff(self, t): - """ - Compute log(alpha_t) of a given continuous-time label t in [0, T]. - """ - if self.schedule == 'discrete': - return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), - self.log_alpha_array.to(t.device)).reshape((-1)) - elif self.schedule == 'linear': - return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0 - elif self.schedule == 'cosine': - log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.)) - log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0 - return log_alpha_t - - def marginal_alpha(self, t): - """ - Compute alpha_t of a given continuous-time label t in [0, T]. - """ - return torch.exp(self.marginal_log_mean_coeff(t)) - - def marginal_std(self, t): - """ - Compute sigma_t of a given continuous-time label t in [0, T]. - """ - return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t))) - - def marginal_lambda(self, t): - """ - Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T]. - """ - log_mean_coeff = self.marginal_log_mean_coeff(t) - log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff)) - return log_mean_coeff - log_std - - def inverse_lambda(self, lamb): - """ - Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t. - """ - if self.schedule == 'linear': - tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - Delta = self.beta_0 ** 2 + tmp - return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0) - elif self.schedule == 'discrete': - log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. 
* lamb) - t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), - torch.flip(self.t_array.to(lamb.device), [1])) - return t.reshape((-1,)) - else: - log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - t = t_fn(log_alpha) - return t - - -def model_wrapper( - model, - noise_schedule, - model_type="noise", - model_kwargs={}, - guidance_type="uncond", - condition=None, - unconditional_condition=None, - guidance_scale=1., - classifier_fn=None, - classifier_kwargs={}, -): - """Create a wrapper function for the noise prediction model. - DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to - firstly wrap the model function to a noise prediction model that accepts the continuous time as the input. - We support four types of the diffusion model by setting `model_type`: - 1. "noise": noise prediction model. (Trained by predicting noise). - 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0). - 3. "v": velocity prediction model. (Trained by predicting the velocity). - The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2]. - [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models." - arXiv preprint arXiv:2202.00512 (2022). - [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models." - arXiv preprint arXiv:2210.02303 (2022). - - 4. "score": marginal score function. (Trained by denoising score matching). - Note that the score function and the noise prediction model follows a simple relationship: - ``` - noise(x_t, t) = -sigma_t * score(x_t, t) - ``` - We support three types of guided sampling by DPMs by setting `guidance_type`: - 1. "uncond": unconditional sampling by DPMs. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - The input `classifier_fn` has the following format: - `` - classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond) - `` - [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis," - in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794. - 3. "classifier-free": classifier-free guidance sampling by conditional DPMs. - The input `model` has the following format: - `` - model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score - `` - And if cond == `unconditional_condition`, the model output is the unconditional DPM output. - [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." - arXiv preprint arXiv:2207.12598 (2022). - - The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999) - or continuous-time labels (i.e. epsilon to T). 
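        For discrete-time DPMs, the wrapper maps a continuous time t in [1/N, 1] to
        t_input = (t - 1/N) * 1000 (see `get_model_input_time` below); for continuous-time DPMs,
        t is passed through unchanged. For example, with N = 1000, t_continuous = 1.0 maps to
        t_input = (1.0 - 0.001) * 1000 = 999.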
- We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise: - `` - def model_fn(x, t_continuous) -> noise: - t_input = get_model_input_time(t_continuous) - return noise_pred(model, x, t_input, **model_kwargs) - `` - where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver. - =============================================================== - Args: - model: A diffusion model with the corresponding format described above. - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - model_type: A `str`. The parameterization type of the diffusion model. - "noise" or "x_start" or "v" or "score". - model_kwargs: A `dict`. A dict for the other inputs of the model function. - guidance_type: A `str`. The type of the guidance for sampling. - "uncond" or "classifier" or "classifier-free". - condition: A pytorch tensor. The condition for the guided sampling. - Only used for "classifier" or "classifier-free" guidance type. - unconditional_condition: A pytorch tensor. The condition for the unconditional sampling. - Only used for "classifier-free" guidance type. - guidance_scale: A `float`. The scale for the guided sampling. - classifier_fn: A classifier function. Only used for the classifier guidance. - classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function. - Returns: - A noise prediction model that accepts the noised data and the continuous time as the inputs. - """ - - def get_model_input_time(t_continuous): - """ - Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time. - For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N]. - For continuous-time DPMs, we just use `t_continuous`. - """ - if noise_schedule.schedule == 'discrete': - return (t_continuous - 1. / noise_schedule.total_N) * 1000. - else: - return t_continuous - - def noise_pred_fn(x, t_continuous, cond=None): - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - t_input = get_model_input_time(t_continuous) - if cond is None: - output = model(x, t_input, **model_kwargs) - else: - output = model(x, t_input, cond, **model_kwargs) - if model_type == "noise": - return output - elif model_type == "x_start": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims) - elif model_type == "v": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x - elif model_type == "score": - sigma_t = noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return -expand_dims(sigma_t, dims) * output - - def cond_grad_fn(x, t_input): - """ - Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t). - """ - with torch.enable_grad(): - x_in = x.detach().requires_grad_(True) - log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs) - return torch.autograd.grad(log_prob.sum(), x_in)[0] - - def model_fn(x, t_continuous): - """ - The noise predicition model function that is used for DPM-Solver. 
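        For "classifier-free" guidance, the conditional and unconditional branches are evaluated
        in a single batched forward pass and combined as
        noise_uncond + guidance_scale * (noise - noise_uncond); with guidance_scale == 1 this
        reduces to the purely conditional prediction.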
- """ - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - if guidance_type == "uncond": - return noise_pred_fn(x, t_continuous) - elif guidance_type == "classifier": - assert classifier_fn is not None - t_input = get_model_input_time(t_continuous) - cond_grad = cond_grad_fn(x, t_input) - sigma_t = noise_schedule.marginal_std(t_continuous) - noise = noise_pred_fn(x, t_continuous) - return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad - elif guidance_type == "classifier-free": - if guidance_scale == 1. or unconditional_condition is None: - return noise_pred_fn(x, t_continuous, cond=condition) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t_continuous] * 2) - c_in = torch.cat([unconditional_condition, condition]) - noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2) - return noise_uncond + guidance_scale * (noise - noise_uncond) - - assert model_type in ["noise", "x_start", "v"] - assert guidance_type in ["uncond", "classifier", "classifier-free"] - return model_fn - - -class DPM_Solver: - def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.): - """Construct a DPM-Solver. - We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0"). - If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver). - If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++). - In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True. - The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales. - Args: - model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]): - `` - def model_fn(x, t_continuous): - return noise - `` - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model. - thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1]. - max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding. - - [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b. - """ - self.model = model_fn - self.noise_schedule = noise_schedule - self.predict_x0 = predict_x0 - self.thresholding = thresholding - self.max_val = max_val - - def noise_prediction_fn(self, x, t): - """ - Return the noise prediction model. - """ - return self.model(x, t) - - def data_prediction_fn(self, x, t): - """ - Return the data prediction model (with thresholding). - """ - noise = self.noise_prediction_fn(x, t) - dims = x.dim() - alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t) - x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims) - if self.thresholding: - p = 0.995 # A hyperparameter in the paper of "Imagen" [1]. 
- s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1) - s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims) - x0 = torch.clamp(x0, -s, s) / s - return x0 - - def model_fn(self, x, t): - """ - Convert the model to the noise prediction model or the data prediction model. - """ - if self.predict_x0: - return self.data_prediction_fn(x, t) - else: - return self.noise_prediction_fn(x, t) - - def get_time_steps(self, skip_type, t_T, t_0, N, device): - """Compute the intermediate time steps for sampling. - Args: - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - N: A `int`. The total number of the spacing of the time steps. - device: A torch device. - Returns: - A pytorch tensor of the time steps, with the shape (N + 1,). - """ - if skip_type == 'logSNR': - lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device)) - lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device)) - logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device) - return self.noise_schedule.inverse_lambda(logSNR_steps) - elif skip_type == 'time_uniform': - return torch.linspace(t_T, t_0, N + 1).to(device) - elif skip_type == 'time_quadratic': - t_order = 2 - t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device) - return t - else: - raise ValueError( - "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type)) - - def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): - """ - Get the order of each step for sampling by the singlestep DPM-Solver. - We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". - Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - - If order == 1: - We take `steps` of DPM-Solver-1 (i.e. DDIM). - - If order == 2: - - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of DPM-Solver-2. - - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If order == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. - ============================================ - Args: - order: A `int`. The max order for the solver (2 or 3). - steps: A `int`. The total number of function evaluations (NFE). - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) 
- t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - device: A torch device. - Returns: - orders: A list of the solver order of each step. - """ - if order == 3: - K = steps // 3 + 1 - if steps % 3 == 0: - orders = [3, ] * (K - 2) + [2, 1] - elif steps % 3 == 1: - orders = [3, ] * (K - 1) + [1] - else: - orders = [3, ] * (K - 1) + [2] - elif order == 2: - if steps % 2 == 0: - K = steps // 2 - orders = [2, ] * K - else: - K = steps // 2 + 1 - orders = [2, ] * (K - 1) + [1] - elif order == 1: - K = 1 - orders = [1, ] * steps - else: - raise ValueError("'order' must be '1' or '2' or '3'.") - if skip_type == 'logSNR': - # To reproduce the results in DPM-Solver paper - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) - else: - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ - torch.cumsum(torch.tensor([0, ] + orders)).to(device)] - return timesteps_outer, orders - - def denoise_to_zero_fn(self, x, s): - """ - Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization. - """ - return self.data_prediction_fn(x, s) - - def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False): - """ - DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - if self.predict_x0: - phi_1 = torch.expm1(-h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - else: - phi_1 = torch.expm1(h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - - def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, - solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-2 from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the second-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. 
If true, also return the model value at time `s` and `s1` (the intermediate time). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 0.5 - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - s1 = ns.inverse_lambda(lambda_s1) - log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff( - s1), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t) - alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_1 = torch.expm1(-h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * ( - model_s1 - model_s) - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_1 = torch.expm1(h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s) - ) - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1} - else: - return x_t - - def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None, - return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-3 from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`). 
- If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 1. / 3. - if r2 is None: - r2 = 2. / 3. - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - lambda_s2 = lambda_s + r2 * h - s1 = ns.inverse_lambda(lambda_s1) - s2 = ns.inverse_lambda(lambda_s2) - log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff( - s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std( - s2), ns.marginal_std(t) - alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_12 = torch.expm1(-r2 * h) - phi_1 = torch.expm1(-h) - phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1. - phi_2 = phi_1 / h + 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(sigma_s2 / sigma_s, dims) * x - - expand_dims(alpha_s2 * phi_12, dims) * model_s - + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + expand_dims(alpha_t * phi_2, dims) * D1 - - expand_dims(alpha_t * phi_3, dims) * D2 - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_12 = torch.expm1(r2 * h) - phi_1 = torch.expm1(h) - phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1. - phi_2 = phi_1 / h - 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x - - expand_dims(sigma_s2 * phi_12, dims) * model_s - - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. 
/ r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - expand_dims(sigma_t * phi_2, dims) * D1 - - expand_dims(sigma_t * phi_3, dims) * D2 - ) - - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2} - else: - return x_t - - def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"): - """ - Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - ns = self.noise_schedule - dims = x.dim() - model_prev_1, model_prev_0 = model_prev_list - t_prev_1, t_prev_0 = t_prev_list - lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda( - t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0 = h_0 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - if self.predict_x0: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0 - ) - else: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0 - ) - return x_t - - def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'): - """ - Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. 
The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - model_prev_2, model_prev_1, model_prev_0 = model_prev_list - t_prev_2, t_prev_1, t_prev_0 = t_prev_list - lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda( - t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_1 = lambda_prev_1 - lambda_prev_2 - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0, r1 = h_0 / h, h_1 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2) - D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1) - D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1) - if self.predict_x0: - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1 - - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2 - ) - else: - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1 - - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2 - ) - return x_t - - def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, - r2=None): - """ - Singlestep DPM-Solver with the order `order` from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - r1: A `float`. The hyperparameter of the second-order or third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
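# A minimal usage sketch, assuming a DPM_Solver instance named `solver` and a latent
# batch `x` already exist (both names are hypothetical, not from this file): one
# order-dispatching update, as documented above, from time s to time t.
#
#     batch_size = x.shape[0]
#     s = torch.full((batch_size,), 0.8, device=x.device)
#     t = torch.full((batch_size,), 0.6, device=x.device)
#     x_next = solver.singlestep_dpm_solver_update(x, s, t, order=2,
#                                                  solver_type='dpm_solver')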
- """ - if order == 1: - return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate) - elif order == 2: - return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1) - elif order == 3: - return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1, r2=r2) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'): - """ - Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1]) - elif order == 2: - return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - elif order == 3: - return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, - solver_type='dpm_solver'): - """ - The adaptive step size solver based on singlestep DPM-Solver. - Args: - x: A pytorch tensor. The initial value at time `t_T`. - order: A `int`. The (higher) order of the solver. We only support order == 2 or 3. - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - h_init: A `float`. The initial step size (for logSNR). - atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1]. - rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05. - theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1]. - t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the - current time and `t_0` is less than `t_err`. The default setting is 1e-5. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_0: A pytorch tensor. The approximated solution at time `t_0`. - [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021. 
- """ - ns = self.noise_schedule - s = t_T * torch.ones((x.shape[0],)).to(x) - lambda_s = ns.marginal_lambda(s) - lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x)) - h = h_init * torch.ones_like(s).to(x) - x_prev = x - nfe = 0 - if order == 2: - r1 = 0.5 - lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - solver_type=solver_type, - **kwargs) - elif order == 3: - r1, r2 = 1. / 3., 2. / 3. - lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - return_intermediate=True, - solver_type=solver_type) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, - solver_type=solver_type, - **kwargs) - else: - raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order)) - while torch.abs((s - t_0)).mean() > t_err: - t = ns.inverse_lambda(lambda_s + h) - x_lower, lower_noise_kwargs = lower_update(x, s, t) - x_higher = higher_update(x, s, t, **lower_noise_kwargs) - delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev))) - norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True)) - E = norm_fn((x_higher - x_lower) / delta).max() - if torch.all(E <= 1.): - x = x_higher - s = t - x_prev = x_lower - lambda_s = ns.marginal_lambda(s) - h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s) - nfe += order - print('adaptive solver nfe', nfe) - return x - - def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform', - method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver', - atol=0.0078, rtol=0.05, - ): - """ - Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`. - ===================================================== - We support the following algorithms for both noise prediction model and data prediction model: - - 'singlestep': - Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver. - We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps). - The total number of function evaluations (NFE) == `steps`. - Given a fixed NFE == `steps`, the sampling procedure is: - - If `order` == 1: - - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2. - - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If `order` == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2. - - 'multistep': - Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`. - We initialize the first `order` values by lower order multistep solvers. 
- Given a fixed NFE == `steps`, the sampling procedure is: - Denote K = steps. - - If `order` == 1: - - We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2. - - If `order` == 3: - - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3. - - 'singlestep_fixed': - Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3). - We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE. - - 'adaptive': - Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper). - We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`. - You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computatation costs - (NFE) and the sample quality. - - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2. - - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3. - ===================================================== - Some advices for choosing the algorithm: - - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs: - Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3, - skip_type='time_uniform', method='singlestep') - - For **guided sampling with large guidance scale** by DPMs: - Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2, - skip_type='time_uniform', method='multistep') - We support three types of `skip_type`: - - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images** - - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**. - - 'time_quadratic': quadratic time for the time steps. - ===================================================== - Args: - x: A pytorch tensor. The initial value at time `t_start` - e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution. - steps: A `int`. The total number of function evaluations (NFE). - t_start: A `float`. The starting time of the sampling. - If `T` is None, we use self.noise_schedule.T (default is 1.0). - t_end: A `float`. The ending time of the sampling. - If `t_end` is None, we use 1. / self.noise_schedule.total_N. - e.g. if total_N == 1000, we have `t_end` == 1e-3. - For discrete-time DPMs: - - We recommend `t_end` == 1. / self.noise_schedule.total_N. - For continuous-time DPMs: - - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15. - order: A `int`. The order of DPM-Solver. - skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'. - method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'. - denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step. - Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1). 
- This trick is firstly proposed by DDPM (https://arxiv.org/abs/2006.11239) and - score_sde (https://arxiv.org/abs/2011.13456). Such trick can improve the FID - for diffusion models sampling by diffusion SDEs for low-resolutional images - (such as CIFAR-10). However, we observed that such trick does not matter for - high-resolutional images. As it needs an additional NFE, we do not recommend - it for high-resolutional images. - lower_order_final: A `bool`. Whether to use lower order solvers at the final steps. - Only valid for `method=multistep` and `steps < 15`. We empirically find that - this trick is a key to stabilizing the sampling by DPM-Solver with very few steps - (especially for steps <= 10). So we recommend to set it to be `True`. - solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`. - atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - Returns: - x_end: A pytorch tensor. The approximated solution at time `t_end`. - """ - t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end - t_T = self.noise_schedule.T if t_start is None else t_start - device = x.device - if method == 'adaptive': - with torch.no_grad(): - x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, - solver_type=solver_type) - elif method == 'multistep': - assert steps >= order - timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device) - assert timesteps.shape[0] - 1 == steps - with torch.no_grad(): - vec_t = timesteps[0].expand((x.shape[0])) - model_prev_list = [self.model_fn(x, vec_t)] - t_prev_list = [vec_t] - # Init the first `order` values by lower order multistep DPM-Solver. - for init_order in tqdm(range(1, order), desc="DPM init order"): - vec_t = timesteps[init_order].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, - solver_type=solver_type) - model_prev_list.append(self.model_fn(x, vec_t)) - t_prev_list.append(vec_t) - # Compute the remaining values by `order`-th order multistep DPM-Solver. - for step in tqdm(range(order, steps + 1), desc="DPM multistep"): - vec_t = timesteps[step].expand(x.shape[0]) - if lower_order_final and steps < 15: - step_order = min(order, steps + 1 - step) - else: - step_order = order - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order, - solver_type=solver_type) - for i in range(order - 1): - t_prev_list[i] = t_prev_list[i + 1] - model_prev_list[i] = model_prev_list[i + 1] - t_prev_list[-1] = vec_t - # We do not need to evaluate the final model value. 
- if step < steps: - model_prev_list[-1] = self.model_fn(x, vec_t) - elif method in ['singlestep', 'singlestep_fixed']: - if method == 'singlestep': - timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, - skip_type=skip_type, - t_T=t_T, t_0=t_0, - device=device) - elif method == 'singlestep_fixed': - K = steps // order - orders = [order, ] * K - timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device) - for i, order in enumerate(orders): - t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1] - timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), - N=order, device=device) - lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner) - vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0]) - h = lambda_inner[-1] - lambda_inner[0] - r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h - r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h - x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2) - if denoise_to_zero: - x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0) - return x - - -############################################################# -# other utility functions -############################################################# - -def interpolate_fn(x, xp, yp): - """ - A piecewise linear function y = f(x), using xp and yp as keypoints. - We implement f(x) in a differentiable way (i.e. applicable for autograd). - The function f(x) is well-defined for all x-axis. (For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.) - Args: - x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver). - xp: PyTorch tensor with shape [C, K], where K is the number of keypoints. - yp: PyTorch tensor with shape [C, K]. - Returns: - The function values f(x), with shape [N, C]. - """ - N, K = x.shape[0], xp.shape[1] - all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2) - sorted_all_x, x_indices = torch.sort(all_x, dim=2) - x_idx = torch.argmin(x_indices, dim=2) - cand_start_idx = x_idx - 1 - start_idx = torch.where( - torch.eq(x_idx, 0), - torch.tensor(1, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1) - start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2) - end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2) - start_idx2 = torch.where( - torch.eq(x_idx, 0), - torch.tensor(0, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1) - start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2) - end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2) - cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x) - return cand - - -def expand_dims(v, dims): - """ - Expand the tensor `v` to the dim `dims`. - Args: - `v`: a PyTorch tensor with shape [N]. - `dim`: a `int`. - Returns: - a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`. 
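# A quick, self-contained illustration of the broadcasting helper documented here: a
# per-sample scalar of shape [N] gains trailing singleton axes so that it can scale an
# image batch of shape [N, C, H, W].
import torch
v = torch.randn(8)                       # shape [8]
v4 = v[(...,) + (None,) * 3]             # shape [8, 1, 1, 1], i.e. expand_dims(v, 4)
out = v4 * torch.randn(8, 3, 64, 64)     # broadcasts over channels and spatial dims
assert out.shape == (8, 3, 64, 64)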
- """ - return v[(...,) + (None,) * (dims - 1)] \ No newline at end of file diff --git a/spaces/AISuperheroes/05GR-Image-To-Multilingual-OCR/app.py b/spaces/AISuperheroes/05GR-Image-To-Multilingual-OCR/app.py deleted file mode 100644 index 83ab99d0715b5c0033e0f452087543187147eaa6..0000000000000000000000000000000000000000 --- a/spaces/AISuperheroes/05GR-Image-To-Multilingual-OCR/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import pandas as pd -import PIL -from PIL import Image -from PIL import ImageDraw -import gradio as gr -import torch -import easyocr - -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/english.png', 'english.png') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/chinese.jpg', 'chinese.jpg') -torch.hub.download_url_to_file('https://github.com/JaidedAI/EasyOCR/raw/master/examples/japanese.jpg', 'japanese.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/mwQFd7G.jpeg', 'Hindi.jpeg') - -def draw_boxes(image, bounds, color='yellow', width=2): - draw = ImageDraw.Draw(image) - for bound in bounds: - p0, p1, p2, p3 = bound[0] - draw.line([*p0, *p1, *p2, *p3, *p0], fill=color, width=width) - return image - -def inference(img, lang): - reader = easyocr.Reader(lang) - bounds = reader.readtext(img.name) - im = PIL.Image.open(img.name) - draw_boxes(im, bounds) - im.save('result.jpg') - return ['result.jpg', pd.DataFrame(bounds).iloc[: , 1:]] - -title = 'Image To Optical Character Recognition' -description = 'Multilingual OCR which works conveniently on all devices in multiple languages.' -article = "

    " -examples = [['english.png',['en']],['chinese.jpg',['ch_sim', 'en']],['japanese.jpg',['ja', 'en']],['Hindi.jpeg',['hi', 'en']]] -css = ".output_image, .input_image {height: 40rem !important; width: 100% !important;}" -choices = [ - "ch_sim", - "ch_tra", - "de", - "en", - "es", - "ja", - "hi", - "ru" -] -gr.Interface( - inference, - [gr.inputs.Image(type='file', label='Input'),gr.inputs.CheckboxGroup(choices, type="value", default=['en'], label='language')], - [gr.outputs.Image(type='file', label='Output'), gr.outputs.Dataframe(headers=['text', 'confidence'])], - title=title, - description=description, - article=article, - examples=examples, - css=css, - enable_queue=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/AP123/dreamgaussian/mesh_utils.py b/spaces/AP123/dreamgaussian/mesh_utils.py deleted file mode 100644 index ca9fce9232f5133d6f91d5cf64d9e17b0725a5c9..0000000000000000000000000000000000000000 --- a/spaces/AP123/dreamgaussian/mesh_utils.py +++ /dev/null @@ -1,147 +0,0 @@ -import numpy as np -import pymeshlab as pml - - -def poisson_mesh_reconstruction(points, normals=None): - # points/normals: [N, 3] np.ndarray - - import open3d as o3d - - pcd = o3d.geometry.PointCloud() - pcd.points = o3d.utility.Vector3dVector(points) - - # outlier removal - pcd, ind = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=10) - - # normals - if normals is None: - pcd.estimate_normals() - else: - pcd.normals = o3d.utility.Vector3dVector(normals[ind]) - - # visualize - o3d.visualization.draw_geometries([pcd], point_show_normal=False) - - mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson( - pcd, depth=9 - ) - vertices_to_remove = densities < np.quantile(densities, 0.1) - mesh.remove_vertices_by_mask(vertices_to_remove) - - # visualize - o3d.visualization.draw_geometries([mesh]) - - vertices = np.asarray(mesh.vertices) - triangles = np.asarray(mesh.triangles) - - print( - f"[INFO] poisson mesh reconstruction: {points.shape} --> {vertices.shape} / {triangles.shape}" - ) - - return vertices, triangles - - -def decimate_mesh( - verts, faces, target, backend="pymeshlab", remesh=False, optimalplacement=True -): - # optimalplacement: default is True, but for flat mesh must turn False to prevent spike artifect. - - _ori_vert_shape = verts.shape - _ori_face_shape = faces.shape - - if backend == "pyfqmr": - import pyfqmr - - solver = pyfqmr.Simplify() - solver.setMesh(verts, faces) - solver.simplify_mesh(target_count=target, preserve_border=False, verbose=False) - verts, faces, normals = solver.getMesh() - else: - m = pml.Mesh(verts, faces) - ms = pml.MeshSet() - ms.add_mesh(m, "mesh") # will copy! 
- - # filters - # ms.meshing_decimation_clustering(threshold=pml.Percentage(1)) - ms.meshing_decimation_quadric_edge_collapse( - targetfacenum=int(target), optimalplacement=optimalplacement - ) - - if remesh: - # ms.apply_coord_taubin_smoothing() - ms.meshing_isotropic_explicit_remeshing( - iterations=3, targetlen=pml.Percentage(1) - ) - - # extract mesh - m = ms.current_mesh() - verts = m.vertex_matrix() - faces = m.face_matrix() - - print( - f"[INFO] mesh decimation: {_ori_vert_shape} --> {verts.shape}, {_ori_face_shape} --> {faces.shape}" - ) - - return verts, faces - - -def clean_mesh( - verts, - faces, - v_pct=1, - min_f=64, - min_d=20, - repair=True, - remesh=True, - remesh_size=0.01, -): - # verts: [N, 3] - # faces: [N, 3] - - _ori_vert_shape = verts.shape - _ori_face_shape = faces.shape - - m = pml.Mesh(verts, faces) - ms = pml.MeshSet() - ms.add_mesh(m, "mesh") # will copy! - - # filters - ms.meshing_remove_unreferenced_vertices() # verts not refed by any faces - - if v_pct > 0: - ms.meshing_merge_close_vertices( - threshold=pml.Percentage(v_pct) - ) # 1/10000 of bounding box diagonal - - ms.meshing_remove_duplicate_faces() # faces defined by the same verts - ms.meshing_remove_null_faces() # faces with area == 0 - - if min_d > 0: - ms.meshing_remove_connected_component_by_diameter( - mincomponentdiag=pml.Percentage(min_d) - ) - - if min_f > 0: - ms.meshing_remove_connected_component_by_face_number(mincomponentsize=min_f) - - if repair: - # ms.meshing_remove_t_vertices(method=0, threshold=40, repeat=True) - ms.meshing_repair_non_manifold_edges(method=0) - ms.meshing_repair_non_manifold_vertices(vertdispratio=0) - - if remesh: - # ms.apply_coord_taubin_smoothing() - ms.meshing_isotropic_explicit_remeshing( - iterations=3, targetlen=pml.AbsoluteValue(remesh_size) - ) - - # extract mesh - m = ms.current_mesh() - verts = m.vertex_matrix() - faces = m.face_matrix() - - print( - f"[INFO] mesh cleaning: {_ori_vert_shape} --> {verts.shape}, {_ori_face_shape} --> {faces.shape}" - ) - - return verts, faces diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/Makefile b/spaces/AbandonedMuse/UnlimitedMusicGen/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git a/spaces/Abhilashvj/planogram-compliance/utils/__init__.py b/spaces/Abhilashvj/planogram-compliance/utils/__init__.py deleted file mode 100644 index 70b1225074f3f32efa8c9ddac084b5d81a9b2fce..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/__init__.py +++ /dev/null @@ -1,88 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -utils/initialization -""" - -import contextlib -import platform -import threading - - -def emojis(str=""): - # Return platform-dependent emoji-safe version of string - return ( - str.encode().decode("ascii", "ignore") - if platform.system() == "Windows" - else str - ) - - -class TryExcept(contextlib.ContextDecorator): - # YOLOv5 TryExcept class. 
Usage: @TryExcept() decorator or 'with TryExcept():' context manager - def __init__(self, msg=""): - self.msg = msg - - def __enter__(self): - pass - - def __exit__(self, exc_type, value, traceback): - if value: - print(emojis(f"{self.msg}{': ' if self.msg else ''}{value}")) - return True - - -def threaded(func): - # Multi-threads a target function and returns thread. Usage: @threaded decorator - def wrapper(*args, **kwargs): - thread = threading.Thread( - target=func, args=args, kwargs=kwargs, daemon=True - ) - thread.start() - return thread - - return wrapper - - -def join_threads(verbose=False): - # Join all daemon threads, i.e. atexit.register(lambda: join_threads()) - main_thread = threading.current_thread() - for t in threading.enumerate(): - if t is not main_thread: - if verbose: - print(f"Joining thread {t.name}") - t.join() - - -def notebook_init(verbose=True): - # Check system software and hardware - print("Checking setup...") - - import os - import shutil - - from utils.general import check_font, check_requirements, is_colab - from utils.torch_utils import select_device # imports - - check_font() - - import psutil - from IPython import display # to display images and clear console output - - if is_colab(): - shutil.rmtree( - "/content/sample_data", ignore_errors=True - ) # remove colab /sample_data directory - - # System info - if verbose: - gb = 1 << 30 # bytes to GiB (1024 ** 3) - ram = psutil.virtual_memory().total - total, used, free = shutil.disk_usage("/") - display.clear_output() - s = f"({os.cpu_count()} CPUs, {ram / gb:.1f} GB RAM, {(total - free) / gb:.1f}/{total / gb:.1f} GB disk)" - else: - s = "" - - select_device(newline=False) - print(emojis(f"Setup complete ✅ {s}")) - return display diff --git a/spaces/Aditya9790/yolo7-object-tracking/detect.py b/spaces/Aditya9790/yolo7-object-tracking/detect.py deleted file mode 100644 index abcd3135103bd53210ac82a337f9a8f3ccc564af..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/detect.py +++ /dev/null @@ -1,163 +0,0 @@ -import argparse -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random - -from models.experimental import attempt_load -from utils.datasets import LoadStreams, LoadImages -from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \ - scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path -from utils.plots import plot_one_box -from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel - - -def detect(save_img=False): - source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace - save_img = not opt.nosave and not source.endswith('.txt') # save inference images - webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith( - ('rtsp://', 'rtmp://', 'http://', 'https://')) - - # Directories - save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Initialize - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - stride = int(model.stride.max()) # model stride - imgsz = check_img_size(imgsz, s=stride) 
# check img_size - - if trace: - model = TracedModel(model, device, opt.img_size) - - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = check_imshow() - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz, stride=stride) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride) - - # Get names and colors - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in names] - - # Run inference - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once - old_img_w = old_img_h = imgsz - old_img_b = 1 - - t0 = time.time() - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Warmup - if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]): - old_img_b = img.shape[0] - old_img_h = img.shape[2] - old_img_w = img.shape[3] - for i in range(3): - model(img, augment=opt.augment)[0] - - # Inference - t1 = time_synchronized() - with torch.no_grad(): # Calculating gradients would cause a GPU memory leak - pred = model(img, augment=opt.augment)[0] - t2 = time_synchronized() - - # Apply NMS - pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms) - t3 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count - else: - p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # img.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh) # label format - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or view_img: # Add bbox to image - label = f'{names[int(cls)]} {conf:.2f}' - plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=1) - - # Print time (inference + NMS) - print(f'{s}Done. 
({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS') - - # Stream results - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - print(f" The image with the result is saved in: {save_path}") - else: # 'video' or 'stream' - if vid_path != save_path: # new video - vid_path = save_path - if isinstance(vid_writer, cv2.VideoWriter): - vid_writer.release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += '.mp4' - vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer.write(im0) - - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - #print(f"Results saved to {save_dir}{s}") - - print(f'Done. ({time.time() - t0:.3f}s)') \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/methods/ButtonMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/methods/ButtonMethods.js deleted file mode 100644 index 585de5d015e63e885af5dd07571b2dea339a3f42..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dialog/methods/ButtonMethods.js +++ /dev/null @@ -1,333 +0,0 @@ -export default { - getChoice(index) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - return choicesSizer.getButton(index); - } else { - return undefined; - } - }, - - getAction(index) { - return this.childrenMap.actionsSizer.getButton(index); - }, - - getToolbar(index) { - return this.childrenMap.toolbarSizer.getButton(index); - }, - - getLeftToolbar(index) { - return this.childrenMap.leftToolbarSizer.getButton(index); - }, - - setChoiceEnable(index, enabled) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.setButtonEnable(index, enabled); - } - return this; - }, - - setActionEnable(index, enabled) { - this.childrenMap.actionsSizer.setButtonEnable(index, enabled); - return this; - }, - - setToolbarEnable(index, enabled) { - this.childrenMap.toolbarSizer.setButtonEnable(index, enabled); - return this; - }, - - setLeftToolbarEnable(index, enabled) { - this.childrenMap.leftToolbarSizer.setButtonEnable(index, enabled); - return this; - }, - - toggleChoiceEnable(index) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.toggleButtonEnable(index); - } - return this; - }, - - toggleActionEnable(index) { - this.childrenMap.actionsSizer.toggleButtonEnable(index); - return this; - }, - - toggleToolbarEnable(index) { - this.childrenMap.toolbarSizer.toggleButtonEnable(index); - return this; - }, - - toggleLeftToolbarEnable(index) { - this.childrenMap.leftToolbarSizer.toggleButtonEnable(index); - return this; - }, - - getChoiceEnable(index) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - return choicesSizer.getButtonEnable(index); - } else { - return false; - } - }, - - getActionEnable(index) { - return this.childrenMap.actionsSizer.getButtonEnable(index); - }, - - getToolbarEnable(index) { - return this.childrenMap.toolbarSizer.getButtonEnable(index); - }, - - 
getLeftToolbarEnable(index) { - return this.childrenMap.leftToolbarSizer.getButtonEnable(index); - }, - - emitChoiceClick(index) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.emitButtonClick(index); - } - return this; - }, - - emitActionClick(index) { - this.childrenMap.actionsSizer.emitButtonClick(index); - return this; - }, - - emitToolbarClick(index) { - this.childrenMap.toolbarSizer.emitButtonClick(index); - return this; - }, - - emitLeftToolbarClick(index) { - this.childrenMap.leftToolbarSizer.emitButtonClick(index); - return this; - }, - - showChoice(index) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.showButton(index); - } - return this; - }, - - showAction(index) { - this.childrenMap.actionsSizer.showButton(index); - return this; - }, - - showToolbar(index) { - this.childrenMap.toolbarSizer.showButton(index); - return this; - }, - - showLeftToolbar(index) { - this.childrenMap.leftToolbarSizer.showButton(index); - return this; - }, - - hideChoice(index) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.hideButton(index); - } - return this; - }, - - hideAction(index) { - this.childrenMap.actionsSizer.hideButton(index); - return this; - }, - - hideToolbar(index) { - this.childrenMap.toolbarSizer.hideButton(index); - return this; - }, - - hideLeftToolbar(index) { - this.childrenMap.leftToolbarSizer.hideButton(index); - return this; - }, - - addChoice(gameObject) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.addButton(gameObject); - } - return this; - }, - - addAction(gameObject) { - this.childrenMap.actionsSizer.addButton(gameObject); - return this; - }, - - addToolbar(gameObject) { - this.childrenMap.toolbarSizer.addButton(gameObject); - return this; - }, - - addLeftToolbar(gameObject) { - this.childrenMap.leftToolbarSizer.addButton(gameObject); - return this; - }, - - removeChoice(index, destroyChild) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.removeButton(index, destroyChild); - } - return this; - }, - - removeAction(index, destroyChild) { - this.childrenMap.actionsSizer.removeButton(index, destroyChild); - return this; - }, - - removeToolbar(index, destroyChild) { - this.childrenMap.toolbarSizer.removeButton(index, destroyChild); - return this; - }, - - removeLeftToolbar(index, destroyChild) { - this.childrenMap.leftToolbarSizer.removeButton(index, destroyChild); - return this; - }, - - clearChoices(destroyChild) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.clearButtons(destroyChild); - } - return this; - }, - - clearActions(destroyChild) { - this.childrenMap.actionsSizer.clearButtons(destroyChild); - return this; - }, - - clearToolbar(destroyChild) { - this.childrenMap.toolbarSizer.clearButtons(destroyChild); - return this; - }, - - clearLeftToolbar(destroyChild) { - this.childrenMap.leftToolbarSizer.clearButtons(destroyChild); - return this; - }, - - forEachChoice(callback, scope) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.forEachButtton(callback, scope); - } - return this; - }, - - forEachAction(callback, scope) { - this.childrenMap.actionsSizer.forEachButtton(callback, scope); - return this; - }, - - forEachToolbar(callback, scope) { - this.childrenMap.toolbarSizer.forEachButtton(callback, scope); - return this; - }, - - forEachLeftToolbar(callback, scope) { - 
this.childrenMap.leftToolbarSizer.forEachButtton(callback, scope); - return this; - }, - - setAllButtonsEnable(enabled) { - if (enabled === undefined) { - enabled = true; - } - - if (this.childrenMap.toolbarSizer) { - this.setToolbarEnable(enabled); - } - if (this.childrenMap.leftToolbarSizer) { - this.setLeftToolbarEnable(enabled); - } - if (this.childrenMap.actionsSizer) { - this.setActionEnable(enabled); - } - if (this.childrenMap.choicesSizer) { - this.setChoiceEnable(enabled); - } - - return this; - }, - - // Checkboxes - getChoicesButtonStates() { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - return choicesSizer.getAllButtonsState(); - } else { - return {}; - } - }, - - getChoicesButtonState(name) { - var choicesSizer = this.childrenMap.choicesSizer; - if (name === undefined) { - if (choicesSizer) { - return choicesSizer.getAllButtonsState(); - } else { - return {} - } - } else { - if (choicesSizer) { - return choicesSizer.getButtonState(name); - } else { - return false; - } - } - }, - - setChoicesButtonState(name, state) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.setButtonState(name, state); - } - return this; - }, - - clearChoicesButtonStates() { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.clearAllButtonsState(); - } - return this; - }, - - // Radio buttons - getChoicesSelectedButtonName() { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - return choicesSizer.getSelectedButtonName(); - } else { - return ''; - } - }, - - setChoicesSelectedButtonName(name) { - var choicesSizer = this.childrenMap.choicesSizer; - if (choicesSizer) { - choicesSizer.setSelectedButtonName(name); - } - return this; - }, - -}; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/methods/ConfigurationMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/methods/ConfigurationMethods.js deleted file mode 100644 index 8ec01518e198c67608cd48fbc7dd8f19d179aa24..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/methods/ConfigurationMethods.js +++ /dev/null @@ -1,37 +0,0 @@ -import ScaleMethods from '../../basesizer/ScaleMethods.js'; - -var DefaultExpandCallback = function (gameObject, duration) { - ScaleMethods.popUp.call(gameObject, duration, this.expandDirection); -}; - -var DefaultCollapseCallback = function (gameObject, duration) { - ScaleMethods.scaleDown.call(gameObject, duration, this.expandDirection) -} - -export default { - setTransitionDuration(duration) { - this.transitionDuration = duration; - - this.childTransition - .setTransitInTime(duration) - .setTransitOutTime(duration); - - return this; - }, - - setExpandCallback(callback) { - if (callback === undefined) { - callback = DefaultExpandCallback.bind(this); - } - this.childTransition.setTransitInCallback(callback); - return this; - }, - - setCollapseCallback(callback) { - if (callback === undefined) { - callback = DefaultCollapseCallback.bind(this); - } - this.childTransition.setTransitOutCallback(callback); - return this; - } -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/index.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/index.js deleted file mode 100644 index 
528b5e054152ff4dea853f58335c4d69f1b8358d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/index.js +++ /dev/null @@ -1,12 +0,0 @@ -import Maker from './Maker.js'; -import Make from './Make.js'; -import YAMLMake from './YAMLMake.js'; -import Builders from './builders/Builders.js'; - - -export { - Maker, - Make, - YAMLMake, - Builders, -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/ResolveWidth.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/ResolveWidth.js deleted file mode 100644 index 1828c1c71073936b82247a338f048a9f081a2de6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/ResolveWidth.js +++ /dev/null @@ -1,23 +0,0 @@ -import ResolveWidthBase from '../basesizer/ResolveWidth.js'; - -var ResolveWidth = function (width) { - var width = ResolveWidthBase.call(this, width); - - // Calculate proportionLength - if ((this.proportionLength === undefined) && (this.orientation === 0)) { - var remainder = width - this.childrenWidth; - if (remainder > 0) { - remainder = width - this.getChildrenWidth(false); - this.proportionLength = remainder / this.childrenProportion; - } else { - this.proportionLength = 0; - if (remainder < 0) { - // Warning - } - } - } - - return width; -} - -export default ResolveWidth; \ No newline at end of file diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/video2audio.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/video2audio.py deleted file mode 100644 index db50a5c6b62c4c1faea5fefbb078f16aa9bb7fa3..0000000000000000000000000000000000000000 --- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/video2audio.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -from concurrent.futures import ThreadPoolExecutor - -from moviepy.editor import AudioFileClip - -video_dir = "./video_data/" -audio_dir = "./raw_audio/" -filelist = list(os.walk(video_dir))[0][2] - - -def generate_infos(): - videos = [] - for file in filelist: - if file.endswith(".mp4"): - videos.append(file) - return videos - - -def clip_file(file): - my_audio_clip = AudioFileClip(video_dir + file) - my_audio_clip.write_audiofile(audio_dir + file.rstrip(".mp4") + ".wav") - - -if __name__ == "__main__": - infos = generate_infos() - with ThreadPoolExecutor(max_workers=os.cpu_count()) as executor: - executor.map(clip_file, infos) diff --git a/spaces/Alfaxad/BioGalacticModels/app.py b/spaces/Alfaxad/BioGalacticModels/app.py deleted file mode 100644 index 9fceb00c5a51a2a1245642af72ab88fad32d186c..0000000000000000000000000000000000000000 --- a/spaces/Alfaxad/BioGalacticModels/app.py +++ /dev/null @@ -1,114 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import gradio as gr - -from model_list import ModelList - -DESCRIPTION = '# Explore Biology & Biochemistry Foundation Models 🧬' -NOTES = ''' -Thanks to the following folks who have made suggestions to this list! 
-- [Shelby](https://twitter.com/shelbynewsad), author of [this nice model list](https://compoundvc.notion.site/compoundvc/474885e638e94e44a1aab4d3124e3d6a?v=299bce7af785413da4c9f36837c03aaf) -- [Valentyn Bezshapkin](https://twitter.com/valentynbez) -- [Payel Das](https://twitter.com/payel791) -- [Anthony Costa](https://twitter.com/anthonycosta) -''' -FOOTER = '''''' - -def main(): - model_list = ModelList() - - with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - - search_box = gr.Textbox( - label='Search Model Name', - placeholder= - 'You can search for titles with regular expressions. e.g. (?= 1 and sy >= 1 - return sx, sy - - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, int) for x in padding) - if len(padding) == 2: - padx, pady = padding - padding = [padx, padx, pady, pady] - padx0, padx1, pady0, pady1 = padding - return padx0, padx1, pady0, pady1 - - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - fw = f.shape[-1] - fh = f.shape[0] - with misc.suppress_tracer_warnings(): - fw = int(fw) - fh = int(fh) - misc.assert_shape(f, [fh, fw][:f.ndim]) - assert fw >= 1 and fh >= 1 - return fw, fh - -# ---------------------------------------------------------------------------- - - -def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None): - r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`. - - Args: - f: Torch tensor, numpy array, or python list of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), - `[]` (impulse), or - `None` (identity). - device: Result device (default: cpu). - normalize: Normalize the filter so that it retains the magnitude - for constant input signal (DC)? (default: True). - flip_filter: Flip the filter? (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - separable: Return a separable filter? (default: select automatically). - - Returns: - Float32 tensor of the shape - `[filter_height, filter_width]` (non-separable) or - `[filter_taps]` (separable). - """ - # Validate. - if f is None: - f = 1 - f = torch.as_tensor(f, dtype=torch.float32) - assert f.ndim in [0, 1, 2] - assert f.numel() > 0 - if f.ndim == 0: - f = f[np.newaxis] - - # Separable? - if separable is None: - separable = (f.ndim == 1 and f.numel() >= 8) - if f.ndim == 1 and not separable: - f = f.ger(f) - assert f.ndim == (1 if separable else 2) - - # Apply normalize, flip, gain, and device. - if normalize: - f /= f.sum() - if flip_filter: - f = f.flip(list(range(f.ndim))) - f = f * (gain ** (f.ndim / 2)) - f = f.to(device=device) - return f - -# ---------------------------------------------------------------------------- - - -def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Pad, upsample, filter, and downsample a batch of 2D images. - - Performs the following sequence of operations for each channel: - - 1. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 2. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 4. Downsample the image by keeping every Nth pixel (`down`). 
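# For the four-step sequence listed above, the output resolution follows the usual
# upfirdn relation. A small check of the width formula (height is analogous), using the
# padding choices that filter2d / upsample2d / downsample2d in this module derive from
# the filter size; the helper name is illustrative only.
def upfirdn_out_width(in_w, up, down, pad0, pad1, fw):
    # upsample -> pad -> 'valid' filter of width fw -> keep every `down`-th pixel
    return (in_w * up + pad0 + pad1 - fw) // down + 1

assert upfirdn_out_width(in_w=32, up=2, down=1, pad0=2, pad1=1, fw=4) == 64  # upsample2d-style
assert upfirdn_out_width(in_w=32, up=1, down=2, pad0=1, pad1=1, fw=4) == 16  # downsample2d-style
assert upfirdn_out_width(in_w=32, up=1, down=1, pad0=2, pad1=1, fw=4) == 32  # filter2d-style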
- - This sequence of operations bears close resemblance to scipy.signal.upfirdn(). - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f) - return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain) - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1): - """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - assert f.dtype == torch.float32 and not f.requires_grad - batch_size, num_channels, in_height, in_width = x.shape - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Check that upsampled buffer is not smaller than the filter. - upW = in_width * upx + padx0 + padx1 - upH = in_height * upy + pady0 + pady1 - assert upW >= f.shape[-1] and upH >= f.shape[0] - - # Upsample by inserting zeros. - x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1]) - x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1]) - x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx]) - - # Pad or crop. - x = torch.nn.functional.pad( - x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)]) - x = x[:, :, max(-pady0, 0): x.shape[2] - max(-pady1, 0), - max(-padx0, 0): x.shape[3] - max(-padx1, 0)] - - # Setup filter. - f = f * (gain ** (f.ndim / 2)) - f = f.to(x.dtype) - if not flip_filter: - f = f.flip(list(range(f.ndim))) - - # Convolve with the filter. - f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim) - if f.ndim == 4: - x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels) - else: - x = conv2d_gradfix.conv2d( - input=x, weight=f.unsqueeze(2), groups=num_channels) - x = conv2d_gradfix.conv2d( - input=x, weight=f.unsqueeze(3), groups=num_channels) - - # Downsample by throwing away pixels. 
- x = x[:, :, ::downy, ::downx] - return x - -# ---------------------------------------------------------------------------- - - -_upfirdn2d_cuda_cache = dict() - - -def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1): - """Fast CUDA implementation of `upfirdn2d()` using custom ops. - """ - # Parse arguments. - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Lookup from cache. - key = (upx, upy, downx, downy, padx0, padx1, - pady0, pady1, flip_filter, gain) - if key in _upfirdn2d_cuda_cache: - return _upfirdn2d_cuda_cache[key] - - # Forward op. - class Upfirdn2dCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, f): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - if f.ndim == 1 and f.shape[0] == 1: - # Convert separable-1 into full-1x1. - f = f.square().unsqueeze(0) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - y = x - if f.ndim == 2: - y = _plugin.upfirdn2d( - y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - else: - y = _plugin.upfirdn2d(y, f.unsqueeze( - 0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, 1.0) - y = _plugin.upfirdn2d(y, f.unsqueeze( - 1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, gain) - ctx.save_for_backward(f) - ctx.x_shape = x.shape - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - f, = ctx.saved_tensors - _, _, ih, iw = ctx.x_shape - _, _, oh, ow = dy.shape - fw, fh = _get_filter_size(f) - p = [ - fw - padx0 - 1, - iw * upx - ow * downx + padx0 - upx + 1, - fh - pady0 - 1, - ih * upy - oh * downy + pady0 - upy + 1, - ] - dx = None - df = None - - if ctx.needs_input_grad[0]: - dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=( - not flip_filter), gain=gain).apply(dy, f) - - assert not ctx.needs_input_grad[1] - return dx, df - - # Add to cache. - _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda - return Upfirdn2dCuda - -# ---------------------------------------------------------------------------- - - -def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Filter a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape matches the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. 
- """ - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + fw // 2, - padx1 + (fw - 1) // 2, - pady0 + fh // 2, - pady1 + (fh - 1) // 2, - ] - return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -# ---------------------------------------------------------------------------- - - -def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Upsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a multiple of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - upx, upy = _parse_scaling(up) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw + upx - 1) // 2, - padx1 + (fw - upx) // 2, - pady0 + (fh + upy - 1) // 2, - pady1 + (fh - upy) // 2, - ] - return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl) - -# ---------------------------------------------------------------------------- - - -def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Downsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a fraction of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the input. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. 
- """ - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw - downx + 1) // 2, - padx1 + (fw - downx) // 2, - pady0 + (fh - downy + 1) // 2, - pady1 + (fh - downy) // 2, - ] - return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -# ---------------------------------------------------------------------------- diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/__init__.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/__init__.py deleted file mode 100644 index 1cdb9e9be64cf090444bf7e85aa4c13d5c3426a0..0000000000000000000000000000000000000000 --- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from src.decoder.tensoRF_decoder import TensorVMSplit -from src.utils.registry import Registry - -DECODER_REGISTRY = Registry("DECODER") - -DECODER_REGISTRY.register(TensorVMSplit) \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py deleted file mode 100644 index 3ff3b1f2329ea1ba0eb1598692ea436106e9700c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py +++ /dev/null @@ -1,757 +0,0 @@ -# Copyright 2023 TIME Authors and The HuggingFace Team. All rights reserved." -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import copy -import inspect -import warnings -from typing import Any, Callable, Dict, List, Optional, Union - -import torch -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...image_processor import VaeImageProcessor -from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import PNDMScheduler -from ...schedulers.scheduling_utils import SchedulerMixin -from ...utils import logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -AUGS_CONST = ["A photo of ", "An image of ", "A picture of "] - - -class StableDiffusionModelEditingPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin): - r""" - Pipeline for text-to-image model editing. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
- - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - text_encoder ([`~transformers.CLIPTextModel`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer ([`~transformers.CLIPTokenizer`]): - A `CLIPTokenizer` to tokenize text. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`~transformers.CLIPFeatureExtractor`]): - A `CLIPFeatureExtractor` to extract features from generated images; used as inputs to the `safety_checker`. - with_to_k ([`bool`]): - Whether to edit the key projection matrices along with the value projection matrices. - with_augs ([`list`]): - Textual augmentations to apply while editing the text-to-image model. Set to `[]` for no augmentations. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: SchedulerMixin, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - with_to_k: bool = True, - with_augs: list = AUGS_CONST, - ): - super().__init__() - - if isinstance(scheduler, PNDMScheduler): - logger.error("PNDMScheduler for this pipeline is currently not supported.") - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - self.with_to_k = with_to_k - self.with_augs = with_augs - - # get cross-attention layers - ca_layers = [] - - def append_ca(net_): - if net_.__class__.__name__ == "CrossAttention": - ca_layers.append(net_) - elif hasattr(net_, "children"): - for net__ in net_.children(): - append_ca(net__) - - # recursively find all cross-attention layers in unet - for net in self.unet.named_children(): - if "down" in net[0]: - append_ca(net[1]) - elif "up" in net[0]: - append_ca(net[1]) - elif "mid" in net[0]: - append_ca(net[1]) - - # get projection matrices - self.ca_clip_layers = [l for l in ca_layers if l.to_v.in_features == 768] - self.projection_matrices = [l.to_v for l in self.ca_clip_layers] - self.og_matrices = [copy.deepcopy(l.to_v) for l in self.ca_clip_layers] - if self.with_to_k: - self.projection_matrices = self.projection_matrices + [l.to_k for l in self.ca_clip_layers] - self.og_matrices = self.og_matrices + [copy.deepcopy(l.to_k) for l in self.ca_clip_layers] - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. - """ - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - warnings.warn( - "The decode_latents method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor instead", - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
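        # [Editor's note - illustrative, not part of the original file.]
        # In the DDIM formulation, eta = 0 gives the deterministic DDIM sampler
        # (no noise is re-injected at each step), while eta = 1 recovers the
        # DDPM-like stochastic update. Schedulers whose `step()` signature does
        # not accept `eta` simply never receive it, as the check below shows.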
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def edit_model( - self, - source_prompt: str, - destination_prompt: str, - lamb: float = 0.1, - restart_params: bool = True, - ): - r""" - Apply model editing via closed-form solution (see Eq. 5 in the TIME [paper](https://arxiv.org/abs/2303.08084)). - - Args: - source_prompt (`str`): - The source prompt containing the concept to be edited. - destination_prompt (`str`): - The destination prompt. Must contain all words from `source_prompt` with additional ones to specify the - target edit. - lamb (`float`, *optional*, defaults to 0.1): - The lambda parameter specifying the regularization intesity. Smaller values increase the editing power. - restart_params (`bool`, *optional*, defaults to True): - Restart the model parameters to their pre-trained version before editing. This is done to avoid edit - compounding. When it is `False`, edits accumulate. - """ - - # restart LDM parameters - if restart_params: - num_ca_clip_layers = len(self.ca_clip_layers) - for idx_, l in enumerate(self.ca_clip_layers): - l.to_v = copy.deepcopy(self.og_matrices[idx_]) - self.projection_matrices[idx_] = l.to_v - if self.with_to_k: - l.to_k = copy.deepcopy(self.og_matrices[num_ca_clip_layers + idx_]) - self.projection_matrices[num_ca_clip_layers + idx_] = l.to_k - - # set up sentences - old_texts = [source_prompt] - new_texts = [destination_prompt] - # add augmentations - base = old_texts[0] if old_texts[0][0:1] != "A" else "a" + old_texts[0][1:] - for aug in self.with_augs: - old_texts.append(aug + base) - base = new_texts[0] if new_texts[0][0:1] != "A" else "a" + new_texts[0][1:] - for aug in self.with_augs: - new_texts.append(aug + base) - - # prepare input k* and v* - old_embs, new_embs = [], [] - for old_text, new_text in zip(old_texts, new_texts): - text_input = self.tokenizer( - [old_text, new_text], - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - old_emb, new_emb = text_embeddings - old_embs.append(old_emb) - new_embs.append(new_emb) - - # identify corresponding destinations for each token in old_emb - idxs_replaces = [] - for old_text, new_text in zip(old_texts, new_texts): - tokens_a = self.tokenizer(old_text).input_ids - tokens_b = self.tokenizer(new_text).input_ids - tokens_a = [self.tokenizer.encode("a ")[1] if self.tokenizer.decode(t) == "an" else t for t in tokens_a] - tokens_b = [self.tokenizer.encode("a ")[1] if self.tokenizer.decode(t) == "an" else t for t in tokens_b] - num_orig_tokens = len(tokens_a) - idxs_replace = [] - j = 0 - for i in range(num_orig_tokens): - curr_token = tokens_a[i] - while tokens_b[j] != curr_token: - j += 1 - idxs_replace.append(j) - j += 1 - while j < 77: - idxs_replace.append(j) - j += 1 - while len(idxs_replace) < 77: - idxs_replace.append(76) - idxs_replaces.append(idxs_replace) - - # prepare batch: for each pair of setences, old context and new values - contexts, valuess = [], [] - for old_emb, new_emb, idxs_replace in zip(old_embs, new_embs, idxs_replaces): - context = old_emb.detach() - values = [] - with torch.no_grad(): - for layer in self.projection_matrices: - 
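                    # [Editor's note - illustrative, not part of the original file.]
                    # Each projection layer W here is a cross-attention to_v (and,
                    # when with_to_k is set, to_k) matrix. The edit targets are the
                    # values v* = W @ e_new taken at the aligned token positions,
                    # while the keys k* are the old prompt embeddings held in
                    # `context` above; both feed the closed-form update below.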
values.append(layer(new_emb[idxs_replace]).detach()) - contexts.append(context) - valuess.append(values) - - # edit the model - for layer_num in range(len(self.projection_matrices)): - # mat1 = \lambda W + \sum{v k^T} - mat1 = lamb * self.projection_matrices[layer_num].weight - - # mat2 = \lambda I + \sum{k k^T} - mat2 = lamb * torch.eye( - self.projection_matrices[layer_num].weight.shape[1], - device=self.projection_matrices[layer_num].weight.device, - ) - - # aggregate sums for mat1, mat2 - for context, values in zip(contexts, valuess): - context_vector = context.reshape(context.shape[0], context.shape[1], 1) - context_vector_T = context.reshape(context.shape[0], 1, context.shape[1]) - value_vector = values[layer_num].reshape(values[layer_num].shape[0], values[layer_num].shape[1], 1) - for_mat1 = (value_vector @ context_vector_T).sum(dim=0) - for_mat2 = (context_vector @ context_vector_T).sum(dim=0) - mat1 += for_mat1 - mat2 += for_mat2 - - # update projection matrix - self.projection_matrices[layer_num].weight = torch.nn.Parameter(mat1 @ torch.inverse(mat2)) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. 
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - ```py - >>> import torch - >>> from diffusers import StableDiffusionModelEditingPipeline - - >>> model_ckpt = "CompVis/stable-diffusion-v1-4" - >>> pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt) - - >>> pipe = pipe.to("cuda") - - >>> source_prompt = "A pack of roses" - >>> destination_prompt = "A pack of blue roses" - >>> pipe.edit_model(source_prompt, destination_prompt) - - >>> prompt = "A field of roses" - >>> image = pipe(prompt).images[0] - ``` - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images and the - second element is a list of `bool`s indicating whether the corresponding generated image contains - "not-safe-for-work" (nsfw) content. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds - ) - - # 2. 
Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - lora_scale=text_encoder_lora_scale, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, 
nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_note_seq_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_note_seq_objects.py deleted file mode 100644 index c02d0b015aedc37c01fb3b843bc79547aae5da68..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_note_seq_objects.py +++ /dev/null @@ -1,17 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. -from ..utils import DummyObject, requires_backends - - -class MidiProcessor(metaclass=DummyObject): - _backends = ["note_seq"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["note_seq"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["note_seq"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["note_seq"]) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py deleted file mode 100644 index 519c4dbacb1a876dcd973f2a82ddeef98787619d..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fp16/retinanet_r50_fpn_fp16_1x_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -# fp16 settings -fp16 = dict(loss_scale=512.) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context.py deleted file mode 100644 index 3a46c28608add5325ec1decf33624c3c00bff1d7..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_480x480_80k_pascal_context.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AndySAnker/DeepStruc/tools/data_loader.py b/spaces/AndySAnker/DeepStruc/tools/data_loader.py deleted file mode 100644 index 1813726813c40f106d45c068eab33cc811168eba..0000000000000000000000000000000000000000 --- a/spaces/AndySAnker/DeepStruc/tools/data_loader.py +++ /dev/null @@ -1,236 +0,0 @@ -import os, torch, h5py, random, sys, shutil, yaml -from pytorch_lightning.callbacks import ModelCheckpoint -import numpy as np -from torch_geometric.data import Data, DataLoader -from tqdm import tqdm -import pytorch_lightning as pl - - -class graph_loader(pl.LightningDataModule): - def __init__(self, data_dir, cluster_size=None, num_files=None, batchsize=1, shuffle=True, num_workers=0): - super(graph_loader, self).__init__() - """ - - Parameters - ---------- - data_dir - num_files - batchsize - shuffle - - Returns - ------- - - """ - self.batchsize = int(batchsize) - self.num_workers = num_workers - self.files_sorted = sorted(os.listdir(data_dir)) - self.cluster_size = cluster_size - files = self.files_sorted.copy() - # files = [file for file in files if 'FCC' in file] - - if shuffle == True: - random.shuffle(files) - if files != None: - files = files[:num_files] - else: - pass - - nTrain = int(0.6 * len(files)) - nValid = int((len(files) - nTrain) / 2) - nTest = len(files) - (nTrain + nValid) - - print('\nBatch size: 
{}'.format(batchsize)) - print('Total number of graphs {}.'.format(len(files))) - print('\tTraining files:', nTrain) - print('\tValidation files:', nValid) - print('\tTest files:', nTest, '\n') - - self.trSamples, self.vlSamples, self.teSamples = list(), list(), list() - print('Loading graphs:') - - for idx in range(len(files)): - h5f = h5py.File(data_dir + '/' + files[idx], 'r') - b = h5f['Node Feature Matrix'][:] - h5f.close() - - if self.cluster_size == None: - self.cluster_size = len(b) - elif len(b) > self.cluster_size: - self.cluster_size = len(b) - - largest_x_dist, largest_y_dist, largest_z_dist, edge_f_max = 0, 0, 0, 0 - for idx in range(nTrain): - h5f = h5py.File(data_dir + '/' + files[idx], 'r') - a = h5f['Edge Feature Matrix'][:] - b = h5f['Node Feature Matrix'][:] - - h5f.close() - - diff_ph = abs(np.amin(b, axis=0)) + np.amax(b, axis=0) - if largest_x_dist < diff_ph[0]: - largest_x_dist = diff_ph[0] - if largest_y_dist < diff_ph[1]: - largest_y_dist = diff_ph[1] - if largest_z_dist < diff_ph[2]: - largest_z_dist = diff_ph[2] - if np.amax(a) > edge_f_max: - edge_f_max = np.amax(a) - - self.largest_x_dist = largest_x_dist - self.largest_y_dist = largest_y_dist - self.largest_z_dist = largest_z_dist - - for idx in tqdm(range(len(files))): - h5f = h5py.File(data_dir + '/' + files[idx], 'r') - a = h5f['Edge Feature Matrix'][:] # todo: norm this - b = h5f['Node Feature Matrix'][:] - c = h5f['Edge Directions'][:] - d = h5f['PDF label'][:] - h5f.close() - - a /= edge_f_max - min_vals = np.amin(b, axis=0) - if min_vals[0] < 0.0: # Make all coordinates positive - b[:, 0] -= min_vals[0] - if min_vals[1] < 0.0: # Make all coordinates positive - b[:, 1] -= min_vals[1] - if min_vals[2] < 0.0: # Make all coordinates positive - b[:, 2] -= min_vals[2] - - b[:, 0] /= largest_x_dist - b[:, 1] /= largest_y_dist - b[:, 2] /= largest_z_dist - - cord_ph = np.zeros((self.cluster_size, np.shape(b)[1])) - 1 - cord_ph[:np.shape(b)[0]] = b - - d /= np.amax(d) # Standardize PDF - - pdf = torch.tensor([d], dtype=torch.float) - x = torch.tensor(b, dtype=torch.float) - y = torch.tensor([cord_ph], dtype=torch.float) - edge_index = torch.tensor(c, dtype=torch.long) - edge_attr = torch.tensor(a, dtype=torch.float) - name_idx = torch.tensor(self.files_sorted.index(files[idx]), dtype=torch.int16) - - if idx < nTrain: - self.trSamples.append( - tuple((Data(x=x, y=y, edge_index=edge_index, edge_attr=edge_attr), pdf.T, name_idx))) - elif idx < nTrain + nValid: - self.vlSamples.append( - tuple((Data(x=x, y=y, edge_index=edge_index, edge_attr=edge_attr), pdf.T, name_idx))) - else: - self.teSamples.append( - tuple((Data(x=x, y=y, edge_index=edge_index, edge_attr=edge_attr), pdf.T, name_idx))) - - def train_dataloader(self): - return DataLoader(self.trSamples, batch_size=self.batchsize, shuffle=True, num_workers=self.num_workers) - - def val_dataloader(self): - return DataLoader(self.vlSamples, batch_size=self.batchsize, num_workers=self.num_workers) - - def test_dataloader(self): - return DataLoader(self.teSamples, batch_size=self.batchsize, num_workers=self.num_workers) - - -def save_xyz_file(save_dir, cords, file_name, xyz_scale=[1,1,1]): - - cords = [xyz for xyz in cords if np.mean(xyz) >= -0.2] - cords = np.array(cords) - cords[:,0] -= cords[:,0].mean() - cords[:,1] -= cords[:,1].mean() - cords[:,2] -= cords[:,2].mean() - these_cords = [] - for count, xyz in enumerate(cords): - if count == 0: - these_cords.append(['{:d}'.format(len(cords))]) - these_cords.append(['']) - - these_cords.append(['W {:.4f} {:.4f} 
{:.4f}'.format(xyz[0]*xyz_scale[0], xyz[1]*xyz_scale[1], xyz[2]*xyz_scale[2])]) - - np.savetxt(save_dir + '/{}.xyz'.format(file_name), these_cords, fmt='%s') - - return these_cords - - -def folder_manager(input_dict, model_arch): - this_trainer = None - epoch = input_dict['epochs'] - if not os.path.isdir(input_dict['save_dir']): - os.mkdir(input_dict['save_dir']) - os.mkdir(input_dict['save_dir'] + '/models') - shutil.copy2('train.py', input_dict['save_dir'] + '/train.py') - shutil.copy2('./tools/data_loader.py', input_dict['save_dir'] + '/data_loader.py') - shutil.copy2('./tools/module.py', input_dict['save_dir'] + '/module.py') - os.mkdir(input_dict['save_dir'] + '/prior') - os.mkdir(input_dict['save_dir'] + '/posterior') - else: - shutil.copy2('train.py', input_dict['save_dir'] + '/train.py') - shutil.copy2('./tools/data_loader.py', input_dict['save_dir'] + '/data_loader.py') - shutil.copy2('./tools/module.py', input_dict['save_dir'] + '/module.py') - - if input_dict['load_trainer']: - best_model = sorted(os.listdir(input_dict['save_dir'] + '/models')) - print(f'\nUsing {best_model[0]} as starting model!\n') - this_trainer = input_dict['save_dir'] + '/models/' + best_model[0] - #input_dict = yaml.load(f'{input_dict["save_dir"]}/input_dict.yaml', Loader=yaml.FullLoader) - - try: - with open(f'{input_dict["save_dir"]}/input_dict.yaml') as file: - input_dict = yaml.full_load(file) - input_dict['load_trainer'] = True - input_dict['epochs'] = epoch - with open(f'{input_dict["save_dir"]}/model_arch.yaml') as file: - model_arch = yaml.full_load(file) - except FileNotFoundError: # todo: transition - need to be deleted at some point - with open(f'{input_dict["save_dir"]}/input_dict.yaml', 'w') as outfile: - yaml.dump(input_dict, outfile, allow_unicode=True, default_flow_style=False) - - with open(f'{input_dict["save_dir"]}/model_arch.yaml', 'w') as outfile: - yaml.dump(model_arch, outfile, allow_unicode=True, default_flow_style=False) - else: - with open(f'{input_dict["save_dir"]}/input_dict.yaml', 'w') as outfile: - yaml.dump(input_dict, outfile, allow_unicode=True, default_flow_style=False) - - with open(f'{input_dict["save_dir"]}/model_arch.yaml', 'w') as outfile: - yaml.dump(model_arch, outfile, allow_unicode=True, default_flow_style=False) - return this_trainer, input_dict, model_arch - - -def get_callbacks(save_dir): - checkpoint_callback_tot = ModelCheckpoint( - monitor='vld_tot', - dirpath=save_dir + '/models', - filename='model-{vld_tot:.5f}-{beta:.3f}-{vld_rec_pdf:.5f}-{epoch:010d}', - save_top_k=5, - mode='min', - save_last=True, - ) - - checkpoint_callback_rec = ModelCheckpoint( - monitor='vld_rec', - dirpath=save_dir + '/models', - filename='model-{vld_rec:.5f}-{beta:.3f}-{vld_rec_pdf:.5f}-{vld_tot:.5f}-{epoch:010d}', - save_top_k=5, - mode='min', - ) - - checkpoint_callback_kld = ModelCheckpoint( - monitor='vld_kld', - dirpath=save_dir + '/models', - filename='model-{vld_kld:.5f}-{beta:.3f}-{vld_rec_pdf:.5f}-{vld_tot:.5f}-{epoch:010d}', - save_top_k=5, - mode='min', - ) - - checkpoint_callback_vld_rec_pdf = ModelCheckpoint( - monitor='vld_rec_pdf', - dirpath=save_dir + '/models', - filename='model-{vld_rec_pdf:.5f}-{beta:.3f}-{vld_tot:.5f}-{epoch:010d}', - save_top_k=5, - mode='min', - ) - - return [checkpoint_callback_tot, checkpoint_callback_rec, checkpoint_callback_kld, - checkpoint_callback_vld_rec_pdf] diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/script.py 
b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/script.py deleted file mode 100644 index 8bc26315cc60882406343c92f8f6368f7e9239aa..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/script.py +++ /dev/null @@ -1,112 +0,0 @@ -import base64 -import re -import time -from functools import partial -from io import BytesIO - -import gradio as gr -import torch - -from extensions.multimodal.multimodal_embedder import MultimodalEmbedder -from modules import shared -from modules.logging_colors import logger - -params = { - "add_all_images_to_prompt": False, - # device to run vision encoder on - "vision_device": None, - # bits to load vision encoder in, either 16 or 32 - "vision_bits": 32, - # device to run multimodal projector on - "projector_device": None, - # multimodal projector bits, either 32 or 16 - "projector_bits": 32 -} - - -# If 'state' is True, will hijack the next chat generation -input_hijack = { - 'state': False, - 'value': ["", ""] -} - - -# initialized in ui, so that params are loaded from settings -multimodal_embedder: MultimodalEmbedder = None - - -def chat_input_modifier(text, visible_text, state): - global input_hijack - if input_hijack['state']: - input_hijack['state'] = False - return input_hijack['value'](text, visible_text) - else: - return text, visible_text - - -def add_chat_picture(picture, text, visible_text): - # resize the image, so that shortest edge is at least 224 (size for CLIP), and at most 300 (to keep history manageable) - max_hw, min_hw = max(picture.size), min(picture.size) - aspect_ratio = max_hw / min_hw - shortest_edge = int(max(300 / aspect_ratio, 224)) - longest_edge = int(shortest_edge * aspect_ratio) - w = shortest_edge if picture.width < picture.height else longest_edge - h = shortest_edge if picture.width >= picture.height else longest_edge - picture = picture.resize((w, h)) - - buffer = BytesIO() - picture.save(buffer, format="JPEG") - img_str = base64.b64encode(buffer.getvalue()).decode('utf-8') - image = f'' - - if '' in text: - text = text.replace('', image) - else: - text = text + '\n' + image - - if visible_text == '' or visible_text is None: - visible_text = text - elif '' in visible_text: - visible_text = visible_text.replace('', image) - else: - visible_text = visible_text + '\n' + image - - return text, visible_text - - -def custom_tokenized_length(prompt): - return multimodal_embedder.len_in_tokens(prompt) - - -def tokenizer_modifier(state, prompt, input_ids, input_embeds): - global params - start_ts = time.time() - image_match = re.search(r'', prompt) - - if image_match is None: - return prompt, input_ids, input_embeds - - prompt, input_ids, input_embeds, total_embedded = multimodal_embedder.forward(prompt, state, params) - logger.info(f'Embedded {total_embedded} image(s) in {time.time()-start_ts:.2f}s') - return (prompt, - input_ids.unsqueeze(0).to(shared.model.device, dtype=torch.int64), - input_embeds.unsqueeze(0).to(shared.model.device, dtype=shared.model.dtype)) - - -def ui(): - global multimodal_embedder - multimodal_embedder = MultimodalEmbedder(params) - with gr.Column(): - picture_select = gr.Image(label='Send a picture', type='pil') - # The models don't seem to deal well with multiple images - single_image_checkbox = gr.Checkbox(False, label='Embed all images, not only the last one') - # Prepare the input hijack - picture_select.upload( - lambda picture: input_hijack.update({"state": True, "value": partial(add_chat_picture, 
picture)}), - [picture_select], - None - ) - picture_select.clear(lambda: input_hijack.update({"state": False, "value": ["", ""]}), None, None) - single_image_checkbox.change(lambda x: params.update({"add_all_images_to_prompt": x}), single_image_checkbox, None) - shared.gradio['Generate'].click(lambda: None, None, picture_select) - shared.gradio['textbox'].submit(lambda: None, None, picture_select) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/__init__.py deleted file mode 100644 index 3364d40997447a4ec15ca7a525a4d0e92ab211bd..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Uniformer -# From https://github.com/Sense-X/UniFormer -# # Apache-2.0 license - -import os - -from annotator.uniformer.mmseg.apis import init_segmentor, inference_segmentor, show_result_pyplot -from annotator.uniformer.mmseg.core.evaluation import get_palette -from annotator.util import annotator_ckpts_path - - -checkpoint_file = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/upernet_global_small.pth" - - -class UniformerDetector: - def __init__(self): - modelpath = os.path.join(annotator_ckpts_path, "upernet_global_small.pth") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(checkpoint_file, model_dir=annotator_ckpts_path) - config_file = os.path.join(os.path.dirname(annotator_ckpts_path), "uniformer", "exp", "upernet_global_small", "config.py") - self.model = init_segmentor(config_file, modelpath).cuda() - - def __call__(self, img): - result = inference_segmentor(self.model, img) - res_img = show_result_pyplot(self.model, img, result, get_palette('ade'), opacity=1) - return res_img diff --git a/spaces/ArdaSaygan/PollGeneratorApp/app.py b/spaces/ArdaSaygan/PollGeneratorApp/app.py deleted file mode 100644 index df6ae2bf2b6468bec19278477f56e8672e5da426..0000000000000000000000000000000000000000 --- a/spaces/ArdaSaygan/PollGeneratorApp/app.py +++ /dev/null @@ -1,60 +0,0 @@ -from email import message -import gradio as gr -import os -import openai -from create_poll import create_poll -from utils import GPTCompletion -from dotenv import load_dotenv - -gr.close_all() - -load_dotenv() - -openai.api_key = os.environ["OPENAI_API_KEY"] - - - -def chatWithGPT(chatHistory): - completion = GPTCompletion(system="You are an AI chatting with a human.", max_tokens=2048, temperature=1.5) - gptResponse = completion.chatComplete(chatHistory) - chatHistory[-1][1] = gptResponse - return chatHistory - -with gr.Blocks() as demo: - chatHistory = gr.State(value = []) - - def generateResponse(message, chatHistory): - completion = GPTCompletion(system="You are an AI chatting with a human.", max_tokens=2048, temperature=1.5) - gptResponse = completion.chatComplete(chatHistory,message) - chatHistory.append((message, gptResponse)) - return chatHistory - - def pollinize(chatHistory): - - chatList = [] - for log in chatHistory: - chatList.append("User: " + log[0]) - chatList.append("AI: " + log[1]) - chatString = "\n".join(chatList) - - return create_poll(chatString, openai.api_key) - - def uploadApi(apikey): - openai.api_key = apikey - - gr.Markdown("This little app is a demonstration of how LLMs can be used to create Polls from a chat. To give it a try, discuss a topic with ChatGPT first and then push Generate Poll button. 
Poll question will be generated on the context of the chat.") - - chatbot = gr.Chatbot().style(height=460) - input = gr.Textbox(label="Messeage") - nextBtn = gr.Button("Send message") - nextBtn.click(generateResponse, [input, chatHistory], chatbot, scroll_to_output=True, show_progress=True) - - debatePoll = gr.Textbox(label="Poll") - pollinizeButton = gr.Button("Create a poll") - pollinizeButton.click(pollinize,chatHistory, debatePoll, scroll_to_output=True, show_progress=True) - - apikey = gr.Textbox(label="API Key") - apiUpload = gr.Button("Upload custom api key") - apiUpload.click(uploadApi, apikey, None, scroll_to_output=True, show_progress=True) - -demo.launch() \ No newline at end of file diff --git a/spaces/Arnx/MusicGenXvAKN/tests/data/test_audio_dataset.py b/spaces/Arnx/MusicGenXvAKN/tests/data/test_audio_dataset.py deleted file mode 100644 index b69c9c397830738b73d6c229009f84b867cda801..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def _create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, - sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. 
for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - try: - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - assert False - except AssertionError: - assert True - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/util.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/util.py deleted file mode 100644 index 34ce092c6d08d9cdc2704840b7539de7b5ae1dcc..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/util.py +++ /dev/null @@ -1,235 +0,0 @@ -# util.py -import warnings -import types -import collections -import itertools -from functools import lru_cache -from typing import List, Union, Iterable - -_bslash = chr(92) - - -class __config_flags: - """Internal class for defining compatibility and debugging flags""" - - _all_names: List[str] = [] - _fixed_names: List[str] = [] - _type_desc = "configuration" - - @classmethod - def _set(cls, dname, value): - if dname in cls._fixed_names: - warnings.warn( - "{}.{} {} is {} and cannot be overridden".format( - cls.__name__, - dname, - cls._type_desc, - str(getattr(cls, dname)).upper(), - ) - ) - return - if dname in cls._all_names: - setattr(cls, dname, value) - else: - raise ValueError("no such {} {!r}".format(cls._type_desc, dname)) - - enable = classmethod(lambda cls, name: cls._set(name, True)) - disable = classmethod(lambda cls, name: cls._set(name, False)) - - -@lru_cache(maxsize=128) -def col(loc: int, strg: str) -> int: - """ - Returns current column within a string, counting newlines as line separators. - The first column is number 1. - - Note: the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See - :class:`ParserElement.parseString` for more - information on parsing strings containing ```` s, and suggested - methods to maintain a consistent view of the parsed string, the parse - location, and line and column positions within the parsed string. 
- """ - s = strg - return 1 if 0 < loc < len(s) and s[loc - 1] == "\n" else loc - s.rfind("\n", 0, loc) - - -@lru_cache(maxsize=128) -def lineno(loc: int, strg: str) -> int: - """Returns current line number within a string, counting newlines as line separators. - The first line is number 1. - - Note - the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See :class:`ParserElement.parseString` - for more information on parsing strings containing ```` s, and - suggested methods to maintain a consistent view of the parsed string, the - parse location, and line and column positions within the parsed string. - """ - return strg.count("\n", 0, loc) + 1 - - -@lru_cache(maxsize=128) -def line(loc: int, strg: str) -> str: - """ - Returns the line of text containing loc within a string, counting newlines as line separators. - """ - last_cr = strg.rfind("\n", 0, loc) - next_cr = strg.find("\n", loc) - return strg[last_cr + 1 : next_cr] if next_cr >= 0 else strg[last_cr + 1 :] - - -class _UnboundedCache: - def __init__(self): - cache = {} - cache_get = cache.get - self.not_in_cache = not_in_cache = object() - - def get(_, key): - return cache_get(key, not_in_cache) - - def set_(_, key, value): - cache[key] = value - - def clear(_): - cache.clear() - - self.size = None - self.get = types.MethodType(get, self) - self.set = types.MethodType(set_, self) - self.clear = types.MethodType(clear, self) - - -class _FifoCache: - def __init__(self, size): - self.not_in_cache = not_in_cache = object() - cache = collections.OrderedDict() - cache_get = cache.get - - def get(_, key): - return cache_get(key, not_in_cache) - - def set_(_, key, value): - cache[key] = value - while len(cache) > size: - cache.popitem(last=False) - - def clear(_): - cache.clear() - - self.size = size - self.get = types.MethodType(get, self) - self.set = types.MethodType(set_, self) - self.clear = types.MethodType(clear, self) - - -class LRUMemo: - """ - A memoizing mapping that retains `capacity` deleted items - - The memo tracks retained items by their access order; once `capacity` items - are retained, the least recently used item is discarded. 
- """ - - def __init__(self, capacity): - self._capacity = capacity - self._active = {} - self._memory = collections.OrderedDict() - - def __getitem__(self, key): - try: - return self._active[key] - except KeyError: - self._memory.move_to_end(key) - return self._memory[key] - - def __setitem__(self, key, value): - self._memory.pop(key, None) - self._active[key] = value - - def __delitem__(self, key): - try: - value = self._active.pop(key) - except KeyError: - pass - else: - while len(self._memory) >= self._capacity: - self._memory.popitem(last=False) - self._memory[key] = value - - def clear(self): - self._active.clear() - self._memory.clear() - - -class UnboundedMemo(dict): - """ - A memoizing mapping that retains all deleted items - """ - - def __delitem__(self, key): - pass - - -def _escape_regex_range_chars(s: str) -> str: - # escape these chars: ^-[] - for c in r"\^-[]": - s = s.replace(c, _bslash + c) - s = s.replace("\n", r"\n") - s = s.replace("\t", r"\t") - return str(s) - - -def _collapse_string_to_ranges( - s: Union[str, Iterable[str]], re_escape: bool = True -) -> str: - def is_consecutive(c): - c_int = ord(c) - is_consecutive.prev, prev = c_int, is_consecutive.prev - if c_int - prev > 1: - is_consecutive.value = next(is_consecutive.counter) - return is_consecutive.value - - is_consecutive.prev = 0 - is_consecutive.counter = itertools.count() - is_consecutive.value = -1 - - def escape_re_range_char(c): - return "\\" + c if c in r"\^-][" else c - - def no_escape_re_range_char(c): - return c - - if not re_escape: - escape_re_range_char = no_escape_re_range_char - - ret = [] - s = "".join(sorted(set(s))) - if len(s) > 3: - for _, chars in itertools.groupby(s, key=is_consecutive): - first = last = next(chars) - last = collections.deque( - itertools.chain(iter([last]), chars), maxlen=1 - ).pop() - if first == last: - ret.append(escape_re_range_char(first)) - else: - sep = "" if ord(last) == ord(first) + 1 else "-" - ret.append( - "{}{}{}".format( - escape_re_range_char(first), sep, escape_re_range_char(last) - ) - ) - else: - ret = [escape_re_range_char(c) for c in s] - - return "".join(ret) - - -def _flatten(ll: list) -> list: - ret = [] - for i in ll: - if isinstance(i, list): - ret.extend(_flatten(i)) - else: - ret.append(i) - return ret diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/resolvelib/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/resolvelib/__init__.py deleted file mode 100644 index d92acc7bedfc5c7c05130986a256e610640582e5..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/resolvelib/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -__all__ = [ - "__version__", - "AbstractProvider", - "AbstractResolver", - "BaseReporter", - "InconsistentCandidate", - "Resolver", - "RequirementsConflicted", - "ResolutionError", - "ResolutionImpossible", - "ResolutionTooDeep", -] - -__version__ = "1.0.1" - - -from .providers import AbstractProvider, AbstractResolver -from .reporters import BaseReporter -from .resolvers import ( - InconsistentCandidate, - RequirementsConflicted, - ResolutionError, - ResolutionImpossible, - ResolutionTooDeep, - Resolver, -) diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/version.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/version.py deleted file mode 100644 index 3ced3581bb601ae91b1e1da4b8f4f520855a065e..0000000000000000000000000000000000000000 --- 
a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.2.1" diff --git a/spaces/AzinZ/vitscn/modules.py b/spaces/AzinZ/vitscn/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/AzinZ/vitscn/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - 
x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in 
zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, 
x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Fifa 2022 Apk.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Fifa 2022 Apk.md deleted file mode 100644 index dbf161ea55699c03812a3f6aa2b5486ddb01d914..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar Fifa 2022 Apk.md +++ /dev/null @@ -1,45 +0,0 @@ -
    -

How to Download FIFA 2022 APK

    -

If you are a fan of football games, you may be wondering how to download FIFA 2022 APK on your Android device. FIFA 2022 is the latest installment in the popular FIFA series, developed by EA Sports. It is one of the most anticipated games of the year, with new gameplay features, modes, graphics, and more. In this article, we will show you how to download FIFA 2022 APK safely and easily, as well as some of its best features and tips for playing the game.

    -

how to download fifa 2022 apk


    Download File ---> https://bltlly.com/2v6KiU



    -

FIFA 2022 Features

    -

FIFA 2022 is not just a re-skin of its predecessor. It is a completely new game that brings every match in every mode even closer to the real thing. Here are some of the features that make FIFA 2022 stand out from other football games:

    -
      -
• HyperMotion Technology: This is a new gameplay technology that uses real-world data and machine learning to create realistic animations, movements, and reactions for every player on the pitch. It also enables advanced team tactics and formations that adapt to the situation.
• -
• World Cup Mode: This is the only licensed FIFA World Cup 2022 mobile game where you can replay the official tournament with any of the 32 qualified nations. You can also play in authentic World Cup stadiums, kits, badges, and balls, with localized commentary.
• -
• Icons and Heroes: You can build your ultimate team with more than 100 football icons and heroes, including legends such as Paolo Maldini, Ronaldinho, Kylian Mbappé, Christian Pulisic, Vinicius Jr, and Son Heung-min. You can also level your team up from fan favourite to UEFA Champions League contender.
• -
• Next-Level Football Simulation: You can experience new and improved football stadiums with realistic sounds and visual effects, at up to 60 fps on supported devices. You can also enjoy real-time 11v11 gameplay, with authentic football action and physics.
    • - -
    -

FIFA 2022 System Requirements

    -

Before you download FIFA 2022 APK, you need to make sure that your Android device meets the minimum or recommended system requirements for the game. Here are the system requirements for FIFA 2022 APK:

| Minimum requirements | Recommended requirements |
| --- | --- |
| OS: 64-bit Android 6.0 or higher | OS: 64-bit Android 8.0 or higher |
| Processor: Athlon X4 880K @4GHz or equivalent | Processor: FX-8150 @3.6GHz or equivalent |
| Memory: 8 GB | Memory: 8 GB |
| Graphics card: Radeon HD 7850 or equivalent | Graphics card: Radeon R9 270x or equivalent |
| Free disk space: 50 GB | Free disk space: 50 GB |
    -

FIFA 2022 Download Tips

    -

Now that you know what FIFA 2022 APK is and what features and system requirements it has, you may be eager to download it on your Android device. However, there are a few things you need to be careful about when downloading an APK file from the internet. Here are some tips to help you download FIFA 2022 APK safely and easily:

    -
      -
1. Find a reliable source: Not all APK files are safe to download and install on your device. Some of them may contain malware, viruses, or other harmful software that can damage your device or steal your personal information. Therefore, you should always download APK files from trusted and reputable sources, such as the official EA Sports website, the Google Play Store, or other verified third-party websites. You can also check the reviews and ratings of the APK file before downloading it to see if other users have had problems with it; a short checksum-verification sketch follows this list.
    2. - -
3. Avoid common errors: Sometimes you may run into errors or problems when downloading or installing an APK file. For example, you might get a message saying that the app is not compatible with your device, that there is not enough storage space on your device, or that the app has stopped working. To avoid these errors, make sure that your device meets the system requirements for FIFA 2022 APK, that you have enough free space on your device, and that you have updated your device software to the latest version. You should also clear the app's cache and data if it crashes or freezes.
    4. -
    -
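To go one step further than reviews and ratings, you can compare a downloaded file's hash against a checksum published by the source, when one is available. The original guide does not name a tool for this, so the following Python sketch is purely illustrative; the file name and expected digest are placeholders, not values from the article.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: replace them with your actual file name and the
# checksum published by the site you downloaded from, if it provides one.
downloaded_file = "fifa2022.apk"
expected_sha256 = "0" * 64  # placeholder, not a real checksum

actual = sha256_of(downloaded_file)
print("checksum matches" if actual == expected_sha256 else f"MISMATCH: {actual}")
```

If the site does not publish a checksum, the same script can still be used to confirm that copies downloaded from two different mirrors are identical.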

FIFA 2022 FAQs

    -

Here are some of the most frequently asked questions about FIFA 2022 APK:

    -

    -
    -
Q: Is FIFA 2022 APK free to download and play?
-
A: Yes, FIFA 2022 APK is free to download and play on your Android device. However, some game features and items may require purchases with real money.
-
Q: Do I need an internet connection to play FIFA 2022 APK?
-
A: Yes, you need a stable internet connection to play FIFA 2022 APK online with other players or to access some of the game modes and features.
-
Q: How can I update FIFA 2022 APK to the latest version?
-
A: You can update FIFA 2022 APK to the latest version by downloading and installing the new APK file from the same source you downloaded it from. Alternatively, you can enable automatic updates in your device settings or in the app settings.
-
Q: How can I transfer my FIFA 2022 progress and data from one device to another?
-
A: You can transfer your FIFA 2022 progress and data from one device to another by signing in with the same EA account on both devices. You can also use cloud storage or backup options in the app settings.
-
Q: How can I contact EA Sports for support or feedback about FIFA 2022 APK?
    - -
    -

Conclusion

    -

FIFA 2022 APK is a must-have game for any football fan who wants to enjoy a realistic and immersive football simulation on their Android device. It offers new and improved gameplay features, modes, graphics, and more that will keep you hooked for hours. To download FIFA 2022 APK safely and easily, just follow the simple tips and tricks we have shared in this article. We hope this article has helped you learn how to download FIFA 2022 APK and enjoy playing it on your device. If you have any questions or comments, feel free to leave them below.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Fifa Street 4 Pc Bagas31.md b/spaces/Benson/text-generation/Examples/Descargar Fifa Street 4 Pc Bagas31.md deleted file mode 100644 index 3191c75a45422f7a8d902948f915e255902c1268..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Fifa Street 4 Pc Bagas31.md +++ /dev/null @@ -1,155 +0,0 @@ - -

Download FIFA Street 4 PC Bagas31: A Guide to Enjoying Street Football on Your Computer

    -

Do you love football but want to experience it in a different way? Do you want to show off your skills and tricks in street venues around the world? Do you want to play with your favourite players and teams in a fun, arcade-style game? If you answered yes to any of these questions, then you should try FIFA Street 4, a game that lets you enjoy street football on your computer.

    -

download fifa street 4 pc bagas31


    DOWNLOAD ··· https://bltlly.com/2v6JFm



    -

What Is FIFA Street 4?

    -

FIFA Street 4 is a game developed by EA Sports and released in 2012 for PlayStation 3 and Xbox 360. It is the fourth installment in the FIFA Street series, which focuses on street football rather than traditional football. The game features more than 50 different locations, from Rio de Janeiro to London, where you can play with various modes and rules. You can also customize your own team and player, and unlock new items and skills as you progress.

    -

The game has several features that make it unique and exciting, such as:

    -
      -
• The street ball control system, which lets you perform more realistic and fluid dribbling, passing, and shooting.
• -
• The World Tour mode, which lets you create your own team and compete in tournaments around the world.
• -
• The Online Seasons mode, which lets you play online matches against other players and climb through divisions.
• -
• The Street Network mode, which lets you share your achievements, videos, and photos with your friends and rivals.
• -
• The legendary players mode, which lets you play with or against some of the best footballers of all time, such as Pelé, Zidane, or Messi.
    • -
    -

Why Download FIFA Street 4 PC Bagas31?

    - -

By downloading FIFA Street 4 PC Bagas31, you can enjoy several benefits, such as:

    -
      -
• You can play FIFA Street 4 on your PC without needing a console or an emulator.
• -
• You can save money by not buying a console or a game disc.
• -
• You can play FIFA Street 4 with better graphics and performance than on a console.
• -
• You can use a keyboard and mouse or a controller to play FIFA Street 4.
• -
• You can access all of FIFA Street 4's features and modes without restrictions or limitations.
    • -
    -

How to Download FIFA Street 4 PC Bagas31?

    -

Downloading FIFA Street 4 PC Bagas31 is quick and simple. Just follow these steps:

    -

    -
      -
1. Go to [1](https://www.alvindayu.com/2018/02/download-fifa-street-4-pc-full-version.html) and click the download link at the bottom of the page.
2. Wait for the download to finish. The file is about 4.8 GB, so it may take some time depending on your internet speed.
3. Extract the downloaded file using WinRAR or 7-Zip (a scripted extraction sketch follows these steps). You will get a folder named FIFA Street 4 PC Bagas31.
4. Open the folder and run the setup.exe file as administrator. Follow the on-screen instructions to install the game.
5. After the installation has finished, launch the game from the desktop shortcut or the Start menu.
6. Enjoy playing FIFA Street 4 PC Bagas31!
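Step 3 above only says to use WinRAR or 7-Zip. If you prefer to script that step, the sketch below shows one possible way to drive the 7-Zip command-line tool from Python; the archive name, the destination folder, and the assumption that `7z` is on your PATH are mine, not part of the original guide.

```python
import subprocess
from pathlib import Path

# Hypothetical names - adjust them to the archive you actually downloaded
# and to wherever you want the extracted game files to live.
archive = Path("FIFA Street 4 PC Bagas31.zip")
dest = Path("FIFA Street 4 PC Bagas31")
dest.mkdir(exist_ok=True)

# "x" extracts with full paths, -o<dir> sets the output folder, -y answers
# any prompts. This assumes the 7-Zip command-line tool (7z) is on your PATH.
subprocess.run(["7z", "x", str(archive), f"-o{dest}", "-y"], check=True)

print("Extracted to", dest.resolve())
```

WinRAR's `unrar x` command could be swapped in the same way if that is the tool you have installed.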

How Do You Play FIFA Street 4 PC Bagas31?

      -

Playing FIFA Street 4 PC Bagas31 is similar to playing any other football game on PC. You can use a keyboard and mouse or a controller to control your players and perform various actions. Here is a quick overview of the gameplay and controls:

      - -

The controls in FIFA Street 4 PC Bagas31 are based on the street ball control system, which lets you perform more realistic and fluid dribbling, passing, and shooting. You can also use tricks and skill moves to get past your opponents and score spectacular goals. Here is a table of the basic controls for keyboard and mouse and for a controller:

| Action | Keyboard and mouse | Controller |
| --- | --- | --- |
| Move | WASD | Left stick |
| Sprint | Shift | Right trigger |
| Pass | Left click | A |
| Shoot | Right click | B |
| Lob pass/Cross | E | X |
| Through ball | R | Y |
| Skill move/Trick | Q, F, or mouse wheel | Right stick, or left or right bumper |
| Tackle/Slide tackle | Space or C | A or X |
| Jockey/Contain/Teammate contain | Z, X, V, B, N, M, Alt, Ctrl, Shift, Tab, Caps Lock, Enter, Backspace, Delete, Insert, Home, End, Page Down, or the arrow keys (any key except WASD, Q, R, F, Esc) | Left or right trigger, or left or right bumper (any button except left stick, right stick, A, B, X, Y) |

Note: You can change the controls in the game settings if you prefer.

What Are the System Requirements for FIFA Street 4 PC Bagas31?

      -

Before downloading and playing FIFA Street 4 PC Bagas31, you need to make sure your computer meets the minimum and recommended system requirements to run the game. Here is a list of the specifications you need:

| Minimum requirements | Recommended requirements |
| --- | --- |
| OS: Windows 7/8/10 (64-bit) | OS: Windows 10 (64-bit) |
| CPU: Intel Core 2 Duo E6600 or AMD Athlon 64 X2 5400+ | CPU: Intel Core i3-2100 or AMD Phenom II X4 965 |
| RAM: 4 GB | RAM: 8 GB |
| GPU: NVIDIA GeForce GTX 460 or AMD Radeon HD 5770 | GPU: NVIDIA GeForce GTX 660 or AMD Radeon HD 7850 |
| DirectX: Version 11 | DirectX: Version 11 |
| Storage: 10 GB available space | Storage: 10 GB available space |

Note: These are estimated system requirements based on the original console version of FIFA Street 4. The actual requirements may vary depending on your PC configuration and the modifications made by Bagas31. The short script after this table sketches how you might check a couple of these values automatically.
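As a quick sanity check against the table above, the snippet below reads the OS bitness and the free space on the install drive. It is only a sketch: the drive letter is an assumption, and RAM or GPU checks are not covered because they would need extra tooling (for example the third-party psutil package), which the guide does not mention.

```python
import platform
import shutil

# Thresholds taken from the minimum column of the table above; the install
# drive is an assumption - point it at wherever you plan to install the game.
MIN_FREE_GB = 10
INSTALL_DRIVE = "C:\\"

is_64bit = platform.machine().endswith("64")  # e.g. "AMD64" on 64-bit Windows
free_gb = shutil.disk_usage(INSTALL_DRIVE).free / (1024 ** 3)

print(f"64-bit OS: {'yes' if is_64bit else 'NO'}")
print(f"Free space on {INSTALL_DRIVE}: {free_gb:.1f} GB "
      f"({'ok' if free_gb >= MIN_FREE_GB else 'below the 10 GB minimum'})")
```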

How to Improve the Performance of FIFA Street 4 PC Bagas31?

      -

If you run into any issues while playing FIFA Street 4 PC Bagas31, such as lag, stuttering, crashes, or errors, you can try some of these tips and tricks to improve the game's performance and fix common problems:

      -
        -
• Update your drivers and Windows to the latest version.
• -
• Run the game as administrator and in compatibility mode for Windows 7.
• -
• Disable any antivirus or firewall software that may interfere with the game.
• -
• Close any unnecessary background programs that may consume your CPU, RAM, or bandwidth.
• -
• Lower the game settings and resolution to match your PC's capabilities.
• -
• Enable V-sync and cap the frame rate to avoid screen tearing and overheating.
• -
• Delete any corrupted or outdated files in the game folder and reinstall the game if necessary.
• -
• Search online for solutions or patches specific to your particular problem.
      • -
      -

What Are Some Alternatives to FIFA Street 4 PC Bagas31?

      -

If you are looking for other street football games or emulators you can play on your PC, you can check out some of these alternatives to FIFA Street 4 PC Bagas31:

      - -
2. FIFA Street (2012) PS3 Emulator: This is the original version of FIFA Street 4 that you can play on your PC using a PS3 emulator such as RPCS3. You will need a PS3 game disc or ISO file, a PS3 BIOS file, and a powerful PC to run the emulator smoothly. You can find more information and instructions on how to use RPCS3 [here].
3. -
4. FIFA Street 2 (2006) PCSX2 Emulator: This is the second installment in the FIFA Street series, which you can play on your PC using a PS2 emulator such as PCSX2. You will need a PS2 game disc or ISO file, a PS2 BIOS file, and a decent PC to run the emulator properly. You can find more information and instructions on how to use PCSX2 [here].
5. -
6. FIFA Street 3 (2008) Xbox 360 Emulator: This is the third installment in the FIFA Street series, which you can play on your PC using an Xbox 360 emulator such as Xenia. You will need an Xbox 360 game disc or ISO file, an Xbox 360 BIOS file, and a powerful PC to run the emulator smoothly. You can find more information and instructions on how to use Xenia [here].
7. -
8. NBA Street Homecourt (2007) Xbox 360 Emulator: This is a basketball game with gameplay and a style similar to FIFA Street. You can play it on your PC using an Xbox 360 emulator such as Xenia. You will need an Xbox 360 game disc or ISO file, an Xbox 360 BIOS file, and a powerful PC to run the emulator smoothly. You can find more information and instructions on how to use Xenia [here].
9. -
10. Inazuma Eleven Strikers (2012) Dolphin Emulator: This is a football game that mixes arcade and RPG elements. You can play it on your PC using a Wii emulator such as Dolphin. You will need a Wii game disc or ISO file, a Wii BIOS file, and a decent PC to run the emulator properly. You can find more information and instructions on how to use Dolphin [here].
    11. - -

Conclusion

      - -

So, what are you waiting for? Download FIFA Street 4 PC Bagas31 today and have fun playing street football on your computer!

      -

Frequently Asked Questions

      -

Here are some frequently asked questions about FIFA Street 4 PC Bagas31:

      -
        -
1. Is FIFA Street 4 PC Bagas31 safe to download?
      2. -

Yes, FIFA Street 4 PC Bagas31 is safe to download from the Bagas31 website. However, you should always scan downloaded files with an antivirus program before opening them, and be careful with any pop-ups or ads that may appear on the website.

        -
3. Is FIFA Street 4 PC Bagas31 legal to download?
      4. -

No, FIFA Street 4 PC Bagas31 is not legal to download, as it is a modified version of a copyrighted game that was never officially released for PC. Downloading and playing FIFA Street 4 PC Bagas31 is at your own risk, and you may face legal consequences if you are caught by the authorities.

        -
5. Can I play FIFA Street 4 PC Bagas31 offline?
      6. -

Yes, you can play FIFA Street 4 PC Bagas31 without an internet connection. However, you will not be able to access the Online Seasons mode or the Street Network mode, which require an online connection.

        -
7. Can I play FIFA Street 4 PC Bagas31 with my friends?
      8. -

Yes, you can play FIFA Street 4 PC Bagas31 with your friends online or locally. To play online, you will need an internet connection and an account on EA's servers. To play locally, you will need two controllers or keyboards and mice, and the split-screen option in the game settings.

        -
9. Can I update FIFA Street 4 PC Bagas31?
      10. -

No, you cannot update FIFA Street 4 PC Bagas31, as it is a modified version of the game that does not receive any official updates or patches from EA Sports. However, you can find some unofficial updates or mods from other sources online that may improve or change the game in some way.

        -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/ec2/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/ec2/__init__.py deleted file mode 100644 index 6001b27b37430efbf22057efde79637b340fa1db..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/ec2/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/scheme.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/scheme.py deleted file mode 100644 index f51190ac60354d90eb2aef4b04c484f8517275c2..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/scheme.py +++ /dev/null @@ -1,31 +0,0 @@ -""" -For types associated with installation schemes. - -For a general overview of available schemes and their context, see -https://docs.python.org/3/install/index.html#alternate-installation. -""" - - -SCHEME_KEYS = ["platlib", "purelib", "headers", "scripts", "data"] - - -class Scheme: - """A Scheme holds paths which are used as the base directories for - artifacts associated with a Python package. - """ - - __slots__ = SCHEME_KEYS - - def __init__( - self, - platlib: str, - purelib: str, - headers: str, - scripts: str, - data: str, - ) -> None: - self.platlib = platlib - self.purelib = purelib - self.headers = headers - self.scripts = scripts - self.data = data diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/database.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/database.py deleted file mode 100644 index 5db5d7f507c1d150e6b36f236df7ee61c0f65581..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/database.py +++ /dev/null @@ -1,1350 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -"""PEP 376 implementation.""" - -from __future__ import unicode_literals - -import base64 -import codecs -import contextlib -import hashlib -import logging -import os -import posixpath -import sys -import zipimport - -from . 
import DistlibException, resources -from .compat import StringIO -from .version import get_scheme, UnsupportedVersionError -from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME) -from .util import (parse_requirement, cached_property, parse_name_and_version, - read_exports, write_exports, CSVReader, CSVWriter) - - -__all__ = ['Distribution', 'BaseInstalledDistribution', - 'InstalledDistribution', 'EggInfoDistribution', - 'DistributionPath'] - - -logger = logging.getLogger(__name__) - -EXPORTS_FILENAME = 'pydist-exports.json' -COMMANDS_FILENAME = 'pydist-commands.json' - -DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', - 'RESOURCES', EXPORTS_FILENAME, 'SHARED') - -DISTINFO_EXT = '.dist-info' - - -class _Cache(object): - """ - A simple cache mapping names and .dist-info paths to distributions - """ - def __init__(self): - """ - Initialise an instance. There is normally one for each DistributionPath. - """ - self.name = {} - self.path = {} - self.generated = False - - def clear(self): - """ - Clear the cache, setting it to its initial state. - """ - self.name.clear() - self.path.clear() - self.generated = False - - def add(self, dist): - """ - Add a distribution to the cache. - :param dist: The distribution to add. - """ - if dist.path not in self.path: - self.path[dist.path] = dist - self.name.setdefault(dist.key, []).append(dist) - - -class DistributionPath(object): - """ - Represents a set of distributions installed on a path (typically sys.path). - """ - def __init__(self, path=None, include_egg=False): - """ - Create an instance from a path, optionally including legacy (distutils/ - setuptools/distribute) distributions. - :param path: The path to use, as a list of directories. If not specified, - sys.path is used. - :param include_egg: If True, this instance will look for and return legacy - distributions as well as those based on PEP 376. - """ - if path is None: - path = sys.path - self.path = path - self._include_dist = True - self._include_egg = include_egg - - self._cache = _Cache() - self._cache_egg = _Cache() - self._cache_enabled = True - self._scheme = get_scheme('default') - - def _get_cache_enabled(self): - return self._cache_enabled - - def _set_cache_enabled(self, value): - self._cache_enabled = value - - cache_enabled = property(_get_cache_enabled, _set_cache_enabled) - - def clear_cache(self): - """ - Clears the internal cache. - """ - self._cache.clear() - self._cache_egg.clear() - - - def _yield_distributions(self): - """ - Yield .dist-info and/or .egg(-info) distributions. - """ - # We need to check if we've seen some resources already, because on - # some Linux systems (e.g. some Debian/Ubuntu variants) there are - # symlinks which alias other files in the environment. 
- seen = set() - for path in self.path: - finder = resources.finder_for_path(path) - if finder is None: - continue - r = finder.find('') - if not r or not r.is_container: - continue - rset = sorted(r.resources) - for entry in rset: - r = finder.find(entry) - if not r or r.path in seen: - continue - try: - if self._include_dist and entry.endswith(DISTINFO_EXT): - possible_filenames = [METADATA_FILENAME, - WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME] - for metadata_filename in possible_filenames: - metadata_path = posixpath.join(entry, metadata_filename) - pydist = finder.find(metadata_path) - if pydist: - break - else: - continue - - with contextlib.closing(pydist.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - logger.debug('Found %s', r.path) - seen.add(r.path) - yield new_dist_class(r.path, metadata=metadata, - env=self) - elif self._include_egg and entry.endswith(('.egg-info', - '.egg')): - logger.debug('Found %s', r.path) - seen.add(r.path) - yield old_dist_class(r.path, self) - except Exception as e: - msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s' - logger.warning(msg, r.path, e) - import warnings - warnings.warn(msg % (r.path, e), stacklevel=2) - - def _generate_cache(self): - """ - Scan the path for distributions and populate the cache with - those that are found. - """ - gen_dist = not self._cache.generated - gen_egg = self._include_egg and not self._cache_egg.generated - if gen_dist or gen_egg: - for dist in self._yield_distributions(): - if isinstance(dist, InstalledDistribution): - self._cache.add(dist) - else: - self._cache_egg.add(dist) - - if gen_dist: - self._cache.generated = True - if gen_egg: - self._cache_egg.generated = True - - @classmethod - def distinfo_dirname(cls, name, version): - """ - The *name* and *version* parameters are converted into their - filename-escaped form, i.e. any ``'-'`` characters are replaced - with ``'_'`` other than the one in ``'dist-info'`` and the one - separating the name from the version number. - - :parameter name: is converted to a standard distribution name by replacing - any runs of non- alphanumeric characters with a single - ``'-'``. - :type name: string - :parameter version: is converted to a standard version string. Spaces - become dots, and all other non-alphanumeric characters - (except dots) become dashes, with runs of multiple - dashes condensed to a single dash. - :type version: string - :returns: directory name - :rtype: string""" - name = name.replace('-', '_') - return '-'.join([name, version]) + DISTINFO_EXT - - def get_distributions(self): - """ - Provides an iterator that looks for distributions and returns - :class:`InstalledDistribution` or - :class:`EggInfoDistribution` instances for each one of them. - - :rtype: iterator of :class:`InstalledDistribution` and - :class:`EggInfoDistribution` instances - """ - if not self._cache_enabled: - for dist in self._yield_distributions(): - yield dist - else: - self._generate_cache() - - for dist in self._cache.path.values(): - yield dist - - if self._include_egg: - for dist in self._cache_egg.path.values(): - yield dist - - def get_distribution(self, name): - """ - Looks for a named distribution on the path. - - This function only returns the first result found, as no more than one - value is expected. If nothing is found, ``None`` is returned. 
- - :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` - or ``None`` - """ - result = None - name = name.lower() - if not self._cache_enabled: - for dist in self._yield_distributions(): - if dist.key == name: - result = dist - break - else: - self._generate_cache() - - if name in self._cache.name: - result = self._cache.name[name][0] - elif self._include_egg and name in self._cache_egg.name: - result = self._cache_egg.name[name][0] - return result - - def provides_distribution(self, name, version=None): - """ - Iterates over all distributions to find which distributions provide *name*. - If a *version* is provided, it will be used to filter the results. - - This function only returns the first result found, since no more than - one values are expected. If the directory is not found, returns ``None``. - - :parameter version: a version specifier that indicates the version - required, conforming to the format in ``PEP-345`` - - :type name: string - :type version: string - """ - matcher = None - if version is not None: - try: - matcher = self._scheme.matcher('%s (%s)' % (name, version)) - except ValueError: - raise DistlibException('invalid name or version: %r, %r' % - (name, version)) - - for dist in self.get_distributions(): - # We hit a problem on Travis where enum34 was installed and doesn't - # have a provides attribute ... - if not hasattr(dist, 'provides'): - logger.debug('No "provides": %s', dist) - else: - provided = dist.provides - - for p in provided: - p_name, p_ver = parse_name_and_version(p) - if matcher is None: - if p_name == name: - yield dist - break - else: - if p_name == name and matcher.match(p_ver): - yield dist - break - - def get_file_path(self, name, relative_path): - """ - Return the path to a resource file. - """ - dist = self.get_distribution(name) - if dist is None: - raise LookupError('no distribution named %r found' % name) - return dist.get_resource_path(relative_path) - - def get_exported_entries(self, category, name=None): - """ - Return all of the exported entries in a particular category. - - :param category: The category to search for entries. - :param name: If specified, only entries with that name are returned. - """ - for dist in self.get_distributions(): - r = dist.exports - if category in r: - d = r[category] - if name is not None: - if name in d: - yield d[name] - else: - for v in d.values(): - yield v - - -class Distribution(object): - """ - A base class for distributions, whether installed or from indexes. - Either way, it must have some metadata, so that's all that's needed - for construction. - """ - - build_time_dependency = False - """ - Set to True if it's known to be only a build-time dependency (i.e. - not needed after installation). - """ - - requested = False - """A boolean that indicates whether the ``REQUESTED`` metadata file is - present (in other words, whether the package was installed by user - request or it was installed as a dependency).""" - - def __init__(self, metadata): - """ - Initialise an instance. - :param metadata: The instance of :class:`Metadata` describing this - distribution. 
- """ - self.metadata = metadata - self.name = metadata.name - self.key = self.name.lower() # for case-insensitive comparisons - self.version = metadata.version - self.locator = None - self.digest = None - self.extras = None # additional features requested - self.context = None # environment marker overrides - self.download_urls = set() - self.digests = {} - - @property - def source_url(self): - """ - The source archive download URL for this distribution. - """ - return self.metadata.source_url - - download_url = source_url # Backward compatibility - - @property - def name_and_version(self): - """ - A utility property which displays the name and version in parentheses. - """ - return '%s (%s)' % (self.name, self.version) - - @property - def provides(self): - """ - A set of distribution names and versions provided by this distribution. - :return: A set of "name (version)" strings. - """ - plist = self.metadata.provides - s = '%s (%s)' % (self.name, self.version) - if s not in plist: - plist.append(s) - return plist - - def _get_requirements(self, req_attr): - md = self.metadata - reqts = getattr(md, req_attr) - logger.debug('%s: got requirements %r from metadata: %r', self.name, req_attr, - reqts) - return set(md.get_requirements(reqts, extras=self.extras, - env=self.context)) - - @property - def run_requires(self): - return self._get_requirements('run_requires') - - @property - def meta_requires(self): - return self._get_requirements('meta_requires') - - @property - def build_requires(self): - return self._get_requirements('build_requires') - - @property - def test_requires(self): - return self._get_requirements('test_requires') - - @property - def dev_requires(self): - return self._get_requirements('dev_requires') - - def matches_requirement(self, req): - """ - Say if this instance matches (fulfills) a requirement. - :param req: The requirement to match. - :rtype req: str - :return: True if it matches, else False. - """ - # Requirement may contain extras - parse to lose those - # from what's passed to the matcher - r = parse_requirement(req) - scheme = get_scheme(self.metadata.scheme) - try: - matcher = scheme.matcher(r.requirement) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - result = False - for p in self.provides: - p_name, p_ver = parse_name_and_version(p) - if p_name != name: - continue - try: - result = matcher.match(p_ver) - break - except UnsupportedVersionError: - pass - return result - - def __repr__(self): - """ - Return a textual representation of this instance, - """ - if self.source_url: - suffix = ' [%s]' % self.source_url - else: - suffix = '' - return '' % (self.name, self.version, suffix) - - def __eq__(self, other): - """ - See if this distribution is the same as another. - :param other: The distribution to compare with. To be equal to one - another. distributions must have the same type, name, - version and source_url. - :return: True if it is the same, else False. - """ - if type(other) is not type(self): - result = False - else: - result = (self.name == other.name and - self.version == other.version and - self.source_url == other.source_url) - return result - - def __hash__(self): - """ - Compute hash in a way which matches the equality test. 
- """ - return hash(self.name) + hash(self.version) + hash(self.source_url) - - -class BaseInstalledDistribution(Distribution): - """ - This is the base class for installed distributions (whether PEP 376 or - legacy). - """ - - hasher = None - - def __init__(self, metadata, path, env=None): - """ - Initialise an instance. - :param metadata: An instance of :class:`Metadata` which describes the - distribution. This will normally have been initialised - from a metadata file in the ``path``. - :param path: The path of the ``.dist-info`` or ``.egg-info`` - directory for the distribution. - :param env: This is normally the :class:`DistributionPath` - instance where this distribution was found. - """ - super(BaseInstalledDistribution, self).__init__(metadata) - self.path = path - self.dist_path = env - - def get_hash(self, data, hasher=None): - """ - Get the hash of some data, using a particular hash algorithm, if - specified. - - :param data: The data to be hashed. - :type data: bytes - :param hasher: The name of a hash implementation, supported by hashlib, - or ``None``. Examples of valid values are ``'sha1'``, - ``'sha224'``, ``'sha384'``, '``sha256'``, ``'md5'`` and - ``'sha512'``. If no hasher is specified, the ``hasher`` - attribute of the :class:`InstalledDistribution` instance - is used. If the hasher is determined to be ``None``, MD5 - is used as the hashing algorithm. - :returns: The hash of the data. If a hasher was explicitly specified, - the returned hash will be prefixed with the specified hasher - followed by '='. - :rtype: str - """ - if hasher is None: - hasher = self.hasher - if hasher is None: - hasher = hashlib.md5 - prefix = '' - else: - hasher = getattr(hashlib, hasher) - prefix = '%s=' % self.hasher - digest = hasher(data).digest() - digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii') - return '%s%s' % (prefix, digest) - - -class InstalledDistribution(BaseInstalledDistribution): - """ - Created with the *path* of the ``.dist-info`` directory provided to the - constructor. It reads the metadata contained in ``pydist.json`` when it is - instantiated., or uses a passed in Metadata instance (useful for when - dry-run mode is being used). 
- """ - - hasher = 'sha256' - - def __init__(self, path, metadata=None, env=None): - self.modules = [] - self.finder = finder = resources.finder_for_path(path) - if finder is None: - raise ValueError('finder unavailable for %s' % path) - if env and env._cache_enabled and path in env._cache.path: - metadata = env._cache.path[path].metadata - elif metadata is None: - r = finder.find(METADATA_FILENAME) - # Temporary - for Wheel 0.23 support - if r is None: - r = finder.find(WHEEL_METADATA_FILENAME) - # Temporary - for legacy support - if r is None: - r = finder.find(LEGACY_METADATA_FILENAME) - if r is None: - raise ValueError('no %s found in %s' % (METADATA_FILENAME, - path)) - with contextlib.closing(r.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - - super(InstalledDistribution, self).__init__(metadata, path, env) - - if env and env._cache_enabled: - env._cache.add(self) - - r = finder.find('REQUESTED') - self.requested = r is not None - p = os.path.join(path, 'top_level.txt') - if os.path.exists(p): - with open(p, 'rb') as f: - data = f.read().decode('utf-8') - self.modules = data.splitlines() - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def _get_records(self): - """ - Get the list of installed files for the distribution - :return: A list of tuples of path, hash and size. Note that hash and - size might be ``None`` for some entries. The path is exactly - as stored in the file (which is as in PEP 376). - """ - results = [] - r = self.get_distinfo_resource('RECORD') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as record_reader: - # Base location is parent dir of .dist-info dir - #base_location = os.path.dirname(self.path) - #base_location = os.path.abspath(base_location) - for row in record_reader: - missing = [None for i in range(len(row), 3)] - path, checksum, size = row + missing - #if not os.path.isabs(path): - # path = path.replace('/', os.sep) - # path = os.path.join(base_location, path) - results.append((path, checksum, size)) - return results - - @cached_property - def exports(self): - """ - Return the information exported by this distribution. - :return: A dictionary of exports, mapping an export category to a dict - of :class:`ExportEntry` instances describing the individual - export entries, and keyed by name. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - result = self.read_exports() - return result - - def read_exports(self): - """ - Read exports data from a file in .ini format. - - :return: A dictionary of exports, mapping an export category to a list - of :class:`ExportEntry` instances describing the individual - export entries. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - with contextlib.closing(r.as_stream()) as stream: - result = read_exports(stream) - return result - - def write_exports(self, exports): - """ - Write a dictionary of exports to a file in .ini format. - :param exports: A dictionary of exports, mapping an export category to - a list of :class:`ExportEntry` instances describing the - individual export entries. - """ - rf = self.get_distinfo_file(EXPORTS_FILENAME) - with open(rf, 'w') as f: - write_exports(exports, f) - - def get_resource_path(self, relative_path): - """ - NOTE: This API may change in the future. - - Return the absolute path to a resource file with the given relative - path. 
- - :param relative_path: The path, relative to .dist-info, of the resource - of interest. - :return: The absolute path where the resource is to be found. - """ - r = self.get_distinfo_resource('RESOURCES') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as resources_reader: - for relative, destination in resources_reader: - if relative == relative_path: - return destination - raise KeyError('no resource file with relative path %r ' - 'is installed' % relative_path) - - def list_installed_files(self): - """ - Iterates over the ``RECORD`` entries and returns a tuple - ``(path, hash, size)`` for each line. - - :returns: iterator of (path, hash, size) - """ - for result in self._get_records(): - yield result - - def write_installed_files(self, paths, prefix, dry_run=False): - """ - Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any - existing ``RECORD`` file is silently overwritten. - - prefix is used to determine when to write absolute paths. - """ - prefix = os.path.join(prefix, '') - base = os.path.dirname(self.path) - base_under_prefix = base.startswith(prefix) - base = os.path.join(base, '') - record_path = self.get_distinfo_file('RECORD') - logger.info('creating %s', record_path) - if dry_run: - return None - with CSVWriter(record_path) as writer: - for path in paths: - if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): - # do not put size and hash, as in PEP-376 - hash_value = size = '' - else: - size = '%d' % os.path.getsize(path) - with open(path, 'rb') as fp: - hash_value = self.get_hash(fp.read()) - if path.startswith(base) or (base_under_prefix and - path.startswith(prefix)): - path = os.path.relpath(path, base) - writer.writerow((path, hash_value, size)) - - # add the RECORD file itself - if record_path.startswith(base): - record_path = os.path.relpath(record_path, base) - writer.writerow((record_path, '', '')) - return record_path - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - base = os.path.dirname(self.path) - record_path = self.get_distinfo_file('RECORD') - for path, hash_value, size in self.list_installed_files(): - if not os.path.isabs(path): - path = os.path.join(base, path) - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - elif os.path.isfile(path): - actual_size = str(os.path.getsize(path)) - if size and actual_size != size: - mismatches.append((path, 'size', size, actual_size)) - elif hash_value: - if '=' in hash_value: - hasher = hash_value.split('=', 1)[0] - else: - hasher = None - - with open(path, 'rb') as f: - actual_hash = self.get_hash(f.read(), hasher) - if actual_hash != hash_value: - mismatches.append((path, 'hash', hash_value, actual_hash)) - return mismatches - - @cached_property - def shared_locations(self): - """ - A dictionary of shared locations whose keys are in the set 'prefix', - 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. - The corresponding value is the absolute path of that category for - this distribution, and takes into account any paths selected by the - user at installation time (e.g. 
via command-line arguments). In the - case of the 'namespace' key, this would be a list of absolute paths - for the roots of namespace packages in this distribution. - - The first time this property is accessed, the relevant information is - read from the SHARED file in the .dist-info directory. - """ - result = {} - shared_path = os.path.join(self.path, 'SHARED') - if os.path.isfile(shared_path): - with codecs.open(shared_path, 'r', encoding='utf-8') as f: - lines = f.read().splitlines() - for line in lines: - key, value = line.split('=', 1) - if key == 'namespace': - result.setdefault(key, []).append(value) - else: - result[key] = value - return result - - def write_shared_locations(self, paths, dry_run=False): - """ - Write shared location information to the SHARED file in .dist-info. - :param paths: A dictionary as described in the documentation for - :meth:`shared_locations`. - :param dry_run: If True, the action is logged but no file is actually - written. - :return: The path of the file written to. - """ - shared_path = os.path.join(self.path, 'SHARED') - logger.info('creating %s', shared_path) - if dry_run: - return None - lines = [] - for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): - path = paths[key] - if os.path.isdir(paths[key]): - lines.append('%s=%s' % (key, path)) - for ns in paths.get('namespace', ()): - lines.append('namespace=%s' % ns) - - with codecs.open(shared_path, 'w', encoding='utf-8') as f: - f.write('\n'.join(lines)) - return shared_path - - def get_distinfo_resource(self, path): - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - finder = resources.finder_for_path(self.path) - if finder is None: - raise DistlibException('Unable to get a finder for %s' % self.path) - return finder.find(path) - - def get_distinfo_file(self, path): - """ - Returns a path located under the ``.dist-info`` directory. Returns a - string representing the path. - - :parameter path: a ``'/'``-separated path relative to the - ``.dist-info`` directory or an absolute path; - If *path* is an absolute path and doesn't start - with the ``.dist-info`` directory path, - a :class:`DistlibException` is raised - :type path: str - :rtype: str - """ - # Check if it is an absolute path # XXX use relpath, add tests - if path.find(os.sep) >= 0: - # it's an absolute path? - distinfo_dirname, path = path.split(os.sep)[-2:] - if distinfo_dirname != self.path.split(os.sep)[-1]: - raise DistlibException( - 'dist-info file %r does not belong to the %r %s ' - 'distribution' % (path, self.name, self.version)) - - # The file must be relative - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - - return os.path.join(self.path, path) - - def list_distinfo_files(self): - """ - Iterates over the ``RECORD`` entries and returns paths for each line if - the path is pointing to a file located in the ``.dist-info`` directory - or one of its subdirectories. 
- - :returns: iterator of paths - """ - base = os.path.dirname(self.path) - for path, checksum, size in self._get_records(): - # XXX add separator or use real relpath algo - if not os.path.isabs(path): - path = os.path.join(base, path) - if path.startswith(self.path): - yield path - - def __eq__(self, other): - return (isinstance(other, InstalledDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - - -class EggInfoDistribution(BaseInstalledDistribution): - """Created with the *path* of the ``.egg-info`` directory or file provided - to the constructor. It reads the metadata contained in the file itself, or - if the given path happens to be a directory, the metadata is read from the - file ``PKG-INFO`` under that directory.""" - - requested = True # as we have no way of knowing, assume it was - shared_locations = {} - - def __init__(self, path, env=None): - def set_name_and_version(s, n, v): - s.name = n - s.key = n.lower() # for case-insensitive comparisons - s.version = v - - self.path = path - self.dist_path = env - if env and env._cache_enabled and path in env._cache_egg.path: - metadata = env._cache_egg.path[path].metadata - set_name_and_version(self, metadata.name, metadata.version) - else: - metadata = self._get_metadata(path) - - # Need to be set before caching - set_name_and_version(self, metadata.name, metadata.version) - - if env and env._cache_enabled: - env._cache_egg.add(self) - super(EggInfoDistribution, self).__init__(metadata, path, env) - - def _get_metadata(self, path): - requires = None - - def parse_requires_data(data): - """Create a list of dependencies from a requires.txt file. - - *data*: the contents of a setuptools-produced requires.txt file. - """ - reqs = [] - lines = data.splitlines() - for line in lines: - line = line.strip() - if line.startswith('['): - logger.warning('Unexpected line: quitting requirement scan: %r', - line) - break - r = parse_requirement(line) - if not r: - logger.warning('Not recognised as a requirement: %r', line) - continue - if r.extras: - logger.warning('extra requirements in requires.txt are ' - 'not supported') - if not r.constraints: - reqs.append(r.name) - else: - cons = ', '.join('%s%s' % c for c in r.constraints) - reqs.append('%s (%s)' % (r.name, cons)) - return reqs - - def parse_requires_path(req_path): - """Create a list of dependencies from a requires.txt file. - - *req_path*: the path to a setuptools-produced requires.txt file. 
- """ - - reqs = [] - try: - with codecs.open(req_path, 'r', 'utf-8') as fp: - reqs = parse_requires_data(fp.read()) - except IOError: - pass - return reqs - - tl_path = tl_data = None - if path.endswith('.egg'): - if os.path.isdir(path): - p = os.path.join(path, 'EGG-INFO') - meta_path = os.path.join(p, 'PKG-INFO') - metadata = Metadata(path=meta_path, scheme='legacy') - req_path = os.path.join(p, 'requires.txt') - tl_path = os.path.join(p, 'top_level.txt') - requires = parse_requires_path(req_path) - else: - # FIXME handle the case where zipfile is not available - zipf = zipimport.zipimporter(path) - fileobj = StringIO( - zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8')) - metadata = Metadata(fileobj=fileobj, scheme='legacy') - try: - data = zipf.get_data('EGG-INFO/requires.txt') - tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8') - requires = parse_requires_data(data.decode('utf-8')) - except IOError: - requires = None - elif path.endswith('.egg-info'): - if os.path.isdir(path): - req_path = os.path.join(path, 'requires.txt') - requires = parse_requires_path(req_path) - path = os.path.join(path, 'PKG-INFO') - tl_path = os.path.join(path, 'top_level.txt') - metadata = Metadata(path=path, scheme='legacy') - else: - raise DistlibException('path must end with .egg-info or .egg, ' - 'got %r' % path) - - if requires: - metadata.add_requirements(requires) - # look for top-level modules in top_level.txt, if present - if tl_data is None: - if tl_path is not None and os.path.exists(tl_path): - with open(tl_path, 'rb') as f: - tl_data = f.read().decode('utf-8') - if not tl_data: - tl_data = [] - else: - tl_data = tl_data.splitlines() - self.modules = tl_data - return metadata - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - for path, _, _ in self.list_installed_files(): - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - return mismatches - - def list_installed_files(self): - """ - Iterates over the ``installed-files.txt`` entries and returns a tuple - ``(path, hash, size)`` for each line. 
- - :returns: a list of (path, hash, size) - """ - - def _md5(path): - f = open(path, 'rb') - try: - content = f.read() - finally: - f.close() - return hashlib.md5(content).hexdigest() - - def _size(path): - return os.stat(path).st_size - - record_path = os.path.join(self.path, 'installed-files.txt') - result = [] - if os.path.exists(record_path): - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - p = os.path.normpath(os.path.join(self.path, line)) - # "./" is present as a marker between installed files - # and installation metadata files - if not os.path.exists(p): - logger.warning('Non-existent file: %s', p) - if p.endswith(('.pyc', '.pyo')): - continue - #otherwise fall through and fail - if not os.path.isdir(p): - result.append((p, _md5(p), _size(p))) - result.append((record_path, None, None)) - return result - - def list_distinfo_files(self, absolute=False): - """ - Iterates over the ``installed-files.txt`` entries and returns paths for - each line if the path is pointing to a file located in the - ``.egg-info`` directory or one of its subdirectories. - - :parameter absolute: If *absolute* is ``True``, each returned path is - transformed into a local absolute path. Otherwise the - raw value from ``installed-files.txt`` is returned. - :type absolute: boolean - :returns: iterator of paths - """ - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - skip = True - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - if line == './': - skip = False - continue - if not skip: - p = os.path.normpath(os.path.join(self.path, line)) - if p.startswith(self.path): - if absolute: - yield p - else: - yield line - - def __eq__(self, other): - return (isinstance(other, EggInfoDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - -new_dist_class = InstalledDistribution -old_dist_class = EggInfoDistribution - - -class DependencyGraph(object): - """ - Represents a dependency graph between distributions. - - The dependency relationships are stored in an ``adjacency_list`` that maps - distributions to a list of ``(other, label)`` tuples where ``other`` - is a distribution and the edge is labeled with ``label`` (i.e. the version - specifier, if such was provided). Also, for more efficient traversal, for - every distribution ``x``, a list of predecessors is kept in - ``reverse_list[x]``. An edge from distribution ``a`` to - distribution ``b`` means that ``a`` depends on ``b``. If any missing - dependencies are found, they are stored in ``missing``, which is a - dictionary that maps distributions to a list of requirements that were not - provided by any other distributions. - """ - - def __init__(self): - self.adjacency_list = {} - self.reverse_list = {} - self.missing = {} - - def add_distribution(self, distribution): - """Add the *distribution* to the graph. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - """ - self.adjacency_list[distribution] = [] - self.reverse_list[distribution] = [] - #self.missing[distribution] = [] - - def add_edge(self, x, y, label=None): - """Add an edge from distribution *x* to distribution *y* with the given - *label*. 
- - :type x: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type y: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type label: ``str`` or ``None`` - """ - self.adjacency_list[x].append((y, label)) - # multiple edges are allowed, so be careful - if x not in self.reverse_list[y]: - self.reverse_list[y].append(x) - - def add_missing(self, distribution, requirement): - """ - Add a missing *requirement* for the given *distribution*. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - :type requirement: ``str`` - """ - logger.debug('%s missing %r', distribution, requirement) - self.missing.setdefault(distribution, []).append(requirement) - - def _repr_dist(self, dist): - return '%s %s' % (dist.name, dist.version) - - def repr_node(self, dist, level=1): - """Prints only a subgraph""" - output = [self._repr_dist(dist)] - for other, label in self.adjacency_list[dist]: - dist = self._repr_dist(other) - if label is not None: - dist = '%s [%s]' % (dist, label) - output.append(' ' * level + str(dist)) - suboutput = self.repr_node(other, level + 1) - subs = suboutput.split('\n') - output.extend(subs[1:]) - return '\n'.join(output) - - def to_dot(self, f, skip_disconnected=True): - """Writes a DOT output for the graph to the provided file *f*. - - If *skip_disconnected* is set to ``True``, then all distributions - that are not dependent on any other distribution are skipped. - - :type f: has to support ``file``-like operations - :type skip_disconnected: ``bool`` - """ - disconnected = [] - - f.write("digraph dependencies {\n") - for dist, adjs in self.adjacency_list.items(): - if len(adjs) == 0 and not skip_disconnected: - disconnected.append(dist) - for other, label in adjs: - if not label is None: - f.write('"%s" -> "%s" [label="%s"]\n' % - (dist.name, other.name, label)) - else: - f.write('"%s" -> "%s"\n' % (dist.name, other.name)) - if not skip_disconnected and len(disconnected) > 0: - f.write('subgraph disconnected {\n') - f.write('label = "Disconnected"\n') - f.write('bgcolor = red\n') - - for dist in disconnected: - f.write('"%s"' % dist.name) - f.write('\n') - f.write('}\n') - f.write('}\n') - - def topological_sort(self): - """ - Perform a topological sort of the graph. - :return: A tuple, the first element of which is a topologically sorted - list of distributions, and the second element of which is a - list of distributions that cannot be sorted because they have - circular dependencies and so form a cycle. - """ - result = [] - # Make a shallow copy of the adjacency list - alist = {} - for k, v in self.adjacency_list.items(): - alist[k] = v[:] - while True: - # See what we can remove in this run - to_remove = [] - for k, v in list(alist.items())[:]: - if not v: - to_remove.append(k) - del alist[k] - if not to_remove: - # What's left in alist (if anything) is a cycle. 
- break - # Remove from the adjacency list of others - for k, v in alist.items(): - alist[k] = [(d, r) for d, r in v if d not in to_remove] - logger.debug('Moving to result: %s', - ['%s (%s)' % (d.name, d.version) for d in to_remove]) - result.extend(to_remove) - return result, list(alist.keys()) - - def __repr__(self): - """Representation of the graph""" - output = [] - for dist, adjs in self.adjacency_list.items(): - output.append(self.repr_node(dist)) - return '\n'.join(output) - - -def make_graph(dists, scheme='default'): - """Makes a dependency graph from the given distributions. - - :parameter dists: a list of distributions - :type dists: list of :class:`distutils2.database.InstalledDistribution` and - :class:`distutils2.database.EggInfoDistribution` instances - :rtype: a :class:`DependencyGraph` instance - """ - scheme = get_scheme(scheme) - graph = DependencyGraph() - provided = {} # maps names to lists of (version, dist) tuples - - # first, build the graph and find out what's provided - for dist in dists: - graph.add_distribution(dist) - - for p in dist.provides: - name, version = parse_name_and_version(p) - logger.debug('Add to provided: %s, %s, %s', name, version, dist) - provided.setdefault(name, []).append((version, dist)) - - # now make the edges - for dist in dists: - requires = (dist.run_requires | dist.meta_requires | - dist.build_requires | dist.dev_requires) - for req in requires: - try: - matcher = scheme.matcher(req) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - matched = False - if name in provided: - for version, provider in provided[name]: - try: - match = matcher.match(version) - except UnsupportedVersionError: - match = False - - if match: - graph.add_edge(dist, provider, req) - matched = True - break - if not matched: - graph.add_missing(dist, req) - return graph - - -def get_dependent_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - dependent on *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - dep = [dist] # dependent distributions - todo = graph.reverse_list[dist] # list of nodes we should inspect - - while todo: - d = todo.pop() - dep.append(d) - for succ in graph.reverse_list[d]: - if succ not in dep: - todo.append(succ) - - dep.pop(0) # remove dist from dep, was there to prevent infinite loops - return dep - - -def get_required_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - required by *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - in finding the dependencies. 
- """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - req = set() # required distributions - todo = graph.adjacency_list[dist] # list of nodes we should inspect - seen = set(t[0] for t in todo) # already added to todo - - while todo: - d = todo.pop()[0] - req.add(d) - pred_list = graph.adjacency_list[d] - for pred in pred_list: - d = pred[0] - if d not in req and d not in seen: - seen.add(d) - todo.append(pred) - return req - - -def make_dist(name, version, **kwargs): - """ - A convenience method for making a dist given just a name and version. - """ - summary = kwargs.pop('summary', 'Placeholder for summary') - md = Metadata(**kwargs) - md.name = name - md.version = version - md.summary = summary or 'Placeholder for summary' - return Distribution(md) diff --git a/spaces/Billius/runwayml-stable-diffusion-v1-5-04-07-2023/app.py b/spaces/Billius/runwayml-stable-diffusion-v1-5-04-07-2023/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/Billius/runwayml-stable-diffusion-v1-5-04-07-2023/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/Blessin/impro-scene-generator/app.py b/spaces/Blessin/impro-scene-generator/app.py deleted file mode 100644 index 11c56b8fb87d5bb0c192d15bc9ee3c830e11d80e..0000000000000000000000000000000000000000 --- a/spaces/Blessin/impro-scene-generator/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import gradio as gr -import openai - -def generate_script(api_key, name1, name2, situation): - # Initialize OpenAI API with the provided key - openai.api_key = api_key - - # Define the example script to set the context - example_script = ( - "Setting: A cafe\n\n" - "Scene:\n\n" - "(Lights come up on a cozy little coffee shop. LOUIS and GIRL are sitting at a small table, with two untouched coffees between them. " - "It's clear from their expressions that the awkwardness from the previous scene hasn't worn off.)\n\n" - "LOUIS. (Attempting to break the ice) So, um... do you come to this coffee shop often?\n" - "GIRL. (Coldly) No, not really.\n" - "LOUIS. Oh. Well, they have the best almond croissants here. You should try one, sometime.\n" - "GIRL. (Slightly warming up) I'm more of a bagel person, actually.\n" - "LOUIS. Really? Me too! What's your favorite kind?\n" - "GIRL. Plain. With cream cheese.\n" - "LOUIS. Nice choice. I like mine with a bit of jam.\n" - "(A brief pause as they both sip their coffee, seemingly more at ease.)\n\n" - "LOUIS. (Hesitant) Look, I'm sorry about earlier... I don't know why I was acting like such a jerk.\n" - "GIRL. (Softening) It's alright. People can act strange when they meet someone new.\n" - "LOUIS. Yeah. It's just... I've been feeling kind of... out of place lately. And I guess I was trying too hard to impress you.\n" - "GIRL. You don't have to impress me, Louis. Just be yourself.\n" - "(LOUIS smiles shyly and looks down at his coffee. The GIRL does too, and for a moment, there's a comfortable silence between them.)\n\n" - "LOUIS. So... do you have any plans for the weekend?\n" - "GIRL. (Smiling) Not much. Just taking my dog to the park. Maybe catch a movie later. How about you?\n" - "LOUIS. Same here. Minus the dog, though.\n" - "GIRL. (Laughs) Well, maybe you can join me. 
And who knows, you might just enjoy the simple joy of a walk in the park.\n" - "LOUIS. (Smiling) I'd like that.\n\n" - "(As they continue chatting, the lights dim, suggesting the budding of a new connection between LOUIS and GIRL.)\n\n" - "(Blackout)\n" - ) - - # Define the prompt to transform the user's inputs into a stage play script - prompt_text = ( - f"{example_script}\n\n" - f"Generate a new stage play script based on the following details:\n" - f"Location: Cafe\n" - f"Character 1: {name1}\n" - f"Character 2: {name2}\n" - f"Situation: {situation}\n" - ) - - # Use OpenAI's Completion endpoint to generate the stage play script - response = openai.Completion.create(engine="davinci", prompt=prompt_text, max_tokens=500) - script = response.choices[0].text.strip() - - return script - -# Define Gradio Interface -iface = gr.Interface( - fn=generate_script, - inputs=[ - gr.components.Textbox(label="OpenAI API Key", type="password"), - gr.components.Textbox(label="Name 1", type="text"), - gr.components.Textbox(label="Name 2", type="text"), - gr.components.Textbox(label="Situation", type="text"), - ], - outputs=gr.components.Textbox(label="Generated Stage Play Script", type="text"), - live=True, - title="Stage Play Script Generator", - description="Generate a stage play script set in a cafe based on given characters and situation using OpenAI!", - progress="Generating stage play script...", -) - -# Launch the Gradio app -iface.launch() diff --git a/spaces/CVH-vn1210/make_hair/README.md b/spaces/CVH-vn1210/make_hair/README.md deleted file mode 100644 index 5314fec920994ce967a502d70280dc361807dd83..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MiniGPT-4 -emoji: 🚀 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: other -duplicated_from: Vision-CAIR/minigpt4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/datasets/datasets/cc_combine_dataset.py b/spaces/CVH-vn1210/make_hair/minigpt4/datasets/datasets/cc_combine_dataset.py deleted file mode 100644 index def863d405a4bbe34b8a46c7d9a3220efec2aaf6..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/minigpt4/datasets/datasets/cc_combine_dataset.py +++ /dev/null @@ -1,53 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" -import os -from PIL import Image -import webdataset as wds -from minigpt4.datasets.datasets.base_dataset import BaseDataset -from minigpt4.datasets.datasets.caption_datasets import CaptionDataset - - -class CCCombineDataset(BaseDataset): - def __init__(self, vis_processor, text_processor, location): - super().__init__(vis_processor=vis_processor, text_processor=text_processor) - - self.inner_dataset = wds.DataPipeline( - wds.ResampledShards(location), - wds.tarfile_to_samples(handler=wds.warn_and_continue), - wds.shuffle(1000, handler=wds.warn_and_continue), - wds.decode("pilrgb", handler=wds.warn_and_continue), - wds.to_tuple("jpg", "json", handler=wds.warn_and_continue), - wds.map_tuple(self.vis_processor, handler=wds.warn_and_continue), - wds.map(self.to_dict, handler=wds.warn_and_continue), - ) - - def to_dict(self, sample): - return { - "image": sample[0], - "text_input": self.text_processor(sample[1]["caption"]), - } - - -class CCAlignDataset(CaptionDataset): - - def __getitem__(self, index): - - # TODO this assumes image input, not general enough - ann = self.annotation[index] - - img_file = '{}.jpg'.format(ann["image_id"]) - image_path = os.path.join(self.vis_root, img_file) - image = Image.open(image_path).convert("RGB") - - image = self.vis_processor(image) - caption = ann["caption"] - - return { - "image": image, - "text_input": caption, - "image_id": self.img_ids[ann["image_id"]], - } \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5e8aaa2d3722e7e73a3d94b2b7dfc4f751d7a240..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,5 +0,0 @@ - -Please select an issue template from -https://github.com/facebookresearch/detectron2/issues/new/choose . - -Otherwise your issue will be closed. diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_rcnn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_rcnn.py deleted file mode 100644 index f458b25c44f104b783af7158740c175b0f3e36b2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_rcnn.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from detectron2.layers import batched_nms -from detectron2.modeling import ROI_HEADS_REGISTRY, StandardROIHeads -from detectron2.modeling.roi_heads.roi_heads import Res5ROIHeads -from detectron2.structures import Instances - - -def merge_branch_instances(instances, num_branch, nms_thrsh, topk_per_image): - """ - Merge detection results from different branches of TridentNet. - Return detection results by applying non-maximum suppression (NMS) on bounding boxes - and keep the unsuppressed boxes and other instances (e.g mask) if any. - - Args: - instances (list[Instances]): A list of N * num_branch instances that store detection - results. Contain N images and each image has num_branch instances. - num_branch (int): Number of branches used for merging detection results for each image. 
- nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1]. - topk_per_image (int): The number of top scoring detections to return. Set < 0 to return - all detections. - - Returns: - results: (list[Instances]): A list of N instances, one for each image in the batch, - that stores the topk most confidence detections after merging results from multiple - branches. - """ - if num_branch == 1: - return instances - - batch_size = len(instances) // num_branch - results = [] - for i in range(batch_size): - instance = Instances.cat([instances[i + batch_size * j] for j in range(num_branch)]) - - # Apply per-class NMS - keep = batched_nms( - instance.pred_boxes.tensor, instance.scores, instance.pred_classes, nms_thrsh - ) - keep = keep[:topk_per_image] - result = instance[keep] - - results.append(result) - - return results - - -@ROI_HEADS_REGISTRY.register() -class TridentRes5ROIHeads(Res5ROIHeads): - """ - The TridentNet ROIHeads in a typical "C4" R-CNN model. - See :class:`Res5ROIHeads`. - """ - - def __init__(self, cfg, input_shape): - super().__init__(cfg, input_shape) - - self.num_branch = cfg.MODEL.TRIDENT.NUM_BRANCH - self.trident_fast = cfg.MODEL.TRIDENT.TEST_BRANCH_IDX != -1 - - def forward(self, images, features, proposals, targets=None): - """ - See :class:`Res5ROIHeads.forward`. - """ - num_branch = self.num_branch if self.training or not self.trident_fast else 1 - all_targets = targets * num_branch if targets is not None else None - pred_instances, losses = super().forward(images, features, proposals, all_targets) - del images, all_targets, targets - - if self.training: - return pred_instances, losses - else: - pred_instances = merge_branch_instances( - pred_instances, num_branch, self.test_nms_thresh, self.test_detections_per_img - ) - - return pred_instances, {} - - -@ROI_HEADS_REGISTRY.register() -class TridentStandardROIHeads(StandardROIHeads): - """ - The `StandardROIHeads` for TridentNet. - See :class:`StandardROIHeads`. - """ - - def __init__(self, cfg, input_shape): - super(TridentStandardROIHeads, self).__init__(cfg, input_shape) - - self.num_branch = cfg.MODEL.TRIDENT.NUM_BRANCH - self.trident_fast = cfg.MODEL.TRIDENT.TEST_BRANCH_IDX != -1 - - def forward(self, images, features, proposals, targets=None): - """ - See :class:`Res5ROIHeads.forward`. - """ - # Use 1 branch if using trident_fast during inference. - num_branch = self.num_branch if self.training or not self.trident_fast else 1 - # Duplicate targets for all branches in TridentNet. - all_targets = targets * num_branch if targets is not None else None - pred_instances, losses = super().forward(images, features, proposals, all_targets) - del images, all_targets, targets - - if self.training: - return pred_instances, losses - else: - pred_instances = merge_branch_instances( - pred_instances, num_branch, self.test_nms_thresh, self.test_detections_per_img - ) - - return pred_instances, {} diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_data_loader.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_data_loader.py deleted file mode 100644 index bdd94dd92366418347cc74a58e807240fd795111..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_data_loader.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
- -import copy -import numpy as np -import unittest -import pycocotools.mask as mask_util - -from detectron2.data import detection_utils -from detectron2.data import transforms as T -from detectron2.structures import BitMasks, BoxMode - - -class TestTransformAnnotations(unittest.TestCase): - def test_transform_simple_annotation(self): - transforms = T.TransformList([T.HFlipTransform(400)]) - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "category_id": 3, - "segmentation": [[10, 10, 100, 100, 100, 10], [150, 150, 200, 150, 200, 200]], - } - - output = detection_utils.transform_instance_annotations(anno, transforms, (400, 400)) - self.assertTrue(np.allclose(output["bbox"], [200, 10, 390, 300])) - self.assertEqual(len(output["segmentation"]), len(anno["segmentation"])) - self.assertTrue(np.allclose(output["segmentation"][0], [390, 10, 300, 100, 300, 10])) - - detection_utils.annotations_to_instances([output, output], (400, 400)) - - def test_flip_keypoints(self): - transforms = T.TransformList([T.HFlipTransform(400)]) - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "keypoints": np.random.rand(17, 3) * 50 + 15, - } - - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), - transforms, - (400, 400), - keypoint_hflip_indices=detection_utils.create_keypoint_hflip_indices( - ["keypoints_coco_2017_train"] - ), - ) - # The first keypoint is nose - self.assertTrue(np.allclose(output["keypoints"][0, 0], 400 - anno["keypoints"][0, 0])) - # The last 16 keypoints are 8 left-right pairs - self.assertTrue( - np.allclose( - output["keypoints"][1:, 0].reshape(-1, 2)[:, ::-1], - 400 - anno["keypoints"][1:, 0].reshape(-1, 2), - ) - ) - self.assertTrue( - np.allclose( - output["keypoints"][1:, 1:].reshape(-1, 2, 2)[:, ::-1, :], - anno["keypoints"][1:, 1:].reshape(-1, 2, 2), - ) - ) - - def test_transform_RLE(self): - transforms = T.TransformList([T.HFlipTransform(400)]) - mask = np.zeros((300, 400), order="F").astype("uint8") - mask[:, :200] = 1 - - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "segmentation": mask_util.encode(mask[:, :, None])[0], - "category_id": 3, - } - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), transforms, (300, 400) - ) - mask = output["segmentation"] - self.assertTrue((mask[:, 200:] == 1).all()) - self.assertTrue((mask[:, :200] == 0).all()) - - inst = detection_utils.annotations_to_instances( - [output, output], (400, 400), mask_format="bitmask" - ) - self.assertTrue(isinstance(inst.gt_masks, BitMasks)) - - def test_transform_RLE_resize(self): - transforms = T.TransformList( - [T.HFlipTransform(400), T.ScaleTransform(300, 400, 400, 400, "bilinear")] - ) - mask = np.zeros((300, 400), order="F").astype("uint8") - mask[:, :200] = 1 - - anno = { - "bbox": np.asarray([10, 10, 200, 300]), - "bbox_mode": BoxMode.XYXY_ABS, - "segmentation": mask_util.encode(mask[:, :, None])[0], - "category_id": 3, - } - output = detection_utils.transform_instance_annotations( - copy.deepcopy(anno), transforms, (400, 400) - ) - - inst = detection_utils.annotations_to_instances( - [output, output], (400, 400), mask_format="bitmask" - ) - self.assertTrue(isinstance(inst.gt_masks, BitMasks)) - - def test_gen_crop(self): - instance = {"bbox": [10, 10, 100, 100], "bbox_mode": BoxMode.XYXY_ABS} - t = detection_utils.gen_crop_transform_with_instance((10, 10), (150, 150), instance) - # the box center must fall into the cropped region - 
self.assertTrue(t.x0 <= 55 <= t.x0 + t.w) - - def test_gen_crop_outside_boxes(self): - instance = {"bbox": [10, 10, 100, 100], "bbox_mode": BoxMode.XYXY_ABS} - with self.assertRaises(AssertionError): - detection_utils.gen_crop_transform_with_instance((10, 10), (15, 15), instance) diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/transform.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/transform.h deleted file mode 100644 index 544da5cb9efd4771f7638226e8bbcf8b74d14a3c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/transform.h +++ /dev/null @@ -1,163 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ - -// TODO: Move into system::cuda - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2014 - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC - -#include - -#include -#include -#include -#include -#include -#include - -#include - -namespace thrust -{ - -namespace system { namespace cuda { namespace detail -{ - -template -struct async_transform_fn -{ - ForwardIt first_; - OutputIt output_; - UnaryOperation op_; - - __host__ __device__ - async_transform_fn(ForwardIt&& first, OutputIt&& output, UnaryOperation&& op) - : first_(std::move(first)), output_(std::move(output)), op_(std::move(op)) - {} - - template - __host__ __device__ - void operator()(Index idx) - { - output_[idx] = op_(thrust::raw_reference_cast(first_[idx])); - } -}; - -template < - typename DerivedPolicy -, typename ForwardIt, typename Size, typename OutputIt, typename UnaryOperation -> -auto async_transform_n( - execution_policy& policy, - ForwardIt first, - Size n, - OutputIt output, - UnaryOperation op -) -> unique_eager_event -{ - unique_eager_event e; - - // Set up stream with dependencies. 
- - cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(policy); - - if (thrust::cuda_cub::default_stream() != user_raw_stream) - { - e = make_dependent_event( - std::tuple_cat( - std::make_tuple( - unique_stream(nonowning, user_raw_stream) - ) - , extract_dependencies( - std::move(thrust::detail::derived_cast(policy)) - ) - ) - ); - } - else - { - e = make_dependent_event( - extract_dependencies( - std::move(thrust::detail::derived_cast(policy)) - ) - ); - } - - // Run transform. - - async_transform_fn wrapped( - std::move(first), std::move(output), std::move(op) - ); - - thrust::cuda_cub::throw_on_error( - thrust::cuda_cub::__parallel_for::parallel_for( - n, std::move(wrapped), e.stream().native_handle() - ) - , "after transform launch" - ); - - return e; -} - -}}} // namespace system::cuda::detail - -namespace cuda_cub -{ - -// ADL entry point. -template < - typename DerivedPolicy -, typename ForwardIt, typename Sentinel, typename OutputIt -, typename UnaryOperation -> -auto async_transform( - execution_policy& policy, - ForwardIt first, - Sentinel last, - OutputIt output, - UnaryOperation&& op -) -THRUST_RETURNS( - thrust::system::cuda::detail::async_transform_n( - policy, first, distance(first, last), output, THRUST_FWD(op) - ) -); - -} // cuda_cub - -} // end namespace thrust - -#endif // THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC - -#endif - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/temporary_buffer.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/temporary_buffer.h deleted file mode 100644 index 0cada5ee4b10a9fc36d19f80a276bb19ef7fff6d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/temporary_buffer.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the temporary_buffer.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch get_temporary_buffer or return_temporary_buffer - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_TEMPORARY_BUFFER_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/temporary_buffer.h> -#include __THRUST_HOST_SYSTEM_TEMPORARY_BUFFER_HEADER -#undef __THRUST_HOST_SYSTEM_TEMPORARY_BUFFER_HEADER - -#define __THRUST_DEVICE_SYSTEM_TEMPORARY_BUFFER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/temporary_buffer.h> -#include __THRUST_DEVICE_SYSTEM_TEMPORARY_BUFFER_HEADER -#undef __THRUST_DEVICE_SYSTEM_TEMPORARY_BUFFER_HEADER - diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/data_structurization/autsl.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/data_structurization/autsl.py deleted file mode 100644 index fb0af84e1263e5eed3f1d8f7420166092039b0dd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/data_structurization/autsl.py +++ /dev/null @@ -1,22 +0,0 @@ - -import os -import tqdm - -import pandas as pd -from shutil import copyfile - - -MAIN_PATH = "/Users/matyasbohacek/Documents/Academics/Projects/AUTSL" -BATCH = "test" - -df = pd.read_csv(MAIN_PATH + "/" + BATCH + "_labels.csv", encoding="utf-8", sep=";") - -if not os.path.exists(MAIN_PATH + "/" + BATCH + "_preprocessed/"): - os.mkdir(MAIN_PATH + "/" + BATCH + "_preprocessed/") - -for index_row, row in tqdm.tqdm(df.iterrows()): - if not os.path.exists(MAIN_PATH + "/" + BATCH + "_preprocessed/" + str(row["label"]) + "/"): - os.mkdir(MAIN_PATH + "/" + BATCH + "_preprocessed/" + str(row["label"]) + "/") - - copyfile(MAIN_PATH + "/" + BATCH + "/" + str(row["video"]) + "_color.mp4", MAIN_PATH + "/" + BATCH + "_preprocessed/" + str(row["label"]) + "/" + str(row["video"]) + "_color.mp4") - diff --git a/spaces/CVPR/lama-example/bin/make_checkpoint.py b/spaces/CVPR/lama-example/bin/make_checkpoint.py deleted file mode 100644 index 322147483915bef758770ae931e705e56083fa8d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/bin/make_checkpoint.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env python3 - -import os -import shutil - -import torch - - -def get_checkpoint_files(s): - s = s.strip() - if ',' in s: - return [get_checkpoint_files(chunk) for chunk in s.split(',')] - return 'last.ckpt' if s == 'last' else f'{s}.ckpt' - - -def main(args): - checkpoint_fnames = get_checkpoint_files(args.epochs) - if isinstance(checkpoint_fnames, str): - checkpoint_fnames = [checkpoint_fnames] - assert len(checkpoint_fnames) >= 1 - - checkpoint_path = os.path.join(args.indir, 'models', checkpoint_fnames[0]) - checkpoint = torch.load(checkpoint_path, map_location='cpu') - del checkpoint['optimizer_states'] - - if len(checkpoint_fnames) > 1: - for fname in checkpoint_fnames[1:]: - print('sum', fname) - sum_tensors_cnt = 0 - other_cp = torch.load(os.path.join(args.indir, 'models', fname), map_location='cpu') - for k in checkpoint['state_dict'].keys(): - if checkpoint['state_dict'][k].dtype is torch.float: - checkpoint['state_dict'][k].data.add_(other_cp['state_dict'][k].data) - sum_tensors_cnt += 1 - print('summed', sum_tensors_cnt, 'tensors') - - for k in checkpoint['state_dict'].keys(): - if checkpoint['state_dict'][k].dtype is torch.float: - checkpoint['state_dict'][k].data.mul_(1 / float(len(checkpoint_fnames))) - - state_dict = checkpoint['state_dict'] - - if not args.leave_discriminators: - for k in list(state_dict.keys()): - if k.startswith('discriminator.'): - del state_dict[k] - - if not args.leave_losses: - for k in list(state_dict.keys()): - if k.startswith('loss_'): - del 
state_dict[k] - - out_checkpoint_path = os.path.join(args.outdir, 'models', 'best.ckpt') - os.makedirs(os.path.dirname(out_checkpoint_path), exist_ok=True) - - torch.save(checkpoint, out_checkpoint_path) - - shutil.copy2(os.path.join(args.indir, 'config.yaml'), - os.path.join(args.outdir, 'config.yaml')) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('indir', - help='Path to directory with output of training ' - '(i.e. directory, which has samples, modules, config.yaml and train.log') - aparser.add_argument('outdir', - help='Where to put minimal checkpoint, which can be consumed by "bin/predict.py"') - aparser.add_argument('--epochs', type=str, default='last', - help='Which checkpoint to take. ' - 'Can be "last" or integer - number of epoch') - aparser.add_argument('--leave-discriminators', action='store_true', - help='If enabled, the state of discriminators will not be removed from the checkpoint') - aparser.add_argument('--leave-losses', action='store_true', - help='If enabled, weights of nn-based losses (e.g. perceptual) will not be removed') - - main(aparser.parse_args()) diff --git a/spaces/CanonOverseer/Canons-Den/README.md b/spaces/CanonOverseer/Canons-Den/README.md deleted file mode 100644 index 508b1aba8b05b671595cf14fe19c992259017889..0000000000000000000000000000000000000000 --- a/spaces/CanonOverseer/Canons-Den/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Test Proxy -emoji: 🦀 -colorFrom: yellow -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CarlDennis/HYTTS/text/croatia_to_ipa.py b/spaces/CarlDennis/HYTTS/text/croatia_to_ipa.py deleted file mode 100644 index 1f42ccbd9eb1b37f46c5d22b51f5acb50bcbe0f7..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/HYTTS/text/croatia_to_ipa.py +++ /dev/null @@ -1,8 +0,0 @@ -# -*- coding: utf-8 -*- - -import epitran - -def croatian_to_ipa(text): - epi = epitran.Epitran("hrv-Latn") - ipa_text = epi.transliterate(text) - return ipa_text diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/nsf_hifigan/models.py b/spaces/ChrisPreston/diff-svc_minato_aqua/modules/nsf_hifigan/models.py deleted file mode 100644 index eb2c5de073e42c6bcd17996cd59f5eb57e05831b..0000000000000000000000000000000000000000 --- a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/nsf_hifigan/models.py +++ /dev/null @@ -1,558 +0,0 @@ -import json -import os - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from .env import AttrDict -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path, map_location="cpu") - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - 
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if h.resblock == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(h.upsample_initial_channel // (2 ** i), h.upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class 
SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. 
- i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (idx + 2) - - # generate sine waveforms - sine_waves = self._f02sine(f0_buf) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h.upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=h.sampling_rate, - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if h.resblock == 
'1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - c_cur = h.upsample_initial_channel // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h.upsample_initial_channel // (2 ** i), h.upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h.upsample_rates): # - stride_f0 = np.prod(h.upsample_rates[i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x, f0): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() 
- for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/CikeyQI/Yunzai/Yunzai/lib/tools/test.js b/spaces/CikeyQI/Yunzai/Yunzai/lib/tools/test.js deleted file mode 100644 index 1bccf5ade7e32f0de2a5e945d4e27527da785998..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/lib/tools/test.js +++ /dev/null @@ -1,11 +0,0 @@ -import command from './command.js' - -/** - * npm test 十连 - * 配置数据config/test/defult.yaml - */ -await command.run() -// await command.run('bingCk') -// await command.run('gachaLog') -// await command.run('xlsx') -process.exit() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_O_L_R_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_O_L_R_.py deleted 
file mode 100644 index b4bc5d0c200e58f793fff6d3ffe95b2d76d36c64..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/C_O_L_R_.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod - -from fontTools.misc.textTools import safeEval -from . import DefaultTable - - -class table_C_O_L_R_(DefaultTable.DefaultTable): - - """This table is structured so that you can treat it like a dictionary keyed by glyph name. - - ``ttFont['COLR'][]`` will return the color layers for any glyph. - - ``ttFont['COLR'][] = `` will set the color layers for any glyph. - """ - - @staticmethod - def _decompileColorLayersV0(table): - if not table.LayerRecordArray: - return {} - colorLayerLists = {} - layerRecords = table.LayerRecordArray.LayerRecord - numLayerRecords = len(layerRecords) - for baseRec in table.BaseGlyphRecordArray.BaseGlyphRecord: - baseGlyph = baseRec.BaseGlyph - firstLayerIndex = baseRec.FirstLayerIndex - numLayers = baseRec.NumLayers - assert firstLayerIndex + numLayers <= numLayerRecords - layers = [] - for i in range(firstLayerIndex, firstLayerIndex + numLayers): - layerRec = layerRecords[i] - layers.append(LayerRecord(layerRec.LayerGlyph, layerRec.PaletteIndex)) - colorLayerLists[baseGlyph] = layers - return colorLayerLists - - def _toOTTable(self, ttFont): - from . import otTables - from fontTools.colorLib.builder import populateCOLRv0 - - tableClass = getattr(otTables, self.tableTag) - table = tableClass() - table.Version = self.version - - populateCOLRv0( - table, - { - baseGlyph: [(layer.name, layer.colorID) for layer in layers] - for baseGlyph, layers in self.ColorLayers.items() - }, - glyphMap=ttFont.getReverseGlyphMap(rebuild=True), - ) - return table - - def decompile(self, data, ttFont): - from .otBase import OTTableReader - from . import otTables - - # We use otData to decompile, but we adapt the decompiled otTables to the - # existing COLR v0 API for backward compatibility. 
- reader = OTTableReader(data, tableTag=self.tableTag) - tableClass = getattr(otTables, self.tableTag) - table = tableClass() - table.decompile(reader, ttFont) - - self.version = table.Version - if self.version == 0: - self.ColorLayers = self._decompileColorLayersV0(table) - else: - # for new versions, keep the raw otTables around - self.table = table - - def compile(self, ttFont): - from .otBase import OTTableWriter - - if hasattr(self, "table"): - table = self.table - else: - table = self._toOTTable(ttFont) - - writer = OTTableWriter(tableTag=self.tableTag) - table.compile(writer, ttFont) - return writer.getAllData() - - def toXML(self, writer, ttFont): - if hasattr(self, "table"): - self.table.toXML2(writer, ttFont) - else: - writer.simpletag("version", value=self.version) - writer.newline() - for baseGlyph in sorted(self.ColorLayers.keys(), key=ttFont.getGlyphID): - writer.begintag("ColorGlyph", name=baseGlyph) - writer.newline() - for layer in self.ColorLayers[baseGlyph]: - layer.toXML(writer, ttFont) - writer.endtag("ColorGlyph") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": # old COLR v0 API - setattr(self, name, safeEval(attrs["value"])) - elif name == "ColorGlyph": - if not hasattr(self, "ColorLayers"): - self.ColorLayers = {} - glyphName = attrs["name"] - for element in content: - if isinstance(element, str): - continue - layers = [] - for element in content: - if isinstance(element, str): - continue - layer = LayerRecord() - layer.fromXML(element[0], element[1], element[2], ttFont) - layers.append(layer) - self.ColorLayers[glyphName] = layers - else: # new COLR v1 API - from . import otTables - - if not hasattr(self, "table"): - tableClass = getattr(otTables, self.tableTag) - self.table = tableClass() - self.table.fromXML(name, attrs, content, ttFont) - self.table.populateDefaults() - self.version = self.table.Version - - def __getitem__(self, glyphName): - if not isinstance(glyphName, str): - raise TypeError(f"expected str, found {type(glyphName).__name__}") - return self.ColorLayers[glyphName] - - def __setitem__(self, glyphName, value): - if not isinstance(glyphName, str): - raise TypeError(f"expected str, found {type(glyphName).__name__}") - if value is not None: - self.ColorLayers[glyphName] = value - elif glyphName in self.ColorLayers: - del self.ColorLayers[glyphName] - - def __delitem__(self, glyphName): - del self.ColorLayers[glyphName] - - -class LayerRecord(object): - def __init__(self, name=None, colorID=None): - self.name = name - self.colorID = colorID - - def toXML(self, writer, ttFont): - writer.simpletag("layer", name=self.name, colorID=self.colorID) - writer.newline() - - def fromXML(self, eltname, attrs, content, ttFont): - for (name, value) in attrs.items(): - if name == "name": - setattr(self, name, value) - else: - setattr(self, name, safeEval(value)) diff --git a/spaces/DiegoLigtenberg/realtimespeech/app.py b/spaces/DiegoLigtenberg/realtimespeech/app.py deleted file mode 100644 index 6ecfb95e0161dd4506e83cc5a1063b29e05b585f..0000000000000000000000000000000000000000 --- a/spaces/DiegoLigtenberg/realtimespeech/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import streamlit as st -from models import BagOfModels, SoundToText, TextToSummary -from settings import MODEL_PARSER -args = MODEL_PARSER - -st.set_page_config( - page_title="TTS Applications | Incore Solutions", - layout="wide", - menu_items={ - "About": """This is a simple GUI for OpenAI's Whisper.""", - }, -) - -def open_instructions(): - with 
open("instructions.md", "r") as f: - st.write(f.read()) - -# Render input type selection on the sidebar & the form -input_type = st.sidebar.selectbox("Input Type", ["YouTube", "File"]) - -with st.sidebar.form("input_form"): - if input_type == "YouTube": - youtube_url = st.text_input("Youtube URL") - elif input_type == "File": - input_file = st.file_uploader("File", type=["mp3", "wav"]) - - whisper_model = st.selectbox("Whisper model", options = [whisper for whisper in BagOfModels.get_model_names() if "whisper" in whisper] , index=1) - - summary = st.checkbox("summarize") - if summary: - min_sum = st.number_input("Minimum words in the summary", min_value=1, step=1) - max_sum = min(min_sum,st.number_input("Maximum words in the summary", min_value=2, step=1)) - st.form_submit_button(label="Save settings") -with st.sidebar.form("save settings"): - transcribe = st.form_submit_button(label="Transcribe!") - - -if transcribe: - if input_type == "YouTube": - if youtube_url and youtube_url.startswith("http"): - model = BagOfModels.load_model(whisper_model,**vars(args)) - st.session_state.transcription = model.predict_stt(source=youtube_url,source_type=input_type,model_task="stt") - else: - st.error("Please enter a valid YouTube URL") - open_instructions() - - elif input_type == "File": - if input_file: - model = BagOfModels.load_model(whisper_model,**vars(args)) - st.session_state.transcription = model.predict_stt(source=input_file,source_type=input_type,model_task="stt") - else: - st.error("Please upload a file") - -if "transcription" in st.session_state: - st.session_state.transcription.whisper() - - # create two columns to separate page and youtube video - transcription_col, media_col = st.columns(2) - - with transcription_col: - st.markdown("#### Audio") - with open(st.session_state.transcription.audio_path, "rb") as f: - st.audio(f.read()) - st.markdown("---") - st.markdown(f"#### Transcription (whisper model - `{whisper_model}`)") - st.markdown(f"##### Language: `{st.session_state.transcription.language}`") - - # Trim raw transcribed output off tokens to simplify - raw_output = st.expander("Raw output") - raw_output.markdown(st.session_state.transcription.raw_output["text"]) - - if summary: - summarized_output = st.expander("summarized output") - # CURRENTLY ONLY SUPPORTS 1024 WORD TOKENS -> TODO: FIND METHOD TO INCREASE SUMMARY FOR LONGER VIDS -> 1024 * 4 = aprox 800 words within 1024 range - text_summary = TextToSummary(str(st.session_state.transcription.text[:1024*4]),min_sum,max_sum).get_summary() - summarized_output.markdown(text_summary[0]["summary_text"]) - - # Show transcription in format with timers added to text - time_annotated_output = st.expander("time_annotated_output") - for segment in st.session_state.transcription.segments: - time_annotated_output.markdown( - f"""[{round(segment["start"], 1)} - {round(segment["end"], 1)}] - {segment["text"]}""" - ) - - # Show input youtube video - with media_col: - if input_type == "YouTube": - st.markdown("---") - st.markdown("#### Original YouTube Video") - st.video(st.session_state.transcription.source) -else: - pass - diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/interpolation.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/interpolation.py deleted file mode 100644 index b578881834f4333d7e386e5a8f142e3a98a3252c..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/interpolation.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- -# interpolate between two z code -# score all middle latent code -# https://www.aiuai.cn/aifarm1929.html - -import os -import re -from typing import List -from tqdm import tqdm -import click -import dnnlib -import numpy as np -import PIL.Image -import torch -import click -import legacy -import random -from typing import List, Optional - - -def lerp(code1, code2, alpha): - return code1 * alpha + code2 * (1 - alpha) - -# Taken and adapted from wikipedia's slerp article -# https://en.wikipedia.org/wiki/Slerp - - -def slerp(code1, code2, alpha, DOT_THRESHOLD=0.9995): # Spherical linear interpolation - code1_copy = np.copy(code1) - code2_copy = np.copy(code2) - - code1 = code1 / np.linalg.norm(code1) - code2 = code2 / np.linalg.norm(code2) - dot = np.sum(code1 * code2) - if np.abs(dot) > DOT_THRESHOLD: - return lerp(code1_copy, code2_copy, alpha) - - # Calculate initial angle between v0 and v1 - theta_0 = np.arccos(dot) - sin_theta_0 = np.sin(theta_0) - # Angle at timestep t - theta_t = theta_0 * alpha - sin_theta_t = np.sin(theta_t) - - s0 = np.sin(theta_0 - theta_t) / sin_theta_0 - s1 = sin_theta_t / sin_theta_0 - code3 = s0 * code1_copy + s1 * code2_copy - return code3 - - -def generate_image_from_z(G, z, noise_mode, truncation_psi, device): - label = torch.zeros([1, G.c_dim], device=device) - w = G.mapping(z, label, truncation_psi=truncation_psi) - img = G.synthesis(w, noise_mode=noise_mode, force_fp32=True) - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - img = PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB') - return img - - -def get_concat_h(im1, im2): - dst = PIL.Image.new('RGB', (im1.width + im2.width, im1.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (im1.width, 0)) - return dst - - -def make_latent_interp_animation(G, code1, code2, img1, img2, num_interps, noise_mode, save_mid_image, truncation_psi, device, outdir, fps): - step_size = 1.0/num_interps - - all_imgs = [] - amounts = np.arange(0, 1, step_size) - for seed_idx, alpha in enumerate(tqdm(amounts)): - interpolated_latent_code = lerp(code1, code2, alpha) - image = generate_image_from_z( - G, interpolated_latent_code, noise_mode, truncation_psi, device) - interp_latent_image = image.resize((512, 1024)) - if not os.path.exists(os.path.join(outdir, 'img')): - os.makedirs(os.path.join(outdir, 'img'), exist_ok=True) - if save_mid_image: - interp_latent_image.save(f'{outdir}/img/seed{seed_idx:04d}.png') - - frame = get_concat_h(img2, interp_latent_image) - frame = get_concat_h(frame, img1) - all_imgs.append(frame) - - save_name = os.path.join(outdir, 'latent_space_traversal.gif') - all_imgs[0].save(save_name, save_all=True, - append_images=all_imgs[1:], duration=1000/fps, loop=0) - - -""" -Create interpolated images between two given seeds using pretrained network pickle. - -Examples: - -\b -python interpolation.py --network=pretrained_models/stylegan_human_v2_1024.pkl --seeds=85,100 --outdir=outputs/inter_gifs - -""" - - -@click.command() -@click.pass_context -@click.option('--network', 'network_pkl', help='Network pickle filename', required=True) -@click.option('--seeds', type=legacy.num_range, help='List of 2 random seeds, e.g. 
1,2') -@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=0.8, show_default=True) -@click.option('--noise-mode', 'noise_mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True) -@click.option('--outdir', default='outputs/inter_gifs', help='Where to save the output images', type=str, required=True, metavar='DIR') -@click.option('--save_mid_image', default=True, type=bool, help='select True if you want to save all interpolated images') -@click.option('--fps', default=15, help='FPS for GIF', type=int) -@click.option('--num_interps', default=100, help='Number of interpolation images', type=int) -def main( - ctx: click.Context, - network_pkl: str, - seeds: Optional[List[int]], - truncation_psi: float, - noise_mode: str, - outdir: str, - save_mid_image: bool, - fps: int, - num_interps: int -): - - device = torch.device('cuda') - with dnnlib.util.open_url(network_pkl) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore - - outdir = os.path.join(outdir) - if not os.path.exists(outdir): - os.makedirs(outdir, exist_ok=True) - os.makedirs(os.path.join(outdir, 'img'), exist_ok=True) - - if len(seeds) > 2: - print("Receiving more than two seeds, only use the first two.") - seeds = seeds[0:2] - elif len(seeds) == 1: - print('Require two seeds, randomly generate two now.') - seeds = [seeds[0], random.randint(0, 10000)] - - z1 = torch.from_numpy(np.random.RandomState( - seeds[0]).randn(1, G.z_dim)).to(device) - z2 = torch.from_numpy(np.random.RandomState( - seeds[1]).randn(1, G.z_dim)).to(device) - img1 = generate_image_from_z(G, z1, noise_mode, truncation_psi, device) - img2 = generate_image_from_z(G, z2, noise_mode, truncation_psi, device) - img1.save(f'{outdir}/seed{seeds[0]:04d}.png') - img2.save(f'{outdir}/seed{seeds[1]:04d}.png') - - make_latent_interp_animation(G, z1, z2, img1, img2, num_interps, - noise_mode, save_mid_image, truncation_psi, device, outdir, fps) - - -if __name__ == "__main__": - main() diff --git a/spaces/DragGan/DragGan/stylegan_human/torch_utils/misc.py b/spaces/DragGan/DragGan/stylegan_human/torch_utils/misc.py deleted file mode 100644 index cd512ab8b61ece35d81ec35f43948a843efbbce1..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/torch_utils/misc.py +++ /dev/null @@ -1,264 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import re -import contextlib -import numpy as np -import torch -import warnings -import dnnlib - -#---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. 
- -_constant_cache = dict() - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -#---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. - -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -#---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -#---------------------------------------------------------------------------- -# Context manager to suppress known warnings in torch.jit.trace(). - -class suppress_tracer_warnings(warnings.catch_warnings): - def __enter__(self): - super().__enter__() - warnings.simplefilter('ignore', category=torch.jit.TracerWarning) - return self - -#---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -#---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). 
- -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -#---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. - -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -#---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. - -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)} - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad) - -#---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -#---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - if ignore_regex is not None and re.fullmatch(ignore_regex, fullname): - continue - tensor = tensor.detach() - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname - -#---------------------------------------------------------------------------- -# Print summary table of module hierarchy. 
- -def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True): - assert isinstance(module, torch.nn.Module) - assert not isinstance(module, torch.jit.ScriptModule) - assert isinstance(inputs, (tuple, list)) - - # Register hooks. - entries = [] - nesting = [0] - def pre_hook(_mod, _inputs): - nesting[0] += 1 - def post_hook(mod, _inputs, outputs): - nesting[0] -= 1 - if nesting[0] <= max_nesting: - outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs] - outputs = [t for t in outputs if isinstance(t, torch.Tensor)] - entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs)) - hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()] - hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()] - - # Run module. - outputs = module(*inputs) - for hook in hooks: - hook.remove() - - # Identify unique outputs, parameters, and buffers. - tensors_seen = set() - for e in entries: - e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen] - e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen] - e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen] - tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs} - - # Filter out redundant entries. - if skip_redundant: - entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)] - - # Construct table. - rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']] - rows += [['---'] * len(rows[0])] - param_total = 0 - buffer_total = 0 - submodule_names = {mod: name for name, mod in module.named_modules()} - for e in entries: - name = '' if e.mod is module else submodule_names[e.mod] - param_size = sum(t.numel() for t in e.unique_params) - buffer_size = sum(t.numel() for t in e.unique_buffers) - output_shapes = [str(list(e.outputs[0].shape)) for t in e.outputs] - output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs] - rows += [[ - name + (':0' if len(e.outputs) >= 2 else ''), - str(param_size) if param_size else '-', - str(buffer_size) if buffer_size else '-', - (output_shapes + ['-'])[0], - (output_dtypes + ['-'])[0], - ]] - for idx in range(1, len(e.outputs)): - rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]] - param_total += param_size - buffer_total += buffer_size - rows += [['---'] * len(rows[0])] - rows += [['Total', str(param_total), str(buffer_total), '-', '-']] - - # Print table. 
- widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths))) - print() - return outputs - -#---------------------------------------------------------------------------- diff --git a/spaces/Duskfallcrew/DreamlikeArt-PhotoReal-2.0/app.py b/spaces/Duskfallcrew/DreamlikeArt-PhotoReal-2.0/app.py deleted file mode 100644 index 158506d01dfab4287db19ce0ed0d99f9febe6ad8..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/DreamlikeArt-PhotoReal-2.0/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import random -import string -import time -from queue import Queue -from threading import Thread -import emoji - - -text_gen=gr.Interface.load("spaces/phenomenon1981/MagicPrompt-Stable-Diffusion") -def get_prompts(prompt_text): - return text_gen("photo, " + prompt_text) -proc1=gr.Interface.load("models/dreamlike-art/dreamlike-photoreal-2.0") - -def restart_script_periodically(): - while True: - time.sleep(600) # 10 minutes - try: - os.execl(sys.executable, sys.executable, *sys.argv) - except: - pass - -restart_thread = Thread(target=restart_script_periodically, daemon=True) -restart_thread.start() - -queue = Queue() -queue_threshold = 800 - -def add_random_noise(prompt, noise_level=0.00): - if noise_level == 0: - noise_level = 0.00 - # Get the percentage of characters to add as noise - percentage_noise = noise_level * 5 - # Get the number of characters to add as noise - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - # Get the indices of the characters to add noise to - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - # Add noise to the selected characters - prompt_list = list(prompt) - # Add numbers, special characters, and all emojis to the list of characters used to add noise - noise_chars = string.ascii_letters + string.punctuation + ' ' + string.digits + emoji.emojize(":all:") - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - - -def send_it1(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - output1 = proc1(prompt_with_noise) - return output1 - -def send_it2(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - output2 = proc1(prompt_with_noise) - return output2 - -#def send_it3(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #output3 = proc1(prompt_with_noise) - #return output3 - -#def send_it4(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #output4 = proc1(prompt_with_noise) - #return output4 - - - -with gr.Blocks(css='style.css') as demo: - gr.HTML( - """ -
-        Dreamlike Photoreal 2.0
-        Noise Level: Controls how much randomness is added to the input before it is sent to the model. Higher noise level produces more diverse outputs, while lower noise level produces similar outputs, created by Phenomenon1981.
-        ❤️ Press the Like Button if you enjoy my space! ❤️
      - """ - ) - with gr.Column(elem_id="col-container"): - with gr.Row(variant="compact"): - input_text = gr.Textbox( - label="Short Prompt", - show_label=False, - max_lines=2, - placeholder="Enter a basic idea and click 'Magic Prompt'", - ).style( - container=False, - ) - see_prompts = gr.Button("✨ Magic Prompt ✨").style(full_width=False) - - - with gr.Row(variant="compact"): - prompt = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=2, - placeholder="Full Prompt", - ).style( - container=False, - ) - run = gr.Button("Generate Images").style(full_width=False) - - with gr.Row(): - with gr.Row(): - noise_level = gr.Slider(minimum=0.0, maximum=3, step=0.1, label="Noise Level") - with gr.Row(): - with gr.Row(): - output1=gr.Image(label="Dreamlike-photoreal-2.0",show_label=False) - output2=gr.Image(label="Dreamlike-photoreal-2.0",show_label=False) - - #with gr.Row(): - #output1=gr.Image() - - see_prompts.click(get_prompts, inputs=[input_text], outputs=[prompt], queue=False) - run.click(send_it1, inputs=[prompt, noise_level], outputs=[output1]) - run.click(send_it2, inputs=[prompt, noise_level], outputs=[output2]) - - - - with gr.Row(): - gr.HTML( - """ - -
-        Unleash your creative side and generate mesmerizing images with just a few clicks! Enter a spark of inspiration in the "Basic Idea" text box and click the "Magic Prompt" button to elevate it to a polished masterpiece. Make any final tweaks in the "Full Prompt" box and hit the "Generate Images" button to watch your vision come to life. Experiment with the "Noise Level" for a diverse range of outputs, from similar to wildly unique. Let the fun begin!
      - """ -) - - demo.launch(enable_queue=True, inline=True) - block.queue(concurrency_count=100) \ No newline at end of file diff --git a/spaces/ECCV2022/bytetrack/tutorials/qdtrack/mot_online/basetrack.py b/spaces/ECCV2022/bytetrack/tutorials/qdtrack/mot_online/basetrack.py deleted file mode 100644 index 4fe2233607f6d4ed28b11a0ae6c0303c8ca19098..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/qdtrack/mot_online/basetrack.py +++ /dev/null @@ -1,52 +0,0 @@ -import numpy as np -from collections import OrderedDict - - -class TrackState(object): - New = 0 - Tracked = 1 - Lost = 2 - Removed = 3 - - -class BaseTrack(object): - _count = 0 - - track_id = 0 - is_activated = False - state = TrackState.New - - history = OrderedDict() - features = [] - curr_feature = None - score = 0 - start_frame = 0 - frame_id = 0 - time_since_update = 0 - - # multi-camera - location = (np.inf, np.inf) - - @property - def end_frame(self): - return self.frame_id - - @staticmethod - def next_id(): - BaseTrack._count += 1 - return BaseTrack._count - - def activate(self, *args): - raise NotImplementedError - - def predict(self): - raise NotImplementedError - - def update(self, *args, **kwargs): - raise NotImplementedError - - def mark_lost(self): - self.state = TrackState.Lost - - def mark_removed(self): - self.state = TrackState.Removed diff --git a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/models_onnx.py b/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, 
logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def 
__init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the 
(idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - 
self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, 
- upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - 
self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/EronSamez/RVC_HFmeu/tools/infer_cli.py b/spaces/EronSamez/RVC_HFmeu/tools/infer_cli.py deleted file mode 100644 index bbe0a53c1aac6a8f2d42613d554b2bdd07abea2d..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/tools/infer_cli.py +++ /dev/null @@ -1,67 +0,0 @@ -import argparse -import os -import sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -from dotenv import load_dotenv -from scipy.io import wavfile - -from configs.config import Config -from infer.modules.vc.modules import VC - -#### -# USAGE -# -# In your Terminal or CMD or whatever - - -def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--f0up_key", type=int, default=0) - parser.add_argument("--input_path", type=str, help="input path") - parser.add_argument("--index_path", type=str, help="index path") - parser.add_argument("--f0method", type=str, default="harvest", help="harvest or pm") - parser.add_argument("--opt_path", type=str, help="opt path") - parser.add_argument("--model_name", type=str, help="store in assets/weight_root") - parser.add_argument("--index_rate", type=float, default=0.66, help="index rate") - parser.add_argument("--device", type=str, help="device") - parser.add_argument("--is_half", type=bool, help="use half -> True") - parser.add_argument("--filter_radius", type=int, default=3, help="filter radius") - parser.add_argument("--resample_sr", type=int, default=0, 
help="resample sr") - parser.add_argument("--rms_mix_rate", type=float, default=1, help="rms mix rate") - parser.add_argument("--protect", type=float, default=0.33, help="protect") - - args = parser.parse_args() - sys.argv = sys.argv[:1] - - return args - - -def main(): - load_dotenv() - args = arg_parse() - config = Config() - config.device = args.device if args.device else config.device - config.is_half = args.is_half if args.is_half else config.is_half - vc = VC(config) - vc.get_vc(args.model_name) - _, wav_opt = vc.vc_single( - 0, - args.input_path, - args.f0up_key, - None, - args.f0method, - args.index_path, - None, - args.index_rate, - args.filter_radius, - args.resample_sr, - args.rms_mix_rate, - args.protect, - ) - wavfile.write(args.opt_path, wav_opt[0], wav_opt[1]) - - -if __name__ == "__main__": - main() diff --git a/spaces/EuroSciPy2022/classification/README.md b/spaces/EuroSciPy2022/classification/README.md deleted file mode 100644 index bd034c1be2fa804b1331e71c7bf9a2a65808d656..0000000000000000000000000000000000000000 --- a/spaces/EuroSciPy2022/classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Classification -emoji: 🌲 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Faridmaruf/RVCV2MODEL/vc_infer_pipeline.py b/spaces/Faridmaruf/RVCV2MODEL/vc_infer_pipeline.py deleted file mode 100644 index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/vc_infer_pipeline.py +++ /dev/null @@ -1,443 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = 
self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = 
model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - 
inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/solver.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/solver.py deleted file mode 100644 index aaf0b21591b42fa903424f8d44fef88d7d791e57..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/solver.py +++ /dev/null @@ -1,195 +0,0 @@ -import os -import time -import numpy as np -import torch -import librosa -from diffusion.logger.saver import Saver -from diffusion.logger import utils -from torch import autocast -from torch.cuda.amp import GradScaler - -def test(args, model, vocoder, loader_test, saver): - print(' [*] testing...') - model.eval() - - # losses - test_loss = 0. 
- - # intialization - num_batches = len(loader_test) - rtf_all = [] - - # run - with torch.no_grad(): - for bidx, data in enumerate(loader_test): - fn = data['name'][0].split("/")[-1] - speaker = data['name'][0].split("/")[-2] - print('--------') - print('{}/{} - {}'.format(bidx, num_batches, fn)) - - # unpack data - for k in data.keys(): - if not k.startswith('name'): - data[k] = data[k].to(args.device) - print('>>', data['name'][0]) - - # forward - st_time = time.time() - mel = model( - data['units'], - data['f0'], - data['volume'], - data['spk_id'], - gt_spec=None, - infer=True, - infer_speedup=args.infer.speedup, - method=args.infer.method) - signal = vocoder.infer(mel, data['f0']) - ed_time = time.time() - - # RTF - run_time = ed_time - st_time - song_time = signal.shape[-1] / args.data.sampling_rate - rtf = run_time / song_time - print('RTF: {} | {} / {}'.format(rtf, run_time, song_time)) - rtf_all.append(rtf) - - # loss - for i in range(args.train.batch_size): - loss = model( - data['units'], - data['f0'], - data['volume'], - data['spk_id'], - gt_spec=data['mel'], - infer=False) - test_loss += loss.item() - - # log mel - saver.log_spec(f"{speaker}_{fn}.wav", data['mel'], mel) - - # log audi - path_audio = data['name_ext'][0] - audio, sr = librosa.load(path_audio, sr=args.data.sampling_rate) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - audio = torch.from_numpy(audio).unsqueeze(0).to(signal) - saver.log_audio({f"{speaker}_{fn}_gt.wav": audio,f"{speaker}_{fn}_pred.wav": signal}) - # report - test_loss /= args.train.batch_size - test_loss /= num_batches - - # check - print(' [test_loss] test_loss:', test_loss) - print(' Real Time Factor', np.mean(rtf_all)) - return test_loss - - -def train(args, initial_global_step, model, optimizer, scheduler, vocoder, loader_train, loader_test): - # saver - saver = Saver(args, initial_global_step=initial_global_step) - - # model size - params_count = utils.get_network_paras_amount({'model': model}) - saver.log_info('--- model size ---') - saver.log_info(params_count) - - # run - num_batches = len(loader_train) - model.train() - saver.log_info('======= start training =======') - scaler = GradScaler() - if args.train.amp_dtype == 'fp32': - dtype = torch.float32 - elif args.train.amp_dtype == 'fp16': - dtype = torch.float16 - elif args.train.amp_dtype == 'bf16': - dtype = torch.bfloat16 - else: - raise ValueError(' [x] Unknown amp_dtype: ' + args.train.amp_dtype) - saver.log_info("epoch|batch_idx/num_batches|output_dir|batch/s|lr|time|step") - for epoch in range(args.train.epochs): - for batch_idx, data in enumerate(loader_train): - saver.global_step_increment() - optimizer.zero_grad() - - # unpack data - for k in data.keys(): - if not k.startswith('name'): - data[k] = data[k].to(args.device) - - # forward - if dtype == torch.float32: - loss = model(data['units'].float(), data['f0'], data['volume'], data['spk_id'], - aug_shift = data['aug_shift'], gt_spec=data['mel'].float(), infer=False) - else: - with autocast(device_type=args.device, dtype=dtype): - loss = model(data['units'], data['f0'], data['volume'], data['spk_id'], - aug_shift = data['aug_shift'], gt_spec=data['mel'], infer=False) - - # handle nan loss - if torch.isnan(loss): - raise ValueError(' [x] nan loss ') - else: - # backpropagate - if dtype == torch.float32: - loss.backward() - optimizer.step() - else: - scaler.scale(loss).backward() - scaler.step(optimizer) - scaler.update() - scheduler.step() - - # log loss - if saver.global_step % args.train.interval_log == 0: - 
current_lr = optimizer.param_groups[0]['lr'] - saver.log_info( - 'epoch: {} | {:3d}/{:3d} | {} | batch/s: {:.2f} | lr: {:.6} | loss: {:.3f} | time: {} | step: {}'.format( - epoch, - batch_idx, - num_batches, - args.env.expdir, - args.train.interval_log/saver.get_interval_time(), - current_lr, - loss.item(), - saver.get_total_time(), - saver.global_step - ) - ) - - saver.log_value({ - 'train/loss': loss.item() - }) - - saver.log_value({ - 'train/lr': current_lr - }) - - # validation - if saver.global_step % args.train.interval_val == 0: - optimizer_save = optimizer if args.train.save_opt else None - - # save latest - saver.save_model(model, optimizer_save, postfix=f'{saver.global_step}') - last_val_step = saver.global_step - args.train.interval_val - if last_val_step % args.train.interval_force_save != 0: - saver.delete_model(postfix=f'{last_val_step}') - - # run testing set - test_loss = test(args, model, vocoder, loader_test, saver) - - # log loss - saver.log_info( - ' --- --- \nloss: {:.3f}. '.format( - test_loss, - ) - ) - - saver.log_value({ - 'validation/loss': test_loss - }) - - model.train() - - diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/models.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/models.py deleted file mode 100644 index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/models.py +++ /dev/null @@ -1,503 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): 
- for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -def padDiff(x): - return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0) - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = (f0 > self.voiced_threshold).type(torch.float32) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. 
- # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (padDiff(tmp_over_one)) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device)) - - # generate sine waveforms - sine_waves = self._f02sine(fn) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . 
for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - - self.num_kernels = len(h["resblock_kernel_sizes"]) - self.num_upsamples = len(h["upsample_rates"]) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h["sampling_rate"], - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3)) - resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2 - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])): - c_cur = h["upsample_initial_channel"] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h["upsample_rates"]): # - stride_f0 = np.prod(h["upsample_rates"][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h["upsample_initial_channel"] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - 
self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1) - - def forward(self, x, f0, g=None): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - x = x + self.cond(g) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 
41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/README.md b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/README.md deleted file mode 100644 index 12b49aadadfe0ff51c2873b2671c0ca020bc3506..0000000000000000000000000000000000000000 --- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/README.md +++ /dev/null @@ -1,64 +0,0 @@ -PatchMatch based Inpainting -===================================== -This library implements the PatchMatch based inpainting algorithm. It provides both C++ and Python interfaces. -This implementation is heavily based on the implementation by Younesse ANDAM: -(younesse-cv/PatchMatch)[https://github.com/younesse-cv/PatchMatch], with some bugs fix. - -Usage -------------------------------------- - -You need to first install OpenCV to compile the C++ libraries. Then, run `make` to compile the -shared library `libpatchmatch.so`. - -For Python users (example available at `examples/py_example.py`) - -```python -import patch_match - -image = ... # either a numpy ndarray or a PIL Image object. -mask = ... # either a numpy ndarray or a PIL Image object. -result = patch_match.inpaint(image, mask, patch_size=5) -``` - -For C++ users (examples available at `examples/cpp_example.cpp`) - -```cpp -#include "inpaint.h" - -int main() { - cv::Mat image = ... - cv::Mat mask = ... 
- - cv::Mat result = Inpainting(image, mask, 5).run(); - - return 0; -} -``` - - -README and COPYRIGHT by Younesse ANDAM -------------------------------------- -@Author: Younesse ANDAM - -@Contact: younesse.andam@gmail.com - -Description: This project is a personal implementation of an algorithm called PATCHMATCH that restores missing areas in an image. -The algorithm is presented in the following paper - PatchMatch A Randomized Correspondence Algorithm - for Structural Image Editing - by C.Barnes,E.Shechtman,A.Finkelstein and Dan B.Goldman - ACM Transactions on Graphics (Proc. SIGGRAPH), vol.28, aug-2009 - - For more information please refer to - http://www.cs.princeton.edu/gfx/pubs/Barnes_2009_PAR/index.php - -Copyright (c) 2010-2011 - - -Requirements -------------------------------------- - -To run the project you need to install Opencv library and link it to your project. -Opencv can be download it here -http://opencv.org/downloads.html - diff --git a/spaces/Gh6st66/invisiblecat-Uber_Realistic_Porn_Merge_V1.3/app.py b/spaces/Gh6st66/invisiblecat-Uber_Realistic_Porn_Merge_V1.3/app.py deleted file mode 100644 index a92fe1ebc915a50145f533e154c5d631711bcb89..0000000000000000000000000000000000000000 --- a/spaces/Gh6st66/invisiblecat-Uber_Realistic_Porn_Merge_V1.3/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/invisiblecat/Uber_Realistic_Porn_Merge_V1.3").launch() \ No newline at end of file diff --git a/spaces/Gradio-Blocks/anime-colorization/train_danbooru.sh b/spaces/Gradio-Blocks/anime-colorization/train_danbooru.sh deleted file mode 100644 index 10560117162d7604d296356f5feba0ccabcd2f77..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/anime-colorization/train_danbooru.sh +++ /dev/null @@ -1,6 +0,0 @@ - -MODEL_FLAGS="--image_size 32 --guide_size 128 --num_channels 128 --num_res_blocks 3 --learn_sigma True --dropout 0.0" -DIFFUSION_FLAGS="--diffusion_steps 4000 --noise_schedule cosine" -TRAIN_FLAGS="--use_fp16 True --lr 1e-4 --batch_size 128 --schedule_sampler loss-second-moment" - -OPENAI_LOGDIR="./danbooru2017_guided_log" python scripts/pixel_guide_train.py --data_dir data/danbooru2017/anime --guide_dir data/danbooru2017/anime_sketch $MODEL_FLAGS $DIFFUSION_FLAGS $TRAIN_FLAGS diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/README.md deleted file mode 100644 index 1c8ba1cdf74c0207683e41ad905361c671577d6d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/README.md +++ /dev/null @@ -1,47 +0,0 @@ -# CCNet: Criss-Cross Attention for Semantic Segmentation - -## Introduction - - - -```latex -@article{huang2018ccnet, - title={CCNet: Criss-Cross Attention for Semantic Segmentation}, - author={Huang, Zilong and Wang, Xinggang and Huang, Lichao and Huang, Chang and Wei, Yunchao and Liu, Wenyu}, - booktitle={ICCV}, - year={2019} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CCNet | R-50-D8 | 512x1024 | 40000 | 6 | 3.32 | 77.76 | 78.87 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes/ccnet_r50-d8_512x1024_40k_cityscapes_20200616_142517-4123f401.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes/ccnet_r50-d8_512x1024_40k_cityscapes_20200616_142517.log.json) | -| CCNet | R-101-D8 | 512x1024 | 40000 | 9.5 | 2.31 | 76.35 | 78.19 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes/ccnet_r101-d8_512x1024_40k_cityscapes_20200616_142540-a3b84ba6.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes/ccnet_r101-d8_512x1024_40k_cityscapes_20200616_142540.log.json) | -| CCNet | R-50-D8 | 769x769 | 40000 | 6.8 | 1.43 | 78.46 | 79.93 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_40k_cityscapes/ccnet_r50-d8_769x769_40k_cityscapes_20200616_145125-76d11884.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_40k_cityscapes/ccnet_r50-d8_769x769_40k_cityscapes_20200616_145125.log.json) | -| CCNet | R-101-D8 | 769x769 | 40000 | 10.7 | 1.01 | 76.94 | 78.62 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_40k_cityscapes/ccnet_r101-d8_769x769_40k_cityscapes_20200617_101428-4f57c8d0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_40k_cityscapes/ccnet_r101-d8_769x769_40k_cityscapes_20200617_101428.log.json) | -| CCNet | R-50-D8 | 512x1024 | 80000 | - | - | 79.03 | 80.16 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes/ccnet_r50-d8_512x1024_80k_cityscapes_20200617_010421-869a3423.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes/ccnet_r50-d8_512x1024_80k_cityscapes_20200617_010421.log.json) | -| CCNet | R-101-D8 | 512x1024 | 80000 | - | - | 78.87 | 79.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes/ccnet_r101-d8_512x1024_80k_cityscapes_20200617_203935-ffae8917.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes/ccnet_r101-d8_512x1024_80k_cityscapes_20200617_203935.log.json) | -| CCNet | R-50-D8 | 769x769 | 80000 | - | - | 
79.29 | 81.08 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_80k_cityscapes/ccnet_r50-d8_769x769_80k_cityscapes_20200617_010421-73eed8ca.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_769x769_80k_cityscapes/ccnet_r50-d8_769x769_80k_cityscapes_20200617_010421.log.json) | -| CCNet | R-101-D8 | 769x769 | 80000 | - | - | 79.45 | 80.66 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_80k_cityscapes/ccnet_r101-d8_769x769_80k_cityscapes_20200618_011502-ad3cd481.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_769x769_80k_cityscapes/ccnet_r101-d8_769x769_80k_cityscapes_20200618_011502.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| CCNet | R-50-D8 | 512x512 | 80000 | 8.8 | 20.89 | 41.78 | 42.98 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_80k_ade20k/ccnet_r50-d8_512x512_80k_ade20k_20200615_014848-aa37f61e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_80k_ade20k/ccnet_r50-d8_512x512_80k_ade20k_20200615_014848.log.json) | -| CCNet | R-101-D8 | 512x512 | 80000 | 12.2 | 14.11 | 43.97 | 45.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_80k_ade20k/ccnet_r101-d8_512x512_80k_ade20k_20200615_014848-1f4929a3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_80k_ade20k/ccnet_r101-d8_512x512_80k_ade20k_20200615_014848.log.json) | -| CCNet | R-50-D8 | 512x512 | 160000 | - | - | 42.08 | 43.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_160k_ade20k/ccnet_r50-d8_512x512_160k_ade20k_20200616_084435-7c97193b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_160k_ade20k/ccnet_r50-d8_512x512_160k_ade20k_20200616_084435.log.json) | -| CCNet | R-101-D8 | 512x512 | 160000 | - | - | 43.71 | 45.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_160k_ade20k/ccnet_r101-d8_512x512_160k_ade20k_20200616_000644-e849e007.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_160k_ade20k/ccnet_r101-d8_512x512_160k_ade20k_20200616_000644.log.json) | - -### Pascal VOC 2012 + Aug - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| CCNet | R-50-D8 | 512x512 | 20000 | 6 | 20.45 | 76.17 | 77.51 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_20k_voc12aug/ccnet_r50-d8_512x512_20k_voc12aug_20200617_193212-fad81784.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_20k_voc12aug/ccnet_r50-d8_512x512_20k_voc12aug_20200617_193212.log.json) | -| CCNet | R-101-D8 | 512x512 | 20000 | 9.5 | 13.64 | 77.27 | 79.02 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_20k_voc12aug/ccnet_r101-d8_512x512_20k_voc12aug_20200617_193212-0007b61d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_20k_voc12aug/ccnet_r101-d8_512x512_20k_voc12aug_20200617_193212.log.json) | -| CCNet | R-50-D8 | 512x512 | 40000 | - | - | 75.96 | 77.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_40k_voc12aug/ccnet_r50-d8_512x512_40k_voc12aug_20200613_232127-c2a15f02.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r50-d8_512x512_40k_voc12aug/ccnet_r50-d8_512x512_40k_voc12aug_20200613_232127.log.json) | -| CCNet | R-101-D8 | 512x512 | 40000 | - | - | 77.87 | 78.90 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet/ccnet_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_40k_voc12aug/ccnet_r101-d8_512x512_40k_voc12aug_20200613_232127-c30da577.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/ccnet_r101-d8_512x512_40k_voc12aug/ccnet_r101-d8_512x512_40k_voc12aug_20200613_232127.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index fca98c1d9ace73a61ae395914e5960832216bf67..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - 
decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/segmentors/cascade_encoder_decoder.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/segmentors/cascade_encoder_decoder.py deleted file mode 100644 index 220ab2bb365b61885482bdd86606e632f2af74ae..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/segmentors/cascade_encoder_decoder.py +++ /dev/null @@ -1,98 +0,0 @@ -from torch import nn - -from mmseg.core import add_prefix -from mmseg.ops import resize -from .. import builder -from ..builder import SEGMENTORS -from .encoder_decoder import EncoderDecoder - - -@SEGMENTORS.register_module() -class CascadeEncoderDecoder(EncoderDecoder): - """Cascade Encoder Decoder segmentors. - - CascadeEncoderDecoder almost the same as EncoderDecoder, while decoders of - CascadeEncoderDecoder are cascaded. The output of previous decoder_head - will be the input of next decoder_head. - """ - - def __init__(self, - num_stages, - backbone, - decode_head, - neck=None, - auxiliary_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - self.num_stages = num_stages - super(CascadeEncoderDecoder, self).__init__( - backbone=backbone, - decode_head=decode_head, - neck=neck, - auxiliary_head=auxiliary_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def _init_decode_head(self, decode_head): - """Initialize ``decode_head``""" - assert isinstance(decode_head, list) - assert len(decode_head) == self.num_stages - self.decode_head = nn.ModuleList() - for i in range(self.num_stages): - self.decode_head.append(builder.build_head(decode_head[i])) - self.align_corners = self.decode_head[-1].align_corners - self.num_classes = self.decode_head[-1].num_classes - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone and heads. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - self.backbone.init_weights(pretrained=pretrained) - for i in range(self.num_stages): - self.decode_head[i].init_weights() - if self.with_auxiliary_head: - if isinstance(self.auxiliary_head, nn.ModuleList): - for aux_head in self.auxiliary_head: - aux_head.init_weights() - else: - self.auxiliary_head.init_weights() - - def encode_decode(self, img, img_metas): - """Encode images with backbone and decode into a semantic segmentation - map of the same size as input.""" - x = self.extract_feat(img) - out = self.decode_head[0].forward_test(x, img_metas, self.test_cfg) - for i in range(1, self.num_stages): - out = self.decode_head[i].forward_test(x, out, img_metas, - self.test_cfg) - out = resize( - input=out, - size=img.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - return out - - def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg): - """Run forward function and calculate loss for decode head in - training.""" - losses = dict() - - loss_decode = self.decode_head[0].forward_train( - x, img_metas, gt_semantic_seg, self.train_cfg) - - losses.update(add_prefix(loss_decode, 'decode_0')) - - for i in range(1, self.num_stages): - # forward test again, maybe unnecessary for most methods. 
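
Each row in the CCNet tables above pairs a config file with a released checkpoint. Below is a minimal sketch of loading one such pair for inference with the mmsegmentation 0.x Python API; it assumes mmsegmentation and mmcv-full are installed, that the config path is resolved inside a local mmsegmentation checkout, and that `demo.png` is a placeholder input image.

```python
from mmseg.apis import init_segmentor, inference_segmentor

config = 'configs/ccnet/ccnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint = ('https://download.openmmlab.com/mmsegmentation/v0.5/ccnet/'
              'ccnet_r50-d8_512x1024_40k_cityscapes/'
              'ccnet_r50-d8_512x1024_40k_cityscapes_20200616_142517-4123f401.pth')

model = init_segmentor(config, checkpoint, device='cuda:0')   # loads the released weights
result = inference_segmentor(model, 'demo.png')               # list with one (H, W) label map
```

The FCN 769x769 config just above illustrates the matching `test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))` used for sliding-window evaluation at that crop size.
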
- prev_outputs = self.decode_head[i - 1].forward_test( - x, img_metas, self.test_cfg) - loss_decode = self.decode_head[i].forward_train( - x, prev_outputs, img_metas, gt_semantic_seg, self.train_cfg) - losses.update(add_prefix(loss_decode, f'decode_{i}')) - - return losses diff --git a/spaces/Gradio-Blocks/video_nca/README.md b/spaces/Gradio-Blocks/video_nca/README.md deleted file mode 100644 index c4cab5bf36224c4a522f9d87339cabfc0a1ad7b2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/video_nca/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Video-driven Neural Cellular Automata -emoji: 👁 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.0.4 -app_file: app.py -pinned: false -license: mit ---- - -Made by Jonathan Whitaker (@johnowhitaker) - -More information: https://wandb.ai/johnowhitaker/nca/reports/Fun-with-Neural-Cellular-Automata--VmlldzoyMDQ5Mjg0 \ No newline at end of file diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/activations.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/activations.py deleted file mode 100644 index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. 
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. - - Args: - activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/GuXiaoBei/wechat-chatbot/common/log.py b/spaces/GuXiaoBei/wechat-chatbot/common/log.py deleted file mode 100644 index e00456e93b09f41ff5c1688883f2c72c201b38a5..0000000000000000000000000000000000000000 --- a/spaces/GuXiaoBei/wechat-chatbot/common/log.py +++ /dev/null @@ -1,16 +0,0 @@ -import logging -import sys - - -def _get_logger(): - log = logging.getLogger('log') - log.setLevel(logging.INFO) - console_handle = logging.StreamHandler(sys.stdout) - console_handle.setFormatter(logging.Formatter('[%(levelname)s][%(asctime)s][%(filename)s:%(lineno)d] - %(message)s', - datefmt='%Y-%m-%d %H:%M:%S')) - log.addHandler(console_handle) - return log - - -# 日志句柄 -logger = _get_logger() \ No newline at end of file diff --git a/spaces/Guinnessgshep/AI_story_writing/README.md b/spaces/Guinnessgshep/AI_story_writing/README.md deleted file mode 100644 index d0db0407bdec29ba9ae5dadaffa2b090eb9e6e2e..0000000000000000000000000000000000000000 --- a/spaces/Guinnessgshep/AI_story_writing/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI Story Writing -emoji: 📚 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.8 -app_file: app.py -pinned: false -duplicated_from: Catmeow/AI_story_writing ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hallucinate/demo/py3d_tools.py b/spaces/Hallucinate/demo/py3d_tools.py deleted file mode 100644 index 138d6cd5f2987d43495a60603588bb5a5c5d18a6..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/py3d_tools.py +++ /dev/null @@ -1,1799 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the BSD-style license found in the -# LICENSE file in the root directory of this source tree. - -import sys -import math -import warnings -from typing import List, Optional, Sequence, Tuple, Union, Any - -import numpy as np -import torch -import torch.nn.functional as F - -import copy -import inspect -import torch.nn as nn - -Device = Union[str, torch.device] - -# Default values for rotation and translation matrices. -_R = torch.eye(3)[None] # (1, 3, 3) -_T = torch.zeros(1, 3) # (1, 3) - - -# Provide get_origin and get_args even in Python 3.7. 
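
A short usage sketch for the GLU variants defined above. The import path is an assumption based on this Space's layout (`audiocraft/modules/activations.py`); the behaviour itself follows directly from `CustomGLU.forward`, which splits the chosen dimension in half and gates one half with the activation of the other.

```python
import torch
from audiocraft.modules.activations import SwiGLU, get_activation_fn  # assumed path

x = torch.randn(4, 8)               # the split dimension (last) must have even size
act = get_activation_fn("swiglu")   # returns a SwiGLU instance; unrecognized values pass through
y = act(x)                          # a * SiLU(b), with a and b the two halves of x
print(y.shape)                      # torch.Size([4, 4]) -- the gated half
```
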
- -if sys.version_info >= (3, 8, 0): - from typing import get_args, get_origin -elif sys.version_info >= (3, 7, 0): - - def get_origin(cls): # pragma: no cover - return getattr(cls, "__origin__", None) - - def get_args(cls): # pragma: no cover - return getattr(cls, "__args__", None) - - -else: - raise ImportError("This module requires Python 3.7+") - -################################################################ -## ██████╗██╗ █████╗ ███████╗███████╗███████╗███████╗ ## -## ██╔════╝██║ ██╔══██╗██╔════╝██╔════╝██╔════╝██╔════╝ ## -## ██║ ██║ ███████║███████╗███████╗█████╗ ███████╗ ## -## ██║ ██║ ██╔══██║╚════██║╚════██║██╔══╝ ╚════██║ ## -## ╚██████╗███████╗██║ ██║███████║███████║███████╗███████║ ## -## ╚═════╝╚══════╝╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝╚══════╝ ## -################################################################ - -class Transform3d: - """ - A Transform3d object encapsulates a batch of N 3D transformations, and knows - how to transform points and normal vectors. Suppose that t is a Transform3d; - then we can do the following: - - .. code-block:: python - - N = len(t) - points = torch.randn(N, P, 3) - normals = torch.randn(N, P, 3) - points_transformed = t.transform_points(points) # => (N, P, 3) - normals_transformed = t.transform_normals(normals) # => (N, P, 3) - - - BROADCASTING - Transform3d objects supports broadcasting. Suppose that t1 and tN are - Transform3d objects with len(t1) == 1 and len(tN) == N respectively. Then we - can broadcast transforms like this: - - .. code-block:: python - - t1.transform_points(torch.randn(P, 3)) # => (P, 3) - t1.transform_points(torch.randn(1, P, 3)) # => (1, P, 3) - t1.transform_points(torch.randn(M, P, 3)) # => (M, P, 3) - tN.transform_points(torch.randn(P, 3)) # => (N, P, 3) - tN.transform_points(torch.randn(1, P, 3)) # => (N, P, 3) - - - COMBINING TRANSFORMS - Transform3d objects can be combined in two ways: composing and stacking. - Composing is function composition. Given Transform3d objects t1, t2, t3, - the following all compute the same thing: - - .. code-block:: python - - y1 = t3.transform_points(t2.transform_points(t1.transform_points(x))) - y2 = t1.compose(t2).compose(t3).transform_points(x) - y3 = t1.compose(t2, t3).transform_points(x) - - - Composing transforms should broadcast. - - .. code-block:: python - - if len(t1) == 1 and len(t2) == N, then len(t1.compose(t2)) == N. - - We can also stack a sequence of Transform3d objects, which represents - composition along the batch dimension; then the following should compute the - same thing. - - .. code-block:: python - - N, M = len(tN), len(tM) - xN = torch.randn(N, P, 3) - xM = torch.randn(M, P, 3) - y1 = torch.cat([tN.transform_points(xN), tM.transform_points(xM)], dim=0) - y2 = tN.stack(tM).transform_points(torch.cat([xN, xM], dim=0)) - - BUILDING TRANSFORMS - We provide convenience methods for easily building Transform3d objects - as compositions of basic transforms. - - .. code-block:: python - - # Scale by 0.5, then translate by (1, 2, 3) - t1 = Transform3d().scale(0.5).translate(1, 2, 3) - - # Scale each axis by a different amount, then translate, then scale - t2 = Transform3d().scale(1, 3, 3).translate(2, 3, 1).scale(2.0) - - t3 = t1.compose(t2) - tN = t1.stack(t3, t3) - - - BACKPROP THROUGH TRANSFORMS - When building transforms, we can also parameterize them by Torch tensors; - in this case we can backprop through the construction and application of - Transform objects, so they could be learned via gradient descent or - predicted by a neural network. - - .. 
code-block:: python - - s1_params = torch.randn(N, requires_grad=True) - t_params = torch.randn(N, 3, requires_grad=True) - s2_params = torch.randn(N, 3, requires_grad=True) - - t = Transform3d().scale(s1_params).translate(t_params).scale(s2_params) - x = torch.randn(N, 3) - y = t.transform_points(x) - loss = compute_loss(y) - loss.backward() - - with torch.no_grad(): - s1_params -= lr * s1_params.grad - t_params -= lr * t_params.grad - s2_params -= lr * s2_params.grad - - CONVENTIONS - We adopt a right-hand coordinate system, meaning that rotation about an axis - with a positive angle results in a counter clockwise rotation. - - This class assumes that transformations are applied on inputs which - are row vectors. The internal representation of the Nx4x4 transformation - matrix is of the form: - - .. code-block:: python - - M = [ - [Rxx, Ryx, Rzx, 0], - [Rxy, Ryy, Rzy, 0], - [Rxz, Ryz, Rzz, 0], - [Tx, Ty, Tz, 1], - ] - - To apply the transformation to points which are row vectors, the M matrix - can be pre multiplied by the points: - - .. code-block:: python - - points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point - transformed_points = points * M - - """ - - def __init__( - self, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", - matrix: Optional[torch.Tensor] = None, - ) -> None: - """ - Args: - dtype: The data type of the transformation matrix. - to be used if `matrix = None`. - device: The device for storing the implemented transformation. - If `matrix != None`, uses the device of input `matrix`. - matrix: A tensor of shape (4, 4) or of shape (minibatch, 4, 4) - representing the 4x4 3D transformation matrix. - If `None`, initializes with identity using - the specified `device` and `dtype`. - """ - - if matrix is None: - self._matrix = torch.eye(4, dtype=dtype, device=device).view(1, 4, 4) - else: - if matrix.ndim not in (2, 3): - raise ValueError('"matrix" has to be a 2- or a 3-dimensional tensor.') - if matrix.shape[-2] != 4 or matrix.shape[-1] != 4: - raise ValueError( - '"matrix" has to be a tensor of shape (minibatch, 4, 4)' - ) - # set dtype and device from matrix - dtype = matrix.dtype - device = matrix.device - self._matrix = matrix.view(-1, 4, 4) - - self._transforms = [] # store transforms to compose - self._lu = None - self.device = make_device(device) - self.dtype = dtype - - def __len__(self) -> int: - return self.get_matrix().shape[0] - - def __getitem__( - self, index: Union[int, List[int], slice, torch.Tensor] - ) -> "Transform3d": - """ - Args: - index: Specifying the index of the transform to retrieve. - Can be an int, slice, list of ints, boolean, long tensor. - Supports negative indices. - - Returns: - Transform3d object with selected transforms. The tensors are not cloned. - """ - if isinstance(index, int): - index = [index] - return self.__class__(matrix=self.get_matrix()[index]) - - def compose(self, *others: "Transform3d") -> "Transform3d": - """ - Return a new Transform3d representing the composition of self with the - given other transforms, which will be stored as an internal list. 
- - Args: - *others: Any number of Transform3d objects - - Returns: - A new Transform3d with the stored transforms - """ - out = Transform3d(dtype=self.dtype, device=self.device) - out._matrix = self._matrix.clone() - for other in others: - if not isinstance(other, Transform3d): - msg = "Only possible to compose Transform3d objects; got %s" - raise ValueError(msg % type(other)) - out._transforms = self._transforms + list(others) - return out - - def get_matrix(self) -> torch.Tensor: - """ - Return a matrix which is the result of composing this transform - with others stored in self.transforms. Where necessary transforms - are broadcast against each other. - For example, if self.transforms contains transforms t1, t2, and t3, and - given a set of points x, the following should be true: - - .. code-block:: python - - y1 = t1.compose(t2, t3).transform(x) - y2 = t3.transform(t2.transform(t1.transform(x))) - y1.get_matrix() == y2.get_matrix() - - Returns: - A transformation matrix representing the composed inputs. - """ - composed_matrix = self._matrix.clone() - if len(self._transforms) > 0: - for other in self._transforms: - other_matrix = other.get_matrix() - composed_matrix = _broadcast_bmm(composed_matrix, other_matrix) - return composed_matrix - - def _get_matrix_inverse(self) -> torch.Tensor: - """ - Return the inverse of self._matrix. - """ - return torch.inverse(self._matrix) - - def inverse(self, invert_composed: bool = False) -> "Transform3d": - """ - Returns a new Transform3d object that represents an inverse of the - current transformation. - - Args: - invert_composed: - - True: First compose the list of stored transformations - and then apply inverse to the result. This is - potentially slower for classes of transformations - with inverses that can be computed efficiently - (e.g. rotations and translations). - - False: Invert the individual stored transformations - independently without composing them. - - Returns: - A new Transform3d object containing the inverse of the original - transformation. - """ - - tinv = Transform3d(dtype=self.dtype, device=self.device) - - if invert_composed: - # first compose then invert - tinv._matrix = torch.inverse(self.get_matrix()) - else: - # self._get_matrix_inverse() implements efficient inverse - # of self._matrix - i_matrix = self._get_matrix_inverse() - - # 2 cases: - if len(self._transforms) > 0: - # a) Either we have a non-empty list of transforms: - # Here we take self._matrix and append its inverse at the - # end of the reverted _transforms list. After composing - # the transformations with get_matrix(), this correctly - # right-multiplies by the inverse of self._matrix - # at the end of the composition. - tinv._transforms = [t.inverse() for t in reversed(self._transforms)] - last = Transform3d(dtype=self.dtype, device=self.device) - last._matrix = i_matrix - tinv._transforms.append(last) - else: - # b) Or there are no stored transformations - # we just set inverted matrix - tinv._matrix = i_matrix - - return tinv - - def stack(self, *others: "Transform3d") -> "Transform3d": - """ - Return a new batched Transform3d representing the batch elements from - self and all the given other transforms all batched together. - - Args: - *others: Any number of Transform3d objects - - Returns: - A new Transform3d. 
- """ - transforms = [self] + list(others) - matrix = torch.cat([t.get_matrix() for t in transforms], dim=0) - out = Transform3d(dtype=self.dtype, device=self.device) - out._matrix = matrix - return out - - def transform_points(self, points, eps: Optional[float] = None) -> torch.Tensor: - """ - Use this transform to transform a set of 3D points. Assumes row major - ordering of the input points. - - Args: - points: Tensor of shape (P, 3) or (N, P, 3) - eps: If eps!=None, the argument is used to clamp the - last coordinate before performing the final division. - The clamping corresponds to: - last_coord := (last_coord.sign() + (last_coord==0)) * - torch.clamp(last_coord.abs(), eps), - i.e. the last coordinates that are exactly 0 will - be clamped to +eps. - - Returns: - points_out: points of shape (N, P, 3) or (P, 3) depending - on the dimensions of the transform - """ - points_batch = points.clone() - if points_batch.dim() == 2: - points_batch = points_batch[None] # (P, 3) -> (1, P, 3) - if points_batch.dim() != 3: - msg = "Expected points to have dim = 2 or dim = 3: got shape %r" - raise ValueError(msg % repr(points.shape)) - - N, P, _3 = points_batch.shape - ones = torch.ones(N, P, 1, dtype=points.dtype, device=points.device) - points_batch = torch.cat([points_batch, ones], dim=2) - - composed_matrix = self.get_matrix() - points_out = _broadcast_bmm(points_batch, composed_matrix) - denom = points_out[..., 3:] # denominator - if eps is not None: - denom_sign = denom.sign() + (denom == 0.0).type_as(denom) - denom = denom_sign * torch.clamp(denom.abs(), eps) - points_out = points_out[..., :3] / denom - - # When transform is (1, 4, 4) and points is (P, 3) return - # points_out of shape (P, 3) - if points_out.shape[0] == 1 and points.dim() == 2: - points_out = points_out.reshape(points.shape) - - return points_out - - def transform_normals(self, normals) -> torch.Tensor: - """ - Use this transform to transform a set of normal vectors. - - Args: - normals: Tensor of shape (P, 3) or (N, P, 3) - - Returns: - normals_out: Tensor of shape (P, 3) or (N, P, 3) depending - on the dimensions of the transform - """ - if normals.dim() not in [2, 3]: - msg = "Expected normals to have dim = 2 or dim = 3: got shape %r" - raise ValueError(msg % (normals.shape,)) - composed_matrix = self.get_matrix() - - # TODO: inverse is bad! Solve a linear system instead - mat = composed_matrix[:, :3, :3] - normals_out = _broadcast_bmm(normals, mat.transpose(1, 2).inverse()) - - # This doesn't pass unit tests. 
TODO investigate further - # if self._lu is None: - # self._lu = self._matrix[:, :3, :3].transpose(1, 2).lu() - # normals_out = normals.lu_solve(*self._lu) - - # When transform is (1, 4, 4) and normals is (P, 3) return - # normals_out of shape (P, 3) - if normals_out.shape[0] == 1 and normals.dim() == 2: - normals_out = normals_out.reshape(normals.shape) - - return normals_out - - def translate(self, *args, **kwargs) -> "Transform3d": - return self.compose( - Translate(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def scale(self, *args, **kwargs) -> "Transform3d": - return self.compose( - Scale(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def rotate(self, *args, **kwargs) -> "Transform3d": - return self.compose( - Rotate(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def rotate_axis_angle(self, *args, **kwargs) -> "Transform3d": - return self.compose( - RotateAxisAngle(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def clone(self) -> "Transform3d": - """ - Deep copy of Transforms object. All internal tensors are cloned - individually. - - Returns: - new Transforms object. - """ - other = Transform3d(dtype=self.dtype, device=self.device) - if self._lu is not None: - other._lu = [elem.clone() for elem in self._lu] - other._matrix = self._matrix.clone() - other._transforms = [t.clone() for t in self._transforms] - return other - - def to( - self, - device: Device, - copy: bool = False, - dtype: Optional[torch.dtype] = None, - ) -> "Transform3d": - """ - Match functionality of torch.Tensor.to() - If copy = True or the self Tensor is on a different device, the - returned tensor is a copy of self with the desired torch.device. - If copy = False and the self Tensor already has the correct torch.device, - then self is returned. - - Args: - device: Device (as str or torch.device) for the new tensor. - copy: Boolean indicator whether or not to clone self. Default False. - dtype: If not None, casts the internal tensor variables - to a given torch.dtype. - - Returns: - Transform3d object. - """ - device_ = make_device(device) - dtype_ = self.dtype if dtype is None else dtype - skip_to = self.device == device_ and self.dtype == dtype_ - - if not copy and skip_to: - return self - - other = self.clone() - - if skip_to: - return other - - other.device = device_ - other.dtype = dtype_ - other._matrix = other._matrix.to(device=device_, dtype=dtype_) - other._transforms = [ - t.to(device_, copy=copy, dtype=dtype_) for t in other._transforms - ] - return other - - def cpu(self) -> "Transform3d": - return self.to("cpu") - - def cuda(self) -> "Transform3d": - return self.to("cuda") - -class Translate(Transform3d): - def __init__( - self, - x, - y=None, - z=None, - dtype: torch.dtype = torch.float32, - device: Optional[Device] = None, - ) -> None: - """ - Create a new Transform3d representing 3D translations. - - Option I: Translate(xyz, dtype=torch.float32, device='cpu') - xyz should be a tensor of shape (N, 3) - - Option II: Translate(x, y, z, dtype=torch.float32, device='cpu') - Here x, y, and z will be broadcast against each other and - concatenated to form the translation. 
Each can be: - - A python scalar - - A torch scalar - - A 1D torch tensor - """ - xyz = _handle_input(x, y, z, dtype, device, "Translate") - super().__init__(device=xyz.device, dtype=dtype) - N = xyz.shape[0] - - mat = torch.eye(4, dtype=dtype, device=self.device) - mat = mat.view(1, 4, 4).repeat(N, 1, 1) - mat[:, 3, :3] = xyz - self._matrix = mat - - def _get_matrix_inverse(self) -> torch.Tensor: - """ - Return the inverse of self._matrix. - """ - inv_mask = self._matrix.new_ones([1, 4, 4]) - inv_mask[0, 3, :3] = -1.0 - i_matrix = self._matrix * inv_mask - return i_matrix - -class Rotate(Transform3d): - def __init__( - self, - R: torch.Tensor, - dtype: torch.dtype = torch.float32, - device: Optional[Device] = None, - orthogonal_tol: float = 1e-5, - ) -> None: - """ - Create a new Transform3d representing 3D rotation using a rotation - matrix as the input. - - Args: - R: a tensor of shape (3, 3) or (N, 3, 3) - orthogonal_tol: tolerance for the test of the orthogonality of R - - """ - device_ = get_device(R, device) - super().__init__(device=device_, dtype=dtype) - if R.dim() == 2: - R = R[None] - if R.shape[-2:] != (3, 3): - msg = "R must have shape (3, 3) or (N, 3, 3); got %s" - raise ValueError(msg % repr(R.shape)) - R = R.to(device=device_, dtype=dtype) - _check_valid_rotation_matrix(R, tol=orthogonal_tol) - N = R.shape[0] - mat = torch.eye(4, dtype=dtype, device=device_) - mat = mat.view(1, 4, 4).repeat(N, 1, 1) - mat[:, :3, :3] = R - self._matrix = mat - - def _get_matrix_inverse(self) -> torch.Tensor: - """ - Return the inverse of self._matrix. - """ - return self._matrix.permute(0, 2, 1).contiguous() - -class TensorAccessor(nn.Module): - """ - A helper class to be used with the __getitem__ method. This can be used for - getting/setting the values for an attribute of a class at one particular - index. This is useful when the attributes of a class are batched tensors - and one element in the batch needs to be modified. - """ - - def __init__(self, class_object, index: Union[int, slice]) -> None: - """ - Args: - class_object: this should be an instance of a class which has - attributes which are tensors representing a batch of - values. - index: int/slice, an index indicating the position in the batch. - In __setattr__ and __getattr__ only the value of class - attributes at this index will be accessed. - """ - self.__dict__["class_object"] = class_object - self.__dict__["index"] = index - - def __setattr__(self, name: str, value: Any): - """ - Update the attribute given by `name` to the value given by `value` - at the index specified by `self.index`. - Args: - name: str, name of the attribute. - value: value to set the attribute to. - """ - v = getattr(self.class_object, name) - if not torch.is_tensor(v): - msg = "Can only set values on attributes which are tensors; got %r" - raise AttributeError(msg % type(v)) - - # Convert the attribute to a tensor if it is not a tensor. - if not torch.is_tensor(value): - value = torch.tensor( - value, device=v.device, dtype=v.dtype, requires_grad=v.requires_grad - ) - - # Check the shapes match the existing shape and the shape of the index. 
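
As the `Translate` implementation above shows (`mat[:, 3, :3] = xyz`), this module keeps the row-vector convention described in the `Transform3d` docstring: the translation lives in the last row of the 4x4 matrix and points are multiplied from the left. A tiny check, assuming the file is importable as `py3d_tools` (its filename in this Space):

```python
import torch
import py3d_tools as p3d  # assumed import path for the module above

t = p3d.Transform3d().translate(1.0, 2.0, 3.0)
M = t.get_matrix()                                   # (1, 4, 4); translation in the last row
assert torch.allclose(M[0, 3, :3], torch.tensor([1.0, 2.0, 3.0]))

origin = torch.zeros(1, 1, 3)
print(t.transform_points(origin))                    # tensor([[[1., 2., 3.]]])
```
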
- if v.dim() > 1 and value.dim() > 1 and value.shape[1:] != v.shape[1:]: - msg = "Expected value to have shape %r; got %r" - raise ValueError(msg % (v.shape, value.shape)) - if ( - v.dim() == 0 - and isinstance(self.index, slice) - and len(value) != len(self.index) - ): - msg = "Expected value to have len %r; got %r" - raise ValueError(msg % (len(self.index), len(value))) - self.class_object.__dict__[name][self.index] = value - - def __getattr__(self, name: str): - """ - Return the value of the attribute given by "name" on self.class_object - at the index specified in self.index. - Args: - name: string of the attribute name - """ - if hasattr(self.class_object, name): - return self.class_object.__dict__[name][self.index] - else: - msg = "Attribute %s not found on %r" - return AttributeError(msg % (name, self.class_object.__name__)) - -BROADCAST_TYPES = (float, int, list, tuple, torch.Tensor, np.ndarray) - -class TensorProperties(nn.Module): - """ - A mix-in class for storing tensors as properties with helper methods. - """ - - def __init__( - self, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", - **kwargs, - ) -> None: - """ - Args: - dtype: data type to set for the inputs - device: Device (as str or torch.device) - kwargs: any number of keyword arguments. Any arguments which are - of type (float/int/list/tuple/tensor/array) are broadcasted and - other keyword arguments are set as attributes. - """ - super().__init__() - self.device = make_device(device) - self._N = 0 - if kwargs is not None: - - # broadcast all inputs which are float/int/list/tuple/tensor/array - # set as attributes anything else e.g. strings, bools - args_to_broadcast = {} - for k, v in kwargs.items(): - if v is None or isinstance(v, (str, bool)): - setattr(self, k, v) - elif isinstance(v, BROADCAST_TYPES): - args_to_broadcast[k] = v - else: - msg = "Arg %s with type %r is not broadcastable" - warnings.warn(msg % (k, type(v))) - - names = args_to_broadcast.keys() - # convert from type dict.values to tuple - values = tuple(v for v in args_to_broadcast.values()) - - if len(values) > 0: - broadcasted_values = convert_to_tensors_and_broadcast( - *values, device=device - ) - - # Set broadcasted values as attributes on self. - for i, n in enumerate(names): - setattr(self, n, broadcasted_values[i]) - if self._N == 0: - self._N = broadcasted_values[i].shape[0] - - def __len__(self) -> int: - return self._N - - def isempty(self) -> bool: - return self._N == 0 - - def __getitem__(self, index: Union[int, slice]) -> TensorAccessor: - """ - Args: - index: an int or slice used to index all the fields. - Returns: - if `index` is an index int/slice return a TensorAccessor class - with getattribute/setattribute methods which return/update the value - at the index in the original class. - """ - if isinstance(index, (int, slice)): - return TensorAccessor(class_object=self, index=index) - - msg = "Expected index of type int or slice; got %r" - raise ValueError(msg % type(index)) - - # pyre-fixme[14]: `to` overrides method defined in `Module` inconsistently. - def to(self, device: Device = "cpu") -> "TensorProperties": - """ - In place operation to move class properties which are tensors to a - specified device. If self has a property "device", update this as well. 
- """ - device_ = make_device(device) - for k in dir(self): - v = getattr(self, k) - if k == "device": - setattr(self, k, device_) - if torch.is_tensor(v) and v.device != device_: - setattr(self, k, v.to(device_)) - return self - - def cpu(self) -> "TensorProperties": - return self.to("cpu") - - # pyre-fixme[14]: `cuda` overrides method defined in `Module` inconsistently. - def cuda(self, device: Optional[int] = None) -> "TensorProperties": - return self.to(f"cuda:{device}" if device is not None else "cuda") - - def clone(self, other) -> "TensorProperties": - """ - Update the tensor properties of other with the cloned properties of self. - """ - for k in dir(self): - v = getattr(self, k) - if inspect.ismethod(v) or k.startswith("__"): - continue - if torch.is_tensor(v): - v_clone = v.clone() - else: - v_clone = copy.deepcopy(v) - setattr(other, k, v_clone) - return other - - def gather_props(self, batch_idx) -> "TensorProperties": - """ - This is an in place operation to reformat all tensor class attributes - based on a set of given indices using torch.gather. This is useful when - attributes which are batched tensors e.g. shape (N, 3) need to be - multiplied with another tensor which has a different first dimension - e.g. packed vertices of shape (V, 3). - Example - .. code-block:: python - self.specular_color = (N, 3) tensor of specular colors for each mesh - A lighting calculation may use - .. code-block:: python - verts_packed = meshes.verts_packed() # (V, 3) - To multiply these two tensors the batch dimension needs to be the same. - To achieve this we can do - .. code-block:: python - batch_idx = meshes.verts_packed_to_mesh_idx() # (V) - This gives index of the mesh for each vertex in verts_packed. - .. code-block:: python - self.gather_props(batch_idx) - self.specular_color = (V, 3) tensor with the specular color for - each packed vertex. - torch.gather requires the index tensor to have the same shape as the - input tensor so this method takes care of the reshaping of the index - tensor to use with class attributes with arbitrary dimensions. - Args: - batch_idx: shape (B, ...) where `...` represents an arbitrary - number of dimensions - Returns: - self with all properties reshaped. e.g. a property with shape (N, 3) - is transformed to shape (B, 3). - """ - # Iterate through the attributes of the class which are tensors. - for k in dir(self): - v = getattr(self, k) - if torch.is_tensor(v): - if v.shape[0] > 1: - # There are different values for each batch element - # so gather these using the batch_idx. - # First clone the input batch_idx tensor before - # modifying it. - _batch_idx = batch_idx.clone() - idx_dims = _batch_idx.shape - tensor_dims = v.shape - if len(idx_dims) > len(tensor_dims): - msg = "batch_idx cannot have more dimensions than %s. " - msg += "got shape %r and %s has shape %r" - raise ValueError(msg % (k, idx_dims, k, tensor_dims)) - if idx_dims != tensor_dims: - # To use torch.gather the index tensor (_batch_idx) has - # to have the same shape as the input tensor. - new_dims = len(tensor_dims) - len(idx_dims) - new_shape = idx_dims + (1,) * new_dims - expand_dims = (-1,) + tensor_dims[1:] - _batch_idx = _batch_idx.view(*new_shape) - _batch_idx = _batch_idx.expand(*expand_dims) - - v = v.gather(0, _batch_idx) - setattr(self, k, v) - return self - -class CamerasBase(TensorProperties): - """ - `CamerasBase` implements a base class for all cameras. 
- For cameras, there are four different coordinate systems (or spaces) - - World coordinate system: This is the system the object lives - the world. - - Camera view coordinate system: This is the system that has its origin on the camera - and the and the Z-axis perpendicular to the image plane. - In PyTorch3D, we assume that +X points left, and +Y points up and - +Z points out from the image plane. - The transformation from world --> view happens after applying a rotation (R) - and translation (T) - - NDC coordinate system: This is the normalized coordinate system that confines - in a volume the rendered part of the object or scene. Also known as view volume. - For square images, given the PyTorch3D convention, (+1, +1, znear) - is the top left near corner, and (-1, -1, zfar) is the bottom right far - corner of the volume. - The transformation from view --> NDC happens after applying the camera - projection matrix (P) if defined in NDC space. - For non square images, we scale the points such that smallest side - has range [-1, 1] and the largest side has range [-u, u], with u > 1. - - Screen coordinate system: This is another representation of the view volume with - the XY coordinates defined in image space instead of a normalized space. - A better illustration of the coordinate systems can be found in - pytorch3d/docs/notes/cameras.md. - It defines methods that are common to all camera models: - - `get_camera_center` that returns the optical center of the camera in - world coordinates - - `get_world_to_view_transform` which returns a 3D transform from - world coordinates to the camera view coordinates (R, T) - - `get_full_projection_transform` which composes the projection - transform (P) with the world-to-view transform (R, T) - - `transform_points` which takes a set of input points in world coordinates and - projects to the space the camera is defined in (NDC or screen) - - `get_ndc_camera_transform` which defines the transform from screen/NDC to - PyTorch3D's NDC space - - `transform_points_ndc` which takes a set of points in world coordinates and - projects them to PyTorch3D's NDC space - - `transform_points_screen` which takes a set of points in world coordinates and - projects them to screen space - For each new camera, one should implement the `get_projection_transform` - routine that returns the mapping from camera view coordinates to camera - coordinates (NDC or screen). - Another useful function that is specific to each camera model is - `unproject_points` which sends points from camera coordinates (NDC or screen) - back to camera view or world coordinates depending on the `world_coordinates` - boolean argument of the function. - """ - - # Used in __getitem__ to index the relevant fields - # When creating a new camera, this should be set in the __init__ - _FIELDS: Tuple[str, ...] = () - - # Names of fields which are a constant property of the whole batch, rather - # than themselves a batch of data. - # When joining objects into a batch, they will have to agree. - _SHARED_FIELDS: Tuple[str, ...] = () - - def get_projection_transform(self): - """ - Calculate the projective transformation matrix. - Args: - **kwargs: parameters for the projection can be passed in as keyword - arguments to override the default values set in `__init__`. 
- Return: - a `Transform3d` object which represents a batch of projection - matrices of shape (N, 3, 3) - """ - raise NotImplementedError() - - def unproject_points(self, xy_depth: torch.Tensor, **kwargs): - """ - Transform input points from camera coodinates (NDC or screen) - to the world / camera coordinates. - Each of the input points `xy_depth` of shape (..., 3) is - a concatenation of the x, y location and its depth. - For instance, for an input 2D tensor of shape `(num_points, 3)` - `xy_depth` takes the following form: - `xy_depth[i] = [x[i], y[i], depth[i]]`, - for a each point at an index `i`. - The following example demonstrates the relationship between - `transform_points` and `unproject_points`: - .. code-block:: python - cameras = # camera object derived from CamerasBase - xyz = # 3D points of shape (batch_size, num_points, 3) - # transform xyz to the camera view coordinates - xyz_cam = cameras.get_world_to_view_transform().transform_points(xyz) - # extract the depth of each point as the 3rd coord of xyz_cam - depth = xyz_cam[:, :, 2:] - # project the points xyz to the camera - xy = cameras.transform_points(xyz)[:, :, :2] - # append depth to xy - xy_depth = torch.cat((xy, depth), dim=2) - # unproject to the world coordinates - xyz_unproj_world = cameras.unproject_points(xy_depth, world_coordinates=True) - print(torch.allclose(xyz, xyz_unproj_world)) # True - # unproject to the camera coordinates - xyz_unproj = cameras.unproject_points(xy_depth, world_coordinates=False) - print(torch.allclose(xyz_cam, xyz_unproj)) # True - Args: - xy_depth: torch tensor of shape (..., 3). - world_coordinates: If `True`, unprojects the points back to world - coordinates using the camera extrinsics `R` and `T`. - `False` ignores `R` and `T` and unprojects to - the camera view coordinates. - from_ndc: If `False` (default), assumes xy part of input is in - NDC space if self.in_ndc(), otherwise in screen space. If - `True`, assumes xy is in NDC space even if the camera - is defined in screen space. - Returns - new_points: unprojected points with the same shape as `xy_depth`. - """ - raise NotImplementedError() - - def get_camera_center(self, **kwargs) -> torch.Tensor: - """ - Return the 3D location of the camera optical center - in the world coordinates. - Args: - **kwargs: parameters for the camera extrinsics can be passed in - as keyword arguments to override the default values - set in __init__. - Setting T here will update the values set in init as this - value may be needed later on in the rendering pipeline e.g. for - lighting calculations. - Returns: - C: a batch of 3D locations of shape (N, 3) denoting - the locations of the center of each camera in the batch. - """ - w2v_trans = self.get_world_to_view_transform(**kwargs) - P = w2v_trans.inverse().get_matrix() - # the camera center is the translation component (the first 3 elements - # of the last row) of the inverted world-to-view - # transform (4x4 RT matrix) - C = P[:, 3, :3] - return C - - def get_world_to_view_transform(self, **kwargs) -> Transform3d: - """ - Return the world-to-view transform. - Args: - **kwargs: parameters for the camera extrinsics can be passed in - as keyword arguments to override the default values - set in __init__. - Setting R and T here will update the values set in init as these - values may be needed later on in the rendering pipeline e.g. for - lighting calculations. 
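
`get_camera_center` above recovers the optical center by inverting the full world-to-view matrix and reading off its translation row. Under the convention stated in the class docstring (rotation R then translation T applied to row-vector points), this is equivalent to C = -T R^T. A small sanity check with an identity rotation, again assuming the module is importable as `py3d_tools`:

```python
import torch
import py3d_tools as p3d  # assumed import path for the module above

R = torch.eye(3)[None]                      # (1, 3, 3) identity rotation
T = torch.tensor([[0.0, 0.0, 3.0]])         # world origin maps to (0, 0, 3) in view space
cam = p3d.FoVPerspectiveCameras(R=R, T=T)
print(cam.get_camera_center())              # tensor([[ 0.,  0., -3.]]), i.e. -T @ R^T
```
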
- Returns: - A Transform3d object which represents a batch of transforms - of shape (N, 3, 3) - """ - R: torch.Tensor = kwargs.get("R", self.R) - T: torch.Tensor = kwargs.get("T", self.T) - self.R = R # pyre-ignore[16] - self.T = T # pyre-ignore[16] - world_to_view_transform = get_world_to_view_transform(R=R, T=T) - return world_to_view_transform - - def get_full_projection_transform(self, **kwargs) -> Transform3d: - """ - Return the full world-to-camera transform composing the - world-to-view and view-to-camera transforms. - If camera is defined in NDC space, the projected points are in NDC space. - If camera is defined in screen space, the projected points are in screen space. - Args: - **kwargs: parameters for the projection transforms can be passed in - as keyword arguments to override the default values - set in __init__. - Setting R and T here will update the values set in init as these - values may be needed later on in the rendering pipeline e.g. for - lighting calculations. - Returns: - a Transform3d object which represents a batch of transforms - of shape (N, 3, 3) - """ - self.R: torch.Tensor = kwargs.get("R", self.R) # pyre-ignore[16] - self.T: torch.Tensor = kwargs.get("T", self.T) # pyre-ignore[16] - world_to_view_transform = self.get_world_to_view_transform(R=self.R, T=self.T) - view_to_proj_transform = self.get_projection_transform(**kwargs) - return world_to_view_transform.compose(view_to_proj_transform) - - def transform_points( - self, points, eps: Optional[float] = None, **kwargs - ) -> torch.Tensor: - """ - Transform input points from world to camera space with the - projection matrix defined by the camera. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the camera plane. - Args: - points: torch tensor of shape (..., 3). - eps: If eps!=None, the argument is used to clamp the - divisor in the homogeneous normalization of the points - transformed to the ndc space. Please see - `transforms.Transform3d.transform_points` for details. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the - camera plane. - Returns - new_points: transformed points with the same shape as the input. - """ - world_to_proj_transform = self.get_full_projection_transform(**kwargs) - return world_to_proj_transform.transform_points(points, eps=eps) - - def get_ndc_camera_transform(self, **kwargs) -> Transform3d: - """ - Returns the transform from camera projection space (screen or NDC) to NDC space. - For cameras that can be specified in screen space, this transform - allows points to be converted from screen to NDC space. - The default transform scales the points from [0, W]x[0, H] - to [-1, 1]x[-u, u] or [-u, u]x[-1, 1] where u > 1 is the aspect ratio of the image. - This function should be modified per camera definitions if need be, - e.g. for Perspective/Orthographic cameras we provide a custom implementation. - This transform assumes PyTorch3D coordinate system conventions for - both the NDC space and the input points. - This transform interfaces with the PyTorch3D renderer which assumes - input points to the renderer to be in NDC space. 
- """ - if self.in_ndc(): - return Transform3d(device=self.device, dtype=torch.float32) - else: - # For custom cameras which can be defined in screen space, - # users might might have to implement the screen to NDC transform based - # on the definition of the camera parameters. - # See PerspectiveCameras/OrthographicCameras for an example. - # We don't flip xy because we assume that world points are in - # PyTorch3D coordinates, and thus conversion from screen to ndc - # is a mere scaling from image to [-1, 1] scale. - image_size = kwargs.get("image_size", self.get_image_size()) - return get_screen_to_ndc_transform( - self, with_xyflip=False, image_size=image_size - ) - - def transform_points_ndc( - self, points, eps: Optional[float] = None, **kwargs - ) -> torch.Tensor: - """ - Transforms points from PyTorch3D world/camera space to NDC space. - Input points follow the PyTorch3D coordinate system conventions: +X left, +Y up. - Output points are in NDC space: +X left, +Y up, origin at image center. - Args: - points: torch tensor of shape (..., 3). - eps: If eps!=None, the argument is used to clamp the - divisor in the homogeneous normalization of the points - transformed to the ndc space. Please see - `transforms.Transform3d.transform_points` for details. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the - camera plane. - Returns - new_points: transformed points with the same shape as the input. - """ - world_to_ndc_transform = self.get_full_projection_transform(**kwargs) - if not self.in_ndc(): - to_ndc_transform = self.get_ndc_camera_transform(**kwargs) - world_to_ndc_transform = world_to_ndc_transform.compose(to_ndc_transform) - - return world_to_ndc_transform.transform_points(points, eps=eps) - - def transform_points_screen( - self, points, eps: Optional[float] = None, **kwargs - ) -> torch.Tensor: - """ - Transforms points from PyTorch3D world/camera space to screen space. - Input points follow the PyTorch3D coordinate system conventions: +X left, +Y up. - Output points are in screen space: +X right, +Y down, origin at top left corner. - Args: - points: torch tensor of shape (..., 3). - eps: If eps!=None, the argument is used to clamp the - divisor in the homogeneous normalization of the points - transformed to the ndc space. Please see - `transforms.Transform3d.transform_points` for details. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the - camera plane. - Returns - new_points: transformed points with the same shape as the input. - """ - points_ndc = self.transform_points_ndc(points, eps=eps, **kwargs) - image_size = kwargs.get("image_size", self.get_image_size()) - return get_ndc_to_screen_transform( - self, with_xyflip=True, image_size=image_size - ).transform_points(points_ndc, eps=eps) - - def clone(self): - """ - Returns a copy of `self`. 
- """ - cam_type = type(self) - other = cam_type(device=self.device) - return super().clone(other) - - def is_perspective(self): - raise NotImplementedError() - - def in_ndc(self): - """ - Specifies whether the camera is defined in NDC space - or in screen (image) space - """ - raise NotImplementedError() - - def get_znear(self): - return self.znear if hasattr(self, "znear") else None - - def get_image_size(self): - """ - Returns the image size, if provided, expected in the form of (height, width) - The image size is used for conversion of projected points to screen coordinates. - """ - return self.image_size if hasattr(self, "image_size") else None - - def __getitem__( - self, index: Union[int, List[int], torch.LongTensor] - ) -> "CamerasBase": - """ - Override for the __getitem__ method in TensorProperties which needs to be - refactored. - Args: - index: an int/list/long tensor used to index all the fields in the cameras given by - self._FIELDS. - Returns: - if `index` is an index int/list/long tensor return an instance of the current - cameras class with only the values at the selected index. - """ - - kwargs = {} - - if not isinstance(index, (int, list, torch.LongTensor, torch.cuda.LongTensor)): - msg = "Invalid index type, expected int, List[int] or torch.LongTensor; got %r" - raise ValueError(msg % type(index)) - - if isinstance(index, int): - index = [index] - - if max(index) >= len(self): - raise ValueError(f"Index {max(index)} is out of bounds for select cameras") - - for field in self._FIELDS: - val = getattr(self, field, None) - if val is None: - continue - - # e.g. "in_ndc" is set as attribute "_in_ndc" on the class - # but provided as "in_ndc" on initialization - if field.startswith("_"): - field = field[1:] - - if isinstance(val, (str, bool)): - kwargs[field] = val - elif isinstance(val, torch.Tensor): - # In the init, all inputs will be converted to - # tensors before setting as attributes - kwargs[field] = val[index] - else: - raise ValueError(f"Field {field} type is not supported for indexing") - - kwargs["device"] = self.device - return self.__class__(**kwargs) - -class FoVPerspectiveCameras(CamerasBase): - """ - A class which stores a batch of parameters to generate a batch of - projection matrices by specifying the field of view. - The definition of the parameters follow the OpenGL perspective camera. - - The extrinsics of the camera (R and T matrices) can also be set in the - initializer or passed in to `get_full_projection_transform` to get - the full transformation from world -> ndc. - - The `transform_points` method calculates the full world -> ndc transform - and then applies it to the input points. - - The transforms can also be returned separately as Transform3d objects. - - * Setting the Aspect Ratio for Non Square Images * - - If the desired output image size is non square (i.e. a tuple of (H, W) where H != W) - the aspect ratio needs special consideration: There are two aspect ratios - to be aware of: - - the aspect ratio of each pixel - - the aspect ratio of the output image - The `aspect_ratio` setting in the FoVPerspectiveCameras sets the - pixel aspect ratio. When using this camera with the differentiable rasterizer - be aware that in the rasterizer we assume square pixels, but allow - variable image aspect ratio (i.e rectangle images). - - In most cases you will want to set the camera `aspect_ratio=1.0` - (i.e. square pixels) and only vary the output image dimensions in pixels - for rasterization. 
- """ - - # For __getitem__ - _FIELDS = ( - "K", - "znear", - "zfar", - "aspect_ratio", - "fov", - "R", - "T", - "degrees", - ) - - _SHARED_FIELDS = ("degrees",) - - def __init__( - self, - znear=1.0, - zfar=100.0, - aspect_ratio=1.0, - fov=60.0, - degrees: bool = True, - R: torch.Tensor = _R, - T: torch.Tensor = _T, - K: Optional[torch.Tensor] = None, - device: Device = "cpu", - ) -> None: - """ - - Args: - znear: near clipping plane of the view frustrum. - zfar: far clipping plane of the view frustrum. - aspect_ratio: aspect ratio of the image pixels. - 1.0 indicates square pixels. - fov: field of view angle of the camera. - degrees: bool, set to True if fov is specified in degrees. - R: Rotation matrix of shape (N, 3, 3) - T: Translation matrix of shape (N, 3) - K: (optional) A calibration matrix of shape (N, 4, 4) - If provided, don't need znear, zfar, fov, aspect_ratio, degrees - device: Device (as str or torch.device) - """ - # The initializer formats all inputs to torch tensors and broadcasts - # all the inputs to have the same batch dimension where necessary. - super().__init__( - device=device, - znear=znear, - zfar=zfar, - aspect_ratio=aspect_ratio, - fov=fov, - R=R, - T=T, - K=K, - ) - - # No need to convert to tensor or broadcast. - self.degrees = degrees - - def compute_projection_matrix( - self, znear, zfar, fov, aspect_ratio, degrees: bool - ) -> torch.Tensor: - """ - Compute the calibration matrix K of shape (N, 4, 4) - - Args: - znear: near clipping plane of the view frustrum. - zfar: far clipping plane of the view frustrum. - fov: field of view angle of the camera. - aspect_ratio: aspect ratio of the image pixels. - 1.0 indicates square pixels. - degrees: bool, set to True if fov is specified in degrees. - - Returns: - torch.FloatTensor of the calibration matrix with shape (N, 4, 4) - """ - K = torch.zeros((self._N, 4, 4), device=self.device, dtype=torch.float32) - ones = torch.ones((self._N), dtype=torch.float32, device=self.device) - if degrees: - fov = (np.pi / 180) * fov - - if not torch.is_tensor(fov): - fov = torch.tensor(fov, device=self.device) - tanHalfFov = torch.tan((fov / 2)) - max_y = tanHalfFov * znear - min_y = -max_y - max_x = max_y * aspect_ratio - min_x = -max_x - - # NOTE: In OpenGL the projection matrix changes the handedness of the - # coordinate frame. i.e the NDC space positive z direction is the - # camera space negative z direction. This is because the sign of the z - # in the projection matrix is set to -1.0. - # In pytorch3d we maintain a right handed coordinate system throughout - # so the so the z sign is 1.0. - z_sign = 1.0 - - K[:, 0, 0] = 2.0 * znear / (max_x - min_x) - K[:, 1, 1] = 2.0 * znear / (max_y - min_y) - K[:, 0, 2] = (max_x + min_x) / (max_x - min_x) - K[:, 1, 2] = (max_y + min_y) / (max_y - min_y) - K[:, 3, 2] = z_sign * ones - - # NOTE: This maps the z coordinate from [0, 1] where z = 0 if the point - # is at the near clipping plane and z = 1 when the point is at the far - # clipping plane. - K[:, 2, 2] = z_sign * zfar / (zfar - znear) - K[:, 2, 3] = -(zfar * znear) / (zfar - znear) - - return K - - def get_projection_transform(self, **kwargs) -> Transform3d: - """ - Calculate the perspective projection matrix with a symmetric - viewing frustrum. Use column major order. - The viewing frustrum will be projected into ndc, s.t. - (max_x, max_y) -> (+1, +1) - (min_x, min_y) -> (-1, -1) - - Args: - **kwargs: parameters for the projection can be passed in as keyword - arguments to override the default values set in `__init__`. 
- - Return: - a Transform3d object which represents a batch of projection - matrices of shape (N, 4, 4) - - .. code-block:: python - - h1 = (max_y + min_y)/(max_y - min_y) - w1 = (max_x + min_x)/(max_x - min_x) - tanhalffov = tan((fov/2)) - s1 = 1/tanhalffov - s2 = 1/(tanhalffov * (aspect_ratio)) - - # To map z to the range [0, 1] use: - f1 = far / (far - near) - f2 = -(far * near) / (far - near) - - # Projection matrix - K = [ - [s1, 0, w1, 0], - [0, s2, h1, 0], - [0, 0, f1, f2], - [0, 0, 1, 0], - ] - """ - K = kwargs.get("K", self.K) - if K is not None: - if K.shape != (self._N, 4, 4): - msg = "Expected K to have shape of (%r, 4, 4)" - raise ValueError(msg % (self._N)) - else: - K = self.compute_projection_matrix( - kwargs.get("znear", self.znear), - kwargs.get("zfar", self.zfar), - kwargs.get("fov", self.fov), - kwargs.get("aspect_ratio", self.aspect_ratio), - kwargs.get("degrees", self.degrees), - ) - - # Transpose the projection matrix as PyTorch3D transforms use row vectors. - transform = Transform3d( - matrix=K.transpose(1, 2).contiguous(), device=self.device - ) - return transform - - def unproject_points( - self, - xy_depth: torch.Tensor, - world_coordinates: bool = True, - scaled_depth_input: bool = False, - **kwargs, - ) -> torch.Tensor: - """>! - FoV cameras further allow for passing depth in world units - (`scaled_depth_input=False`) or in the [0, 1]-normalized units - (`scaled_depth_input=True`) - - Args: - scaled_depth_input: If `True`, assumes the input depth is in - the [0, 1]-normalized units. If `False` the input depth is in - the world units. - """ - - # obtain the relevant transformation to ndc - if world_coordinates: - to_ndc_transform = self.get_full_projection_transform() - else: - to_ndc_transform = self.get_projection_transform() - - if scaled_depth_input: - # the input is scaled depth, so we don't have to do anything - xy_sdepth = xy_depth - else: - # parse out important values from the projection matrix - K_matrix = self.get_projection_transform(**kwargs.copy()).get_matrix() - # parse out f1, f2 from K_matrix - unsqueeze_shape = [1] * xy_depth.dim() - unsqueeze_shape[0] = K_matrix.shape[0] - f1 = K_matrix[:, 2, 2].reshape(unsqueeze_shape) - f2 = K_matrix[:, 3, 2].reshape(unsqueeze_shape) - # get the scaled depth - sdepth = (f1 * xy_depth[..., 2:3] + f2) / xy_depth[..., 2:3] - # concatenate xy + scaled depth - xy_sdepth = torch.cat((xy_depth[..., 0:2], sdepth), dim=-1) - - # unproject with inverse of the projection - unprojection_transform = to_ndc_transform.inverse() - return unprojection_transform.transform_points(xy_sdepth) - - def is_perspective(self): - return True - - def in_ndc(self): - return True - -####################################################################################### -## ██████╗ ███████╗███████╗██╗███╗ ██╗██╗████████╗██╗ ██████╗ ███╗ ██╗███████╗ ## -## ██╔══██╗██╔════╝██╔════╝██║████╗ ██║██║╚══██╔══╝██║██╔═══██╗████╗ ██║██╔════╝ ## -## ██║ ██║█████╗ █████╗ ██║██╔██╗ ██║██║ ██║ ██║██║ ██║██╔██╗ ██║███████╗ ## -## ██║ ██║██╔══╝ ██╔══╝ ██║██║╚██╗██║██║ ██║ ██║██║ ██║██║╚██╗██║╚════██║ ## -## ██████╔╝███████╗██║ ██║██║ ╚████║██║ ██║ ██║╚██████╔╝██║ ╚████║███████║ ## -## ╚═════╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═══╝╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝╚══════╝ ## -####################################################################################### - -def make_device(device: Device) -> torch.device: - """ - Makes an actual torch.device object from the device specified as - either a string or torch.device object. 
If the device is `cuda` without - a specific index, the index of the current device is assigned. - Args: - device: Device (as str or torch.device) - Returns: - A matching torch.device object - """ - device = torch.device(device) if isinstance(device, str) else device - if device.type == "cuda" and device.index is None: # pyre-ignore[16] - # If cuda but with no index, then the current cuda device is indicated. - # In that case, we fix to that device - device = torch.device(f"cuda:{torch.cuda.current_device()}") - return device - -def get_device(x, device: Optional[Device] = None) -> torch.device: - """ - Gets the device of the specified variable x if it is a tensor, or - falls back to a default CPU device otherwise. Allows overriding by - providing an explicit device. - Args: - x: a torch.Tensor to get the device from or another type - device: Device (as str or torch.device) to fall back to - Returns: - A matching torch.device object - """ - - # User overrides device - if device is not None: - return make_device(device) - - # Set device based on input tensor - if torch.is_tensor(x): - return x.device - - # Default device is cpu - return torch.device("cpu") - -def _axis_angle_rotation(axis: str, angle: torch.Tensor) -> torch.Tensor: - """ - Return the rotation matrices for one of the rotations about an axis - of which Euler angles describe, for each value of the angle given. - - Args: - axis: Axis label "X" or "Y or "Z". - angle: any shape tensor of Euler angles in radians - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - - cos = torch.cos(angle) - sin = torch.sin(angle) - one = torch.ones_like(angle) - zero = torch.zeros_like(angle) - - if axis == "X": - R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos) - elif axis == "Y": - R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos) - elif axis == "Z": - R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one) - else: - raise ValueError("letter must be either X, Y or Z.") - - return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3)) - -def euler_angles_to_matrix(euler_angles: torch.Tensor, convention: str) -> torch.Tensor: - """ - Convert rotations given as Euler angles in radians to rotation matrices. - - Args: - euler_angles: Euler angles in radians as tensor of shape (..., 3). - convention: Convention string of three uppercase letters from - {"X", "Y", and "Z"}. - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3: - raise ValueError("Invalid input euler angles.") - if len(convention) != 3: - raise ValueError("Convention must have 3 letters.") - if convention[1] in (convention[0], convention[2]): - raise ValueError(f"Invalid convention {convention}.") - for letter in convention: - if letter not in ("X", "Y", "Z"): - raise ValueError(f"Invalid letter {letter} in convention string.") - matrices = [ - _axis_angle_rotation(c, e) - for c, e in zip(convention, torch.unbind(euler_angles, -1)) - ] - # return functools.reduce(torch.matmul, matrices) - return torch.matmul(torch.matmul(matrices[0], matrices[1]), matrices[2]) - -def _broadcast_bmm(a, b) -> torch.Tensor: - """ - Batch multiply two matrices and broadcast if necessary. - - Args: - a: torch tensor of shape (P, K) or (M, P, K) - b: torch tensor of shape (N, K, K) - - Returns: - a and b broadcast multiplied. The output batch dimension is max(N, M). - - To broadcast transforms across a batch dimension if M != N then - expect that either M = 1 or N = 1. 
The tensor with batch dimension 1 is - expanded to have shape N or M. - """ - if a.dim() == 2: - a = a[None] - if len(a) != len(b): - if not ((len(a) == 1) or (len(b) == 1)): - msg = "Expected batch dim for bmm to be equal or 1; got %r, %r" - raise ValueError(msg % (a.shape, b.shape)) - if len(a) == 1: - a = a.expand(len(b), -1, -1) - if len(b) == 1: - b = b.expand(len(a), -1, -1) - return a.bmm(b) - -def _safe_det_3x3(t: torch.Tensor): - """ - Fast determinant calculation for a batch of 3x3 matrices. - Note, result of this function might not be the same as `torch.det()`. - The differences might be in the last significant digit. - Args: - t: Tensor of shape (N, 3, 3). - Returns: - Tensor of shape (N) with determinants. - """ - - det = ( - t[..., 0, 0] * (t[..., 1, 1] * t[..., 2, 2] - t[..., 1, 2] * t[..., 2, 1]) - - t[..., 0, 1] * (t[..., 1, 0] * t[..., 2, 2] - t[..., 2, 0] * t[..., 1, 2]) - + t[..., 0, 2] * (t[..., 1, 0] * t[..., 2, 1] - t[..., 2, 0] * t[..., 1, 1]) - ) - - return det - -def get_world_to_view_transform( - R: torch.Tensor = _R, T: torch.Tensor = _T -) -> Transform3d: - """ - This function returns a Transform3d representing the transformation - matrix to go from world space to view space by applying a rotation and - a translation. - PyTorch3D uses the same convention as Hartley & Zisserman. - I.e., for camera extrinsic parameters R (rotation) and T (translation), - we map a 3D point `X_world` in world coordinates to - a point `X_cam` in camera coordinates with: - `X_cam = X_world R + T` - Args: - R: (N, 3, 3) matrix representing the rotation. - T: (N, 3) matrix representing the translation. - Returns: - a Transform3d object which represents the composed RT transformation. - """ - # TODO: also support the case where RT is specified as one matrix - # of shape (N, 4, 4). - - if T.shape[0] != R.shape[0]: - msg = "Expected R, T to have the same batch dimension; got %r, %r" - raise ValueError(msg % (R.shape[0], T.shape[0])) - if T.dim() != 2 or T.shape[1:] != (3,): - msg = "Expected T to have shape (N, 3); got %r" - raise ValueError(msg % repr(T.shape)) - if R.dim() != 3 or R.shape[1:] != (3, 3): - msg = "Expected R to have shape (N, 3, 3); got %r" - raise ValueError(msg % repr(R.shape)) - - # Create a Transform3d object - T_ = Translate(T, device=T.device) - R_ = Rotate(R, device=R.device) - return R_.compose(T_) - -def _check_valid_rotation_matrix(R, tol: float = 1e-7) -> None: - """ - Determine if R is a valid rotation matrix by checking it satisfies the - following conditions: - - ``RR^T = I and det(R) = 1`` - - Args: - R: an (N, 3, 3) matrix - - Returns: - None - - Emits a warning if R is an invalid rotation matrix. - """ - N = R.shape[0] - eye = torch.eye(3, dtype=R.dtype, device=R.device) - eye = eye.view(1, 3, 3).expand(N, -1, -1) - orthogonal = torch.allclose(R.bmm(R.transpose(1, 2)), eye, atol=tol) - det_R = _safe_det_3x3(R) - no_distortion = torch.allclose(det_R, torch.ones_like(det_R)) - if not (orthogonal and no_distortion): - msg = "R is not a valid rotation matrix" - warnings.warn(msg) - return - -def format_tensor( - input, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", -) -> torch.Tensor: - """ - Helper function for converting a scalar value to a tensor. - Args: - input: Python scalar, Python list/tuple, torch scalar, 1D torch tensor - dtype: data type for the input - device: Device (as str or torch.device) on which the tensor should be placed. - Returns: - input_vec: torch tensor with optional added batch dimension. 
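A small illustrative sketch of format_tensor (added here; not part of the original file): Python scalars are promoted to 1-element tensors on the requested device, while tensors already on that device are returned unchanged.

import torch

v = format_tensor(2.5, device="cpu")
print(v, v.shape)  # tensor([2.5000]) torch.Size([1])

t = torch.tensor([1.0, 2.0, 3.0])
print(format_tensor(t, device="cpu") is t)  # True: already a CPU float tensor, passed through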
- """ - device_ = make_device(device) - if not torch.is_tensor(input): - input = torch.tensor(input, dtype=dtype, device=device_) - - if input.dim() == 0: - input = input.view(1) - - if input.device == device_: - return input - - input = input.to(device=device) - return input - -def convert_to_tensors_and_broadcast( - *args, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", -): - """ - Helper function to handle parsing an arbitrary number of inputs (*args) - which all need to have the same batch dimension. - The output is a list of tensors. - Args: - *args: an arbitrary number of inputs - Each of the values in `args` can be one of the following - - Python scalar - - Torch scalar - - Torch tensor of shape (N, K_i) or (1, K_i) where K_i are - an arbitrary number of dimensions which can vary for each - value in args. In this case each input is broadcast to a - tensor of shape (N, K_i) - dtype: data type to use when creating new tensors. - device: torch device on which the tensors should be placed. - Output: - args: A list of tensors of shape (N, K_i) - """ - # Convert all inputs to tensors with a batch dimension - args_1d = [format_tensor(c, dtype, device) for c in args] - - # Find broadcast size - sizes = [c.shape[0] for c in args_1d] - N = max(sizes) - - args_Nd = [] - for c in args_1d: - if c.shape[0] != 1 and c.shape[0] != N: - msg = "Got non-broadcastable sizes %r" % sizes - raise ValueError(msg) - - # Expand broadcast dim and keep non broadcast dims the same size - expand_sizes = (N,) + (-1,) * len(c.shape[1:]) - args_Nd.append(c.expand(*expand_sizes)) - - return args_Nd - -def _handle_coord(c, dtype: torch.dtype, device: torch.device) -> torch.Tensor: - """ - Helper function for _handle_input. - - Args: - c: Python scalar, torch scalar, or 1D torch tensor - - Returns: - c_vec: 1D torch tensor - """ - if not torch.is_tensor(c): - c = torch.tensor(c, dtype=dtype, device=device) - if c.dim() == 0: - c = c.view(1) - if c.device != device or c.dtype != dtype: - c = c.to(device=device, dtype=dtype) - return c - -def _handle_input( - x, - y, - z, - dtype: torch.dtype, - device: Optional[Device], - name: str, - allow_singleton: bool = False, -) -> torch.Tensor: - """ - Helper function to handle parsing logic for building transforms. The output - is always a tensor of shape (N, 3), but there are several types of allowed - input. - - Case I: Single Matrix - In this case x is a tensor of shape (N, 3), and y and z are None. Here just - return x. 
- - Case II: Vectors and Scalars - In this case each of x, y, and z can be one of the following - - Python scalar - - Torch scalar - - Torch tensor of shape (N, 1) or (1, 1) - In this case x, y and z are broadcast to tensors of shape (N, 1) - and concatenated to a tensor of shape (N, 3) - - Case III: Singleton (only if allow_singleton=True) - In this case y and z are None, and x can be one of the following: - - Python scalar - - Torch scalar - - Torch tensor of shape (N, 1) or (1, 1) - Here x will be duplicated 3 times, and we return a tensor of shape (N, 3) - - Returns: - xyz: Tensor of shape (N, 3) - """ - device_ = get_device(x, device) - # If x is actually a tensor of shape (N, 3) then just return it - if torch.is_tensor(x) and x.dim() == 2: - if x.shape[1] != 3: - msg = "Expected tensor of shape (N, 3); got %r (in %s)" - raise ValueError(msg % (x.shape, name)) - if y is not None or z is not None: - msg = "Expected y and z to be None (in %s)" % name - raise ValueError(msg) - return x.to(device=device_, dtype=dtype) - - if allow_singleton and y is None and z is None: - y = x - z = x - - # Convert all to 1D tensors - xyz = [_handle_coord(c, dtype, device_) for c in [x, y, z]] - - # Broadcast and concatenate - sizes = [c.shape[0] for c in xyz] - N = max(sizes) - for c in xyz: - if c.shape[0] != 1 and c.shape[0] != N: - msg = "Got non-broadcastable sizes %r (in %s)" % (sizes, name) - raise ValueError(msg) - xyz = [c.expand(N) for c in xyz] - xyz = torch.stack(xyz, dim=1) - return xyz \ No newline at end of file diff --git a/spaces/Haokko/AronaTTS/attentions.py b/spaces/Haokko/AronaTTS/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Haokko/AronaTTS/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - 
self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = 
(*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
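        # Note (added for clarity; not in the original source): after the two pads above,
        # the flattened buffer has (length + 1) * (2 * length - 1) entries per head, so it
        # can be viewed as (length + 1) rows of (2 * length - 1). Dropping the last row and
        # the first (length - 1) columns of the remaining rows yields the absolute-position
        # scores of shape [b, h, l, l].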
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/noisychannel/rerank_options.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/noisychannel/rerank_options.py deleted file mode 100644 index de91939e6635bdf33c9dc330116be07d9e8be6a2..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/noisychannel/rerank_options.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from fairseq import options - - -def get_reranking_parser(default_task="translation"): - parser = options.get_parser("Generation and reranking", default_task) - add_reranking_args(parser) - return parser - - -def get_tuning_parser(default_task="translation"): - parser = options.get_parser("Reranking tuning", default_task) - add_reranking_args(parser) - add_tuning_args(parser) - return parser - - -def add_reranking_args(parser): - group = parser.add_argument_group("Reranking") - # fmt: off - group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True, - help='path to first model or ensemble of models for rescoring') - group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False, - help='path to second model or ensemble of models for rescoring') - group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10, - help='the number of candidate hypothesis to rescore') - group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128, - help='batch size for generating the nbest list') - group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'], - help='data subset to generate (train, valid, test)') - group.add_argument('--gen-model', default=None, metavar='FILE', - help='the model to generate translations') - group.add_argument('-b1', '--backwards1', action='store_true', - help='whether or not the first model group is backwards') - group.add_argument('-b2', '--backwards2', action='store_true', - help='whether or not the second model group is backwards') - group.add_argument('-a', '--weight1', default=1, nargs='+', type=float, - help='the weight(s) of the first model') - group.add_argument('-b', '--weight2', default=1, nargs='+', type=float, - help='the weight(s) of the second model, or the gen model if using nbest from interactive.py') - group.add_argument('-c', '--weight3', default=1, nargs='+', type=float, - help='the weight(s) of the third model') - - # lm arguments - group.add_argument('-lm', '--language-model', default=None, metavar='FILE', - help='language model for target language to rescore translations') - group.add_argument('--lm-dict', default=None, metavar='FILE', - help='the dict of the language model for the target language') - group.add_argument('--lm-name', default=None, - help='the name of the language model for the target language') - group.add_argument('--lm-bpe-code', default=None, metavar='FILE', - help='the bpe code for the language model for the target language') - group.add_argument('--data-dir-name', default=None, - help='name of data directory') - group.add_argument('--lenpen', default=1, nargs='+', type=float, - help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences') - group.add_argument('--score-dict-dir', default=None, - help='the directory with dictionaries for the scoring models') - group.add_argument('--right-to-left1', action='store_true', - help='whether the first model group is a right to left model') - group.add_argument('--right-to-left2', action='store_true', - help='whether the second model group is a right to left model') - group.add_argument('--post-process', '--remove-bpe', default='@@ ', - help='the bpe symbol, used for the bitext and LM') - group.add_argument('--prefix-len', default=None, type=int, - help='the length of the target prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--sampling', action='store_true', - help='use sampling instead of beam search for generating n best list') - 
group.add_argument('--diff-bpe', action='store_true', - help='bpe for rescoring and nbest list not the same') - group.add_argument('--rescore-bpe-code', default=None, - help='bpe code for rescoring models') - group.add_argument('--nbest-list', default=None, - help='use predefined nbest list in interactive.py format') - group.add_argument('--write-hypos', default=None, - help='filename prefix to write hypos to') - group.add_argument('--ref-translation', default=None, - help='reference translation to use with nbest list from interactive.py') - group.add_argument('--backwards-score-dict-dir', default=None, - help='the directory with dictionaries for the backwards model,' - 'if None then it is assumed the fw and backwards models share dictionaries') - - # extra scaling args - group.add_argument('--gen-model-name', default=None, - help='the name of the models that generated the nbest list') - group.add_argument('--model1-name', default=None, - help='the name of the set for model1 group ') - group.add_argument('--model2-name', default=None, - help='the name of the set for model2 group') - group.add_argument('--shard-id', default=0, type=int, - help='the id of the shard to generate') - group.add_argument('--num-shards', default=1, type=int, - help='the number of shards to generate across') - group.add_argument('--all-shards', action='store_true', - help='use all shards') - group.add_argument('--target-prefix-frac', default=None, type=float, - help='the fraction of the target prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--source-prefix-frac', default=None, type=float, - help='the fraction of the source prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--normalize', action='store_true', - help='whether to normalize by src and target len') - # fmt: on - return group - - -def add_tuning_args(parser): - group = parser.add_argument_group("Tuning") - - group.add_argument( - "--lower-bound", - default=[-0.7], - nargs="+", - type=float, - help="lower bound of search space", - ) - group.add_argument( - "--upper-bound", - default=[3], - nargs="+", - type=float, - help="upper bound of search space", - ) - group.add_argument( - "--tune-param", - default=["lenpen"], - nargs="+", - choices=["lenpen", "weight1", "weight2", "weight3"], - help="the parameter(s) to tune", - ) - group.add_argument( - "--tune-subset", - default="valid", - choices=["valid", "test", "train"], - help="the subset to tune on ", - ) - group.add_argument( - "--num-trials", - default=1000, - type=int, - help="number of trials to do for random search", - ) - group.add_argument( - "--share-weights", action="store_true", help="share weight2 and weight 3" - ) - return group diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/s2t_transformer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/s2t_transformer.py deleted file mode 100644 index aff9d0ffc7b7e671c476ff28d1cd945e9ff41519..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/s2t_transformer.py +++ /dev/null @@ -1,502 +0,0 @@ -#!/usr/bin/env python3 - -import logging -import math -from typing import Dict, List, Optional, Tuple -from pathlib import Path - -import torch -import torch.nn as nn -from fairseq import checkpoint_utils, utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - 
register_model, - register_model_architecture, -) -from fairseq.models.transformer import Embedding, TransformerDecoder -from fairseq.modules import ( - FairseqDropout, - LayerNorm, - PositionalEmbedding, - TransformerEncoderLayer, -) -from torch import Tensor - - -logger = logging.getLogger(__name__) - - -class Conv1dSubsampler(nn.Module): - """Convolutional subsampler: a stack of 1D convolution (along temporal - dimension) followed by non-linear activation via gated linear units - (https://arxiv.org/abs/1911.08460) - - Args: - in_channels (int): the number of input channels - mid_channels (int): the number of intermediate channels - out_channels (int): the number of output channels - kernel_sizes (List[int]): the kernel size for each convolutional layer - """ - - def __init__( - self, - in_channels: int, - mid_channels: int, - out_channels: int, - kernel_sizes: List[int] = (3, 3), - ): - super(Conv1dSubsampler, self).__init__() - self.n_layers = len(kernel_sizes) - self.conv_layers = nn.ModuleList( - nn.Conv1d( - in_channels if i == 0 else mid_channels // 2, - mid_channels if i < self.n_layers - 1 else out_channels * 2, - k, - stride=2, - padding=k // 2, - ) - for i, k in enumerate(kernel_sizes) - ) - - def get_out_seq_lens_tensor(self, in_seq_lens_tensor): - out = in_seq_lens_tensor.clone() - for _ in range(self.n_layers): - out = ((out.float() - 1) / 2 + 1).floor().long() - return out - - def forward(self, src_tokens, src_lengths): - bsz, in_seq_len, _ = src_tokens.size() # B x T x (C x D) - x = src_tokens.transpose(1, 2).contiguous() # -> B x (C x D) x T - for conv in self.conv_layers: - x = conv(x) - x = nn.functional.glu(x, dim=1) - _, _, out_seq_len = x.size() - x = x.transpose(1, 2).transpose(0, 1).contiguous() # -> T x B x (C x D) - return x, self.get_out_seq_lens_tensor(src_lengths) - - -@register_model("s2t_transformer") -class S2TTransformerModel(FairseqEncoderDecoderModel): - """Adapted Transformer model (https://arxiv.org/abs/1706.03762) for - speech-to-text tasks. The Transformer encoder/decoder remains the same. 
- A trainable input subsampler is prepended to the Transformer encoder to - project inputs into the encoder dimension as well as downsample input - sequence for computational efficiency.""" - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # input - parser.add_argument( - "--conv-kernel-sizes", - type=str, - metavar="N", - help="kernel sizes of Conv1d subsampling layers", - ) - parser.add_argument( - "--conv-channels", - type=int, - metavar="N", - help="# of channels in Conv1d subsampling layers", - ) - # Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - '--encoder-freezing-updates', - type=int, - metavar='N', - help='freeze encoder for first N updates' - ) - - @classmethod - def build_encoder(cls, args): - encoder = S2TTransformerEncoder(args) - pretraining_path = getattr(args, "load_pretrained_encoder_from", None) - if pretraining_path is not None: - if not Path(pretraining_path).exists(): - logger.warning( - f"skipped pretraining because {pretraining_path} does not exist" - ) - else: - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=pretraining_path - ) - logger.info(f"loaded pretrained 
encoder from: {pretraining_path}") - return encoder - - @classmethod - def build_decoder(cls, args, task, embed_tokens): - return TransformerDecoderScriptable(args, task.target_dictionary, embed_tokens) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - def build_embedding(dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - return Embedding(num_embeddings, embed_dim, padding_idx) - - decoder_embed_tokens = build_embedding( - task.target_dictionary, args.decoder_embed_dim - ) - encoder = cls.build_encoder(args) - decoder = cls.build_decoder(args, task, decoder_embed_tokens) - return cls(encoder, decoder) - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = self.get_normalized_probs_scriptable(net_output, log_probs, sample) - lprobs.batch_first = True - return lprobs - - def forward(self, src_tokens, src_lengths, prev_output_tokens): - """ - The forward method inherited from the base class has a **kwargs - argument in its input, which is not supported in torchscript. This - method overwrites the forward method definition without **kwargs. - """ - encoder_out = self.encoder(src_tokens=src_tokens, src_lengths=src_lengths) - decoder_out = self.decoder( - prev_output_tokens=prev_output_tokens, encoder_out=encoder_out - ) - return decoder_out - - -class S2TTransformerEncoder(FairseqEncoder): - """Speech-to-text Transformer encoder that consists of input subsampler and - Transformer encoder.""" - - def __init__(self, args): - super().__init__(None) - - self.encoder_freezing_updates = args.encoder_freezing_updates - self.num_updates = 0 - - self.dropout_module = FairseqDropout( - p=args.dropout, module_name=self.__class__.__name__ - ) - self.embed_scale = math.sqrt(args.encoder_embed_dim) - if args.no_scale_embedding: - self.embed_scale = 1.0 - self.padding_idx = 1 - - self.subsample = Conv1dSubsampler( - args.input_feat_per_channel * args.input_channels, - args.conv_channels, - args.encoder_embed_dim, - [int(k) for k in args.conv_kernel_sizes.split(",")], - ) - - self.embed_positions = PositionalEmbedding( - args.max_source_positions, args.encoder_embed_dim, self.padding_idx - ) - - self.transformer_layers = nn.ModuleList( - [TransformerEncoderLayer(args) for _ in range(args.encoder_layers)] - ) - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(args.encoder_embed_dim) - else: - self.layer_norm = None - - def _forward(self, src_tokens, src_lengths, return_all_hiddens=False): - x, input_lengths = self.subsample(src_tokens, src_lengths) - x = self.embed_scale * x - - encoder_padding_mask = lengths_to_padding_mask(input_lengths) - positions = self.embed_positions(encoder_padding_mask).transpose(0, 1) - x += positions - x = self.dropout_module(x) - - encoder_states = [] - - for layer in self.transformer_layers: - x = layer(x, encoder_padding_mask) - if return_all_hiddens: - encoder_states.append(x) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [encoder_padding_mask] if encoder_padding_mask.any() else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } 
- - def forward(self, src_tokens, src_lengths, return_all_hiddens=False): - if self.num_updates < self.encoder_freezing_updates: - with torch.no_grad(): - x = self._forward(src_tokens, src_lengths, - return_all_hiddens=return_all_hiddens) - else: - x = self._forward(src_tokens, src_lengths, - return_all_hiddens=return_all_hiddens) - return x - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] if len(encoder_out["encoder_padding_mask"]) == 0 - else [x.index_select(0, new_order) for x in encoder_out["encoder_padding_mask"]] - ) - - new_encoder_embedding = ( - [] if len(encoder_out["encoder_embedding"]) == 0 - else [x.index_select(0, new_order) for x in encoder_out["encoder_embedding"]] - ) - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B x T - "src_lengths": [], # B x 1 - } - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.num_updates = num_updates - - -class TransformerDecoderScriptable(TransformerDecoder): - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - # call scriptable method from parent class - x, _ = self.extract_features_scriptable( - prev_output_tokens, - encoder_out, - incremental_state, - full_context_alignment, - alignment_layer, - alignment_heads, - ) - return x, None - - -@register_model_architecture(model_name="s2t_transformer", arch_name="s2t_transformer") -def base_architecture(args): - args.encoder_freezing_updates = getattr(args, "encoder_freezing_updates", 0) - # Convolutional subsampler - args.conv_kernel_sizes = getattr(args, "conv_kernel_sizes", "5,5") - args.conv_channels = getattr(args, "conv_channels", 1024) - # Transformer - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", args.dropout) - args.activation_dropout = getattr(args, "activation_dropout", args.dropout) - args.activation_fn = getattr(args, 
"activation_fn", "relu") - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_s") -def s2t_transformer_s(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 8) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - base_architecture(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_xs") -def s2t_transformer_xs(args): - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_layers = getattr(args, "decoder_layers", 3) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 4) - args.dropout = getattr(args, "dropout", 0.3) - s2t_transformer_s(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_sp") -def s2t_transformer_sp(args): - args.encoder_layers = getattr(args, "encoder_layers", 16) - s2t_transformer_s(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_m") -def s2t_transformer_m(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 512 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.dropout = getattr(args, "dropout", 0.15) - base_architecture(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_mp") -def s2t_transformer_mp(args): - args.encoder_layers = getattr(args, "encoder_layers", 16) - s2t_transformer_m(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_l") -def s2t_transformer_l(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.2) - base_architecture(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_lp") -def s2t_transformer_lp(args): - args.encoder_layers = getattr(args, "encoder_layers", 16) - s2t_transformer_l(args) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/__init__.py deleted file mode 100644 index 3b2a99c1227f827768911e5e22e79f6865ffbfd3..0000000000000000000000000000000000000000 --- 
a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .lightconv_layer import LightconvLayer # noqa diff --git a/spaces/Hexii/FoodVision/README.md b/spaces/Hexii/FoodVision/README.md deleted file mode 100644 index cdfc12ec59ff8edbfb906682e921519126b43876..0000000000000000000000000000000000000000 --- a/spaces/Hexii/FoodVision/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FoodVision -emoji: ⚡ -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HighCWu/colorful-ascii-art/app_video.py b/spaces/HighCWu/colorful-ascii-art/app_video.py deleted file mode 100644 index 4476fe9eca52540645cbd070ba91dca76b10ac82..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/colorful-ascii-art/app_video.py +++ /dev/null @@ -1,188 +0,0 @@ -''' -Refer to https://huggingface.co/spaces/dt/ascii-art/blob/main/app.py -''' - -# Python code to convert an image to ASCII image. -import sys, random, argparse -import numpy as np -import math -import base64 -from PIL import Image, ImageFont, ImageDraw -from moviepy.editor import * -from tqdm.auto import tqdm - -import gradio as gr - -# 70 levels of gray -gscale1 = "$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\|()1{}[]?-_+~<>i!lI;:,\"^`'. " - -# 10 levels of gray -gscale2 = '@%#*+=-:. ' - -font = ImageFont.load_default() - -def getAverageL(image): - - """ - Given PIL Image, return average value of grayscale value - """ - # get image as numpy array - im = np.array(image) - # get shape - w,h = im.shape - - # get average - return np.average(im.reshape(w*h)) - -def covertImageToAscii(input_img, cols, scale, moreLevels): - """ - Given Image and dims (rows, cols) returns an m*n list of Images - """ - # declare globals - global gscale1, gscale2 - - # open image and convert to grayscale - image = input_img.convert('L') - - # store dimensions - # store dimensions - W, H = image.size[0], image.size[1] - - # compute width of tile - w = W/cols - - # compute tile height based on aspect ratio and scale - h = w/scale - - # compute number of rows - rows = int(H/h) - - # check if image size is too small - if cols > W or rows > H: - print("Image too small for specified cols!") - exit(0) - - # ascii image is a list of character strings - aimg = [] - # generate list of dimensions - for j in range(rows): - y1 = int(j*h) - y2 = int((j+1)*h) - - # correct last tile - if j == rows-1: - y2 = H - - # append an empty string - aimg.append("") - - for i in range(cols): - - # crop image to tile - x1 = int(i*w) - x2 = int((i+1)*w) - - # correct last tile - if i == cols-1: - x2 = W - - # crop image to extract tile - img = image.crop((x1, y1, x2, y2)) - - # get average luminance - avg = int(getAverageL(img)) - - # look up ascii char - if moreLevels: - gsval = gscale1[int((avg*69)/255)] - else: - gsval = gscale2[int((avg*9)/255)] - - # append ascii char to string - aimg[j] += gsval - - # return txt image - return aimg - - -def colorizeTextImage(input_img, text_img): - input_img = np.asarray(input_img) - input_img = input_img.reshape(( - input_img.shape[0]//11, - 11, - input_img.shape[1]//6, - 6, - 3 - )) - input_img = np.float32(input_img) - text_img = 
np.asarray(text_img) - text_img = text_img.reshape(( - input_img.shape[0], - 11, - input_img.shape[2], - 6, - 3 - )) - alpha = np.float32(text_img)[...,:1] / 255 - alpha[alpha < 0.125] = 0 - alpha[alpha >= 0.125] = 1 - out_img = input_img * alpha - out_colors = out_img.sum((1,3), keepdims=True) / (alpha.sum((1,3), keepdims=True) + 1e-12) - out_img = out_colors * alpha - out_img = out_img.reshape(( - out_img.shape[0] * out_img.shape[1], - out_img.shape[2] * out_img.shape[3], - 3 - )) - out_img = np.clip(out_img, 0, 255) - out_img = np.uint8(out_img) - - return out_img - - -def sepia(input_img, no_colors=False): - input_img = Image.fromarray(input_img).convert('RGB') - aimg = covertImageToAscii(input_img, 200, 6/11, True) - blank_image = Image.new(mode="RGB", size=(len(aimg[0])*6, len(aimg)*11), color=(0, 0, 0)) - - my_image = blank_image.copy() - image_editable = ImageDraw.Draw(my_image) - - image_editable.text((0, 0), "\n".join(aimg), (255, 255, 255), font=font, spacing=0) - if no_colors: - return np.asarray(my_image) - - input_img_resize = input_img.resize((len(aimg[0])*6, len(aimg)*11), Image.BICUBIC) - w, h = input_img.size - scale = 200 * 6 / w - w = 200 * 6 - h = int(round(h*scale)) - input_img = input_img.resize((200 * 6, h), Image.BICUBIC) - input_img_resize.paste(input_img, (0, 0, w, h)) - input_img = input_img_resize - - my_image = colorizeTextImage(input_img, my_image) - - return my_image - - -def sepia_video(video_file, no_colors=False): - clip = VideoFileClip(video_file) - audioclip = clip.audio - frames = int(clip.fps * clip.duration) - imgs = [] - for i in tqdm(range(frames)): - imgs.append(sepia(clip.get_frame(i/clip.fps), no_colors)) - video = ImageSequenceClip(imgs, fps=clip.fps) - video = video.set_audio(audioclip) - video.write_videofile("out.mp4", fps=clip.fps) - - return "out.mp4" - -iface = gr.Interface(sepia_video, - [gr.Video(format=None), gr.Checkbox(label="No Colors")], - "video", - title = "Colorful ASCII Art", - description = "Convert an image to colorful ASCII art based on ascii character density. 
Click the first output text to download the generated svg.") - -iface.launch() diff --git a/spaces/HuggingFaceM4/obelics_visualization/app.py b/spaces/HuggingFaceM4/obelics_visualization/app.py deleted file mode 100644 index 716d545e187bf8c3b2ed1b09140dd144949fb809..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/obelics_visualization/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import json - -import streamlit as st -from datasets import load_from_disk - - -class Visualization: - def __init__(self, path_web_documents_dataset): - self.path_web_documents_dataset = path_web_documents_dataset - - def visualization(self): - self.set_title() - self.load_dataset() - self.choose_document() - self.display_document() - - def set_title(self): - st.title("Visualization of OBELICS web documents") - - def load_dataset(self): - self.dataset = load_from_disk(self.path_web_documents_dataset) - - def choose_document(self): - st.header("Choose a document") - idx = st.number_input( - f"Select a document among the first {self.dataset.num_rows} ones", - min_value=0, - max_value=self.dataset.num_rows - 1, - value=0, - step=1, - help=f"Index between 0 and {self.dataset.num_rows-1}", - ) - self.current_doc = self.dataset[idx] - - def display_document(self): - st.header("Document") - texts = self.current_doc["texts"] - images = self.current_doc["images"] - metadata = json.loads(self.current_doc["metadata"]) - for text, image, meta in zip(texts, images, metadata): - if text: - display_text = f"{text}\n".replace("\n", "
      ") # .replace(" ", " ") Preserves white spaces, but creates text outside the width of the window - st.markdown(f"
      {display_text}
      ", unsafe_allow_html=True) - elif image: - st.markdown(f'', unsafe_allow_html=True) - st.text("\n") - - -if __name__ == "__main__": - st.set_page_config(layout="wide") - path_web_documents_dataset = "./web_docs_final_replaceimgbyurl" - visualization = Visualization(path_web_documents_dataset=path_web_documents_dataset) - visualization.visualization() diff --git a/spaces/HugoDzz/super-godot-galaxy/static/smg/index.audio.worklet.js b/spaces/HugoDzz/super-godot-galaxy/static/smg/index.audio.worklet.js deleted file mode 100644 index 89b581b3d6fae0561c19549bc1f3471e9d1c4762..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/super-godot-galaxy/static/smg/index.audio.worklet.js +++ /dev/null @@ -1,213 +0,0 @@ -/**************************************************************************/ -/* audio.worklet.js */ -/**************************************************************************/ -/* This file is part of: */ -/* GODOT ENGINE */ -/* https://godotengine.org */ -/**************************************************************************/ -/* Copyright (c) 2014-present Godot Engine contributors (see AUTHORS.md). */ -/* Copyright (c) 2007-2014 Juan Linietsky, Ariel Manzur. */ -/* */ -/* Permission is hereby granted, free of charge, to any person obtaining */ -/* a copy of this software and associated documentation files (the */ -/* "Software"), to deal in the Software without restriction, including */ -/* without limitation the rights to use, copy, modify, merge, publish, */ -/* distribute, sublicense, and/or sell copies of the Software, and to */ -/* permit persons to whom the Software is furnished to do so, subject to */ -/* the following conditions: */ -/* */ -/* The above copyright notice and this permission notice shall be */ -/* included in all copies or substantial portions of the Software. */ -/* */ -/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, */ -/* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF */ -/* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. */ -/* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY */ -/* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, */ -/* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE */ -/* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ -/**************************************************************************/ - -class RingBuffer { - constructor(p_buffer, p_state, p_threads) { - this.buffer = p_buffer; - this.avail = p_state; - this.threads = p_threads; - this.rpos = 0; - this.wpos = 0; - } - - data_left() { - return this.threads ? 
Atomics.load(this.avail, 0) : this.avail; - } - - space_left() { - return this.buffer.length - this.data_left(); - } - - read(output) { - const size = this.buffer.length; - let from = 0; - let to_write = output.length; - if (this.rpos + to_write > size) { - const high = size - this.rpos; - output.set(this.buffer.subarray(this.rpos, size)); - from = high; - to_write -= high; - this.rpos = 0; - } - if (to_write) { - output.set(this.buffer.subarray(this.rpos, this.rpos + to_write), from); - } - this.rpos += to_write; - if (this.threads) { - Atomics.add(this.avail, 0, -output.length); - Atomics.notify(this.avail, 0); - } else { - this.avail -= output.length; - } - } - - write(p_buffer) { - const to_write = p_buffer.length; - const mw = this.buffer.length - this.wpos; - if (mw >= to_write) { - this.buffer.set(p_buffer, this.wpos); - this.wpos += to_write; - if (mw === to_write) { - this.wpos = 0; - } - } else { - const high = p_buffer.subarray(0, mw); - const low = p_buffer.subarray(mw); - this.buffer.set(high, this.wpos); - this.buffer.set(low); - this.wpos = low.length; - } - if (this.threads) { - Atomics.add(this.avail, 0, to_write); - Atomics.notify(this.avail, 0); - } else { - this.avail += to_write; - } - } -} - -class GodotProcessor extends AudioWorkletProcessor { - constructor() { - super(); - this.threads = false; - this.running = true; - this.lock = null; - this.notifier = null; - this.output = null; - this.output_buffer = new Float32Array(); - this.input = null; - this.input_buffer = new Float32Array(); - this.port.onmessage = (event) => { - const cmd = event.data['cmd']; - const data = event.data['data']; - this.parse_message(cmd, data); - }; - } - - process_notify() { - if (this.notifier) { - Atomics.add(this.notifier, 0, 1); - Atomics.notify(this.notifier, 0); - } - } - - parse_message(p_cmd, p_data) { - if (p_cmd === 'start' && p_data) { - const state = p_data[0]; - let idx = 0; - this.threads = true; - this.lock = state.subarray(idx, ++idx); - this.notifier = state.subarray(idx, ++idx); - const avail_in = state.subarray(idx, ++idx); - const avail_out = state.subarray(idx, ++idx); - this.input = new RingBuffer(p_data[1], avail_in, true); - this.output = new RingBuffer(p_data[2], avail_out, true); - } else if (p_cmd === 'stop') { - this.running = false; - this.output = null; - this.input = null; - this.lock = null; - this.notifier = null; - } else if (p_cmd === 'start_nothreads') { - this.output = new RingBuffer(p_data[0], p_data[0].length, false); - } else if (p_cmd === 'chunk') { - this.output.write(p_data); - } - } - - static array_has_data(arr) { - return arr.length && arr[0].length && arr[0][0].length; - } - - process(inputs, outputs, parameters) { - if (!this.running) { - return false; // Stop processing. - } - if (this.output === null) { - return true; // Not ready yet, keep processing. - } - const process_input = GodotProcessor.array_has_data(inputs); - if (process_input) { - const input = inputs[0]; - const chunk = input[0].length * input.length; - if (this.input_buffer.length !== chunk) { - this.input_buffer = new Float32Array(chunk); - } - if (!this.threads) { - GodotProcessor.write_input(this.input_buffer, input); - this.port.postMessage({ 'cmd': 'input', 'data': this.input_buffer }); - } else if (this.input.space_left() >= chunk) { - GodotProcessor.write_input(this.input_buffer, input); - this.input.write(this.input_buffer); - } else { - this.port.postMessage('Input buffer is full! 
Skipping input frame.'); - } - } - const process_output = GodotProcessor.array_has_data(outputs); - if (process_output) { - const output = outputs[0]; - const chunk = output[0].length * output.length; - if (this.output_buffer.length !== chunk) { - this.output_buffer = new Float32Array(chunk); - } - if (this.output.data_left() >= chunk) { - this.output.read(this.output_buffer); - GodotProcessor.write_output(output, this.output_buffer); - if (!this.threads) { - this.port.postMessage({ 'cmd': 'read', 'data': chunk }); - } - } else { - this.port.postMessage('Output buffer has not enough frames! Skipping output frame.'); - } - } - this.process_notify(); - return true; - } - - static write_output(dest, source) { - const channels = dest.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < dest[ch].length; sample++) { - dest[ch][sample] = source[sample * channels + ch]; - } - } - } - - static write_input(dest, source) { - const channels = source.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < source[ch].length; sample++) { - dest[sample * channels + ch] = source[ch][sample]; - } - } - } -} - -registerProcessor('godot-processor', GodotProcessor); diff --git a/spaces/Hunter731/Unity3D-RTS/Build/output.loader.js b/spaces/Hunter731/Unity3D-RTS/Build/output.loader.js deleted file mode 100644 index 6bcb648e952a40f7347ee111a82cf4cac8a1d15b..0000000000000000000000000000000000000000 --- a/spaces/Hunter731/Unity3D-RTS/Build/output.loader.js +++ /dev/null @@ -1 +0,0 @@ -function createUnityInstance(t,r,d){function i(e,t){if(!i.aborted&&r.showBanner)return"error"==t&&(i.aborted=!0),r.showBanner(e,t);switch(t){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function n(e){var t=e.reason||e.error,r=t?t.toString():e.message||e.reason||"",n=t&&t.stack?t.stack.toString():"";(r+="\n"+(n=n.startsWith(r)?n.substring(r.length):n).trim())&&c.stackTraceRegExp&&c.stackTraceRegExp.test(r)&&C(r,e.filename||t&&(t.fileName||t.sourceURL)||"",e.lineno||t&&(t.lineNumber||t.line)||0)}function e(e,t,r){var n=e[t];void 0!==n&&n||(console.warn('Config option "'+t+'" is missing or empty. Falling back to default value: "'+r+'". Consider updating your WebGL template to include the missing config option.'),e[t]=r)}d=d||function(){};var o,c={canvas:t,webglContextAttributes:{preserveDrawingBuffer:!1,powerPreference:2},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){e=window.setInterval(e,t);return this.intervals[e]=!0,e},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&-1!=e.indexOf("wasm streaming compile failed")&&(-1!=e.toLowerCase().indexOf("mime")?i('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):i('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). 
Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return"build.wasm"==e?this.codeUrl:e},disabledCanvasEvents:["contextmenu","dragstart"]};for(o in e(r,"companyName","Unity"),e(r,"productName","WebGL Player"),e(r,"productVersion","1.0"),r)c[o]=r[o];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var a=c.disabledCanvasEvents.slice();function s(e){e.preventDefault()}a.forEach(function(e){t.addEventListener(e,s)}),window.addEventListener("error",n),window.addEventListener("unhandledrejection",n),c.deinitializers.push(function(){for(var e in c.disableAccessToMediaDevices(),a.forEach(function(e){t.removeEventListener(e,s)}),window.removeEventListener("error",n),window.removeEventListener("unhandledrejection",n),c.intervals)window.clearInterval(e);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;eIf using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+n+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'!
      If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'),void i(r,"error"))}i("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var o=unityFramework;unityFramework=null,s.onload=null,a(o)},s.onerror=function(e){i("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(s),c.deinitializers.push(function(){document.body.removeChild(s)})}).then(function(e){e(c)});x(r="dataUrl"),e=c.cacheControl(c[r]),t=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,n=c[r],n=/file:\/\//.exec(n)?"same-origin":void 0;var r,e,t,n,o=t(c[r],{method:"GET",companyName:c.companyName,productName:c.productName,control:e,mode:n,onProgress:function(e){x(r,e)}}).then(function(e){return e.parsedBody}).catch(function(e){var t="Failed to download file "+c[r];"file:"==location.protocol?i(t+". Loading web pages via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(t)});c.preRun.push(function(){c.addRunDependency("dataUrl"),o.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),r=0,n="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(r,r+n.length))==n)throw"unknown data format";var o=t.getUint32(r+=n.length,!0);for(r+=4;r $TMP_REF - -sacrebleu -t $DATASET -l $LANGPAIR --echo src -q \ -| sacremoses normalize -l $SRCLANG -q \ -| sacremoses tokenize -a -l $SRCLANG -q \ -| python $BPEROOT/apply_bpe.py -c $BPECODE \ -| fairseq-interactive $DATABIN --path $MODEL \ - -s $SRCLANG -t $TGTLANG \ - --beam 5 --remove-bpe --buffer-size 1024 --max-tokens 8000 \ -| grep ^H- | cut -f 3- \ -| fairseq-score --ref $TMP_REF - -rm -f $TMP_REF diff --git a/spaces/ICML2022/OFA/fairseq/examples/translation/README.md b/spaces/ICML2022/OFA/fairseq/examples/translation/README.md deleted file mode 100644 index 2941f5eb8482dab61dca5eca27a71abd7ee5bf5c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/translation/README.md +++ /dev/null @@ -1,301 +0,0 @@ -# Neural Machine Translation - -This README contains instructions for [using pretrained translation models](#example-usage-torchhub) -as well as [training new models](#training-a-new-model). - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`conv.wmt14.en-fr` | Convolutional
<br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2)
-`conv.wmt14.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2)
-`conv.wmt17.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2)
-`transformer.wmt14.en-fr` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
-`transformer.wmt16.en-de` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
-`transformer.wmt18.en-de` | Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) <br> WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) <br> See NOTE in the archive
-`transformer.wmt19.en-de` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-German](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz)
-`transformer.wmt19.de-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 German-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz)
-`transformer.wmt19.en-ru` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-Russian](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz)
-`transformer.wmt19.ru-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 Russian-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br>
      [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz) - -## Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses subword_nmt -``` - -Interactive translation via PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt16.en-de', ... ] - -# Load a transformer trained on WMT'16 En-De -# Note: WMT'19 models use fastBPE instead of subword_nmt, see instructions below -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de', - tokenizer='moses', bpe='subword_nmt') -en2de.eval() # disable dropout - -# The underlying model is available under the *models* attribute -assert isinstance(en2de.models[0], fairseq.models.transformer.TransformerModel) - -# Move model to GPU for faster translation -en2de.cuda() - -# Translate a sentence -en2de.translate('Hello world!') -# 'Hallo Welt!' - -# Batched translation -en2de.translate(['Hello world!', 'The cat sat on the mat.']) -# ['Hallo Welt!', 'Die Katze saß auf der Matte.'] -``` - -Loading custom models: -```python -from fairseq.models.transformer import TransformerModel -zh2en = TransformerModel.from_pretrained( - '/path/to/checkpoints', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='data-bin/wmt17_zh_en_full', - bpe='subword_nmt', - bpe_codes='data-bin/wmt17_zh_en_full/zh.code' -) -zh2en.translate('你好 世界') -# 'Hello World' -``` - -If you are using a `transformer.wmt19` models, you will need to set the `bpe` -argument to `'fastbpe'` and (optionally) load the 4-model ensemble: -```python -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', - checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -en2de.eval() # disable dropout -``` - -## Example usage (CLI tools) - -Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti: -```bash -mkdir -p data-bin -curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin -curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin -fairseq-generate data-bin/wmt14.en-fr.newstest2014 \ - --path data-bin/wmt14.en-fr.fconv-py/model.pt \ - --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out -# ... -# | Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s) -# | Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787) - -# Compute BLEU score -grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys -grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref -fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref -# BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787) -``` - -## Training a new model - -### IWSLT'14 German to English (Transformer) - -The following instructions can be used to train a Transformer model on the [IWSLT'14 German to English dataset](http://workshop2014.iwslt.org/downloads/proceeding.pdf). - -First download and preprocess the data: -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-iwslt14.sh -cd ../.. 
- -# Preprocess/binarize the data -TEXT=examples/translation/iwslt14.tokenized.de-en -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/iwslt14.tokenized.de-en \ - --workers 20 -``` - -Next we'll train a Transformer translation model over this data: -```bash -CUDA_VISIBLE_DEVICES=0 fairseq-train \ - data-bin/iwslt14.tokenized.de-en \ - --arch transformer_iwslt_de_en --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --dropout 0.3 --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 4096 \ - --eval-bleu \ - --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \ - --eval-bleu-detok moses \ - --eval-bleu-remove-bpe \ - --eval-bleu-print-samples \ - --best-checkpoint-metric bleu --maximize-best-checkpoint-metric -``` - -Finally we can evaluate our trained model: -```bash -fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --batch-size 128 --beam 5 --remove-bpe -``` - -### WMT'14 English to German (Convolutional) - -The following instructions can be used to train a Convolutional translation model on the WMT English to German dataset. -See the [Scaling NMT README](../scaling_nmt/README.md) for instructions to train a Transformer translation model on this data. - -The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script. -By default it will produce a dataset that was modeled after [Attention Is All You Need (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but with additional news-commentary-v12 data from WMT'17. - -To use only data available in WMT'14 or to replicate results obtained in the original [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, please use the `--icml17` option. - -```bash -# Download and prepare the data -cd examples/translation/ -# WMT'17 data: -bash prepare-wmt14en2de.sh -# or to use WMT'14 data: -# bash prepare-wmt14en2de.sh --icml17 -cd ../.. - -# Binarize the dataset -TEXT=examples/translation/wmt17_en_de -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0 \ - --workers 20 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_de -fairseq-train \ - data-bin/wmt17_en_de \ - --arch fconv_wmt_en_de \ - --dropout 0.2 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 4000 \ - --save-dir checkpoints/fconv_wmt_en_de - -# Evaluate -fairseq-generate data-bin/wmt17_en_de \ - --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -### WMT'14 English to French -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-wmt14en2fr.sh -cd ../.. 
- -# Binarize the dataset -TEXT=examples/translation/wmt14_en_fr -fairseq-preprocess \ - --source-lang en --target-lang fr \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0 \ - --workers 60 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_fr -fairseq-train \ - data-bin/wmt14_en_fr \ - --arch fconv_wmt_en_fr \ - --dropout 0.1 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 3000 \ - --save-dir checkpoints/fconv_wmt_en_fr - -# Evaluate -fairseq-generate \ - data-bin/fconv_wmt_en_fr \ - --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -## Multilingual Translation - -We also support training multilingual translation models. In this example we'll -train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets. - -Note that we use slightly different preprocessing here than for the IWSLT'14 -En-De data above. In particular we learn a joint BPE code for all three -languages and use fairseq-interactive and sacrebleu for scoring the test set. - -```bash -# First install sacrebleu and sentencepiece -pip install sacrebleu sentencepiece - -# Then download and preprocess the data -cd examples/translation/ -bash prepare-iwslt17-multilingual.sh -cd ../.. - -# Binarize the de-en dataset -TEXT=examples/translation/iwslt17.de_fr.en.bpe16k -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train.bpe.de-en \ - --validpref $TEXT/valid0.bpe.de-en,$TEXT/valid1.bpe.de-en,$TEXT/valid2.bpe.de-en,$TEXT/valid3.bpe.de-en,$TEXT/valid4.bpe.de-en,$TEXT/valid5.bpe.de-en \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Binarize the fr-en dataset -# NOTE: it's important to reuse the en dictionary from the previous step -fairseq-preprocess --source-lang fr --target-lang en \ - --trainpref $TEXT/train.bpe.fr-en \ - --validpref $TEXT/valid0.bpe.fr-en,$TEXT/valid1.bpe.fr-en,$TEXT/valid2.bpe.fr-en,$TEXT/valid3.bpe.fr-en,$TEXT/valid4.bpe.fr-en,$TEXT/valid5.bpe.fr-en \ - --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Train a multilingual transformer model -# NOTE: the command below assumes 1 GPU, but accumulates gradients from -# 8 fwd/bwd passes to simulate training on 8 GPUs -mkdir -p checkpoints/multilingual_transformer -CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \ - --max-epoch 50 \ - --ddp-backend=legacy_ddp \ - --task multilingual_translation --lang-pairs de-en,fr-en \ - --arch multilingual_transformer_iwslt_de_en \ - --share-decoders --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \ - --dropout 0.3 --weight-decay 0.0001 \ - --save-dir checkpoints/multilingual_transformer \ - --max-tokens 4000 \ - --update-freq 8 - -# Generate and score the test set with sacrebleu -SRC=de -sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \ - | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \ - > iwslt17.test.${SRC}-en.${SRC}.bpe -cat iwslt17.test.${SRC}-en.${SRC}.bpe \ - | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \ - --task 
multilingual_translation --lang-pairs de-en,fr-en \ - --source-lang ${SRC} --target-lang en \ - --path checkpoints/multilingual_transformer/checkpoint_best.pt \ - --buffer-size 2000 --batch-size 128 \ - --beam 5 --remove-bpe=sentencepiece \ - > iwslt17.test.${SRC}-en.en.sys -grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \ - | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en -``` - -##### Argument format during inference - -During inference it is required to specify a single `--source-lang` and -`--target-lang`, which indicates the inference langauge direction. -`--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to -the same value as training. diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/cross_entropy.py b/spaces/ICML2022/OFA/fairseq/fairseq/criterions/cross_entropy.py deleted file mode 100644 index fe461064716b38ecf2eb610daddbb609a1884e6b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/criterions/cross_entropy.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass - -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class CrossEntropyCriterionConfig(FairseqDataclass): - sentence_avg: bool = II("optimization.sentence_avg") - - -@register_criterion("cross_entropy", dataclass=CrossEntropyCriterionConfig) -class CrossEntropyCriterion(FairseqCriterion): - def __init__(self, task, sentence_avg): - super().__init__(task) - self.sentence_avg = sentence_avg - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, _ = self.compute_loss(model, net_output, sample, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.view(-1, lprobs.size(-1)) - target = model.get_targets(sample, net_output).view(-1) - loss = F.nll_loss( - lprobs, - target, - ignore_index=self.padding_idx, - reduction="sum" if reduce else "none", - ) - return loss, loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - # we divide by log(2) to convert the loss from base e to base 2 - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/utils.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/utils.py deleted file mode 100644 index e9f0318e306fa04bff0ada70486b41aaa69b07c8..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/util/utils.py +++ /dev/null @@ -1,608 +0,0 @@ -import argparse -import json -import warnings -from collections import OrderedDict -from copy import deepcopy -from typing import Any, Dict, List - -import numpy as np -import torch -from transformers import AutoTokenizer - -from groundingdino.util.slconfig import SLConfig - - -def slprint(x, name="x"): - if isinstance(x, (torch.Tensor, np.ndarray)): - print(f"{name}.shape:", x.shape) - elif isinstance(x, (tuple, list)): - print("type x:", type(x)) - for i in range(min(10, len(x))): - slprint(x[i], f"{name}[{i}]") - elif isinstance(x, dict): - for k, v in x.items(): - slprint(v, f"{name}[{k}]") - else: - print(f"{name}.type:", type(x)) - - -def clean_state_dict(state_dict): - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k[:7] == "module.": - k = k[7:] # remove `module.` - new_state_dict[k] = v - return new_state_dict - - -def renorm( - img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". 
(%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class CocoClassMapper: - def __init__(self) -> None: - self.category_map_str = { - "1": 1, - "2": 2, - "3": 3, - "4": 4, - "5": 5, - "6": 6, - "7": 7, - "8": 8, - "9": 9, - "10": 10, - "11": 11, - "13": 12, - "14": 13, - "15": 14, - "16": 15, - "17": 16, - "18": 17, - "19": 18, - "20": 19, - "21": 20, - "22": 21, - "23": 22, - "24": 23, - "25": 24, - "27": 25, - "28": 26, - "31": 27, - "32": 28, - "33": 29, - "34": 30, - "35": 31, - "36": 32, - "37": 33, - "38": 34, - "39": 35, - "40": 36, - "41": 37, - "42": 38, - "43": 39, - "44": 40, - "46": 41, - "47": 42, - "48": 43, - "49": 44, - "50": 45, - "51": 46, - "52": 47, - "53": 48, - "54": 49, - "55": 50, - "56": 51, - "57": 52, - "58": 53, - "59": 54, - "60": 55, - "61": 56, - "62": 57, - "63": 58, - "64": 59, - "65": 60, - "67": 61, - "70": 62, - "72": 63, - "73": 64, - "74": 65, - "75": 66, - "76": 67, - "77": 68, - "78": 69, - "79": 70, - "80": 71, - "81": 72, - "82": 73, - "84": 74, - "85": 75, - "86": 76, - "87": 77, - "88": 78, - "89": 79, - "90": 80, - } - self.origin2compact_mapper = {int(k): v - 1 for k, v in self.category_map_str.items()} - self.compact2origin_mapper = {int(v - 1): int(k) for k, v in self.category_map_str.items()} - - def origin2compact(self, idx): - return self.origin2compact_mapper[int(idx)] - - def compact2origin(self, idx): - return self.compact2origin_mapper[int(idx)] - - -def to_device(item, device): - if isinstance(item, torch.Tensor): - return item.to(device) - elif isinstance(item, list): - return [to_device(i, device) for i in item] - elif isinstance(item, dict): - return {k: to_device(v, device) for k, v in item.items()} - else: - raise NotImplementedError( - "Call Shilong if you use other containers! type: {}".format(type(item)) - ) - - -# -def get_gaussian_mean(x, axis, other_axis, softmax=True): - """ - - Args: - x (float): Input images(BxCxHxW) - axis (int): The index for weighted mean - other_axis (int): The other index - - Returns: weighted index for axis, BxC - - """ - mat2line = torch.sum(x, axis=other_axis) - # mat2line = mat2line / mat2line.mean() * 10 - if softmax: - u = torch.softmax(mat2line, axis=2) - else: - u = mat2line / (mat2line.sum(2, keepdim=True) + 1e-6) - size = x.shape[axis] - ind = torch.linspace(0, 1, size).to(x.device) - batch = x.shape[0] - channel = x.shape[1] - index = ind.repeat([batch, channel, 1]) - mean_position = torch.sum(index * u, dim=2) - return mean_position - - -def get_expected_points_from_map(hm, softmax=True): - """get_gaussian_map_from_points - B,C,H,W -> B,N,2 float(0, 1) float(0, 1) - softargmax function - - Args: - hm (float): Input images(BxCxHxW) - - Returns: - weighted index for axis, BxCx2. float between 0 and 1. 
- - """ - # hm = 10*hm - B, C, H, W = hm.shape - y_mean = get_gaussian_mean(hm, 2, 3, softmax=softmax) # B,C - x_mean = get_gaussian_mean(hm, 3, 2, softmax=softmax) # B,C - # return torch.cat((x_mean.unsqueeze(-1), y_mean.unsqueeze(-1)), 2) - return torch.stack([x_mean, y_mean], dim=2) - - -# Positional encoding (section 5.1) -# borrow from nerf -class Embedder: - def __init__(self, **kwargs): - self.kwargs = kwargs - self.create_embedding_fn() - - def create_embedding_fn(self): - embed_fns = [] - d = self.kwargs["input_dims"] - out_dim = 0 - if self.kwargs["include_input"]: - embed_fns.append(lambda x: x) - out_dim += d - - max_freq = self.kwargs["max_freq_log2"] - N_freqs = self.kwargs["num_freqs"] - - if self.kwargs["log_sampling"]: - freq_bands = 2.0 ** torch.linspace(0.0, max_freq, steps=N_freqs) - else: - freq_bands = torch.linspace(2.0**0.0, 2.0**max_freq, steps=N_freqs) - - for freq in freq_bands: - for p_fn in self.kwargs["periodic_fns"]: - embed_fns.append(lambda x, p_fn=p_fn, freq=freq: p_fn(x * freq)) - out_dim += d - - self.embed_fns = embed_fns - self.out_dim = out_dim - - def embed(self, inputs): - return torch.cat([fn(inputs) for fn in self.embed_fns], -1) - - -def get_embedder(multires, i=0): - import torch.nn as nn - - if i == -1: - return nn.Identity(), 3 - - embed_kwargs = { - "include_input": True, - "input_dims": 3, - "max_freq_log2": multires - 1, - "num_freqs": multires, - "log_sampling": True, - "periodic_fns": [torch.sin, torch.cos], - } - - embedder_obj = Embedder(**embed_kwargs) - embed = lambda x, eo=embedder_obj: eo.embed(x) - return embed, embedder_obj.out_dim - - -class APOPMeter: - def __init__(self) -> None: - self.tp = 0 - self.fp = 0 - self.tn = 0 - self.fn = 0 - - def update(self, pred, gt): - """ - Input: - pred, gt: Tensor() - """ - assert pred.shape == gt.shape - self.tp += torch.logical_and(pred == 1, gt == 1).sum().item() - self.fp += torch.logical_and(pred == 1, gt == 0).sum().item() - self.tn += torch.logical_and(pred == 0, gt == 0).sum().item() - self.tn += torch.logical_and(pred == 1, gt == 0).sum().item() - - def update_cm(self, tp, fp, tn, fn): - self.tp += tp - self.fp += fp - self.tn += tn - self.tn += fn - - -def inverse_sigmoid(x, eps=1e-5): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - - -def get_raw_dict(args): - """ - return the dicf contained in args. - - e.g: - >>> with open(path, 'w') as f: - json.dump(get_raw_dict(args), f, indent=2) - """ - if isinstance(args, argparse.Namespace): - return vars(args) - elif isinstance(args, dict): - return args - elif isinstance(args, SLConfig): - return args._cfg_dict - else: - raise NotImplementedError("Unknown type {}".format(type(args))) - - -def stat_tensors(tensor): - assert tensor.dim() == 1 - tensor_sm = tensor.softmax(0) - entropy = (tensor_sm * torch.log(tensor_sm + 1e-9)).sum() - - return { - "max": tensor.max(), - "min": tensor.min(), - "mean": tensor.mean(), - "var": tensor.var(), - "std": tensor.var() ** 0.5, - "entropy": entropy, - } - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. - - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... 
return 'info' - >>> foo = Foo() - >>> assert str(foo) == '' - >>> assert repr(foo).startswith('>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... return 5 - >>> baz = Baz() - >>> assert str(baz) == '' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, "__len__"): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError(f"Define the __nice__ method for {self.__class__!r}") - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f"<{classname}({nice}) at {hex(id(self))}>" - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f"<{classname}({nice})>" - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng - - -def random_boxes(num=1, scale=1, rng=None): - """Simple version of ``kwimage.Boxes.random`` - - Returns: - Tensor: shape (n, 4) in x1, y1, x2, y2 format. 
- - References: - https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390 - - Example: - >>> num = 3 - >>> scale = 512 - >>> rng = 0 - >>> boxes = random_boxes(num, scale, rng) - >>> print(boxes) - tensor([[280.9925, 278.9802, 308.6148, 366.1769], - [216.9113, 330.6978, 224.0446, 456.5878], - [405.3632, 196.3221, 493.3953, 270.7942]]) - """ - rng = ensure_rng(rng) - - tlbr = rng.rand(num, 4).astype(np.float32) - - tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2]) - tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3]) - br_x = np.maximum(tlbr[:, 0], tlbr[:, 2]) - br_y = np.maximum(tlbr[:, 1], tlbr[:, 3]) - - tlbr[:, 0] = tl_x * scale - tlbr[:, 1] = tl_y * scale - tlbr[:, 2] = br_x * scale - tlbr[:, 3] = br_y * scale - - boxes = torch.from_numpy(tlbr) - return boxes - - -class ModelEma(torch.nn.Module): - def __init__(self, model, decay=0.9997, device=None): - super(ModelEma, self).__init__() - # make a copy of the model for accumulating moving average of weights - self.module = deepcopy(model) - self.module.eval() - - # import ipdb; ipdb.set_trace() - - self.decay = decay - self.device = device # perform ema on different device from model if set - if self.device is not None: - self.module.to(device=device) - - def _update(self, model, update_fn): - with torch.no_grad(): - for ema_v, model_v in zip( - self.module.state_dict().values(), model.state_dict().values() - ): - if self.device is not None: - model_v = model_v.to(device=self.device) - ema_v.copy_(update_fn(ema_v, model_v)) - - def update(self, model): - self._update(model, update_fn=lambda e, m: self.decay * e + (1.0 - self.decay) * m) - - def set(self, model): - self._update(model, update_fn=lambda e, m: m) - - -class BestMetricSingle: - def __init__(self, init_res=0.0, better="large") -> None: - self.init_res = init_res - self.best_res = init_res - self.best_ep = -1 - - self.better = better - assert better in ["large", "small"] - - def isbetter(self, new_res, old_res): - if self.better == "large": - return new_res > old_res - if self.better == "small": - return new_res < old_res - - def update(self, new_res, ep): - if self.isbetter(new_res, self.best_res): - self.best_res = new_res - self.best_ep = ep - return True - return False - - def __str__(self) -> str: - return "best_res: {}\t best_ep: {}".format(self.best_res, self.best_ep) - - def __repr__(self) -> str: - return self.__str__() - - def summary(self) -> dict: - return { - "best_res": self.best_res, - "best_ep": self.best_ep, - } - - -class BestMetricHolder: - def __init__(self, init_res=0.0, better="large", use_ema=False) -> None: - self.best_all = BestMetricSingle(init_res, better) - self.use_ema = use_ema - if use_ema: - self.best_ema = BestMetricSingle(init_res, better) - self.best_regular = BestMetricSingle(init_res, better) - - def update(self, new_res, epoch, is_ema=False): - """ - return if the results is the best. 
- """ - if not self.use_ema: - return self.best_all.update(new_res, epoch) - else: - if is_ema: - self.best_ema.update(new_res, epoch) - return self.best_all.update(new_res, epoch) - else: - self.best_regular.update(new_res, epoch) - return self.best_all.update(new_res, epoch) - - def summary(self): - if not self.use_ema: - return self.best_all.summary() - - res = {} - res.update({f"all_{k}": v for k, v in self.best_all.summary().items()}) - res.update({f"regular_{k}": v for k, v in self.best_regular.summary().items()}) - res.update({f"ema_{k}": v for k, v in self.best_ema.summary().items()}) - return res - - def __repr__(self) -> str: - return json.dumps(self.summary(), indent=2) - - def __str__(self) -> str: - return self.__repr__() - - -def targets_to(targets: List[Dict[str, Any]], device): - """Moves the target dicts to the given device.""" - excluded_keys = [ - "questionId", - "tokens_positive", - "strings_positive", - "tokens", - "dataset_name", - "sentence_id", - "original_img_id", - "nb_eval", - "task_id", - "original_id", - "token_span", - "caption", - "dataset_type", - ] - return [ - {k: v.to(device) if k not in excluded_keys else v for k, v in t.items()} for t in targets - ] - - -def get_phrases_from_posmap( - posmap: torch.BoolTensor, tokenized: Dict, tokenizer: AutoTokenizer -): - assert isinstance(posmap, torch.Tensor), "posmap must be torch.Tensor" - if posmap.dim() == 1: - non_zero_idx = posmap.nonzero(as_tuple=True)[0].tolist() - token_ids = [tokenized["input_ids"][i] for i in non_zero_idx] - return tokenizer.decode(token_ids) - else: - raise NotImplementedError("posmap must be 1-dim") diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/data/scripts/get_coco.sh b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/data/scripts/get_coco.sh deleted file mode 100644 index 0d388b0a12a84c504a2b12e85e3edcac5d78530c..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/data/scripts/get_coco.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download COCO 2017 dataset http://cocodataset.org -# Example usage: bash data/scripts/get_coco.sh -# parent -# ├── yolov5 -# └── datasets -# └── coco ← downloads here - -# Arguments (optional) Usage: bash data/scripts/get_coco.sh --train --val --test --segments -if [ "$#" -gt 0 ]; then - for opt in "$@"; do - case "${opt}" in - --train) train=true ;; - --val) val=true ;; - --test) test=true ;; - --segments) segments=true ;; - esac - done -else - train=true - val=true - test=false - segments=false -fi - -# Download/unzip labels -d='../datasets' # unzip directory -url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ -if [ "$segments" == "true" ]; then - f='coco2017labels-segments.zip' # 168 MB -else - f='coco2017labels.zip' # 46 MB -fi -echo 'Downloading' $url$f ' ...' -curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & - -# Download/unzip images -d='../datasets/coco/images' # unzip directory -url=http://images.cocodataset.org/zips/ -if [ "$train" == "true" ]; then - f='train2017.zip' # 19G, 118k images - echo 'Downloading' $url$f '...' - curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & -fi -if [ "$val" == "true" ]; then - f='val2017.zip' # 1G, 5k images - echo 'Downloading' $url$f '...' - curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & -fi -if [ "$test" == "true" ]; then - f='test2017.zip' # 7G, 41k images (optional) - echo 'Downloading' $url$f '...' 
- curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & -fi -wait # finish background tasks diff --git a/spaces/Illumotion/Koboldcpp/gguf-py/gguf/__init__.py b/spaces/Illumotion/Koboldcpp/gguf-py/gguf/__init__.py deleted file mode 100644 index f9b70a85b875e3626c518c321156abc5588b8ffe..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/gguf-py/gguf/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .gguf import * diff --git a/spaces/JLD/image-search/app.py b/spaces/JLD/image-search/app.py deleted file mode 100644 index 292f1f0dc0d657e6cc69f03bd1b79fea053f8f07..0000000000000000000000000000000000000000 --- a/spaces/JLD/image-search/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import streamlit as st -from sentence_transformers import SentenceTransformer, util -from pathlib import Path -import pickle -import requests -from PIL import Image -from io import BytesIO -import pandas as pd -from loguru import logger -import torch - -T2I = "Text 2 Image" -I2I = "Image 2 Image" -def get_match(model, query, img_embs): - query_emb = model.encode([query], convert_to_tensor=True) - cosine_sim = util.pytorch_cos_sim(query_emb, img_embs) - return cosine_sim -def text_2_image(model, img_emb, img_names, img_urls, n_top_k_images): - st.title("Text to Image") - st.write("This is the text to image mode. Enter a text to be converted to an image") - text = st.text_input("Enter the text to be converted to an image") - if text: - if st.button("Convert"): - st.write("The image with the most similar embedding is:") - cosine_sim = get_match(model, text, img_emb) - top_k_images_indices = torch.topk(cosine_sim, n_top_k_images, 1).indices.squeeze() - if top_k_images_indices.nelement() == 1: - top_k_images_indices = [top_k_images_indices.tolist()] - else: - top_k_images_indices = top_k_images_indices.tolist() - images_found = [img_names[top_k_best_image] for top_k_best_image in top_k_images_indices] - cols = st.columns(n_top_k_images) - for i, image_found in enumerate(images_found): - logger.success(f"Image match found: {image_found}") - img_url_best_match = img_urls.loc[img_urls["photo_id"] == image_found] - logger.info(img_url_best_match.photo_url) - if len(img_url_best_match) >= 1: - response = requests.get(img_url_best_match.iloc[0]["photo_image_url"] + "?w=320") - image = Image.open(BytesIO(response.content)) - with cols[i]: - st.image(image, caption=f"{i+1}/{n_top_k_images} most similar") - else: - st.error("No image found") - - -def image_2_image(model, img_emb, img_names, img_urls,n_top_k_images): - st.title("Image to Image") - st.write("This is the image to image mode. 
Enter an image to be converted to an image") - image = st.file_uploader("Upload an image to be converted to an image", type=["jpg", "png", "jpeg"]) - if image is not None: - image = Image.open(BytesIO(image.getvalue())) - st.image(image, caption="Uploaded image") - if st.button("Convert"): - st.write("The image with the most similar embedding is:") - cosine_sim = get_match(model, image.convert("RGB"), img_emb) - top_k_images_indices = torch.topk(cosine_sim, n_top_k_images, 1).indices.squeeze() - if top_k_images_indices.nelement() == 1: - top_k_images_indices = [top_k_images_indices.tolist()] - else: - top_k_images_indices = top_k_images_indices.tolist() - images_found = [img_names[top_k_best_image] for top_k_best_image in top_k_images_indices] - cols = st.columns(n_top_k_images) - for i, image_found in enumerate(images_found): - logger.success(f"Image match found: {image_found}") - img_url_best_match = img_urls.loc[img_urls["photo_id"] == image_found] - logger.info(img_url_best_match.photo_url) - if len(img_url_best_match) >= 1: - response = requests.get(img_url_best_match.iloc[0]["photo_image_url"] + "?w=320") - image = Image.open(BytesIO(response.content)) - with cols[i]: - st.image(image, caption=f"{i+1}/{n_top_k_images} most similar") - else: - st.error("No image found") - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def load_model(name): - # st.sidebar.info("Loading model") - model = SentenceTransformer(name) - # st.sidebar.success(f"Model {name} loaded") - return model - -@st.cache(suppress_st_warning=True) -def load_embeddings(filename): - st.sidebar.info("Loading Unsplash-Lite image embeddings") - with open(filename, "rb") as fIn: - img_names, img_emb = pickle.load(fIn) - st.sidebar.success("Images embeddings loaded") - return img_names, img_emb - -@st.cache(suppress_st_warning=True) -def load_image_url_list(filename): - url_list = pd.read_csv(filename, sep='\t', header=0) - return url_list - -def main(): - st.title("CLIP Image Search") - model = load_model("clip-ViT-B-32") - st.write("Select the mode to search for a match in Unsplash (thumbnail size) dataset. text2image mode needs a text as input and outputs the image with the most similar embedding (following cosine similarity). 
The Image to image mode is similar, but an input image is used instead of a text query") - emb_filename = Path("unsplash-25k-photos-embeddings.pkl") - urls_file = "photos.tsv000" - img_urls = load_image_url_list(urls_file) - img_names, img_emb = load_embeddings(emb_filename) - # Convert list of image names to a dict matching image IDs and their embedding index - img_names = {img_number: img_name.split('.')[0] for img_number, img_name in enumerate(img_names)} - st.sidebar.title("Settings") - app_mode = st.sidebar.selectbox("Choose the app mode", - [T2I, I2I]) - n_images_to_search = st.sidebar.number_input("Select the number of images to search", min_value=1, max_value=6) - if app_mode == T2I: - st.sidebar.info("Text to image mode") - text_2_image(model, img_emb, img_names, img_urls,n_images_to_search) - elif app_mode == I2I: - st.sidebar.info("Image to image mode") - image_2_image(model, img_emb, img_names, img_urls, n_images_to_search) -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion_safe/safety_checker.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion_safe/safety_checker.py deleted file mode 100644 index f9dbf51e86440847646e168e5a50ebf835440f2a..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion_safe/safety_checker.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
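-# Overview: this module implements the NSFW safety checker used by the "safe"
-# Stable Diffusion pipeline. CLIP image embeddings are projected, compared by
-# cosine similarity against fixed concept embeddings, and images whose scores
-# exceed the per-concept thresholds are flagged in `has_nsfw_concepts`.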
- -import torch -import torch.nn as nn - -from transformers import CLIPConfig, CLIPVisionModel, PreTrainedModel - -from ...utils import logging - - -logger = logging.get_logger(__name__) - - -def cosine_distance(image_embeds, text_embeds): - normalized_image_embeds = nn.functional.normalize(image_embeds) - normalized_text_embeds = nn.functional.normalize(text_embeds) - return torch.mm(normalized_image_embeds, normalized_text_embeds.t()) - - -class SafeStableDiffusionSafetyChecker(PreTrainedModel): - config_class = CLIPConfig - - _no_split_modules = ["CLIPEncoderLayer"] - - def __init__(self, config: CLIPConfig): - super().__init__(config) - - self.vision_model = CLIPVisionModel(config.vision_config) - self.visual_projection = nn.Linear(config.vision_config.hidden_size, config.projection_dim, bias=False) - - self.concept_embeds = nn.Parameter(torch.ones(17, config.projection_dim), requires_grad=False) - self.special_care_embeds = nn.Parameter(torch.ones(3, config.projection_dim), requires_grad=False) - - self.concept_embeds_weights = nn.Parameter(torch.ones(17), requires_grad=False) - self.special_care_embeds_weights = nn.Parameter(torch.ones(3), requires_grad=False) - - @torch.no_grad() - def forward(self, clip_input, images): - pooled_output = self.vision_model(clip_input)[1] # pooled_output - image_embeds = self.visual_projection(pooled_output) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).cpu().float().numpy() - cos_dist = cosine_distance(image_embeds, self.concept_embeds).cpu().float().numpy() - - result = [] - batch_size = image_embeds.shape[0] - for i in range(batch_size): - result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []} - - # increase this value to create a stronger `nfsw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - for concept_idx in range(len(special_cos_dist[0])): - concept_cos = special_cos_dist[i][concept_idx] - concept_threshold = self.special_care_embeds_weights[concept_idx].item() - result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["special_scores"][concept_idx] > 0: - result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]}) - adjustment = 0.01 - - for concept_idx in range(len(cos_dist[0])): - concept_cos = cos_dist[i][concept_idx] - concept_threshold = self.concept_embeds_weights[concept_idx].item() - result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["concept_scores"][concept_idx] > 0: - result_img["bad_concepts"].append(concept_idx) - - result.append(result_img) - - has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result] - - return images, has_nsfw_concepts - - @torch.no_grad() - def forward_onnx(self, clip_input: torch.FloatTensor, images: torch.FloatTensor): - pooled_output = self.vision_model(clip_input)[1] # pooled_output - image_embeds = self.visual_projection(pooled_output) - - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds) - cos_dist = cosine_distance(image_embeds, self.concept_embeds) - - # increase this value to create a stronger `nsfw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment 
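-# A positive special score means a special-care concept crossed its threshold;
-# below, such images get every concept score raised by 0.01, i.e. the NSFW
-# filter becomes slightly stricter for them (mirroring the loop in `forward`).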
- # special_scores = special_scores.round(decimals=3) - special_care = torch.any(special_scores > 0, dim=1) - special_adjustment = special_care * 0.01 - special_adjustment = special_adjustment.unsqueeze(1).expand(-1, cos_dist.shape[1]) - - concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment - # concept_scores = concept_scores.round(decimals=3) - has_nsfw_concepts = torch.any(concept_scores > 0, dim=1) - - return images, has_nsfw_concepts diff --git a/spaces/Jamkonams/AutoGPT/autogpt/commands/file_operations.py b/spaces/Jamkonams/AutoGPT/autogpt/commands/file_operations.py deleted file mode 100644 index ad145ec956dd9dafd39e09c2244d001cf5febd2f..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/commands/file_operations.py +++ /dev/null @@ -1,267 +0,0 @@ -"""File operations for AutoGPT""" -from __future__ import annotations - -import os -import os.path -from typing import Generator - -import requests -from colorama import Back, Fore -from requests.adapters import HTTPAdapter, Retry - -from autogpt.spinner import Spinner -from autogpt.utils import readable_file_size -from autogpt.workspace import WORKSPACE_PATH, path_in_workspace - -LOG_FILE = "file_logger.txt" -LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE - - -def check_duplicate_operation(operation: str, filename: str) -> bool: - """Check if the operation has already been performed on the given file - - Args: - operation (str): The operation to check for - filename (str): The name of the file to check for - - Returns: - bool: True if the operation has already been performed on the file - """ - log_content = read_file(LOG_FILE) - log_entry = f"{operation}: {filename}\n" - return log_entry in log_content - - -def log_operation(operation: str, filename: str) -> None: - """Log the file operation to the file_logger.txt - - Args: - operation (str): The operation to log - filename (str): The name of the file the operation was performed on - """ - log_entry = f"{operation}: {filename}\n" - - # Create the log file if it doesn't exist - if not os.path.exists(LOG_FILE_PATH): - with open(LOG_FILE_PATH, "w", encoding="utf-8") as f: - f.write("File Operation Logger ") - - append_to_file(LOG_FILE, log_entry, shouldLog=False) - - -def split_file( - content: str, max_length: int = 4000, overlap: int = 0 -) -> Generator[str, None, None]: - """ - Split text into chunks of a specified maximum length with a specified overlap - between chunks. 
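- For illustration: with max_length=10 and overlap=2, a 15-character input is
- yielded as content[0:11] followed by content[8:15], so the two chunks share
- three characters.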
- - :param content: The input text to be split into chunks - :param max_length: The maximum length of each chunk, - default is 4000 (about 1k token) - :param overlap: The number of overlapping characters between chunks, - default is no overlap - :return: A generator yielding chunks of text - """ - start = 0 - content_length = len(content) - - while start < content_length: - end = start + max_length - if end + overlap < content_length: - chunk = content[start : end + overlap - 1] - else: - chunk = content[start:content_length] - - # Account for the case where the last chunk is shorter than the overlap, so it has already been consumed - if len(chunk) <= overlap: - break - - yield chunk - start += max_length - overlap - - -def read_file(filename: str) -> str: - """Read a file and return the contents - - Args: - filename (str): The name of the file to read - - Returns: - str: The contents of the file - """ - try: - filepath = path_in_workspace(filename) - with open(filepath, "r", encoding="utf-8") as f: - content = f.read() - return content - except Exception as e: - return f"Error: {str(e)}" - - -def ingest_file( - filename: str, memory, max_length: int = 4000, overlap: int = 200 -) -> None: - """ - Ingest a file by reading its content, splitting it into chunks with a specified - maximum length and overlap, and adding the chunks to the memory storage. - - :param filename: The name of the file to ingest - :param memory: An object with an add() method to store the chunks in memory - :param max_length: The maximum length of each chunk, default is 4000 - :param overlap: The number of overlapping characters between chunks, default is 200 - """ - try: - print(f"Working with file {filename}") - content = read_file(filename) - content_length = len(content) - print(f"File length: {content_length} characters") - - chunks = list(split_file(content, max_length=max_length, overlap=overlap)) - - num_chunks = len(chunks) - for i, chunk in enumerate(chunks): - print(f"Ingesting chunk {i + 1} / {num_chunks} into memory") - memory_to_add = ( - f"Filename: {filename}\n" f"Content part#{i + 1}/{num_chunks}: {chunk}" - ) - - memory.add(memory_to_add) - - print(f"Done ingesting {num_chunks} chunks from {filename}.") - except Exception as e: - print(f"Error while ingesting file '{filename}': {str(e)}") - - -def write_to_file(filename: str, text: str) -> str: - """Write text to a file - - Args: - filename (str): The name of the file to write to - text (str): The text to write to the file - - Returns: - str: A message indicating success or failure - """ - if check_duplicate_operation("write", filename): - return "Error: File has already been updated." - try: - filepath = path_in_workspace(filename) - directory = os.path.dirname(filepath) - if not os.path.exists(directory): - os.makedirs(directory) - with open(filepath, "w", encoding="utf-8") as f: - f.write(text) - log_operation("write", filename) - return "File written to successfully." - except Exception as e: - return f"Error: {str(e)}" - - -def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str: - """Append text to a file - - Args: - filename (str): The name of the file to append to - text (str): The text to append to the file - - Returns: - str: A message indicating success or failure - """ - try: - filepath = path_in_workspace(filename) - with open(filepath, "a") as f: - f.write(text) - - if shouldLog: - log_operation("append", filename) - - return "Text appended successfully." 
- except Exception as e: - return f"Error: {str(e)}" - - -def delete_file(filename: str) -> str: - """Delete a file - - Args: - filename (str): The name of the file to delete - - Returns: - str: A message indicating success or failure - """ - if check_duplicate_operation("delete", filename): - return "Error: File has already been deleted." - try: - filepath = path_in_workspace(filename) - os.remove(filepath) - log_operation("delete", filename) - return "File deleted successfully." - except Exception as e: - return f"Error: {str(e)}" - - -def search_files(directory: str) -> list[str]: - """Search for files in a directory - - Args: - directory (str): The directory to search in - - Returns: - list[str]: A list of files found in the directory - """ - found_files = [] - - if directory in {"", "/"}: - search_directory = WORKSPACE_PATH - else: - search_directory = path_in_workspace(directory) - - for root, _, files in os.walk(search_directory): - for file in files: - if file.startswith("."): - continue - relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH) - found_files.append(relative_path) - - return found_files - - -def download_file(url, filename): - """Downloads a file - Args: - url (str): URL of the file to download - filename (str): Filename to save the file as - """ - safe_filename = path_in_workspace(filename) - try: - message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}" - with Spinner(message) as spinner: - session = requests.Session() - retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504]) - adapter = HTTPAdapter(max_retries=retry) - session.mount("http://", adapter) - session.mount("https://", adapter) - - total_size = 0 - downloaded_size = 0 - - with session.get(url, allow_redirects=True, stream=True) as r: - r.raise_for_status() - total_size = int(r.headers.get("Content-Length", 0)) - downloaded_size = 0 - - with open(safe_filename, "wb") as f: - for chunk in r.iter_content(chunk_size=8192): - f.write(chunk) - downloaded_size += len(chunk) - - # Update the progress message - progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}" - spinner.update_message(f"{message} {progress}") - - return f'Successfully downloaded and locally stored file: "{filename}"! 
(Size: {readable_file_size(total_size)})' - except requests.HTTPError as e: - return f"Got an HTTP Error whilst trying to download file: {e}" - except Exception as e: - return "Error: " + str(e) diff --git a/spaces/Jimmyfreelancer/Pix2Pix-Video/README.md b/spaces/Jimmyfreelancer/Pix2Pix-Video/README.md deleted file mode 100644 index 20cff0d5ee51519b0677d10d6b8808b162b79085..0000000000000000000000000000000000000000 --- a/spaces/Jimmyfreelancer/Pix2Pix-Video/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pix2Pix Video -emoji: 🎨🎞️ -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -duplicated_from: fffiloni/Pix2Pix-Video ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kedareeshwar/Dental-Caries-Diagnosis/app.py b/spaces/Kedareeshwar/Dental-Caries-Diagnosis/app.py deleted file mode 100644 index 8a65d4ae932e2b0945e6f79c435fb5a5811747fb..0000000000000000000000000000000000000000 --- a/spaces/Kedareeshwar/Dental-Caries-Diagnosis/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as slt -import cv2 -import numpy as np -from PIL import Image -from tensorflow import keras - -MODEL=keras.models.load_model("VGG16.h5") - -slt.title("Cavity Diagnosis") -upload_image = slt.file_uploader(label='Upload image', type=['png', 'jpg','jpeg'],accept_multiple_files=False) - -if upload_image is not None: - - image=Image.open(upload_image) - - converted_img = np.array(image.convert('RGB')) - - img = cv2.resize(converted_img, dsize=(256,256)) - - img_reshape = np.reshape(img,[1,256,256,3]) - - y_predict = np.argmax(MODEL.predict(img_reshape), axis=1) - - slt.text(y_predict) - if y_predict==1: - slt.text("No Cavity detected") - else: - slt.text("Cavity detected") \ No newline at end of file diff --git "a/spaces/Kunal7/squats-analysis/pages/2_ \342\254\206\357\270\217_Upload_Video.py" "b/spaces/Kunal7/squats-analysis/pages/2_ \342\254\206\357\270\217_Upload_Video.py" deleted file mode 100644 index 1bd81e451070d0015986d05dcd3bdcbe9077480b..0000000000000000000000000000000000000000 --- "a/spaces/Kunal7/squats-analysis/pages/2_ \342\254\206\357\270\217_Upload_Video.py" +++ /dev/null @@ -1,135 +0,0 @@ -import av -import os -import sys -import streamlit as st -import cv2 -import tempfile - - -BASE_DIR = os.path.abspath(os.path.join(__file__, '../../')) -sys.path.append(BASE_DIR) - - -from utils import get_mediapipe_pose -from process_frame import ProcessFrame -from thresholds import get_thresholds_beginner, get_thresholds_pro - - - -st.title('AI Fitness Trainer: Squats Analysis') - -mode = st.radio('Select Mode', ['Beginner', 'Pro'], horizontal=True) - - - -thresholds = None - -if mode == 'Beginner': - thresholds = get_thresholds_beginner() - -elif mode == 'Pro': - thresholds = get_thresholds_pro() - - - -upload_process_frame = ProcessFrame(thresholds=thresholds) - -# Initialize face mesh solution -pose = get_mediapipe_pose() - - -download = None - -if 'download' not in st.session_state: - st.session_state['download'] = False - - -output_video_file = f'output_recorded.mp4' - -if os.path.exists(output_video_file): - os.remove(output_video_file) - - -with st.form('Upload', clear_on_submit=True): - up_file = st.file_uploader("Upload a Video", ['mp4','mov', 'avi']) - uploaded = st.form_submit_button("Upload") - -stframe = st.empty() - -ip_vid_str = '

Input Video
' -warning_str = '
Please Upload a Video first!!!
      ' - -warn = st.empty() - - -download_button = st.empty() - -if up_file and uploaded: - - download_button.empty() - tfile = tempfile.NamedTemporaryFile(delete=False) - - try: - warn.empty() - tfile.write(up_file.read()) - - vf = cv2.VideoCapture(tfile.name) - - # --------------------- Write the processed video frame. -------------------- - fps = int(vf.get(cv2.CAP_PROP_FPS)) - width = int(vf.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(vf.get(cv2.CAP_PROP_FRAME_HEIGHT)) - frame_size = (width, height) - fourcc = cv2.VideoWriter_fourcc(*'mp4v') - video_output = cv2.VideoWriter(output_video_file, fourcc, fps, frame_size) - # ----------------------------------------------------------------------------- - - - txt = st.sidebar.markdown(ip_vid_str, unsafe_allow_html=True) - ip_video = st.sidebar.video(tfile.name) - - while vf.isOpened(): - ret, frame = vf.read() - if not ret: - break - - # convert frame from BGR to RGB before processing it. - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - out_frame, _ = upload_process_frame.process(frame, pose) - stframe.image(out_frame) - video_output.write(out_frame[...,::-1]) - - - vf.release() - video_output.release() - stframe.empty() - ip_video.empty() - txt.empty() - tfile.close() - - except AttributeError: - warn.markdown(warning_str, unsafe_allow_html=True) - - - -if os.path.exists(output_video_file): - with open(output_video_file, 'rb') as op_vid: - download = download_button.download_button('Download Video', data = op_vid, file_name='output_recorded.mp4') - - if download: - st.session_state['download'] = True - - - -if os.path.exists(output_video_file) and st.session_state['download']: - os.remove(output_video_file) - st.session_state['download'] = False - download_button.empty() - - - - - - - - diff --git a/spaces/KyanChen/FunSR/models/baselines/aliif.py b/spaces/KyanChen/FunSR/models/baselines/aliif.py deleted file mode 100644 index 5af06846a31671367a3ad5a281c573c5f74b79a9..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/models/baselines/aliif.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import models -from models import register -from utils import make_coord - - -@register('aliif') -class ALIIF(nn.Module): - - def __init__(self, encoder_spec, pdn_spec=None, basis_spec=None, imnet_spec=None, - local_ensemble=True, feat_unfold=True, cell_decode=True): - super().__init__() - self.local_ensemble = local_ensemble - self.feat_unfold = feat_unfold - self.cell_decode = cell_decode - - self.encoder = models.make(encoder_spec) - - if pdn_spec is not None: - self.pdn=models.make(pdn_spec) - self.use_pdn=True - else: - self.use_pdn = False - if basis_spec is not None: - self.basis=models.make(basis_spec) - self.use_basis=True - self.B,self.b=self.basis() - else: - self.use_basis = False - - if imnet_spec is not None: - imnet_in_dim = self.encoder.out_dim - if self.feat_unfold: - imnet_in_dim *= 9 - imnet_in_dim += 2 # attach coord - if self.cell_decode: - imnet_in_dim += 2 - self.imnet = models.make(imnet_spec, args={'in_dim': imnet_in_dim}) - else: - self.imnet = None - - def gen_feat(self, inp): - self.feat = self.encoder(inp) - return self.feat - - def query_rgb(self, coord, cell=None): - feat = self.feat - - if self.imnet is None: - ret = F.grid_sample(feat, coord.flip(-1).unsqueeze(1), - mode='nearest', align_corners=False)[:, :, 0, :] \ - .permute(0, 2, 1) - return ret - - if self.feat_unfold: - feat = F.unfold(feat, 3, padding=1).view( - feat.shape[0], 
feat.shape[1] * 9, feat.shape[2], feat.shape[3]) - - if self.local_ensemble: - vx_lst = [-1, 1] - vy_lst = [-1, 1] - eps_shift = 1e-6 - else: - vx_lst, vy_lst, eps_shift = [0], [0], 0 - - # field radius (global: [-1, 1]) - rx = 2 / feat.shape[-2] / 2 - ry = 2 / feat.shape[-1] / 2 - - feat_coord = make_coord(feat.shape[-2:], flatten=False).cuda() \ - .permute(2, 0, 1) \ - .unsqueeze(0).expand(feat.shape[0], 2, *feat.shape[-2:]) - - preds = [] - areas = [] - for vx in vx_lst: - for vy in vy_lst: - coord_ = coord.clone() - coord_[:, :, 0] += vx * rx + eps_shift - coord_[:, :, 1] += vy * ry + eps_shift - coord_.clamp_(-1 + 1e-6, 1 - 1e-6) - q_feat = F.grid_sample( - feat, coord_.flip(-1).unsqueeze(1), - mode='nearest', align_corners=False)[:, :, 0, :] \ - .permute(0, 2, 1) - q_coord = F.grid_sample( - feat_coord, coord_.flip(-1).unsqueeze(1), - mode='nearest', align_corners=False)[:, :, 0, :] \ - .permute(0, 2, 1) - rel_coord = coord - q_coord - rel_coord[:, :, 0] *= feat.shape[-2] - rel_coord[:, :, 1] *= feat.shape[-1] - inp = torch.cat([q_feat, rel_coord], dim=-1) - - if self.cell_decode: - rel_cell = cell.clone() - rel_cell[:, :, 0] *= feat.shape[-2] - rel_cell[:, :, 1] *= feat.shape[-1] - inp = torch.cat([inp, rel_cell], dim=-1) - - bs, q = coord.shape[:2] - - if self.use_pdn: - Coeff=self.pdn(inp) # out:(b,h*w,K) - else: - Coeff=torch.ones([inp.shape[0],inp.shape[1],1]) - if self.use_basis: - - pred = self.imnet(inp.view(bs * q, -1),Coeff.view(-1,Coeff.shape[2]),self.B,self.b).view(bs, q, -1) - else: - pred = self.imnet(inp.view(bs * q, -1)).view(bs, q, -1) - preds.append(pred) - - area = torch.abs(rel_coord[:, :, 0] * rel_coord[:, :, 1]) - areas.append(area + 1e-9) - - tot_area = torch.stack(areas).sum(dim=0) - if self.local_ensemble: - t = areas[0]; areas[0] = areas[3]; areas[3] = t - t = areas[1]; areas[1] = areas[2]; areas[2] = t - ret = 0 - for pred, area in zip(preds, areas): - ret = ret + pred * (area / tot_area).unsqueeze(-1) - return ret - - def forward(self, inp, coord, cell): - self.gen_feat(inp) - return self.query_rgb(coord, cell) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/rtmdet.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/rtmdet.py deleted file mode 100644 index cb10f76dd57d79761e9b58c310293eedba1e00d5..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/rtmdet.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmengine.dist import get_world_size -from mmengine.logging import print_log - -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .single_stage import SingleStageDetector - - -@MODELS.register_module() -class RTMDet(SingleStageDetector): - """Implementation of RTMDet. - - Args: - backbone (:obj:`ConfigDict` or dict): The backbone module. - neck (:obj:`ConfigDict` or dict): The neck module. - bbox_head (:obj:`ConfigDict` or dict): The bbox head module. - train_cfg (:obj:`ConfigDict` or dict, optional): The training config - of ATSS. Defaults to None. - test_cfg (:obj:`ConfigDict` or dict, optional): The testing config - of ATSS. Defaults to None. - data_preprocessor (:obj:`ConfigDict` or dict, optional): Config of - :class:`DetDataPreprocessor` to process the input data. - Defaults to None. - init_cfg (:obj:`ConfigDict` or dict, optional): the config to control - the initialization. Defaults to None. - use_syncbn (bool): Whether to use SyncBatchNorm. Defaults to True. 
- """ - - def __init__(self, - backbone: ConfigType, - neck: ConfigType, - bbox_head: ConfigType, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None, - use_syncbn: bool = True) -> None: - super().__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) - - # TODO: Waiting for mmengine support - if use_syncbn and get_world_size() > 1: - torch.nn.SyncBatchNorm.convert_sync_batchnorm(self) - print_log('Using SyncBatchNorm()', 'current') diff --git a/spaces/Lamai/LAMAIGPT/tests/integration/milvus_memory_tests.py b/spaces/Lamai/LAMAIGPT/tests/integration/milvus_memory_tests.py deleted file mode 100644 index ec38bf2f72087b5da679d26594ebff97d8a09b19..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/integration/milvus_memory_tests.py +++ /dev/null @@ -1,57 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for the MilvusMemory class.""" -import random -import string -import unittest - -from autogpt.config import Config -from autogpt.memory.milvus import MilvusMemory - -try: - - class TestMilvusMemory(unittest.TestCase): - """Tests for the MilvusMemory class.""" - - def random_string(self, length: int) -> str: - """Generate a random string of the given length.""" - return "".join(random.choice(string.ascii_letters) for _ in range(length)) - - def setUp(self) -> None: - """Set up the test environment.""" - cfg = Config() - cfg.milvus_addr = "localhost:19530" - self.memory = MilvusMemory(cfg) - self.memory.clear() - - # Add example texts to the cache - self.example_texts = [ - "The quick brown fox jumps over the lazy dog", - "I love machine learning and natural language processing", - "The cake is a lie, but the pie is always true", - "ChatGPT is an advanced AI model for conversation", - ] - - for text in self.example_texts: - self.memory.add(text) - - # Add some random strings to test noise - for _ in range(5): - self.memory.add(self.random_string(10)) - - def test_get_relevant(self) -> None: - """Test getting relevant texts from the cache.""" - query = "I'm interested in artificial intelligence and NLP" - num_relevant = 3 - relevant_texts = self.memory.get_relevant(query, num_relevant) - - print(f"Top {k} relevant texts for the query '{query}':") - for i, text in enumerate(relevant_texts, start=1): - print(f"{i}. {text}") - - self.assertEqual(len(relevant_texts), k) - self.assertIn(self.example_texts[1], relevant_texts) - -except: - print( - "Skipping tests/integration/milvus_memory_tests.py as Milvus is not installed." 
- ) diff --git a/spaces/ML701G7/taim-gan/src/models/modules/discriminator.py b/spaces/ML701G7/taim-gan/src/models/modules/discriminator.py deleted file mode 100644 index ebe184675eba85ff4e0a2144a5a333af8a5250db..0000000000000000000000000000000000000000 --- a/spaces/ML701G7/taim-gan/src/models/modules/discriminator.py +++ /dev/null @@ -1,144 +0,0 @@ -"""Discriminator providing word-level feedback""" -from typing import Any - -import torch -from torch import nn - -from src.models.modules.conv_utils import conv1d, conv2d -from src.models.modules.image_encoder import InceptionEncoder - - -class WordLevelLogits(nn.Module): - """API for converting regional feature maps into logits for multi-class classification""" - - def __init__(self) -> None: - """ - Instantiate the module with softmax on channel dimension - """ - super().__init__() - self.softmax = nn.Softmax(dim=1) - # layer for flattening the feature maps - self.flat = nn.Flatten(start_dim=2) - # change dism of of textual embs to correlate with chans of inception - self.chan_reduction = conv1d(256, 128) - - def forward( - self, visual_features: torch.Tensor, word_embs: torch.Tensor, mask: torch.Tensor - ) -> Any: - """ - Fuse two types of features together to get output for feeding into the classification loss - :param torch.Tensor visual_features: - Feature maps of an image after being processed by Inception encoder. Bx128x17x17 - :param torch.Tensor word_embs: - Word-level embeddings from the text encoder Bx256xL - :return: Logits for each word in the picture. BxL - :rtype: Any - """ - # make textual and visual features have the same amount of channels - word_embs = self.chan_reduction(word_embs) - # flattening the feature maps - visual_features = self.flat(visual_features) - word_embs = torch.transpose(word_embs, 1, 2) - word_region_correlations = word_embs @ visual_features - # normalize across L dimension - m_norm_l = nn.functional.normalize(word_region_correlations, dim=1) - # normalize across H*W dimension - m_norm_hw = nn.functional.normalize(m_norm_l, dim=2) - m_norm_hw = torch.transpose(m_norm_hw, 1, 2) - weighted_img_feats = visual_features @ m_norm_hw - weighted_img_feats = torch.sum(weighted_img_feats, dim=1) - weighted_img_feats[mask] = -float("inf") - deltas = self.softmax(weighted_img_feats) - return deltas - - -class UnconditionalLogits(nn.Module): - """Head for retrieving logits from an image""" - - def __init__(self) -> None: - """Initialize modules that reduce the features down to a set of logits""" - super().__init__() - self.conv = nn.Conv2d(128, 1, kernel_size=17) - # flattening BxLx1x1 into Bx1 - self.flat = nn.Flatten() - - def forward(self, visual_features: torch.Tensor) -> Any: - """ - Compute logits for unconditioned adversarial loss - - :param visual_features: Local features from Inception network. Bx128x17x17 - :return: Logits for unconditioned adversarial loss. 
Bx1 - :rtype: Any - """ - # reduce channels and feature maps for visual features - visual_features = self.conv(visual_features) - # flatten Bx1x1x1 into Bx1 - logits = self.flat(visual_features) - return logits - - -class ConditionalLogits(nn.Module): - """Logits extractor for conditioned adversarial loss""" - - def __init__(self) -> None: - super().__init__() - # layer for forming the feature maps out of textual info - self.text_to_fm = conv1d(256, 17 * 17) - # fitting the size of text channels to the size of visual channels - self.chan_aligner = conv2d(1, 128) - # for reduced textual + visual features down to 1x1 feature map - self.joint_conv = nn.Conv2d(2 * 128, 1, kernel_size=17) - # converting Bx1x1x1 into Bx1 - self.flat = nn.Flatten() - - def forward(self, visual_features: torch.Tensor, sent_embs: torch.Tensor) -> Any: - """ - Compute logits for conditional adversarial loss - - :param torch.Tensor visual_features: Features from Inception encoder. Bx128x17x17 - :param torch.Tensor sent_embs: Sentence embeddings from text encoder. Bx256 - :return: Logits for conditional adversarial loss. BxL - :rtype: Any - """ - # make text and visual features have the same sizes of feature maps - # Bx256 -> Bx256x1 -> Bx289x1 - sent_embs = sent_embs.view(-1, 256, 1) - sent_embs = self.text_to_fm(sent_embs) - # transform textual info into shape of visual feature maps - # Bx289x1 -> Bx1x17x17 - sent_embs = sent_embs.view(-1, 1, 17, 17) - # propagate text embs through 1d conv to - # align dims with visual feature maps - sent_embs = self.chan_aligner(sent_embs) - # unite textual and visual features across the dim of channels - cross_features = torch.cat((visual_features, sent_embs), dim=1) - # reduce dims down to length of caption and form raw logits - cross_features = self.joint_conv(cross_features) - # form logits from Bx1x1x1 into Bx1 - logits = self.flat(cross_features) - return logits - - -class Discriminator(nn.Module): - """Simple CNN-based discriminator""" - - def __init__(self) -> None: - """Use a pretrained InceptionNet to extract features""" - super().__init__() - self.encoder = InceptionEncoder(D=128) - # define different logit extractors for different losses - self.logits_word_level = WordLevelLogits() - self.logits_uncond = UnconditionalLogits() - self.logits_cond = ConditionalLogits() - - def forward(self, images: torch.Tensor) -> Any: - """ - Retrieves image features encoded by the image encoder - - :param torch.Tensor images: Images to be analyzed. Bx3x256x256 - :return: image features encoded by image encoder. Bx128x17x17 - """ - # only taking the local features from inception - # Bx3x256x256 -> Bx128x17x17 - img_features, _ = self.encoder(images) - return img_features diff --git a/spaces/Marshalls/testmtd/analysis/sandbox_dalle.py b/spaces/Marshalls/testmtd/analysis/sandbox_dalle.py deleted file mode 100644 index 891bc6608da2b85226bf95bef4322697f20720af..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/sandbox_dalle.py +++ /dev/null @@ -1,34 +0,0 @@ -import torch -from models.cdvae import ConditionalDiscreteVAE - -vae = ConditionalDiscreteVAE( - input_shape = (7,7), - num_layers = 3, # number of downsamples - ex. 256 / (2 ** 3) = (32 x 32 feature map) - num_tokens = 8192, # number of visual tokens. 
in the paper, they used 8192, but could be smaller for downsized projects - codebook_dim = 512, # codebook dimension - cond_dim = 100, - hidden_dim = 64, # hidden dimension - num_resnet_blocks = 1, # number of resnet blocks - temperature = 0.9, # gumbel softmax temperature, the lower this is, the harder the discretization - straight_through = False, # straight-through for gumbel softmax. unclear if it is better one way or the other -) - -images = torch.randn(4, 3, *vae.input_shape) -cond = torch.randn(4, 100, *vae.codebook_layer_shape) - -logits = vae(images, cond=cond, return_logits = True) - -logits.shape - -import numpy as np - -torch.randint(0,10,(1,)) -image_seq = torch.randint(0,8192, (4,np.prod(vae.codebook_layer_shape))) -image = vae.decode(image_seq, cond=cond) - -image.shape - -# loss = vae(images, return_loss = True) -# loss.backward() -# loss -# train with a lot of data to learn a good codebook diff --git a/spaces/Marshalls/testmtd/analysis/visualization/generate_video_from_expmaps.py b/spaces/Marshalls/testmtd/analysis/visualization/generate_video_from_expmaps.py deleted file mode 100644 index 5afeacddcd7a563524dd14983b3f252b6261598e..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/visualization/generate_video_from_expmaps.py +++ /dev/null @@ -1,43 +0,0 @@ -import pickle -import matplotlib.pyplot as plt -import numpy as np -from analysis.pymo.parsers import BVHParser -from analysis.pymo.data import Joint, MocapData -from analysis.pymo.preprocessing import * -from analysis.pymo.viz_tools import * -from analysis.pymo.writers import * -from sklearn.pipeline import Pipeline -import joblib as jl -from .utils import generate_video_from_images, join_video_and_audio - -import matplotlib -matplotlib.use("Agg") - -def generate_video_from_expmaps(features_file, pipeline_file, output_folder, audio_file, trim_audio=0, generate_bvh=False): - data = np.load(features_file) - # pipeline = jl.load("data/scaled_features/motion_data_pipe.sav") - # containing_path = os.path.dirname(features_file) - # pipeline_file = containing_path + "/" + "motion_expmap_data_pipe.sav" - pipeline = jl.load(pipeline_file) - - filename = os.path.basename(features_file) - seq_id = filename.split(".")[0] - - bvh_data=pipeline.inverse_transform([data[:,0,:]]) - if generate_bvh: - writer = BVHWriter() - with open(output_folder+"/"+seq_id+".bvh",'w') as f: - writer.write(bvh_data[0], f) - - bvh2pos = MocapParameterizer('position') - pos_data = bvh2pos.fit_transform(bvh_data) - video_file = f'{output_folder}/{seq_id}.mp4' - #render_mp4(pos_data[0], video_file, axis_scale=100, elev=45, azim=45) - - render_mp4(pos_data[0], video_file, axis_scale=300, elev=45, azim=45) - if audio_file is not None: - join_video_and_audio(video_file, audio_file, trim_audio) - # draw_stickfigure3d(pos_data[0], 10) - # sketch_move(pos_data[0], data=None, ax=None, figsize=(16,8)): - - diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/debug.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/debug.py deleted file mode 100644 index 9c7c442eb8aa9474c8874ac1dc75659371e8c894..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/debug.py +++ /dev/null @@ -1,334 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
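-# Visualisation helpers for heatmap-based detectors: debug_train() overlays
-# ground-truth heatmaps, positive locations and regression-target boxes on the
-# inputs, debug_test() renders predicted heatmaps and boxes, and
-# debug_second_stage() draws instance and proposal boxes (optionally saved to disk).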
-import cv2 -import numpy as np -import torch -import torch.nn.functional as F -import os - -COLORS = ((np.random.rand(1300, 3) * 0.4 + 0.6) * 255).astype( - np.uint8).reshape(1300, 1, 1, 3) - -def _get_color_image(heatmap): - heatmap = heatmap.reshape( - heatmap.shape[0], heatmap.shape[1], heatmap.shape[2], 1) - if heatmap.shape[0] == 1: - color_map = (heatmap * np.ones((1, 1, 1, 3), np.uint8) * 255).max( - axis=0).astype(np.uint8) # H, W, 3 - else: - color_map = (heatmap * COLORS[:heatmap.shape[0]]).max(axis=0).astype(np.uint8) # H, W, 3 - - return color_map - -def _blend_image(image, color_map, a=0.7): - color_map = cv2.resize(color_map, (image.shape[1], image.shape[0])) - ret = np.clip(image * (1 - a) + color_map * a, 0, 255).astype(np.uint8) - return ret - -def _blend_image_heatmaps(image, color_maps, a=0.7): - merges = np.zeros((image.shape[0], image.shape[1], 3), np.float32) - for color_map in color_maps: - color_map = cv2.resize(color_map, (image.shape[1], image.shape[0])) - merges = np.maximum(merges, color_map) - ret = np.clip(image * (1 - a) + merges * a, 0, 255).astype(np.uint8) - return ret - -def _decompose_level(x, shapes_per_level, N): - ''' - x: LNHiWi x C - ''' - x = x.view(x.shape[0], -1) - ret = [] - st = 0 - for l in range(len(shapes_per_level)): - ret.append([]) - h = shapes_per_level[l][0].int().item() - w = shapes_per_level[l][1].int().item() - for i in range(N): - ret[l].append(x[st + h * w * i:st + h * w * (i + 1)].view( - h, w, -1).permute(2, 0, 1)) - st += h * w * N - return ret - -def _imagelist_to_tensor(images): - images = [x for x in images] - image_sizes = [x.shape[-2:] for x in images] - h = max([size[0] for size in image_sizes]) - w = max([size[1] for size in image_sizes]) - S = 32 - h, w = ((h - 1) // S + 1) * S, ((w - 1) // S + 1) * S - images = [F.pad(x, (0, w - x.shape[2], 0, h - x.shape[1], 0, 0)) \ - for x in images] - images = torch.stack(images) - return images - - -def _ind2il(ind, shapes_per_level, N): - r = ind - l = 0 - S = 0 - while r - S >= N * shapes_per_level[l][0] * shapes_per_level[l][1]: - S += N * shapes_per_level[l][0] * shapes_per_level[l][1] - l += 1 - i = (r - S) // (shapes_per_level[l][0] * shapes_per_level[l][1]) - return i, l - -def debug_train( - images, gt_instances, flattened_hms, reg_targets, labels, pos_inds, - shapes_per_level, locations, strides): - ''' - images: N x 3 x H x W - flattened_hms: LNHiWi x C - shapes_per_level: L x 2 [(H_i, W_i)] - locations: LNHiWi x 2 - ''' - reg_inds = torch.nonzero( - reg_targets.max(dim=1)[0] > 0).squeeze(1) - N = len(images) - images = _imagelist_to_tensor(images) - repeated_locations = [torch.cat([loc] * N, dim=0) \ - for loc in locations] - locations = torch.cat(repeated_locations, dim=0) - gt_hms = _decompose_level(flattened_hms, shapes_per_level, N) - masks = flattened_hms.new_zeros((flattened_hms.shape[0], 1)) - masks[pos_inds] = 1 - masks = _decompose_level(masks, shapes_per_level, N) - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0) - color_maps = [] - for l in range(len(gt_hms)): - color_map = _get_color_image( - gt_hms[l][i].detach().cpu().numpy()) - color_maps.append(color_map) - cv2.imshow('gthm_{}'.format(l), color_map) - blend = _blend_image_heatmaps(image.copy(), color_maps) - if gt_instances is not None: - bboxes = gt_instances[i].gt_boxes.tensor - for j in range(len(bboxes)): - bbox = bboxes[j] - cv2.rectangle( - blend, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (0, 0, 255), 3, cv2.LINE_AA) - - for j in 
range(len(pos_inds)): - image_id, l = _ind2il(pos_inds[j], shapes_per_level, N) - if image_id != i: - continue - loc = locations[pos_inds[j]] - cv2.drawMarker( - blend, (int(loc[0]), int(loc[1])), (0, 255, 255), - markerSize=(l + 1) * 16) - - for j in range(len(reg_inds)): - image_id, l = _ind2il(reg_inds[j], shapes_per_level, N) - if image_id != i: - continue - ltrb = reg_targets[reg_inds[j]] - ltrb *= strides[l] - loc = locations[reg_inds[j]] - bbox = [(loc[0] - ltrb[0]), (loc[1] - ltrb[1]), - (loc[0] + ltrb[2]), (loc[1] + ltrb[3])] - cv2.rectangle( - blend, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (255, 0, 0), 1, cv2.LINE_AA) - cv2.circle(blend, (int(loc[0]), int(loc[1])), 2, (255, 0, 0), -1) - - cv2.imshow('blend', blend) - cv2.waitKey() - - -def debug_test( - images, logits_pred, reg_pred, agn_hm_pred=[], preds=[], - vis_thresh=0.3, debug_show_name=False, mult_agn=False): - ''' - images: N x 3 x H x W - class_target: LNHiWi x C - cat_agn_heatmap: LNHiWi - shapes_per_level: L x 2 [(H_i, W_i)] - ''' - N = len(images) - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0) - result = image.copy().astype(np.uint8) - pred_image = image.copy().astype(np.uint8) - color_maps = [] - L = len(logits_pred) - for l in range(L): - if logits_pred[0] is not None: - stride = min(image.shape[0], image.shape[1]) / min( - logits_pred[l][i].shape[1], logits_pred[l][i].shape[2]) - else: - stride = min(image.shape[0], image.shape[1]) / min( - agn_hm_pred[l][i].shape[1], agn_hm_pred[l][i].shape[2]) - stride = stride if stride < 60 else 64 if stride < 100 else 128 - if logits_pred[0] is not None: - if mult_agn: - logits_pred[l][i] = logits_pred[l][i] * agn_hm_pred[l][i] - color_map = _get_color_image( - logits_pred[l][i].detach().cpu().numpy()) - color_maps.append(color_map) - cv2.imshow('predhm_{}'.format(l), color_map) - - if debug_show_name: - from detectron2.data.datasets.lvis_v1_categories import LVIS_CATEGORIES - cat2name = [x['name'] for x in LVIS_CATEGORIES] - for j in range(len(preds[i].scores) if preds is not None else 0): - if preds[i].scores[j] > vis_thresh: - bbox = preds[i].proposal_boxes[j] \ - if preds[i].has('proposal_boxes') else \ - preds[i].pred_boxes[j] - bbox = bbox.tensor[0].detach().cpu().numpy().astype(np.int32) - cat = int(preds[i].pred_classes[j]) \ - if preds[i].has('pred_classes') else 0 - cl = COLORS[cat, 0, 0] - cv2.rectangle( - pred_image, (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (int(cl[0]), int(cl[1]), int(cl[2])), 2, cv2.LINE_AA) - if debug_show_name: - txt = '{}{:.1f}'.format( - cat2name[cat] if cat > 0 else '', - preds[i].scores[j]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - pred_image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - pred_image, txt, (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, lineType=cv2.LINE_AA) - - - if agn_hm_pred[l] is not None: - agn_hm_ = agn_hm_pred[l][i, 0, :, :, None].detach().cpu().numpy() - agn_hm_ = (agn_hm_ * np.array([255, 255, 255]).reshape( - 1, 1, 3)).astype(np.uint8) - cv2.imshow('agn_hm_{}'.format(l), agn_hm_) - blend = _blend_image_heatmaps(image.copy(), color_maps) - cv2.imshow('blend', blend) - cv2.imshow('preds', pred_image) - cv2.waitKey() - -global cnt -cnt = 0 - -def debug_second_stage(images, instances, proposals=None, vis_thresh=0.3, - 
save_debug=False, debug_show_name=False, image_labels=[], - save_debug_path='output/save_debug/', - bgr=False): - images = _imagelist_to_tensor(images) - if 'COCO' in save_debug_path: - from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES - cat2name = [x['name'] for x in COCO_CATEGORIES] - else: - from detectron2.data.datasets.lvis_v1_categories import LVIS_CATEGORIES - cat2name = ['({}){}'.format(x['frequency'], x['name']) \ - for x in LVIS_CATEGORIES] - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0).astype(np.uint8).copy() - if bgr: - image = image[:, :, ::-1].copy() - if instances[i].has('gt_boxes'): - bboxes = instances[i].gt_boxes.tensor.cpu().numpy() - scores = np.ones(bboxes.shape[0]) - cats = instances[i].gt_classes.cpu().numpy() - else: - bboxes = instances[i].pred_boxes.tensor.cpu().numpy() - scores = instances[i].scores.cpu().numpy() - cats = instances[i].pred_classes.cpu().numpy() - for j in range(len(bboxes)): - if scores[j] > vis_thresh: - bbox = bboxes[j] - cl = COLORS[cats[j], 0, 0] - cl = (int(cl[0]), int(cl[1]), int(cl[2])) - cv2.rectangle( - image, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - cl, 2, cv2.LINE_AA) - if debug_show_name: - cat = cats[j] - txt = '{}{:.1f}'.format( - cat2name[cat] if cat > 0 else '', - scores[j]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - image, txt, (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, lineType=cv2.LINE_AA) - if proposals is not None: - proposal_image = images[i].detach().cpu().numpy().transpose(1, 2, 0).astype(np.uint8).copy() - if bgr: - proposal_image = proposal_image.copy() - else: - proposal_image = proposal_image[:, :, ::-1].copy() - bboxes = proposals[i].proposal_boxes.tensor.cpu().numpy() - if proposals[i].has('scores'): - scores = proposals[i].scores.detach().cpu().numpy() - else: - scores = proposals[i].objectness_logits.detach().cpu().numpy() - # selected = -1 - # if proposals[i].has('image_loss'): - # selected = proposals[i].image_loss.argmin() - if proposals[i].has('selected'): - selected = proposals[i].selected - else: - selected = [-1 for _ in range(len(bboxes))] - for j in range(len(bboxes)): - if scores[j] > vis_thresh or selected[j] >= 0: - bbox = bboxes[j] - cl = (209, 159, 83) - th = 2 - if selected[j] >= 0: - cl = (0, 0, 0xa4) - th = 4 - cv2.rectangle( - proposal_image, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - cl, th, cv2.LINE_AA) - if selected[j] >= 0 and debug_show_name: - cat = selected[j].item() - txt = '{}'.format(cat2name[cat]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - proposal_image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - proposal_image, txt, - (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, - lineType=cv2.LINE_AA) - - if save_debug: - global cnt - cnt = (cnt + 1) % 5000 - if not os.path.exists(save_debug_path): - os.mkdir(save_debug_path) - save_name = '{}/{:05d}.jpg'.format(save_debug_path, cnt) - if i < len(image_labels): - image_label = image_labels[i] - save_name = '{}/{:05d}'.format(save_debug_path, cnt) - for x in image_label: - class_name = 
cat2name[x] - save_name = save_name + '|{}'.format(class_name) - save_name = save_name + '.jpg' - cv2.imwrite(save_name, proposal_image) - else: - cv2.imshow('image', image) - if proposals is not None: - cv2.imshow('proposals', proposal_image) - cv2.waitKey() \ No newline at end of file diff --git a/spaces/Menthe17/Nani17092005/README.md b/spaces/Menthe17/Nani17092005/README.md deleted file mode 100644 index c16992c038f0d43e489a22b4cba5fb8089ecec64..0000000000000000000000000000000000000000 --- a/spaces/Menthe17/Nani17092005/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nani17092005 -emoji: 🐨 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MirageML/sjc/sd1/ldm/data/imagenet.py b/spaces/MirageML/sjc/sd1/ldm/data/imagenet.py deleted file mode 100644 index 1c473f9c6965b22315dbb289eff8247c71bdc790..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/sd1/ldm/data/imagenet.py +++ /dev/null @@ -1,394 +0,0 @@ -import os, yaml, pickle, shutil, tarfile, glob -import cv2 -import albumentations -import PIL -import numpy as np -import torchvision.transforms.functional as TF -from omegaconf import OmegaConf -from functools import partial -from PIL import Image -from tqdm import tqdm -from torch.utils.data import Dataset, Subset - -import taming.data.utils as tdu -from taming.data.imagenet import str_to_indices, give_synsets_from_indices, download, retrieve -from taming.data.imagenet import ImagePaths - -from ldm.modules.image_degradation import degradation_fn_bsr, degradation_fn_bsr_light - - -def synset2idx(path_to_yaml="data/index_synset.yaml"): - with open(path_to_yaml) as f: - di2s = yaml.load(f) - return dict((v,k) for k,v in di2s.items()) - - -class ImageNetBase(Dataset): - def __init__(self, config=None): - self.config = config or OmegaConf.create() - if not type(self.config)==dict: - self.config = OmegaConf.to_container(self.config) - self.keep_orig_class_label = self.config.get("keep_orig_class_label", False) - self.process_images = True # if False we skip loading & processing images and self.data contains filepaths - self._prepare() - self._prepare_synset_to_human() - self._prepare_idx_to_synset() - self._prepare_human_to_integer_label() - self._load() - - def __len__(self): - return len(self.data) - - def __getitem__(self, i): - return self.data[i] - - def _prepare(self): - raise NotImplementedError() - - def _filter_relpaths(self, relpaths): - ignore = set([ - "n06596364_9591.JPEG", - ]) - relpaths = [rpath for rpath in relpaths if not rpath.split("/")[-1] in ignore] - if "sub_indices" in self.config: - indices = str_to_indices(self.config["sub_indices"]) - synsets = give_synsets_from_indices(indices, path_to_yaml=self.idx2syn) # returns a list of strings - self.synset2idx = synset2idx(path_to_yaml=self.idx2syn) - files = [] - for rpath in relpaths: - syn = rpath.split("/")[0] - if syn in synsets: - files.append(rpath) - return files - else: - return relpaths - - def _prepare_synset_to_human(self): - SIZE = 2655750 - URL = "https://heibox.uni-heidelberg.de/f/9f28e956cd304264bb82/?dl=1" - self.human_dict = os.path.join(self.root, "synset_human.txt") - if (not os.path.exists(self.human_dict) or - not os.path.getsize(self.human_dict)==SIZE): - download(URL, self.human_dict) - - def _prepare_idx_to_synset(self): - URL = "https://heibox.uni-heidelberg.de/f/d835d5b6ceda4d3aa910/?dl=1" - self.idx2syn 
= os.path.join(self.root, "index_synset.yaml") - if (not os.path.exists(self.idx2syn)): - download(URL, self.idx2syn) - - def _prepare_human_to_integer_label(self): - URL = "https://heibox.uni-heidelberg.de/f/2362b797d5be43b883f6/?dl=1" - self.human2integer = os.path.join(self.root, "imagenet1000_clsidx_to_labels.txt") - if (not os.path.exists(self.human2integer)): - download(URL, self.human2integer) - with open(self.human2integer, "r") as f: - lines = f.read().splitlines() - assert len(lines) == 1000 - self.human2integer_dict = dict() - for line in lines: - value, key = line.split(":") - self.human2integer_dict[key] = int(value) - - def _load(self): - with open(self.txt_filelist, "r") as f: - self.relpaths = f.read().splitlines() - l1 = len(self.relpaths) - self.relpaths = self._filter_relpaths(self.relpaths) - print("Removed {} files from filelist during filtering.".format(l1 - len(self.relpaths))) - - self.synsets = [p.split("/")[0] for p in self.relpaths] - self.abspaths = [os.path.join(self.datadir, p) for p in self.relpaths] - - unique_synsets = np.unique(self.synsets) - class_dict = dict((synset, i) for i, synset in enumerate(unique_synsets)) - if not self.keep_orig_class_label: - self.class_labels = [class_dict[s] for s in self.synsets] - else: - self.class_labels = [self.synset2idx[s] for s in self.synsets] - - with open(self.human_dict, "r") as f: - human_dict = f.read().splitlines() - human_dict = dict(line.split(maxsplit=1) for line in human_dict) - - self.human_labels = [human_dict[s] for s in self.synsets] - - labels = { - "relpath": np.array(self.relpaths), - "synsets": np.array(self.synsets), - "class_label": np.array(self.class_labels), - "human_label": np.array(self.human_labels), - } - - if self.process_images: - self.size = retrieve(self.config, "size", default=256) - self.data = ImagePaths(self.abspaths, - labels=labels, - size=self.size, - random_crop=self.random_crop, - ) - else: - self.data = self.abspaths - - -class ImageNetTrain(ImageNetBase): - NAME = "ILSVRC2012_train" - URL = "http://www.image-net.org/challenges/LSVRC/2012/" - AT_HASH = "a306397ccf9c2ead27155983c254227c0fd938e2" - FILES = [ - "ILSVRC2012_img_train.tar", - ] - SIZES = [ - 147897477120, - ] - - def __init__(self, process_images=True, data_root=None, **kwargs): - self.process_images = process_images - self.data_root = data_root - super().__init__(**kwargs) - - def _prepare(self): - if self.data_root: - self.root = os.path.join(self.data_root, self.NAME) - else: - cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache")) - self.root = os.path.join(cachedir, "autoencoders/data", self.NAME) - - self.datadir = os.path.join(self.root, "data") - self.txt_filelist = os.path.join(self.root, "filelist.txt") - self.expected_length = 1281167 - self.random_crop = retrieve(self.config, "ImageNetTrain/random_crop", - default=True) - if not tdu.is_prepared(self.root): - # prep - print("Preparing dataset {} in {}".format(self.NAME, self.root)) - - datadir = self.datadir - if not os.path.exists(datadir): - path = os.path.join(self.root, self.FILES[0]) - if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]: - import academictorrents as at - atpath = at.get(self.AT_HASH, datastore=self.root) - assert atpath == path - - print("Extracting {} to {}".format(path, datadir)) - os.makedirs(datadir, exist_ok=True) - with tarfile.open(path, "r:") as tar: - tar.extractall(path=datadir) - - print("Extracting sub-tars.") - subpaths = sorted(glob.glob(os.path.join(datadir, "*.tar"))) - for 
subpath in tqdm(subpaths): - subdir = subpath[:-len(".tar")] - os.makedirs(subdir, exist_ok=True) - with tarfile.open(subpath, "r:") as tar: - tar.extractall(path=subdir) - - filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG")) - filelist = [os.path.relpath(p, start=datadir) for p in filelist] - filelist = sorted(filelist) - filelist = "\n".join(filelist)+"\n" - with open(self.txt_filelist, "w") as f: - f.write(filelist) - - tdu.mark_prepared(self.root) - - -class ImageNetValidation(ImageNetBase): - NAME = "ILSVRC2012_validation" - URL = "http://www.image-net.org/challenges/LSVRC/2012/" - AT_HASH = "5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5" - VS_URL = "https://heibox.uni-heidelberg.de/f/3e0f6e9c624e45f2bd73/?dl=1" - FILES = [ - "ILSVRC2012_img_val.tar", - "validation_synset.txt", - ] - SIZES = [ - 6744924160, - 1950000, - ] - - def __init__(self, process_images=True, data_root=None, **kwargs): - self.data_root = data_root - self.process_images = process_images - super().__init__(**kwargs) - - def _prepare(self): - if self.data_root: - self.root = os.path.join(self.data_root, self.NAME) - else: - cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache")) - self.root = os.path.join(cachedir, "autoencoders/data", self.NAME) - self.datadir = os.path.join(self.root, "data") - self.txt_filelist = os.path.join(self.root, "filelist.txt") - self.expected_length = 50000 - self.random_crop = retrieve(self.config, "ImageNetValidation/random_crop", - default=False) - if not tdu.is_prepared(self.root): - # prep - print("Preparing dataset {} in {}".format(self.NAME, self.root)) - - datadir = self.datadir - if not os.path.exists(datadir): - path = os.path.join(self.root, self.FILES[0]) - if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]: - import academictorrents as at - atpath = at.get(self.AT_HASH, datastore=self.root) - assert atpath == path - - print("Extracting {} to {}".format(path, datadir)) - os.makedirs(datadir, exist_ok=True) - with tarfile.open(path, "r:") as tar: - tar.extractall(path=datadir) - - vspath = os.path.join(self.root, self.FILES[1]) - if not os.path.exists(vspath) or not os.path.getsize(vspath)==self.SIZES[1]: - download(self.VS_URL, vspath) - - with open(vspath, "r") as f: - synset_dict = f.read().splitlines() - synset_dict = dict(line.split() for line in synset_dict) - - print("Reorganizing into synset folders") - synsets = np.unique(list(synset_dict.values())) - for s in synsets: - os.makedirs(os.path.join(datadir, s), exist_ok=True) - for k, v in synset_dict.items(): - src = os.path.join(datadir, k) - dst = os.path.join(datadir, v) - shutil.move(src, dst) - - filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG")) - filelist = [os.path.relpath(p, start=datadir) for p in filelist] - filelist = sorted(filelist) - filelist = "\n".join(filelist)+"\n" - with open(self.txt_filelist, "w") as f: - f.write(filelist) - - tdu.mark_prepared(self.root) - - - -class ImageNetSR(Dataset): - def __init__(self, size=None, - degradation=None, downscale_f=4, min_crop_f=0.5, max_crop_f=1., - random_crop=True): - """ - Imagenet Superresolution Dataloader - Performs following ops in order: - 1. crops a crop of size s from image either as random or center crop - 2. resizes crop to size with cv2.area_interpolation - 3. degrades resized crop with degradation_fn - - :param size: resizing to size after cropping - :param degradation: degradation_fn, e.g. 
cv_bicubic or bsrgan_light - :param downscale_f: Low Resolution Downsample factor - :param min_crop_f: determines crop size s, - where s = c * min_img_side_len with c sampled from interval (min_crop_f, max_crop_f) - :param max_crop_f: "" - :param data_root: - :param random_crop: - """ - self.base = self.get_base() - assert size - assert (size / downscale_f).is_integer() - self.size = size - self.LR_size = int(size / downscale_f) - self.min_crop_f = min_crop_f - self.max_crop_f = max_crop_f - assert(max_crop_f <= 1.) - self.center_crop = not random_crop - - self.image_rescaler = albumentations.SmallestMaxSize(max_size=size, interpolation=cv2.INTER_AREA) - - self.pil_interpolation = False # gets reset later if incase interp_op is from pillow - - if degradation == "bsrgan": - self.degradation_process = partial(degradation_fn_bsr, sf=downscale_f) - - elif degradation == "bsrgan_light": - self.degradation_process = partial(degradation_fn_bsr_light, sf=downscale_f) - - else: - interpolation_fn = { - "cv_nearest": cv2.INTER_NEAREST, - "cv_bilinear": cv2.INTER_LINEAR, - "cv_bicubic": cv2.INTER_CUBIC, - "cv_area": cv2.INTER_AREA, - "cv_lanczos": cv2.INTER_LANCZOS4, - "pil_nearest": PIL.Image.NEAREST, - "pil_bilinear": PIL.Image.BILINEAR, - "pil_bicubic": PIL.Image.BICUBIC, - "pil_box": PIL.Image.BOX, - "pil_hamming": PIL.Image.HAMMING, - "pil_lanczos": PIL.Image.LANCZOS, - }[degradation] - - self.pil_interpolation = degradation.startswith("pil_") - - if self.pil_interpolation: - self.degradation_process = partial(TF.resize, size=self.LR_size, interpolation=interpolation_fn) - - else: - self.degradation_process = albumentations.SmallestMaxSize(max_size=self.LR_size, - interpolation=interpolation_fn) - - def __len__(self): - return len(self.base) - - def __getitem__(self, i): - example = self.base[i] - image = Image.open(example["file_path_"]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - image = np.array(image).astype(np.uint8) - - min_side_len = min(image.shape[:2]) - crop_side_len = min_side_len * np.random.uniform(self.min_crop_f, self.max_crop_f, size=None) - crop_side_len = int(crop_side_len) - - if self.center_crop: - self.cropper = albumentations.CenterCrop(height=crop_side_len, width=crop_side_len) - - else: - self.cropper = albumentations.RandomCrop(height=crop_side_len, width=crop_side_len) - - image = self.cropper(image=image)["image"] - image = self.image_rescaler(image=image)["image"] - - if self.pil_interpolation: - image_pil = PIL.Image.fromarray(image) - LR_image = self.degradation_process(image_pil) - LR_image = np.array(LR_image).astype(np.uint8) - - else: - LR_image = self.degradation_process(image=image)["image"] - - example["image"] = (image/127.5 - 1.0).astype(np.float32) - example["LR_image"] = (LR_image/127.5 - 1.0).astype(np.float32) - - return example - - -class ImageNetSRTrain(ImageNetSR): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def get_base(self): - with open("data/imagenet_train_hr_indices.p", "rb") as f: - indices = pickle.load(f) - dset = ImageNetTrain(process_images=False,) - return Subset(dset, indices) - - -class ImageNetSRValidation(ImageNetSR): - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def get_base(self): - with open("data/imagenet_val_hr_indices.p", "rb") as f: - indices = pickle.load(f) - dset = ImageNetValidation(process_images=False,) - return Subset(dset, indices) diff --git a/spaces/Moonkiler/Nio22/README.md b/spaces/Moonkiler/Nio22/README.md deleted file mode 100644 index 
d04e817a0ae7badedaf88c6e7c9a49722772da1a..0000000000000000000000000000000000000000 --- a/spaces/Moonkiler/Nio22/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Nio22 -emoji: 🦀 -colorFrom: purple -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/lv_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/lv_converter.py deleted file mode 100644 index d22c60b224d7fb122ebe26b2729650a961aac992..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/lv_converter.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import argparse -import os.path as osp - -from mmocr.utils import dump_ocr_data - - -def convert_annotations(root_path, split): - """Convert original annotations to mmocr format. - - The annotation format is as follows: - Crops/val/11/1/1.png weighted - Crops/val/11/1/2.png 26 - Crops/val/11/1/3.png casting - Crops/val/11/1/4.png 28 - After this module, the annotation has been changed to the format below: - jsonl: - {'filename': 'Crops/val/11/1/1.png', 'text': 'weighted'} - {'filename': 'Crops/val/11/1/2.png', 'text': '26'} - {'filename': 'Crops/val/11/1/3.png', 'text': 'casting'} - {'filename': 'Crops/val/11/1/4.png', 'text': '28'} - - Args: - root_path (str): The root path of the dataset - split (str): The split of dataset. Namely: train, val or test - """ - assert isinstance(root_path, str) - assert isinstance(split, str) - - img_info = [] - with open( - osp.join(root_path, f'{split}_label.txt'), - encoding='utf-8-sig') as f: - annos = f.readlines() - for anno in annos: - if anno: - # Text may contain spaces - dst_img_name, word = anno.split('png ') - word = word.strip('\n') - img_info.append({ - 'file_name': dst_img_name + 'png', - 'anno_info': [{ - 'text': word - }] - }) - dump_ocr_data(img_info, osp.join(root_path, f'{split.lower()}_label.json'), - 'textrecog') - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Generate training and test set of Lecture Video DB') - parser.add_argument('root_path', help='Root dir path of Lecture Video DB') - args = parser.parse_args() - return args - - -def main(): - args = parse_args() - root_path = args.root_path - - for split in ['train', 'val', 'test']: - convert_annotations(root_path, split) - print(f'{split} split converted.') - - -if __name__ == '__main__': - main() diff --git a/spaces/MrBodean/VoiceClone/encoder/data_objects/speaker_verification_dataset.py b/spaces/MrBodean/VoiceClone/encoder/data_objects/speaker_verification_dataset.py deleted file mode 100644 index 77a6e05eae6a939ae7575ae70b7173644141fffe..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/encoder/data_objects/speaker_verification_dataset.py +++ /dev/null @@ -1,56 +0,0 @@ -from encoder.data_objects.random_cycler import RandomCycler -from encoder.data_objects.speaker_batch import SpeakerBatch -from encoder.data_objects.speaker import Speaker -from encoder.params_data import partials_n_frames -from torch.utils.data import Dataset, DataLoader -from pathlib import Path - -# TODO: improve with a pool of speakers for data efficiency - -class SpeakerVerificationDataset(Dataset): - def __init__(self, datasets_root: Path): - self.root = datasets_root - speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()] - if 
len(speaker_dirs) == 0: - raise Exception("No speakers found. Make sure you are pointing to the directory " - "containing all preprocessed speaker directories.") - self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs] - self.speaker_cycler = RandomCycler(self.speakers) - - def __len__(self): - return int(1e10) - - def __getitem__(self, index): - return next(self.speaker_cycler) - - def get_logs(self): - log_string = "" - for log_fpath in self.root.glob("*.txt"): - with log_fpath.open("r") as log_file: - log_string += "".join(log_file.readlines()) - return log_string - - -class SpeakerVerificationDataLoader(DataLoader): - def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None, - batch_sampler=None, num_workers=0, pin_memory=False, timeout=0, - worker_init_fn=None): - self.utterances_per_speaker = utterances_per_speaker - - super().__init__( - dataset=dataset, - batch_size=speakers_per_batch, - shuffle=False, - sampler=sampler, - batch_sampler=batch_sampler, - num_workers=num_workers, - collate_fn=self.collate, - pin_memory=pin_memory, - drop_last=False, - timeout=timeout, - worker_init_fn=worker_init_fn - ) - - def collate(self, speakers): - return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames) - \ No newline at end of file diff --git a/spaces/MrTitanicus/rvc-models/infer_pack/models.py b/spaces/MrTitanicus/rvc-models/infer_pack/models.py deleted file mode 100644 index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000 --- a/spaces/MrTitanicus/rvc-models/infer_pack/models.py +++ /dev/null @@ -1,982 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - 
hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - 
upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - 
f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 
** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - 
inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def 
forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y_lengths, ds - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - z_slice, ids_slice = commons.rand_slice_segments( - x, y_lengths, self.segment_size - ) - - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice - - def infer( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, 
reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o, o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/base_binarizer.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/base_binarizer.py deleted file mode 100644 index 83efbb79152b8f64dbac41b29fe5b28317e142ff..0000000000000000000000000000000000000000 --- 
a/spaces/NATSpeech/PortaSpeech/data_gen/tts/base_binarizer.py +++ /dev/null @@ -1,225 +0,0 @@ -import json -import os -import random -import traceback -from functools import partial - -import numpy as np -from resemblyzer import VoiceEncoder -from tqdm import tqdm - -import utils.commons.single_thread_env # NOQA -from utils.audio import librosa_wav2spec -from utils.audio.align import get_mel2ph, mel2token_to_dur -from utils.audio.cwt import get_lf0_cwt, get_cont_lf0 -from utils.audio.pitch.utils import f0_to_coarse -from utils.audio.pitch_extractors import extract_pitch_simple -from utils.commons.hparams import hparams -from utils.commons.indexed_datasets import IndexedDatasetBuilder -from utils.commons.multiprocess_utils import multiprocess_run_tqdm -from utils.os_utils import remove_file, copy_file - -np.seterr(divide='ignore', invalid='ignore') - - -class BinarizationError(Exception): - pass - - -class BaseBinarizer: - def __init__(self, processed_data_dir=None): - if processed_data_dir is None: - processed_data_dir = hparams['processed_data_dir'] - self.processed_data_dir = processed_data_dir - self.binarization_args = hparams['binarization_args'] - self.items = {} - self.item_names = [] - - def load_meta_data(self): - processed_data_dir = self.processed_data_dir - items_list = json.load(open(f"{processed_data_dir}/metadata.json")) - for r in tqdm(items_list, desc='Loading meta data.'): - item_name = r['item_name'] - self.items[item_name] = r - self.item_names.append(item_name) - if self.binarization_args['shuffle']: - random.seed(1234) - random.shuffle(self.item_names) - - @property - def train_item_names(self): - range_ = self._convert_range(self.binarization_args['train_range']) - return self.item_names[range_[0]:range_[1]] - - @property - def valid_item_names(self): - range_ = self._convert_range(self.binarization_args['valid_range']) - return self.item_names[range_[0]:range_[1]] - - @property - def test_item_names(self): - range_ = self._convert_range(self.binarization_args['test_range']) - return self.item_names[range_[0]:range_[1]] - - def _convert_range(self, range_): - if range_[1] == -1: - range_[1] = len(self.item_names) - return range_ - - def meta_data(self, prefix): - if prefix == 'valid': - item_names = self.valid_item_names - elif prefix == 'test': - item_names = self.test_item_names - else: - item_names = self.train_item_names - for item_name in item_names: - yield self.items[item_name] - - def process(self): - self.load_meta_data() - os.makedirs(hparams['binary_data_dir'], exist_ok=True) - for fn in ['phone_set.json', 'word_set.json', 'spk_map.json']: - remove_file(f"{hparams['binary_data_dir']}/{fn}") - copy_file(f"{hparams['processed_data_dir']}/{fn}", f"{hparams['binary_data_dir']}/{fn}") - self.process_data('valid') - self.process_data('test') - self.process_data('train') - - def process_data(self, prefix): - data_dir = hparams['binary_data_dir'] - builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}') - meta_data = list(self.meta_data(prefix)) - process_item = partial(self.process_item, binarization_args=self.binarization_args) - ph_lengths = [] - mel_lengths = [] - total_sec = 0 - items = [] - args = [{'item': item} for item in meta_data] - for item_id, item in multiprocess_run_tqdm(process_item, args, desc='Processing data'): - if item is not None: - items.append(item) - if self.binarization_args['with_spk_embed']: - args = [{'wav': item['wav']} for item in items] - for item_id, spk_embed in multiprocess_run_tqdm( - self.get_spk_embed, args, - 
init_ctx_func=lambda wid: {'voice_encoder': VoiceEncoder().cuda()}, num_workers=4, - desc='Extracting spk embed'): - items[item_id]['spk_embed'] = spk_embed - - for item in items: - if not self.binarization_args['with_wav'] and 'wav' in item: - del item['wav'] - builder.add_item(item) - mel_lengths.append(item['len']) - assert item['len'] > 0, (item['item_name'], item['txt'], item['mel2ph']) - if 'ph_len' in item: - ph_lengths.append(item['ph_len']) - total_sec += item['sec'] - builder.finalize() - np.save(f'{data_dir}/{prefix}_lengths.npy', mel_lengths) - if len(ph_lengths) > 0: - np.save(f'{data_dir}/{prefix}_ph_lengths.npy', ph_lengths) - print(f"| {prefix} total duration: {total_sec:.3f}s") - - @classmethod - def process_item(cls, item, binarization_args): - item['ph_len'] = len(item['ph_token']) - item_name = item['item_name'] - wav_fn = item['wav_fn'] - wav, mel = cls.process_audio(wav_fn, item, binarization_args) - try: - n_bos_frames, n_eos_frames = 0, 0 - if binarization_args['with_align']: - tg_fn = f"{hparams['processed_data_dir']}/mfa_outputs/{item_name}.TextGrid" - item['tg_fn'] = tg_fn - cls.process_align(tg_fn, item) - if binarization_args['trim_eos_bos']: - n_bos_frames = item['dur'][0] - n_eos_frames = item['dur'][-1] - T = len(mel) - item['mel'] = mel[n_bos_frames:T - n_eos_frames] - item['mel2ph'] = item['mel2ph'][n_bos_frames:T - n_eos_frames] - item['mel2word'] = item['mel2word'][n_bos_frames:T - n_eos_frames] - item['dur'] = item['dur'][1:-1] - item['dur_word'] = item['dur_word'][1:-1] - item['len'] = item['mel'].shape[0] - item['wav'] = wav[n_bos_frames * hparams['hop_size']:len(wav) - n_eos_frames * hparams['hop_size']] - if binarization_args['with_f0']: - cls.process_pitch(item, n_bos_frames, n_eos_frames) - except BinarizationError as e: - print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {wav_fn}") - return None - except Exception as e: - traceback.print_exc() - print(f"| Skip item. 
item_name: {item_name}, wav_fn: {wav_fn}") - return None - return item - - @classmethod - def process_audio(cls, wav_fn, res, binarization_args): - wav2spec_dict = librosa_wav2spec( - wav_fn, - fft_size=hparams['fft_size'], - hop_size=hparams['hop_size'], - win_length=hparams['win_size'], - num_mels=hparams['audio_num_mel_bins'], - fmin=hparams['fmin'], - fmax=hparams['fmax'], - sample_rate=hparams['audio_sample_rate'], - loud_norm=hparams['loud_norm']) - mel = wav2spec_dict['mel'] - wav = wav2spec_dict['wav'].astype(np.float16) - if binarization_args['with_linear']: - res['linear'] = wav2spec_dict['linear'] - res.update({'mel': mel, 'wav': wav, 'sec': len(wav) / hparams['audio_sample_rate'], 'len': mel.shape[0]}) - return wav, mel - - @staticmethod - def process_align(tg_fn, item): - ph = item['ph'] - mel = item['mel'] - ph_token = item['ph_token'] - if tg_fn is not None and os.path.exists(tg_fn): - mel2ph, dur = get_mel2ph(tg_fn, ph, mel, hparams['hop_size'], hparams['audio_sample_rate'], - hparams['binarization_args']['min_sil_duration']) - else: - raise BinarizationError(f"Align not found") - if np.array(mel2ph).max() - 1 >= len(ph_token): - raise BinarizationError( - f"Align does not match: mel2ph.max() - 1: {mel2ph.max() - 1}, len(phone_encoded): {len(ph_token)}") - item['mel2ph'] = mel2ph - item['dur'] = dur - - ph2word = item['ph2word'] - mel2word = [ph2word[p - 1] for p in item['mel2ph']] - item['mel2word'] = mel2word # [T_mel] - dur_word = mel2token_to_dur(mel2word, len(item['word_token'])) - item['dur_word'] = dur_word.tolist() # [T_word] - - @staticmethod - def process_pitch(item, n_bos_frames, n_eos_frames): - wav, mel = item['wav'], item['mel'] - f0 = extract_pitch_simple(item['wav']) - if sum(f0) == 0: - raise BinarizationError("Empty f0") - assert len(mel) == len(f0), (len(mel), len(f0)) - pitch_coarse = f0_to_coarse(f0) - item['f0'] = f0 - item['pitch'] = pitch_coarse - if hparams['binarization_args']['with_f0cwt']: - uv, cont_lf0_lpf = get_cont_lf0(f0) - logf0s_mean_org, logf0s_std_org = np.mean(cont_lf0_lpf), np.std(cont_lf0_lpf) - cont_lf0_lpf_norm = (cont_lf0_lpf - logf0s_mean_org) / logf0s_std_org - cwt_spec, scales = get_lf0_cwt(cont_lf0_lpf_norm) - item['cwt_spec'] = cwt_spec - item['cwt_mean'] = logf0s_mean_org - item['cwt_std'] = logf0s_std_org - - @staticmethod - def get_spk_embed(wav, ctx): - return ctx['voice_encoder'].embed_utterance(wav.astype(float)) - - @property - def num_workers(self): - return int(os.getenv('N_PROC', hparams.get('N_PROC', os.cpu_count()))) diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/transformer_benchmark.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/transformer_benchmark.py deleted file mode 100644 index e61201aa174af4882c6dbab28e10fe64d8cc1377..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/transformer_benchmark.py +++ /dev/null @@ -1,757 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Executes Transformer w/Keras benchmark and accuracy tests.""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -import time - -from absl import flags -import tensorflow as tf -from official.benchmark import benchmark_wrappers -from official.benchmark import owner_utils -from official.benchmark.perfzero_benchmark import PerfZeroBenchmark -from official.nlp.transformer import misc -from official.nlp.transformer import transformer_main as transformer_main -from official.utils.flags import core as flags_core - -TRANSFORMER_EN2DE_DATA_DIR_NAME = 'wmt32k-en2de-official' -EN2DE_2014_BLEU_DATA_DIR_NAME = 'newstest2014' -FLAGS = flags.FLAGS -TMP_DIR = os.getenv('TMPDIR') - - -class TransformerBenchmark(PerfZeroBenchmark): - """Methods common to executing transformer w/keras tests. - - Code under test for the Transformer Keras models report the same data and - require the same FLAG setup. - """ - - def __init__(self, output_dir=None, default_flags=None, root_data_dir=None, - flag_methods=None, tpu=None): - root_data_dir = root_data_dir if root_data_dir else '' - - self.train_data_dir = os.path.join(root_data_dir, - TRANSFORMER_EN2DE_DATA_DIR_NAME) - - self.vocab_file = os.path.join(root_data_dir, - TRANSFORMER_EN2DE_DATA_DIR_NAME, - 'vocab.ende.32768') - - self.bleu_source = os.path.join(root_data_dir, - EN2DE_2014_BLEU_DATA_DIR_NAME, - 'newstest2014.en') - - self.bleu_ref = os.path.join(root_data_dir, - EN2DE_2014_BLEU_DATA_DIR_NAME, - 'newstest2014.de') - - if default_flags is None: - default_flags = {} - default_flags['data_dir'] = self.train_data_dir - default_flags['vocab_file'] = self.vocab_file - - super(TransformerBenchmark, self).__init__( - output_dir=output_dir, - default_flags=default_flags, - flag_methods=flag_methods, - tpu=tpu) - - @benchmark_wrappers.enable_runtime_flags - def _run_and_report_benchmark(self, - bleu_max=None, - bleu_min=None, - log_steps=None, - total_batch_size=None, - warmup=1): - """Report benchmark results by writing to local protobuf file. - - Args: - bleu_max: highest passing level for bleu score. - bleu_min: lowest passing level for bleu score. - log_steps: How often the log was created for stats['step_timestamp_log']. - total_batch_size: Global batch-size. - warmup: number of entries in stats['step_timestamp_log'] to ignore. - """ - start_time_sec = time.time() - task = transformer_main.TransformerTask(FLAGS) - stats = task.train() - wall_time_sec = time.time() - start_time_sec - - metrics = [] - if 'bleu_uncased' in stats: - if 'bleu_uncased_history' in stats: - bleu_uncased_best = max(stats['bleu_uncased_history'], - key=lambda x: x[1]) - metrics.append({'name': 'bleu_uncased', - 'value': bleu_uncased_best[1], - 'min_value': bleu_min, - 'max_value': bleu_max}) - metrics.append({'name': 'bleu_best_score_iteration', - 'value': bleu_uncased_best[0]}) - metrics.append({'name': 'bleu_uncased_last', - 'value': stats['bleu_uncased']}) - else: - metrics.append({'name': 'bleu_uncased', - 'value': stats['bleu_uncased'], - 'min_value': bleu_min, - 'max_value': bleu_max}) - - if (warmup and 'step_timestamp_log' in stats and - len(stats['step_timestamp_log']) > warmup + 1): - # first entry in the time_log is start of step 1. 
The rest of the - # entries are the end of each step recorded - time_log = stats['step_timestamp_log'] - elapsed = time_log[-1].timestamp - time_log[warmup].timestamp - num_examples = ( - total_batch_size * log_steps * (len(time_log) - warmup - 1)) - examples_per_sec = num_examples / elapsed - metrics.append({'name': 'exp_per_second', - 'value': examples_per_sec}) - - if 'avg_exp_per_second' in stats: - metrics.append({'name': 'avg_exp_per_second', - 'value': stats['avg_exp_per_second']}) - - if 'step_timestamp_log' in stats: - time_log = stats['step_timestamp_log'] - metrics.append({'name': 'startup_time', - 'value': time_log[0].timestamp - start_time_sec}) - - flags_str = flags_core.get_nondefault_flags_as_str() - self.report_benchmark(iters=-1, wall_time=wall_time_sec, metrics=metrics, - extras={'flags': flags_str}) - - -class TransformerBaseKerasAccuracy(TransformerBenchmark): - """Benchmark accuracy tests for Transformer Base model w/ Keras.""" - - def __init__(self, output_dir=None, root_data_dir=None, **kwargs): - """Benchmark accuracy tests for Transformer Base model w/ Keras. - - Args: - output_dir: directory where to output e.g. log files - root_data_dir: directory under which to look for dataset - **kwargs: arbitrary named arguments. This is needed to make the - constructor forward compatible in case PerfZero provides more - named arguments before updating the constructor. - """ - flag_methods = [misc.define_transformer_flags] - - super(TransformerBaseKerasAccuracy, self).__init__( - output_dir=output_dir, root_data_dir=root_data_dir, - flag_methods=flag_methods) - - def benchmark_1_gpu(self): - """Benchmark 1 gpu. - - The paper uses 8 GPUs and a much larger effective batch size, this is will - not converge to the 27.3 BLEU (uncased) SOTA. - """ - self._setup() - FLAGS.num_gpus = 1 - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'base' - FLAGS.batch_size = 2048 - FLAGS.train_steps = 1000 - FLAGS.steps_between_evals = 500 - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu') - # These bleu scores are based on test runs after at this limited - # number of steps and batch size after verifying SOTA at 8xV100s. - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=25.3, - bleu_max=26) - - def benchmark_1_gpu_static_batch(self): - """Benchmark 1 gpu with static_batch. - - The paper uses 8 GPUs and a much larger effective batch size, this is will - not converge to the 27.3 BLEU (uncased) SOTA. - """ - self._setup() - FLAGS.num_gpus = 1 - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'base' - FLAGS.batch_size = 4096 - FLAGS.train_steps = 100000 - FLAGS.steps_between_evals = 5000 - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_static_batch') - # These bleu scores are based on test runs after at this limited - # number of steps and batch size after verifying SOTA at 8xV100s. - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=25.3, - bleu_max=26) - - def benchmark_8_gpu(self): - """Benchmark 8 gpu. - - Should converge to 27.3 BLEU (uncased). 
This has not been confirmed yet. - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'base' - FLAGS.batch_size = 4096*8 - FLAGS.train_steps = 100000 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=27, - bleu_max=28) - - def benchmark_8_gpu_static_batch(self): - """Benchmark 8 gpu. - - Should converge to 27.3 BLEU (uncased). This has not been confirmed yet. - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'base' - FLAGS.batch_size = 4096*8 - FLAGS.train_steps = 100000 - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.steps_between_evals = 5000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_static_batch') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=27, - bleu_max=28) - - -class TransformerBigKerasAccuracy(TransformerBenchmark): - """Benchmark accuracy tests for Transformer Big model w/ Keras.""" - - def __init__(self, output_dir=None, root_data_dir=None, **kwargs): - """Benchmark accuracy tests for Transformer Big model w/ Keras. - - Args: - output_dir: directory where to output e.g. log files - root_data_dir: directory under which to look for dataset - **kwargs: arbitrary named arguments. This is needed to make the - constructor forward compatible in case PerfZero provides more - named arguments before updating the constructor. - """ - flag_methods = [misc.define_transformer_flags] - - super(TransformerBigKerasAccuracy, self).__init__( - output_dir=output_dir, root_data_dir=root_data_dir, - flag_methods=flag_methods) - - def benchmark_8_gpu(self): - """Benchmark 8 gpu. - - Over 6 runs with eval every 20K steps the average highest value was 28.195 - (bleu uncased). 28.424 was the highest and 27.96 the lowest. The values are - the highest value seen during a run and occurred at a median of iteration 9. - Iterations are not epochs, an iteration is a number of steps between evals. - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'big' - FLAGS.batch_size = 3072*8 - FLAGS.train_steps = 20000 * 12 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=27.9, - bleu_max=29.2) - - def benchmark_8_gpu_static_batch(self): - """Benchmark 8 gpu. - - Should converge to 28.4 BLEU (uncased). This has not be verified yet." - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. 
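- # Note on the bracket-style assignments below (an explanatory sketch, not a claim about
- # this benchmark's internals): with absl flags, attribute-style assignment such as
- # `FLAGS.bleu_source = path` runs any validators registered for that flag and can raise
- # IllegalFlagValueError if a validator rejects the value, whereas assigning to the
- # underlying Flag object's `.value` writes the value without running validators, which
- # is presumably why "Sets values directly to avoid validation check" uses this form:
- #   FLAGS.bleu_source = self.bleu_source           # validators run on assignment
- #   FLAGS['bleu_source'].value = self.bleu_source  # validators skipped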
- FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'big' - FLAGS.batch_size = 3072*8 - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.train_steps = 20000 * 12 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_static_batch') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=28, - bleu_max=29.2) - - def benchmark_8_gpu_fp16(self): - """Benchmark 8 gpu with dynamic batch and fp16. - - Over 6 runs with eval every 20K steps the average highest value was 28.247 - (bleu uncased). 28.424 was the highest and 28.09 the lowest. The values are - the highest value seen during a run and occurred at a median of iteration - 11. While this could be interpreted as worse than FP32, if looking at the - first iteration at which 28 is passed FP16 performs equal and possibly - better. Although not part of the initial test runs, the highest value - recorded with the arguments below was 28.9 at iteration 12. Iterations are - not epochs, an iteration is a number of steps between evals. - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.dtype = 'fp16' - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'big' - FLAGS.batch_size = 3072*8 - FLAGS.train_steps = 20000 * 12 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_fp16') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=28, - bleu_max=29.2) - - def benchmark_8_gpu_fp16_amp(self): - """Benchmark 8 gpu with dynamic batch and fp16 with automatic mixed precision. - - Should converge to 28.4 BLEU (uncased). This has not be verified yet." - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.dtype = 'fp16' - FLAGS.fp16_implementation = 'graph_rewrite' - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'big' - FLAGS.batch_size = 3072*8 - FLAGS.train_steps = 20000 * 12 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_fp16_amp') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=28, - bleu_max=29) - - def benchmark_8_gpu_static_batch_fp16(self): - """Benchmark 8 gpu with static batch and fp16. - - Should converge to 28.4 BLEU (uncased). This has not be verified yet." - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.dtype = 'fp16' - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'big' - FLAGS.batch_size = 3072*8 - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.train_steps = 400000 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_static_batch_fp16') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=28, - bleu_max=29.2) - - def benchmark_xla_8_gpu_static_batch_fp16(self): - """Benchmark 8 gpu with static batch, XLA, and FP16. 
- - Should converge to 28.4 BLEU (uncased). This has not be verified yet." - """ - self._setup() - FLAGS.num_gpus = 8 - FLAGS.dtype = 'fp16' - FLAGS.enable_xla = True - FLAGS.data_dir = self.train_data_dir - FLAGS.vocab_file = self.vocab_file - # Sets values directly to avoid validation check. - FLAGS['bleu_source'].value = self.bleu_source - FLAGS['bleu_ref'].value = self.bleu_ref - FLAGS.param_set = 'big' - FLAGS.batch_size = 3072*8 - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.train_steps = 400000 - FLAGS.steps_between_evals = 20000 - FLAGS.model_dir = self._get_model_dir( - 'benchmark_xla_8_gpu_static_batch_fp16') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps, - bleu_min=28, - bleu_max=29.2) - - -class TransformerKerasBenchmark(TransformerBenchmark): - """Benchmarks for Transformer (Base and Big) using Keras.""" - - def __init__(self, output_dir=None, default_flags=None, - root_data_dir=None, batch_per_gpu=4096, tpu=None): - """Initialize. - - Args: - output_dir: Based directory for saving artifacts, e.g. checkpoints. - default_flags: default flags to use for all tests. - root_data_dir: root directory for data, e.g. training. - batch_per_gpu: batch size to use per gpu. - tpu: Target TPU to use. - """ - flag_methods = [misc.define_transformer_flags] - self.batch_per_gpu = batch_per_gpu - - super(TransformerKerasBenchmark, self).__init__( - output_dir=output_dir, - default_flags=default_flags, - root_data_dir=root_data_dir, - flag_methods=flag_methods, - tpu=tpu) - - def benchmark_1_gpu_no_dist_strat(self): - """Benchmark 1 gpu without distribution strategy.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.distribution_strategy = 'off' - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_no_dist_strat') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_1_gpu_no_dist_strat_static_batch(self): - """Benchmark 1 gpu without distribution strategy with static batch.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.distribution_strategy = 'off' - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_no_ds_sb') - FLAGS.static_batch = True - FLAGS.max_length = 64 - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_1_gpu(self): - """Benchmark 1 gpu.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_1_gpu_fp16(self): - """Benchmark 1 gpu FP16.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_fp16') - FLAGS.dtype = 'fp16' - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_1_gpu(self): - """Benchmark 1 gpu w/xla.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu') - FLAGS.enable_xla = True - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_1_gpu_fp16(self): - """Benchmark 1 gpu w/xla and FP16.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = 
self._get_model_dir('benchmark_xla_1_gpu_fp16') - FLAGS.enable_xla = True - FLAGS.dtype = 'fp16' - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_1_gpu_static_batch(self): - """Benchmark 1 gpu with static batch.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_static_batch') - FLAGS.static_batch = True - FLAGS.max_length = 64 - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_1_gpu_static_batch(self): - """Benchmark 1 gpu with static batch w/xla.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_static_batch') - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.enable_xla = True - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_1_gpu_static_batch_fp16(self): - """Benchmark 1 gpu with static batch FP16.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir( - 'benchmark_1_gpu_static_batch_fp16') - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.dtype = 'fp16' - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_1_gpu_static_batch_fp16(self): - """Benchmark 1 gpu with static batch w/xla and FP16.""" - self._setup() - FLAGS.num_gpus = 1 - FLAGS.batch_size = self.batch_per_gpu - FLAGS.model_dir = self._get_model_dir( - 'benchmark_xla_1_gpu_static_batch_fp16') - FLAGS.static_batch = True - FLAGS.max_length = 64 - FLAGS.enable_xla = True - FLAGS.dtype = 'fp16' - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_8_gpu(self): - """Benchmark 8 gpu.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_8_gpu_fp16(self): - """Benchmark 8 gpu FP16.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.dtype = 'fp16' - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_fp16') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_8_gpu(self): - """Benchmark 8 gpu w/xla.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.enable_xla = True - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_8_gpu_fp16(self): - """Benchmark 8 gpu w/xla and FP16.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.enable_xla = True - FLAGS.dtype = 'fp16' - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu_fp16') - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_8_gpu_static_batch(self): - """Benchmark 8 gpu with static batch.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_static_batch') - FLAGS.static_batch = True - FLAGS.max_length = 64 - 
self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_8_gpu_static_batch_fp16(self): - """Benchmark 8 gpu with static batch FP16.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.dtype = 'fp16' - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir( - 'benchmark_8_gpu_static_batch_fp16') - FLAGS.static_batch = True - FLAGS.max_length = 64 - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_8_gpu_static_batch(self): - """Benchmark 8 gpu with static batch w/xla.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.enable_xla = True - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu_static_batch') - FLAGS.static_batch = True - FLAGS.max_length = 64 - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_xla_8_gpu_static_batch_fp16(self): - """Benchmark 8 gpu with static batch w/xla and FP16.""" - self._setup() - FLAGS.num_gpus = 8 - FLAGS.enable_xla = True - FLAGS.dtype = 'fp16' - FLAGS.batch_size = self.batch_per_gpu * 8 - FLAGS.model_dir = self._get_model_dir( - 'benchmark_xla_8_gpu_static_batch_fp16') - FLAGS.static_batch = True - FLAGS.max_length = 64 - self._run_and_report_benchmark(total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - -class TransformerBaseKerasBenchmarkReal(TransformerKerasBenchmark): - """Transformer based version real data benchmark tests.""" - - def __init__(self, output_dir=TMP_DIR, root_data_dir=TMP_DIR, **kwargs): - def_flags = {} - def_flags['param_set'] = 'base' - def_flags['train_steps'] = 50 - def_flags['log_steps'] = 10 - - super(TransformerBaseKerasBenchmarkReal, self).__init__( - output_dir=output_dir, default_flags=def_flags, - root_data_dir=root_data_dir, batch_per_gpu=4096) - - -class TransformerBigKerasBenchmarkReal(TransformerKerasBenchmark): - """Transformer based version real data benchmark tests.""" - - def __init__(self, output_dir=TMP_DIR, root_data_dir=TMP_DIR, - tpu=None, **kwargs): - def_flags = {} - def_flags['param_set'] = 'big' - def_flags['train_steps'] = 50 - def_flags['log_steps'] = 10 - - super(TransformerBigKerasBenchmarkReal, self).__init__( - output_dir=output_dir, default_flags=def_flags, - root_data_dir=root_data_dir, batch_per_gpu=3072, - tpu=tpu) - - def benchmark_2x2_tpu(self): - """Port of former snaggletooth transformer_big model on 2x2.""" - self._setup() - FLAGS.model_dir = self._get_model_dir('benchmark_2x2_tpu') - FLAGS.train_steps = 300 - FLAGS.log_steps = 150 - FLAGS.steps_between_evals = 150 - FLAGS.distribution_strategy = 'tpu' - FLAGS.static_batch = True - FLAGS.use_ctl = True - FLAGS.batch_size = 6144 - FLAGS.max_length = 64 - FLAGS.decode_batch_size = 32 - FLAGS.decode_max_length = 97 - FLAGS.padded_decode = True - FLAGS.enable_checkpointing = False - - self._run_and_report_benchmark( - total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - def benchmark_4x4_tpu(self): - """Port of former GCP transformer_big model on 4x4.""" - self._setup() - FLAGS.model_dir = self._get_model_dir('benchmark_4x4_tpu') - FLAGS.train_steps = 300 - FLAGS.log_steps = 150 - FLAGS.steps_between_evals = 150 - FLAGS.distribution_strategy = 'tpu' - FLAGS.static_batch = True - FLAGS.use_ctl = True - FLAGS.batch_size = 24576 - FLAGS.max_length = 64 - FLAGS.decode_batch_size = 32 - FLAGS.decode_max_length = 97 - FLAGS.padded_decode = True - 
FLAGS.enable_checkpointing = False - - self._run_and_report_benchmark( - total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - @owner_utils.Owner('tf-graph-compiler') - def benchmark_4x4_tpu_mlir(self): - """Run transformer_big model on 4x4 with the MLIR Bridge enabled.""" - self._setup() - FLAGS.model_dir = self._get_model_dir('benchmark_4x4_tpu') - FLAGS.train_steps = 300 - FLAGS.log_steps = 150 - FLAGS.steps_between_evals = 150 - FLAGS.distribution_strategy = 'tpu' - FLAGS.static_batch = True - FLAGS.use_ctl = True - FLAGS.batch_size = 24576 - FLAGS.max_length = 64 - FLAGS.decode_batch_size = 32 - FLAGS.decode_max_length = 97 - FLAGS.padded_decode = True - FLAGS.enable_checkpointing = False - tf.config.experimental.enable_mlir_bridge() - - self._run_and_report_benchmark( - total_batch_size=FLAGS.batch_size, - log_steps=FLAGS.log_steps) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NewtonKimathi/Sepsis_Prediction_FastApi/README.md b/spaces/NewtonKimathi/Sepsis_Prediction_FastApi/README.md deleted file mode 100644 index 35c66008f1a0e44c2e7f382aa2284410eb0f7e85..0000000000000000000000000000000000000000 --- a/spaces/NewtonKimathi/Sepsis_Prediction_FastApi/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -metadata: -title: Sepsis Prediction Fast API -sdk: docker -emoji: 👁 -colorFrom: red -colorTo: blue -pinned: false -app_file: main.py -app_port: 8000 ---- - - -Here is the link to directly access the API: here. Access the documentation here. - -To direcly access your API hosted on HuggingFace you should use the URL follow this format : https://-.hf.space/ - -In my case it is : https://NewtonKimathi-Sepsis_Prediction_FastApi.hf.space/ - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/CODE_OF_CONDUCT.md b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/CODE_OF_CONDUCT.md deleted file mode 100644 index e8cc4daa4345590464314889b187d6a2d7a8e20f..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,128 +0,0 @@ -# Contributor Covenant Code of Conduct - -## Our Pledge - -We as members, contributors, and leaders pledge to make participation in our -community a harassment-free experience for everyone, regardless of age, body -size, visible or invisible disability, ethnicity, sex characteristics, gender -identity and expression, level of experience, education, socio-economic status, -nationality, personal appearance, race, religion, or sexual identity -and orientation. - -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. 
- -## Our Standards - -Examples of behavior that contributes to a positive environment for our -community include: - -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, - and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall community - -Examples of unacceptable behavior include: - -* The use of sexualized language or imagery, and sexual attention or - advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Enforcement Responsibilities - -Community leaders are responsible for clarifying and enforcing our standards of -acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, threatening, offensive, -or harmful. - -Community leaders have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will communicate reasons for moderation -decisions when appropriate. - -## Scope - -This Code of Conduct applies within all community spaces, and also applies when -an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, -posting via an official social media account, or acting as an appointed -representative at an online or offline event. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at -xintao.wang@outlook.com or xintaowang@tencent.com. -All complaints will be reviewed and investigated promptly and fairly. - -All community leaders are obligated to respect the privacy and security of the -reporter of any incident. - -## Enforcement Guidelines - -Community leaders will follow these Community Impact Guidelines in determining -the consequences for any action they deem in violation of this Code of Conduct: - -### 1. Correction - -**Community Impact**: Use of inappropriate language or other behavior deemed -unprofessional or unwelcome in the community. - -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. Warning - -**Community Impact**: A violation through a single incident or series -of actions. - -**Consequence**: A warning with consequences for continued behavior. No -interaction with the people involved, including unsolicited interaction with -those enforcing the Code of Conduct, for a specified period of time. This -includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. - -### 3. Temporary Ban - -**Community Impact**: A serious violation of community standards, including -sustained inappropriate behavior. 
- -**Consequence**: A temporary ban from any sort of interaction or public -communication with the community for a specified period of time. No public or -private interaction with the people involved, including unsolicited interaction -with those enforcing the Code of Conduct, is allowed during this period. -Violating these terms may lead to a permanent ban. - -### 4. Permanent Ban - -**Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an -individual, or aggression toward or disparagement of classes of individuals. - -**Consequence**: A permanent ban from any sort of public interaction within -the community. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], -version 2.0, available at -https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. - -Community Impact Guidelines were inspired by [Mozilla's code of conduct -enforcement ladder](https://github.com/mozilla/diversity). - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see the FAQ at -https://www.contributor-covenant.org/faq. Translations are available at -https://www.contributor-covenant.org/translations. diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/Waifu2x/utils/prepare_images.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/Waifu2x/utils/prepare_images.py deleted file mode 100644 index 701e894ee828b704be3d0c73400b0326b26a697f..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/Waifu2x/utils/prepare_images.py +++ /dev/null @@ -1,143 +0,0 @@ -import copy -import glob -import os -from multiprocessing.dummy import Pool as ThreadPool - -from PIL import Image -from torchvision.transforms.functional import to_tensor - -from ..Models import * - - -class ImageSplitter: - # key points: - # Boarder padding and over-lapping img splitting to avoid the instability of edge value - # Thanks Waifu2x's autorh nagadomi for suggestions (https://github.com/nagadomi/waifu2x/issues/238) - - def __init__(self, seg_size=48, scale_factor=2, boarder_pad_size=3): - self.seg_size = seg_size - self.scale_factor = scale_factor - self.pad_size = boarder_pad_size - self.height = 0 - self.width = 0 - self.upsampler = nn.Upsample(scale_factor=scale_factor, mode="bilinear") - - def split_img_tensor(self, pil_img, scale_method=Image.BILINEAR, img_pad=0): - # resize image and convert them into tensor - img_tensor = to_tensor(pil_img).unsqueeze(0) - img_tensor = nn.ReplicationPad2d(self.pad_size)(img_tensor) - batch, channel, height, width = img_tensor.size() - self.height = height - self.width = width - - if scale_method is not None: - img_up = pil_img.resize( - (2 * pil_img.size[0], 2 * pil_img.size[1]), scale_method - ) - img_up = to_tensor(img_up).unsqueeze(0) - img_up = nn.ReplicationPad2d(self.pad_size * self.scale_factor)(img_up) - - patch_box = [] - # avoid the residual part is smaller than the padded size - if ( - height % self.seg_size < self.pad_size - or width % self.seg_size < self.pad_size - ): - self.seg_size += self.scale_factor * self.pad_size - - # split image into over-lapping pieces - for i in range(self.pad_size, height, self.seg_size): - for j in range(self.pad_size, width, self.seg_size): - part = img_tensor[ - :, - :, - (i - self.pad_size) : min( - i + self.pad_size + self.seg_size, height - ), - (j - self.pad_size) : min(j + self.pad_size + self.seg_size, width), - ] - if img_pad 
> 0: - part = nn.ZeroPad2d(img_pad)(part) - if scale_method is not None: - # part_up = self.upsampler(part) - part_up = img_up[ - :, - :, - self.scale_factor - * (i - self.pad_size) : min( - i + self.pad_size + self.seg_size, height - ) - * self.scale_factor, - self.scale_factor - * (j - self.pad_size) : min( - j + self.pad_size + self.seg_size, width - ) - * self.scale_factor, - ] - - patch_box.append((part, part_up)) - else: - patch_box.append(part) - return patch_box - - def merge_img_tensor(self, list_img_tensor): - out = torch.zeros( - (1, 3, self.height * self.scale_factor, self.width * self.scale_factor) - ) - img_tensors = copy.copy(list_img_tensor) - rem = self.pad_size * 2 - - pad_size = self.scale_factor * self.pad_size - seg_size = self.scale_factor * self.seg_size - height = self.scale_factor * self.height - width = self.scale_factor * self.width - for i in range(pad_size, height, seg_size): - for j in range(pad_size, width, seg_size): - part = img_tensors.pop(0) - part = part[:, :, rem:-rem, rem:-rem] - # might have error - if len(part.size()) > 3: - _, _, p_h, p_w = part.size() - out[:, :, i : i + p_h, j : j + p_w] = part - # out[:,:, - # self.scale_factor*i:self.scale_factor*i+p_h, - # self.scale_factor*j:self.scale_factor*j+p_w] = part - out = out[:, :, rem:-rem, rem:-rem] - return out - - -def load_single_image( - img_file, - up_scale=False, - up_scale_factor=2, - up_scale_method=Image.BILINEAR, - zero_padding=False, -): - img = Image.open(img_file).convert("RGB") - out = to_tensor(img).unsqueeze(0) - if zero_padding: - out = nn.ZeroPad2d(zero_padding)(out) - if up_scale: - size = tuple(map(lambda x: x * up_scale_factor, img.size)) - img_up = img.resize(size, up_scale_method) - img_up = to_tensor(img_up).unsqueeze(0) - out = (out, img_up) - - return out - - -def standardize_img_format(img_folder): - def process(img_file): - img_path = os.path.dirname(img_file) - img_name, _ = os.path.basename(img_file).split(".") - out = os.path.join(img_path, img_name + ".JPEG") - os.rename(img_file, out) - - list_imgs = [] - for i in ["png", "jpeg", "jpg"]: - list_imgs.extend(glob.glob(img_folder + "**/*." + i, recursive=True)) - print("Found {} images.".format(len(list_imgs))) - pool = ThreadPool(4) - pool.map(process, list_imgs) - pool.close() - pool.join() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/continuation_eval.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/continuation_eval.py deleted file mode 100644 index 72b92a341dcd1b82035af72b8a6b4edc65783ecc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/continuation_eval.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -from collections import defaultdict -import numpy as np -from misc.bleu_utils import sentence_bleu -import json -import warnings - - -def get_args(): - import argparse - - parser = argparse.ArgumentParser("Tool to calculate Continuation-BLEU2") - parser.add_argument('--asr-transcript', type=str, - help='Path to the transcript file.') - parser.add_argument('--prompts-description', type=str, - help='Path to the ground-truth continuation') - parser.add_argument('--manifest', type=str, required=True) - parser.add_argument('--take-shortest', type=int, default=1000) - - args = parser.parse_args() - - return args - - -def main(): - # NLTK produces warnings - warnings.filterwarnings("ignore") - - args = get_args() - - with open(args.prompts_description, 'r') as fin: - original_continuations = json.loads(fin.read()) - - sequence2length = [(k, v[0]) for k, v in original_continuations.items()] - assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds - - sequence2length.sort(key=lambda x: x[1]) - to_take = set(v[0] for v in sequence2length[:args.take_shortest]) - - with open(args.manifest, 'r') as fin: - fin.readline() - - linenum2file = dict([ - (i, l.split("__")[0]) for (i, l) in enumerate(fin) - ]) - - max_files = max(linenum2file.keys()) - continuations = defaultdict(list) - - mean_length_after = 0 - n_examples = 0 - - with open(args.asr_transcript, 'r') as fin: - for line in fin: - n_examples += 1 - line = line.split() - sequence_id = int(line[-1].split('-')[1][:-1]) - - assert sequence_id <= max_files - - sequence_name = linenum2file[sequence_id] - - continuations[sequence_name].append(line[:-1]) - mean_length_after += len(line) - - mean_length_after /= n_examples - print(f'Mean length of continuations, in words: {mean_length_after}') - metric_values = [] - - mean_ground_truth_words = 0 - n_examples = 0 - n_candidates = 0 - - for k, candidates in continuations.items(): - if k not in to_take: - continue - - n_examples += 1 - - ground_truth = original_continuations[k][1].split() - n_candidates += len(candidates) - bleu = sentence_bleu(candidates, ground_truth, weights=( - 0.5, 0.5), no_length_penalty=True, averaging_mode="geometric") - mean_ground_truth_words += len(ground_truth) - - metric_values.append(bleu) - - n = len(metric_values) - print( - f'Median BLEU over {n} examples: {np.median(metric_values)} +- {np.std(metric_values) / np.sqrt(n)}') - - -if __name__ == '__main__': - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/id_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/id_dataset.py deleted file mode 100644 index 3e4d7969cf2a26e852b466f165a6fadabae3b35f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/id_dataset.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . 
import FairseqDataset - - -class IdDataset(FairseqDataset): - def __getitem__(self, index): - return index - - def __len__(self): - return 0 - - def collater(self, samples): - return torch.tensor(samples) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/subword_nmt_bpe.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/subword_nmt_bpe.py deleted file mode 100644 index 5d724d2730a5895ca55af2998c2ced471625b516..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/subword_nmt_bpe.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SubwordNMTBPEConfig(FairseqDataclass): - bpe_codes: str = field(default="???", metadata={"help": "path to subword NMT BPE"}) - bpe_separator: str = field(default="@@", metadata={"help": "BPE separator"}) - - -@register_bpe("subword_nmt", dataclass=SubwordNMTBPEConfig) -class SubwordNMTBPE(object): - def __init__(self, cfg): - if cfg.bpe_codes is None: - raise ValueError("--bpe-codes is required for --bpe=subword_nmt") - codes = file_utils.cached_path(cfg.bpe_codes) - try: - from subword_nmt import apply_bpe - - bpe_parser = apply_bpe.create_parser() - bpe_args = bpe_parser.parse_args( - [ - "--codes", - codes, - "--separator", - cfg.bpe_separator, - ] - ) - self.bpe = apply_bpe.BPE( - bpe_args.codes, - bpe_args.merges, - bpe_args.separator, - None, - bpe_args.glossaries, - ) - self.bpe_symbol = bpe_args.separator + " " - except ImportError: - raise ImportError( - "Please install subword_nmt with: pip install subword-nmt" - ) - - def encode(self, x: str) -> str: - return self.bpe.process_line(x) - - def decode(self, x: str) -> str: - return (x + " ").replace(self.bpe_symbol, "").rstrip() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/cross_entropy.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/cross_entropy.py deleted file mode 100644 index 6f33c24cb56e25f91595009af38e63784c2263a0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/cross_entropy.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging - -import torch -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -def _cross_entropy_pytorch(logits, target, ignore_index=None, reduction="mean"): - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - return F.nll_loss( - lprobs, - target, - ignore_index=ignore_index, - reduction=reduction, - ) - - -try: - import xentropy_cuda - from apex.contrib import xentropy - - def cross_entropy(logits, target, ignore_index=-100, reduction="mean"): - if logits.device == torch.device("cpu"): - return _cross_entropy_pytorch(logits, target, ignore_index, reduction) - else: - if not getattr(cross_entropy, "_has_logged_once", False): - logger.info("using fused cross entropy") - cross_entropy._has_logged_once = True - - half_to_float = logits.dtype == torch.half - losses = xentropy.SoftmaxCrossEntropyLoss.apply( - logits, - target, - 0.0, - ignore_index, - half_to_float, - ) - if reduction == "sum": - return losses.sum() - elif reduction == "mean": - if ignore_index >= 0: - return losses.sum() / target.ne(ignore_index).sum() - else: - return losses.mean() - elif reduction == "none": - return losses - else: - raise NotImplementedError - - -except ImportError: - - def cross_entropy(logits, target, ignore_index=-100, reduction="mean"): - return _cross_entropy_pytorch(logits, target, ignore_index, reduction) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/mustc_example.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/mustc_example.md deleted file mode 100644 index c95ef3e15660107c3384f87c1680f005044e7f3b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/mustc_example.md +++ /dev/null @@ -1,155 +0,0 @@ -[[Back]](..) - -# S2T Example: Speech Translation (ST) on MuST-C - -[MuST-C](https://www.aclweb.org/anthology/N19-1202) is multilingual speech-to-text translation corpus with -8-language translations on English TED talks. We match the state-of-the-art performance in -[ESPNet-ST](https://arxiv.org/pdf/2004.10234.pdf) with a simpler model training pipeline. - -## Data Preparation -[Download](https://ict.fbk.eu/must-c) and unpack MuST-C data to a path -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio soundfile sentencepiece - -# Generate TSV manifests, features, vocabulary -# and configuration for each language -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr \ - --vocab-type unigram --vocab-size 5000 -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st \ - --vocab-type unigram --vocab-size 8000 - -# Add vocabulary and configuration for joint data -# (based on the manifests and features generated above) -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr --joint \ - --vocab-type unigram --vocab-size 10000 -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st --joint \ - --vocab-type unigram --vocab-size 10000 -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}` (per-language data) and `MUSTC_ROOT` (joint data). 
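As a quick sanity check (the file names below are inferred from the flags used in the training commands later in this document, not an exhaustive or authoritative listing), the per-language directory should now contain the TSV manifests and YAML data configurations that those commands reference:

```bash
ls ${MUSTC_ROOT}/en-de
# Expected, approximately:
#   config_asr.yaml  config_st.yaml              # per-task data configuration
#   train_asr.tsv  dev_asr.tsv  tst-COMMON_asr.tsv
#   train_st.tsv   dev_st.tsv   tst-COMMON_st.tsv
#   plus the extracted audio features and the SentencePiece vocabulary files
```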
- -Download our vocabulary files if you want to use our pre-trained models: -- ASR: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_vocab_unigram5000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_vocab_unigram5000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_vocab_unigram5000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_vocab_unigram5000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_vocab_unigram5000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_vocab_unigram5000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_vocab_unigram5000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_vocab_unigram5000.zip), [Joint](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_vocab_unigram10000.zip) -- ST: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_vocab_unigram8000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_vocab_unigram8000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_vocab_unigram8000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_vocab_unigram8000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_vocab_unigram8000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_vocab_unigram8000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_vocab_unigram8000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_vocab_unigram8000.zip), [Multilingual](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_vocab_unigram10000.zip) - -## ASR -#### Training -En-De as example: -```bash -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -For joint model (using ASR data from all 8 directions): -```bash -fairseq-train ${MUSTC_ROOT} \ - --config-yaml config_asr.yaml \ - --train-subset train_de_asr,train_nl_asr,train_es_asr,train_fr_asr,train_it_asr,train_pt_asr,train_ro_asr,train_ru_asr \ - --valid-subset dev_de_asr,dev_nl_asr,dev_es_asr,dev_fr_asr,dev_it_asr,dev_pt_asr,dev_ro_asr,dev_ru_asr \ - --save-dir ${JOINT_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `ASR_SAVE_DIR` (`JOINT_ASR_SAVE_DIR`) is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs -with 1 GPU. You may want to update it accordingly when using more than 1 GPU. 
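Since the effective batch size is roughly `--max-tokens` times the number of GPUs times `--update-freq`, one way to "update it accordingly" is to scale the gradient-accumulation factor down as the GPU count grows. A minimal sketch, assuming the GPU count divides 8 evenly (the `NUM_GPUS` variable is illustrative, not part of the recipe above):

```bash
# Keep the effective batch equivalent to 8 GPUs at --max-tokens 40000.
# Assumes NUM_GPUS divides 8 evenly (1, 2, 4 or 8).
NUM_GPUS=4
UPDATE_FREQ=$((8 / NUM_GPUS))
echo "training on ${NUM_GPUS} GPUs with --update-freq ${UPDATE_FREQ}"
```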
- -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --gen-subset tst-COMMON_asr --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct - -# For models trained on joint data -python scripts/average_checkpoints.py \ - --inputs ${JOINT_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -for LANG in de nl es fr it pt ro ru; do - fairseq-generate ${MUSTC_ROOT} \ - --config-yaml config_asr.yaml --gen-subset tst-COMMON_${LANG}_asr --task speech_to_text \ - --path ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct -done -``` -#### Results -| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model | -|---|---|---|---|---|---|---|---|---|---|---|---| -| Single | s2t_transformer_s | 31M | [18.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_transformer_s.pt) | [17.6](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_transformer_s.pt) | [17.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_transformer_s.pt) | [17.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_transformer_s.pt) | [19.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_transformer_s.pt) | [18.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_transformer_s.pt) | (<-Download) | -| Joint | s2t_transformer_m | 76M | 16.8 | 16.7 | 16.9 | 16.9 | 17.0 | 17.4 | 17.0 | 16.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_transformer_m.pt) | - -## ST -#### Training -En-De as example: -```bash -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -For multilingual model (all 8 directions): -```bash -fairseq-train ${MUSTC_ROOT} \ - --config-yaml config_st.yaml \ - --train-subset train_de_st,train_nl_st,train_es_st,train_fr_st,train_it_st,train_pt_st,train_ro_st,train_ru_st \ - --valid-subset dev_de_st,dev_nl_st,dev_es_st,dev_fr_st,dev_it_st,dev_pt_st,dev_ro_st,dev_ru_st \ - --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --ignore-prefix-size 1 --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the 
checkpoint root path. The ST encoder is pre-trained by ASR -for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. We set -`--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on the `tst-COMMON` split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --gen-subset tst-COMMON_st --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu - -# For multilingual models -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -for LANG in de nl es fr it pt ro ru; do - fairseq-generate ${MUSTC_ROOT} \ - --config-yaml config_st.yaml --gen-subset tst-COMMON_${LANG}_st --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu -done -``` -For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`. - -#### Results -| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model | -|---|---|---|---|---|---|---|---|---|---|---|---| -| Bilingual | s2t_transformer_s | 31M | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_transformer_s.pt) | [27.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_transformer_s.pt) | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_transformer_s.pt) | [32.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_transformer_s.pt) | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_transformer_s.pt) | [28.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_transformer_s.pt) | [21.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_transformer_s.pt) | [15.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_transformer_s.pt) | (<-Download) | -| Multilingual | s2t_transformer_m | 76M | 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_transformer_m.pt) | - -[[Back]](..) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/checkpoint_utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/checkpoint_utils.py deleted file mode 100644 index ef5d4c9022c3c35722f0bc9150260c7a65d35e5f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/checkpoint_utils.py +++ /dev/null @@ -1,858 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import ast -import collections -import contextlib -import logging -import numpy as np -import os -import re -import time -import traceback -from collections import OrderedDict -from typing import Any, Dict, Optional, Union - -import torch -from fairseq.data import data_utils -from fairseq.dataclass.configs import CheckpointConfig -from fairseq.dataclass.utils import ( - convert_namespace_to_omegaconf, - overwrite_args_by_name, -) -from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP -from fairseq.file_io import PathManager -from fairseq.models import FairseqDecoder, FairseqEncoder -from omegaconf import DictConfig, open_dict, OmegaConf - - -logger = logging.getLogger(__name__) - - -def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss): - from fairseq import meters - - # only one worker should attempt to create the required dir - if trainer.data_parallel_rank == 0: - os.makedirs(cfg.save_dir, exist_ok=True) - - prev_best = getattr(save_checkpoint, "best", val_loss) - if val_loss is not None: - best_function = max if cfg.maximize_best_checkpoint_metric else min - save_checkpoint.best = best_function(val_loss, prev_best) - - if cfg.no_save: - return - - trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state - - if not trainer.should_save_checkpoint_on_current_rank: - if trainer.always_call_state_dict_during_save_checkpoint: - trainer.state_dict() - return - - write_timer = meters.StopwatchMeter() - write_timer.start() - - epoch = epoch_itr.epoch - end_of_epoch = epoch_itr.end_of_epoch() - updates = trainer.get_num_updates() - - logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates") - - def is_better(a, b): - return a >= b if cfg.maximize_best_checkpoint_metric else a <= b - - suffix = trainer.checkpoint_suffix - checkpoint_conds = collections.OrderedDict() - checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = ( - end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0 - ) - checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = ( - not end_of_epoch - and cfg.save_interval_updates > 0 - and updates % cfg.save_interval_updates == 0 - ) - checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and ( - not hasattr(save_checkpoint, "best") - or is_better(val_loss, save_checkpoint.best) - ) - if val_loss is not None and cfg.keep_best_checkpoints > 0: - worst_best = getattr(save_checkpoint, "best", None) - chkpts = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if len(chkpts) > 0: - p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0] - worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), "")) - # add random digits to resolve ties - with data_utils.numpy_seed(epoch, updates, val_loss): - rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints) - - checkpoint_conds[ - "checkpoint.best_{}_{:.3f}{}{}.pt".format( - cfg.best_checkpoint_metric, - val_loss, - rand_sfx, - suffix - ) - ] = worst_best is None or is_better(val_loss, worst_best) - checkpoint_conds[ - "checkpoint_last{}.pt".format(suffix) - ] = not cfg.no_last_checkpoints - - extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss} - if hasattr(save_checkpoint, "best"): - extra_state.update({"best": save_checkpoint.best}) - - checkpoints = [ - os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() if 
cond - ] - if len(checkpoints) > 0: - trainer.save_checkpoint(checkpoints[0], extra_state) - for cp in checkpoints[1:]: - if cfg.write_checkpoints_asynchronously: - # TODO[ioPath]: Need to implement a delayed asynchronous - # file copying/moving feature. - logger.warning( - f"ioPath is not copying {checkpoints[0]} to {cp} " - "since async write mode is on." - ) - else: - assert PathManager.copy( - checkpoints[0], cp, overwrite=True - ), f"Failed to copy {checkpoints[0]} to {cp}" - - write_timer.stop() - logger.info( - "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format( - checkpoints[0], epoch, updates, val_loss, write_timer.sum - ) - ) - - if not end_of_epoch and cfg.keep_interval_updates > 0: - # remove old checkpoints; checkpoints are sorted in descending order - if cfg.keep_interval_updates_pattern == -1: - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix) - ) - else: - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix), - keep_match=True, - ) - checkpoints = [ - x[0] - for x in checkpoints - if x[1] % cfg.keep_interval_updates_pattern != 0 - ] - - for old_chk in checkpoints[cfg.keep_interval_updates :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_last_epochs > 0: - # remove old epoch checkpoints; checkpoints are sorted in descending order - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix) - ) - for old_chk in checkpoints[cfg.keep_last_epochs :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_best_checkpoints > 0: - # only keep the best N checkpoints according to validation metric - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if not cfg.maximize_best_checkpoint_metric: - checkpoints = checkpoints[::-1] - for old_chk in checkpoints[cfg.keep_best_checkpoints :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - -def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args): - """ - Load a checkpoint and restore the training iterator. - - *passthrough_args* will be passed through to - ``trainer.get_train_iterator``. - """ - - reset_optimizer = cfg.reset_optimizer - reset_lr_scheduler = cfg.reset_lr_scheduler - optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides) - reset_meters = cfg.reset_meters - reset_dataloader = cfg.reset_dataloader - - if cfg.finetune_from_model is not None and ( - reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader - ): - raise ValueError( - "--finetune-from-model can not be set together with either --reset-optimizer" - " or reset_lr_scheduler or reset_meters or reset_dataloader" - ) - - suffix = trainer.checkpoint_suffix - if ( - cfg.restore_file == "checkpoint_last.pt" - ): # default value of restore_file is 'checkpoint_last.pt' - checkpoint_path = os.path.join( - cfg.save_dir, "checkpoint_last{}.pt".format(suffix) - ) - first_launch = not PathManager.exists(checkpoint_path) - if cfg.finetune_from_model is not None and first_launch: - # if there is no last checkpoint to restore, start the finetune from pretrained model - # else just use usual logic to load checkpoint, e.g. 
restart from last checkpoint and etc. - if PathManager.exists(cfg.finetune_from_model): - checkpoint_path = cfg.finetune_from_model - reset_optimizer = True - reset_lr_scheduler = True - reset_meters = True - reset_dataloader = True - logger.info( - f"loading pretrained model from {checkpoint_path}: " - "optimizer, lr scheduler, meters, dataloader will be reset" - ) - else: - raise ValueError( - f"--funetune-from-model {cfg.finetune_from_model} does not exist" - ) - elif suffix is not None: - checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt") - else: - checkpoint_path = cfg.restore_file - - if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model: - raise ValueError( - "--finetune-from-model and --restore-file (non-default value) " - "can not be specified together: " + str(cfg) - ) - - extra_state = trainer.load_checkpoint( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - reset_meters=reset_meters, - ) - - if ( - extra_state is not None - and "best" in extra_state - and not reset_optimizer - and not reset_meters - ): - save_checkpoint.best = extra_state["best"] - - if extra_state is not None and not reset_dataloader: - # restore iterator from checkpoint - itr_state = extra_state["train_iterator"] - epoch_itr = trainer.get_train_iterator( - epoch=itr_state["epoch"], load_dataset=True, **passthrough_args - ) - epoch_itr.load_state_dict(itr_state) - else: - epoch_itr = trainer.get_train_iterator( - epoch=1, load_dataset=True, **passthrough_args - ) - - trainer.lr_step(epoch_itr.epoch) - - return extra_state, epoch_itr - - -def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False): - """Loads a checkpoint to CPU (with upgrading for backward compatibility). - - If doing single-GPU training or if the checkpoint is only being loaded by at - most one process on each node (current default behavior is for only rank 0 - to read the checkpoint from disk), load_on_all_ranks should be False to - avoid errors from torch.distributed not having been initialized or - torch.distributed.barrier() hanging. - - If all processes on each node may be loading the checkpoint - simultaneously, load_on_all_ranks should be set to True to avoid I/O - conflicts. - - There's currently no support for > 1 but < all processes loading the - checkpoint on each node. - """ - local_path = PathManager.get_local_path(path) - # The locally cached file returned by get_local_path() may be stale for - # remote files that are periodically updated/overwritten (ex: - # checkpoint_last.pt) - so we remove the local copy, sync across processes - # (if needed), and then download a fresh copy. - if local_path != path and PathManager.path_requires_pathmanager(path): - try: - os.remove(local_path) - except FileNotFoundError: - # With potentially multiple processes removing the same file, the - # file being missing is benign (missing_ok isn't available until - # Python 3.8). - pass - if load_on_all_ranks: - torch.distributed.barrier() - local_path = PathManager.get_local_path(path) - - with open(local_path, "rb") as f: - state = torch.load(f, map_location=torch.device("cpu")) - - if "args" in state and state["args"] is not None and arg_overrides is not None: - args = state["args"] - for arg_name, arg_val in arg_overrides.items(): - setattr(args, arg_name, arg_val) - - if "cfg" in state and state["cfg"] is not None: - - # hack to be able to set Namespace in dict config. 
this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - old_primitive = _utils.is_primitive_type - _utils.is_primitive_type = lambda _: True - - state["cfg"] = OmegaConf.create(state["cfg"]) - - _utils.is_primitive_type = old_primitive - OmegaConf.set_struct(state["cfg"], True) - - if arg_overrides is not None: - overwrite_args_by_name(state["cfg"], arg_overrides) - - state = _upgrade_state_dict(state) - return state - - -def load_model_ensemble( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - """Loads an ensemble of models. - - Args: - filenames (List[str]): checkpoint files to load - arg_overrides (Dict[str,Any], optional): override model args that - were used during model training - task (fairseq.tasks.FairseqTask, optional): task to use for loading - """ - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble, args, _task = load_model_ensemble_and_task( - filenames, - arg_overrides, - task, - strict, - suffix, - num_shards, - state, - ) - return ensemble, args - - -def get_maybe_sharded_checkpoint_filename( - filename: str, suffix: str, shard_idx: int, num_shards: int -) -> str: - orig_filename = filename - filename = filename.replace(".pt", suffix + ".pt") - fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt" - model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt" - if PathManager.exists(fsdp_filename): - return fsdp_filename - elif num_shards > 1: - return model_parallel_filename - else: - return filename - - -def load_model_ensemble_and_task( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - assert state is None or len(filenames) == 1 - - from fairseq import tasks - - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble = [] - cfg = None - for filename in filenames: - orig_filename = filename - model_shard_state = {"shard_weights": [], "shard_metadata": []} - assert num_shards > 0 - st = time.time() - for shard_idx in range(num_shards): - filename = get_maybe_sharded_checkpoint_filename( - orig_filename, suffix, shard_idx, num_shards - ) - - if not PathManager.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - if state is None: - state = load_checkpoint_to_cpu(filename, arg_overrides) - if "args" in state and state["args"] is not None: - cfg = convert_namespace_to_omegaconf(state["args"]) - elif "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - else: - raise RuntimeError( - f"Neither args nor cfg exist in state keys = {state.keys()}" - ) - - if task is None: - task = tasks.setup_task(cfg.task) - - if "task_state" in state: - task.load_state_dict(state["task_state"]) - - if "fsdp_metadata" in state and num_shards > 1: - model_shard_state["shard_weights"].append(state["model"]) - model_shard_state["shard_metadata"].append(state["fsdp_metadata"]) - # check FSDP import before the code goes too far - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - if shard_idx == num_shards - 1: - consolidated_model_state = FSDP.consolidate_shard_weights( - shard_weights=model_shard_state["shard_weights"], - shard_metadata=model_shard_state["shard_metadata"], - ) - model = task.build_model(cfg.model) - model.load_state_dict( - consolidated_model_state, strict=strict, model_cfg=cfg.model - ) - else: - # model parallel checkpoint or unsharded checkpoint - model = task.build_model(cfg.model) - model.load_state_dict( - state["model"], strict=strict, model_cfg=cfg.model - ) - - # reset state so it gets loaded for the next model in ensemble - state = None - if shard_idx % 10 == 0 and shard_idx > 0: - elapsed = time.time() - st - logger.info( - f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard" - ) - - # build model for ensemble - ensemble.append(model) - return ensemble, cfg, task - - -def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False): - """Retrieves all checkpoints found in `path` directory. - - Checkpoints are identified by matching filename to the specified pattern. If - the pattern contains groups, the result will be sorted by the first group in - descending order. - """ - pt_regexp = re.compile(pattern) - files = PathManager.ls(path) - - entries = [] - for i, f in enumerate(files): - m = pt_regexp.fullmatch(f) - if m is not None: - idx = float(m.group(1)) if len(m.groups()) > 0 else i - entries.append((idx, m.group(0))) - if keep_match: - return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)] - else: - return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)] - - -def torch_persistent_save(obj, filename, async_write: bool = False): - if async_write: - with PathManager.opena(filename, "wb") as f: - _torch_persistent_save(obj, f) - else: - if PathManager.supports_rename(filename): - # do atomic save - with PathManager.open(filename + ".tmp", "wb") as f: - _torch_persistent_save(obj, f) - PathManager.rename(filename + ".tmp", filename) - else: - # fallback to non-atomic save - with PathManager.open(filename, "wb") as f: - _torch_persistent_save(obj, f) - - -def _torch_persistent_save(obj, f): - if isinstance(f, str): - with PathManager.open(f, "wb") as h: - torch_persistent_save(obj, h) - return - for i in range(3): - try: - return torch.save(obj, f) - except Exception: - if i == 2: - logger.error(traceback.format_exc()) - raise - - -def _upgrade_state_dict(state): - """Helper for upgrading old model checkpoints.""" - - # add optimizer_history - if "optimizer_history" not in state: - state["optimizer_history"] = [ - {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]} - ] - state["last_optimizer_state"] = state["optimizer"] - del state["optimizer"] - del state["best_loss"] - # move extra_state into sub-dictionary - if "epoch" in state and "extra_state" not in state: - state["extra_state"] = { - "epoch": state["epoch"], - "batch_offset": state["batch_offset"], - "val_loss": state["val_loss"], - } - del state["epoch"] - del state["batch_offset"] - del state["val_loss"] - # reduce optimizer history's memory usage (only keep the last state) - if "optimizer" in state["optimizer_history"][-1]: - state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"] - for optim_hist in state["optimizer_history"]: - del optim_hist["optimizer"] - # record the optimizer class name - if "optimizer_name" not in state["optimizer_history"][-1]: - 
state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG" - # move best_loss into lr_scheduler_state - if "lr_scheduler_state" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["lr_scheduler_state"] = { - "best": state["optimizer_history"][-1]["best_loss"] - } - del state["optimizer_history"][-1]["best_loss"] - # keep track of number of updates - if "num_updates" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["num_updates"] = 0 - # old model checkpoints may not have separate source/target positions - if ( - "args" in state - and hasattr(state["args"], "max_positions") - and not hasattr(state["args"], "max_source_positions") - ): - state["args"].max_source_positions = state["args"].max_positions - state["args"].max_target_positions = state["args"].max_positions - # use stateful training data iterator - if "train_iterator" not in state["extra_state"]: - state["extra_state"]["train_iterator"] = { - "epoch": state["extra_state"]["epoch"], - "iterations_in_epoch": state["extra_state"].get("batch_offset", 0), - } - - # backward compatibility, cfg updates - if "args" in state and state["args"] is not None: - # default to translation task - if not hasattr(state["args"], "task"): - state["args"].task = "translation" - # --raw-text and --lazy-load are deprecated - if getattr(state["args"], "raw_text", False): - state["args"].dataset_impl = "raw" - elif getattr(state["args"], "lazy_load", False): - state["args"].dataset_impl = "lazy" - # epochs start at 1 - if state["extra_state"]["train_iterator"] is not None: - state["extra_state"]["train_iterator"]["epoch"] = max( - state["extra_state"]["train_iterator"].get("epoch", 1), 1 - ) - # --remove-bpe ==> --postprocess - if hasattr(state["args"], "remove_bpe"): - state["args"].post_process = state["args"].remove_bpe - # --min-lr ==> --stop-min-lr - if hasattr(state["args"], "min_lr"): - state["args"].stop_min_lr = state["args"].min_lr - del state["args"].min_lr - # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion - if ( - hasattr(state["args"], "criterion") - and state["args"].criterion in [ - "binary_cross_entropy", - "kd_binary_cross_entropy", - ] - ): - state["args"].criterion = "wav2vec" - # remove log_keys if it's None (criteria will supply a default value of []) - if hasattr(state["args"], "log_keys") and state["args"].log_keys is None: - delattr(state["args"], "log_keys") - # speech_pretraining => audio pretraining - if ( - hasattr(state["args"], "task") - and state["args"].task == "speech_pretraining" - ): - state["args"].task = "audio_pretraining" - # audio_cpc => wav2vec - if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc": - state["args"].arch = "wav2vec" - # convert legacy float learning rate to List[float] - if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float): - state["args"].lr = [state["args"].lr] - # convert task data arg to a string instead of List[string] - if ( - hasattr(state["args"], "data") - and isinstance(state["args"].data, list) - and len(state["args"].data) > 0 - ): - state["args"].data = state["args"].data[0] - # remove keys in state["args"] related to teacher-student learning - for key in [ - "static_teachers", - "static_teacher_weights", - "dynamic_teachers", - "dynamic_teacher_weights", - ]: - if key in state["args"]: - delattr(state["args"], key) - - state["cfg"] = convert_namespace_to_omegaconf(state["args"]) - - if "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - with open_dict(cfg): - # any 
upgrades for Hydra-based configs - if ( - "task" in cfg - and "eval_wer_config" in cfg.task - and isinstance(cfg.task.eval_wer_config.print_alignment, bool) - ): - cfg.task.eval_wer_config.print_alignment = "hard" - if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool): - cfg.generation.print_alignment = "hard" if cfg.generation.print_alignment else None - if ( - "model" in cfg - and "w2v_args" in cfg.model - and cfg.model.w2v_args is not None - and ( - hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args - ) - and hasattr(cfg.model.w2v_args.task, "eval_wer_config") - and cfg.model.w2v_args.task.eval_wer_config is not None - and isinstance( - cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool - ) - ): - cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard" - - return state - - -def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]): - """Prune the given state_dict if desired for LayerDrop - (https://arxiv.org/abs/1909.11556). - - Training with LayerDrop allows models to be robust to pruning at inference - time. This function prunes state_dict to allow smaller models to be loaded - from a larger model and re-maps the existing state_dict for this to occur. - - It's called by functions that load models from checkpoints and does not - need to be called directly. - """ - arch = None - if model_cfg is not None: - arch = ( - model_cfg._name - if isinstance(model_cfg, DictConfig) - else getattr(model_cfg, "arch", None) - ) - - if not model_cfg or arch is None or arch == "ptt_transformer": - # args should not be none, but don't crash if it is. - return state_dict - - encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None) - decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None) - - if not encoder_layers_to_keep and not decoder_layers_to_keep: - return state_dict - - # apply pruning - logger.info( - "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop" - ) - - def create_pruning_pass(layers_to_keep, layer_name): - keep_layers = sorted( - int(layer_string) for layer_string in layers_to_keep.split(",") - ) - mapping_dict = {} - for i in range(len(keep_layers)): - mapping_dict[str(keep_layers[i])] = str(i) - - regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name)) - return {"substitution_regex": regex, "mapping_dict": mapping_dict} - - pruning_passes = [] - if encoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder")) - if decoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder")) - - new_state_dict = {} - for layer_name in state_dict.keys(): - match = re.search(r"\.layers\.(\d+)\.", layer_name) - # if layer has no number in it, it is a supporting layer, such as an - # embedding - if not match: - new_state_dict[layer_name] = state_dict[layer_name] - continue - - # otherwise, layer should be pruned. 
- original_layer_number = match.group(1) - # figure out which mapping dict to replace from - for pruning_pass in pruning_passes: - if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[ - "substitution_regex" - ].search(layer_name): - new_layer_number = pruning_pass["mapping_dict"][original_layer_number] - substitution_match = pruning_pass["substitution_regex"].search( - layer_name - ) - new_state_key = ( - layer_name[: substitution_match.start(1)] - + new_layer_number - + layer_name[substitution_match.end(1) :] - ) - new_state_dict[new_state_key] = state_dict[layer_name] - - # Since layers are now pruned, *_layers_to_keep are no longer needed. - # This is more of "It would make it work fix" rather than a proper fix. - if isinstance(model_cfg, DictConfig): - context = open_dict(model_cfg) - else: - context = contextlib.ExitStack() - with context: - if hasattr(model_cfg, "encoder_layers_to_keep"): - model_cfg.encoder_layers_to_keep = None - if hasattr(model_cfg, "decoder_layers_to_keep"): - model_cfg.decoder_layers_to_keep = None - - return new_state_dict - - -def load_pretrained_component_from_model( - component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str -): - """ - Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the - provided `component` object. If state_dict fails to load, there may be a - mismatch in the architecture of the corresponding `component` found in the - `checkpoint` file. - """ - if not PathManager.exists(checkpoint): - raise IOError("Model file not found: {}".format(checkpoint)) - state = load_checkpoint_to_cpu(checkpoint) - if isinstance(component, FairseqEncoder): - component_type = "encoder" - elif isinstance(component, FairseqDecoder): - component_type = "decoder" - else: - raise ValueError( - "component to load must be either a FairseqEncoder or " - "FairseqDecoder. Loading other component types are not supported." - ) - component_state_dict = OrderedDict() - for key in state["model"].keys(): - if key.startswith(component_type): - # encoder.input_layers.0.0.weight --> input_layers.0.0.weight - component_subkey = key[len(component_type) + 1 :] - component_state_dict[component_subkey] = state["model"][key] - component.load_state_dict(component_state_dict, strict=True) - return component - - -def verify_checkpoint_directory(save_dir: str) -> None: - if not os.path.exists(save_dir): - os.makedirs(save_dir, exist_ok=True) - temp_file_path = os.path.join(save_dir, "dummy") - try: - with open(temp_file_path, "w"): - pass - except OSError as e: - logger.warning( - "Unable to access checkpoint save directory: {}".format(save_dir) - ) - raise e - else: - os.remove(temp_file_path) - - -def load_ema_from_checkpoint(fpath): - """Loads exponential moving averaged (EMA) checkpoint from input and - returns a model with ema weights. - - Args: - fpath: A string path of checkpoint to load from. - - Returns: - A dict of string keys mapping to various values. The 'model' key - from the returned dict should correspond to an OrderedDict mapping - string parameter names to torch Tensors. 
- """ - params_dict = collections.OrderedDict() - new_state = None - - with PathManager.open(fpath, 'rb') as f: - new_state = torch.load( - f, - map_location=( - lambda s, _: torch.serialization.default_restore_location(s, 'cpu') - ), - ) - - # EMA model is stored in a separate "extra state" - model_params = new_state['extra_state']['ema'] - - for key in list(model_params.keys()): - p = model_params[key] - if isinstance(p, torch.HalfTensor): - p = p.float() - if key not in params_dict: - params_dict[key] = p.clone() - # NOTE: clone() is needed in case of p is a shared parameter - else: - raise ValueError("Key {} is repeated in EMA model params.".format(key)) - - if len(params_dict) == 0: - raise ValueError( - f"Input checkpoint path '{fpath}' does not contain " - "ema model weights, is this model trained with EMA?" - ) - - new_state['model'] = params_dict - return new_state diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/speech_to_text/xm_transformer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/speech_to_text/xm_transformer.py deleted file mode 100644 index 5eecbfa2158dcbee90eef6d395bb5611ff8ee8de..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/speech_to_text/xm_transformer.py +++ /dev/null @@ -1,505 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import copy -from typing import Dict, List, Optional, Tuple - -from fairseq import utils, checkpoint_utils -from fairseq.models import (FairseqEncoderDecoderModel, FairseqEncoder, - register_model, register_model_architecture) -from fairseq.models.transformer import Embedding, TransformerDecoder -from fairseq.models.wav2vec import Wav2VecEncoder -from fairseq.modules.layer_norm import LayerNorm -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.utils import safe_hasattr -from torch import Tensor -import torch.nn as nn - - -logger = logging.getLogger(__name__) - - -class Conv1dAdaptor(nn.Module): - def __init__(self, in_dim, out_dim, n_layers=3, kernel_size=3, stride=2, - add_layernorm=False): - super().__init__() - self.layers = nn.ModuleList( - nn.Conv1d(in_dim if i == 0 else out_dim, out_dim * 2, kernel_size, - stride=stride, padding=kernel_size // 2) - for i in range(n_layers) - ) - self.layernorms = None - if add_layernorm: - self.layernorms = nn.ModuleList(LayerNorm(out_dim) - for _ in range(n_layers)) - self.stride = stride - - @classmethod - def add_args(cls, parser): - parser.add_argument("--adaptor-n-layers", type=int) - parser.add_argument("--adaptor-kernel-size", type=int) - parser.add_argument("--adaptor-stride", type=int) - parser.add_argument("--adaptor-layernorm", action='store_true') - - def get_out_seq_lens_tensor(self, in_seq_lens_tensor): - out = in_seq_lens_tensor.clone() - for _ in self.layers: - out = ((out.float() - 1) / self.stride + 1).floor().long() - return out - - def forward(self, x, padding_mask): - # T x B x C -> B x C x T - x = x.transpose(0, 1).transpose(1, 2) - for i, layer in enumerate(self.layers): - x = nn.functional.glu(layer(x), dim=1) - if self.layernorms is not None: - x = self.layernorms[i](x.transpose(1, 2)).transpose(1, 2) - # B x C x T -> T x B x C - x = x.transpose(1, 2).transpose(0, 1) - - if padding_mask is None: - out_padding_mask = None - else: - out_lengths = self.get_out_seq_lens_tensor((~padding_mask).sum(1)) - out_padding_mask 
= lengths_to_padding_mask(out_lengths) - return x, out_padding_mask - - -def add_wav2vec_asr_args(parser): - parser.add_argument("--w2v-path", help="path to wav2vec 2.0 model") - parser.add_argument( - "--no-pretrained-weights", - action="store_true", - help="if true, does not load pretrained weights", - ) - parser.add_argument( - "--dropout-input", - type=float, - metavar="D", - help="dropout to apply to the input (after feat extr)", - ) - parser.add_argument( - "--final-dropout", - type=float, - metavar="D", - help="dropout after transformer and before final projection", - ) - parser.add_argument( - "--apply-mask", action="store_true", help="apply masking during fine-tuning" - ) - parser.add_argument( - "--dropout", - type=float, - metavar="D", - help="dropout probability inside wav2vec 2.0 model", - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights inside wav2vec 2.0 model", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN inside wav2vec 2.0 model", - ) - - parser.add_argument( - "--mask-length", type=int, help="repeat the mask indices multiple times" - ) - - parser.add_argument( - "--mask-prob", type=float, help="probability of replacing a token with mask" - ) - - parser.add_argument( - "--mask-selection", - type=str, - choices=["static", "uniform", "normal", "poisson"], - help="how to choose masks", - ) - - parser.add_argument( - "--mask-other", - type=float, - help="stdev of the mask length in case of 'normal' selection strategy", - ) - - parser.add_argument( - "--no-mask-overlap", - action="store_true", - help="whether to allow masks to overlap", - ) - - parser.add_argument( - "--mask-channel-length", type=int, help="repeat the mask indices multiple times" - ) - - parser.add_argument( - "--mask-channel-prob", - type=float, - help="probability of replacing a token with mask", - ) - - parser.add_argument( - "--mask-channel-selection", - type=str, - choices=["static", "uniform", "normal", "poisson"], - help="how to choose masks", - ) - - parser.add_argument( - "--mask-channel-other", - type=float, - help="stdev of the mask length in case of 'normal' selection strategy", - ) - - parser.add_argument( - "--no-mask-channel-overlap", - action="store_true", - help="whether to allow masks to overlap", - ) - - parser.add_argument( - "--freeze-finetune-updates", - default=0, - type=int, - help="dont finetune wav2vec for this many updates", - ) - - parser.add_argument( - "--feature-grad-mult", - default=None, - type=float, - help="reset feature grad mult in wav2vec 2.0 to this", - ) - - parser.add_argument( - "--layerdrop", - default=0.0, - type=float, - help="probability of dropping a layer in wav2vec 2.0", - ) - parser.add_argument("--w2v-args", default=None) - - -class Wav2VecEncoderWithAdaptor(FairseqEncoder): - def __init__(self, args): - super().__init__(None) - self.w2v_encoder = Wav2VecEncoder(args) - encoder_out_dim = self.w2v_encoder.w2v_model.encoder.embedding_dim - # Projection + 8x shrinking - self.adaptor = Conv1dAdaptor( - encoder_out_dim, args.decoder_embed_dim, - n_layers=args.adaptor_n_layers, - kernel_size=args.adaptor_kernel_size, stride=args.adaptor_stride, - add_layernorm=args.adaptor_layernorm - ) - for k, p in self.w2v_encoder.w2v_model.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr(args, 'finetune_w2v_params') and XMTransformerModel.finetune_params( - 
args.finetune_w2v_params, k): - p.requires_grad = True - else: - p.requires_grad = False - - @classmethod - def add_args(cls, parser): - add_wav2vec_asr_args(parser) - parser.add_argument( - "--normalize", action="store_true", - help="if set, normalizes input to have 0 mean and unit variance", - ) - parser.add_argument("--finetune-w2v-params", type=str, metavar="STR", - help="comma-separated param strings to finetune.") - Conv1dAdaptor.add_args(parser) - - def forward(self, src_tokens, src_lengths=None, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["encoder_padding_mask"] is not None: - enc_padding_mask = out["encoder_padding_mask"].transpose(0, 1) # T X B --> B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] if enc_padding_mask.any() else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] if len(encoder_out["encoder_padding_mask"]) == 0 - else [x.index_select(0, new_order) for x in - encoder_out["encoder_padding_mask"]] - ) - - new_encoder_embedding = ( - [] if len(encoder_out["encoder_embedding"]) == 0 - else [x.index_select(0, new_order) for x in - encoder_out["encoder_embedding"]] - ) - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B x T - "src_lengths": [], # B x 1 - } - - -def add_decoder_args(parser): - parser.add_argument("--activation-fn", type=str, default='relu', - choices=utils.get_available_activation_fns(), - help="activation function to use") - parser.add_argument("--decoder-dropout", type=float, metavar="D", - help="dropout probability") - parser.add_argument("--decoder-attention-dropout", type=float, - metavar="D", - help="dropout probability for attention weights") - parser.add_argument("--decoder-activation-dropout", type=float, - metavar="D", - help="dropout probability after activation in FFN.") - parser.add_argument("--decoder-embed-dim", type=int, metavar="N", - help="decoder embedding dimension") - parser.add_argument("--decoder-ffn-embed-dim", type=int, metavar="N", - help="decoder embedding dimension for FFN") - parser.add_argument("--decoder-layers", type=int, metavar="N", - help="num decoder layers") - parser.add_argument("--decoder-attention-heads", type=int, metavar="N", - help="num decoder attention heads") - parser.add_argument("--decoder-normalize-before", action="store_true", - help="apply layernorm before each decoder block") - parser.add_argument("--layernorm-embedding", action="store_true", - help="add layernorm to embedding") - parser.add_argument("--no-scale-embedding", action="store_true", - help="if True, dont scale embeddings") - parser.add_argument( - "--load-pretrained-decoder-from", 
type=str, metavar="STR", - help="model to take decoder weights from (for initialization)" - ) - parser.add_argument("--finetune-decoder-params", type=str, - metavar="STR", - help="comma-separated param strings to finetune.") - parser.add_argument("--checkpoint-activations", action="store_true") - - -@register_model("xm_transformer") -class XMTransformerModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - Wav2VecEncoderWithAdaptor.add_args(parser) - add_decoder_args(parser) - - @classmethod - def build_encoder(cls, args): - _args = copy.deepcopy(args) - state = checkpoint_utils.load_checkpoint_to_cpu(args.w2v_path) - if state.get("cfg") is not None: - encoder_embed_dim = state["cfg"]._content["model"]["encoder_embed_dim"] - elif state.get("args") is not None: - encoder_embed_dim = state["args"].encoder_embed_dim - else: - raise ValueError(f"Invalid config in {args.w2v_path}") - _args.decoder_embed_dim = encoder_embed_dim - encoder = Wav2VecEncoderWithAdaptor(_args) - return encoder - - @classmethod - def build_decoder(cls, args, task, embed_tokens): - _args = copy.deepcopy(args) - _args.dropout = args.decoder_dropout - _args.attention_dropout = args.decoder_attention_dropout - _args.activation_dropout = args.decoder_activation_dropout - _args.max_target_positions = 1024 - - decoder = TransformerDecoder(_args, task.target_dictionary, - embed_tokens) - if getattr(args, "load_pretrained_decoder_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_decoder_from - ) - for k, p in decoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr(args, 'finetune_decoder_params') and XMTransformerModel.finetune_params( - args.finetune_decoder_params, k): - p.requires_grad = True - else: - p.requires_grad = False - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - def build_embedding(dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - return Embedding(num_embeddings, embed_dim, padding_idx) - - decoder_embed_tokens = build_embedding(task.target_dictionary, - args.decoder_embed_dim) - encoder = cls.build_encoder(args) - decoder = cls.build_decoder(args, task, decoder_embed_tokens) - return cls(encoder, decoder) - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = self.get_normalized_probs_scriptable(net_output, log_probs, - sample) - lprobs.batch_first = True - return lprobs - - def forward(self, src_tokens, src_lengths, prev_output_tokens, **kwargs): - """ - The forward method inherited from the base class has a **kwargs - argument in its input, which is not supported in torchscript. This - method overrites the forward method definition without **kwargs. 
- """ - encoder_out = self.encoder(src_tokens=src_tokens, - src_lengths=src_lengths, **kwargs) - decoder_out = self.decoder(prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out) - return decoder_out - - def upgrade_state_dict(self, state_dict): - for k, _ in state_dict.items(): - if 'adaptor.layers' in state_dict: - print(k) - new = k.replace('adaptor.layers', 'adaptor_layers') - state_dict[new] = state_dict[k] - del state_dict[k] - - @staticmethod - def finetune_params(finetune_params, param_name): - if finetune_params == "all": - return True - finetune_params_list = finetune_params.split(",") - for finetune_param in finetune_params_list: - if finetune_param in param_name: - return True - return False - - -def set_default_w2v_encoder_args(args): - args.no_pretrained_weights = getattr(args, "no_pretrained_weights", False) - args.dropout_input = getattr(args, "dropout_input", 0) - args.final_dropout = getattr(args, "final_dropout", 0) - args.apply_mask = getattr(args, "apply_mask", False) - args.dropout = getattr(args, "dropout", 0) - args.attention_dropout = getattr(args, "attention_dropout", 0) - args.activation_dropout = getattr(args, "activation_dropout", 0) - - args.mask_length = getattr(args, "mask_length", 10) - args.mask_prob = getattr(args, "mask_prob", 0.5) - args.mask_selection = getattr(args, "mask_selection", "static") - args.mask_other = getattr(args, "mask_other", 0) - args.no_mask_overlap = getattr(args, "no_mask_overlap", False) - args.mask_channel_length = getattr(args, "mask_channel_length", 10) - args.mask_channel_prob = getattr(args, "mask_channel_prob", 0.5) - args.mask_channel_before = getattr(args, "mask_channel_before", False) - args.mask_channel_selection = getattr(args, "mask_channel_selection", - "static") - args.mask_channel_other = getattr(args, "mask_channel_other", 0) - args.no_mask_channel_overlap = getattr(args, "no_mask_channel_overlap", - False) - - args.freeze_finetune_updates = getattr(args, "freeze_finetune_updates", 0) - args.feature_grad_mult = 0.1 - args.layerdrop = getattr(args, "layerdrop", 0.0) - - args.normalize = getattr(args, "normalize", False) - - -def set_default_adaptor_args(args): - args.adaptor_n_layers = getattr(args, "adaptor_n_layers", 3) - args.adaptor_kernel_size = getattr(args, "adaptor_kernel_size", 3) - args.adaptor_stride = getattr(args, "adaptor_stride", 2) - args.adaptor_layernorm = getattr(args, "adaptor_layernorm", False) - - -def set_default_mbart_decoder_args(args): - args.decoder_embed_path = getattr(args, 'decoder_embed_path', None) - args.decoder_embed_dim = getattr(args, 'decoder_embed_dim', 1024) - args.decoder_ffn_embed_dim = getattr(args, 'decoder_ffn_embed_dim', - 4 * 1024) - args.decoder_layers = getattr(args, 'decoder_layers', 12) - args.decoder_attention_heads = getattr(args, 'decoder_attention_heads', 16) - args.decoder_normalize_before = getattr(args, 'decoder_normalize_before', - True) - args.decoder_learned_pos = getattr(args, 'decoder_learned_pos', True) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_attention_dropout = getattr(args, 'decoder_attention_dropout', - 0.) - args.decoder_activation_dropout = getattr(args, - 'decoder_activation_dropout', 0.) 
- args.decoder_dropout = getattr(args, 'decoder_dropout', 0.1) - args.adaptive_softmax_cutoff = getattr(args, 'adaptive_softmax_cutoff', - None) - args.adaptive_softmax_dropout = getattr(args, 'adaptive_softmax_dropout', 0) - args.share_decoder_input_output_embed = getattr( - args, 'share_decoder_input_output_embed', True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr(args, 'decoder_output_dim', - args.decoder_embed_dim) - args.decoder_input_dim = getattr(args, 'decoder_input_dim', - args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, 'no_scale_embedding', False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.layernorm_embedding = getattr(args, 'layernorm_embedding', True) - - args.activation_fn = getattr(args, 'activation_fn', 'gelu') - args.pooler_activation_fn = getattr(args, 'pooler_activation_fn', 'tanh') - args.pooler_dropout = getattr(args, 'pooler_dropout', 0.0) - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - - -@register_model_architecture(model_name="xm_transformer", - arch_name="xm_transformer") -def base_architecture(args): - set_default_w2v_encoder_args(args) - set_default_adaptor_args(args) - set_default_mbart_decoder_args(args) diff --git a/spaces/OIUGLK/bingo/src/components/user-menu.tsx b/spaces/OIUGLK/bingo/src/components/user-menu.tsx deleted file mode 100644 index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/user-menu.tsx +++ /dev/null @@ -1,113 +0,0 @@ -'use client' - -import { useEffect, useState } from 'react' -import Image from 'next/image' -import { toast } from 'react-hot-toast' -import { Button } from '@/components/ui/button' -import pkg from '../../package.json' -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuTrigger -} from '@/components/ui/dropdown-menu' -import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons' -import SettingIcon from '@/assets/images/settings.svg' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function UserMenu() { - const [host, setHost] = useState('') - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - useEffect(() => { - setHost(location.host) - }, []) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - return ( -
      - - - - - - - location.href='#dialog="settings"' - } - className="cursor-pointer" - > - 设置用户 - - - - location.href='#dialog="voice"' - } - className="cursor-pointer" - > - 语音设置 - - - - - 开源地址 - - - - - - - - 托管地址 - 🤗 - - - - - - - 复制站点 - - - - - -
      版本信息 {pkg.version}
      -
      - - -
      站点域名
      -
      copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer"> - {host} -
      -
      -
      -
      -
      - ) -} diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_t2m_cb.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_t2m_cb.py deleted file mode 100644 index 6f7e1feb2d4f4a824ff8af5128d8cf78b56f92b8..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_t2m_cb.py +++ /dev/null @@ -1,211 +0,0 @@ -import rich -import random -import pickle -import os -import numpy as np -import codecs as cs -from torch.utils import data -from os.path import join as pjoin -from rich.progress import track -import json -import spacy - -class Text2MotionDatasetCB(data.Dataset): - def __init__( - self, - data_root, - split, - mean, - std, - max_motion_length=196, - min_motion_length=20, - unit_length=4, - fps=20, - tmpFile=True, - tiny=False, - debug=False, - stage='lm_pretrain', - code_path='VQVAE', - task_path=None, - std_text=False, - **kwargs, - ): - self.tiny = tiny - self.unit_length = unit_length - - # Data mean and std - self.mean = mean - self.std = std - - # Data path - split = 'train' - split_file = pjoin(data_root, split + '.txt') - motion_dir = pjoin(data_root, code_path) - text_dir = pjoin(data_root, 'texts') - - if task_path: - instructions = task_path - elif stage == 'lm_pretrain': - instructions = pjoin(data_root, 'template_pretrain.json') - elif stage in ['lm_instruct', "lm_rl"]: - instructions = pjoin(data_root, 'template_instructions.json') - else: - raise NotImplementedError(f"stage {stage} not implemented") - - # Data id list - self.id_list = [] - with cs.open(split_file, "r") as f: - for line in f.readlines(): - self.id_list.append(line.strip()) - - # Debug mode - if tiny or debug: - enumerator = enumerate(self.id_list) - maxdata = 100 - subset = '_tiny' - else: - enumerator = enumerate( - track( - self.id_list, - f"Loading HumanML3D {split}", - )) - maxdata = 1e10 - subset = '' - - new_name_list = [] - data_dict = {} - - # Fast loading - for i, name in enumerator: - if len(new_name_list) > maxdata: - break - try: - # Load motion tokens - m_token_list = np.load(pjoin(motion_dir, f'{name}.npy')) - # Read text - with cs.open(pjoin(text_dir, name + '.txt')) as f: - text_data = [] - flag = False - lines = f.readlines() - - for line in lines: - try: - text_dict = {} - line_split = line.strip().split('#') - caption = line_split[0] - t_tokens = line_split[1].split(' ') - f_tag = float(line_split[2]) - to_tag = float(line_split[3]) - f_tag = 0.0 if np.isnan(f_tag) else f_tag - to_tag = 0.0 if np.isnan(to_tag) else to_tag - - text_dict['caption'] = caption - text_dict['tokens'] = t_tokens - if f_tag == 0.0 and to_tag == 0.0: - flag = True - text_data.append(text_dict) - else: - m_token_list_new = [ - tokens[int(f_tag * fps / unit_length - ):int(to_tag * fps / - unit_length)] - for tokens in m_token_list - if int(f_tag * fps / unit_length) < - int(to_tag * fps / unit_length) - ] - - if len(m_token_list_new) == 0: - continue - new_name = '%s_%f_%f' % (name, f_tag, - to_tag) - - data_dict[new_name] = { - 'm_token_list': m_token_list_new, - 'text': [text_dict] - } - new_name_list.append(new_name) - except: - pass - - if flag: - data_dict[name] = { - 'm_token_list': m_token_list, - 'text': text_data - } - new_name_list.append(name) - except: - pass - - if tmpFile: - os.makedirs(pjoin(data_root, 'tmp'), exist_ok=True) - with open( - pjoin(data_root, - f'tmp/{split}{subset}_tokens_data.pkl'), - 'wb') as file: - pickle.dump(data_dict, file) - with open( - pjoin(data_root, - f'tmp/{split}{subset}_tokens_index.pkl'), - 'wb') 
as file: - pickle.dump(new_name_list, file) - - self.data_dict = data_dict - self.name_list = new_name_list - self.nlp = spacy.load('en_core_web_sm') - self.std_text = std_text - self.instructions = json.load(open(instructions, 'r')) - self.tasks = [] - for task in self.instructions.keys(): - for subtask in self.instructions[task].keys(): - self.tasks.append(self.instructions[task][subtask]) - - def __len__(self): - return len(self.name_list) * len(self.tasks) - - def __getitem__(self, item): - data_idx = item % len(self.name_list) - task_idx = item // len(self.name_list) - - data = self.data_dict[self.name_list[data_idx]] - m_token_list, text_list = data['m_token_list'], data['text'] - - m_tokens = random.choice(m_token_list) - text_data = random.choice(text_list) - caption = text_data['caption'] - if self.std_text: - doc = self.nlp(caption) - word_list = [] - pos_list = [] - for token in doc: - word = token.text - if not word.isalpha(): - continue - if (token.pos_ == 'NOUN' - or token.pos_ == 'VERB') and (word != 'left'): - word_list.append(token.lemma_) - else: - word_list.append(word) - pos_list.append(token.pos_) - - caption = ' '.join(word_list) - - all_captions = [ - ' '.join([token.split('/')[0] for token in text_dic['tokens']]) - for text_dic in text_list - ] - - coin = np.random.choice([False, False, True]) - - if coin: - # drop one token at the head or tail - coin2 = np.random.choice([True, False]) - if coin2: - m_tokens = m_tokens[:-1] - else: - m_tokens = m_tokens[1:] - - m_tokens_len = m_tokens.shape[0] - - tasks = self.tasks[task_idx] - - return caption, m_tokens, m_tokens_len, None, None, None, None, all_captions, tasks diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/data.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/data.py deleted file mode 100644 index 17c6a40dda1721a4f38f176251e913dd04095499..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/blender/data.py +++ /dev/null @@ -1,3 +0,0 @@ -class Data: - def __len__(self): - return self.N diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/__init__.py deleted file mode 100644 index 0f33124ed23fc6f27119a37bcb5ab004d3572be0..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .activation import build_activation_layer -from .context_block import ContextBlock -from .conv import build_conv_layer -from .conv2d_adaptive_padding import Conv2dAdaptivePadding -from .conv_module import ConvModule -from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d -from .depthwise_separable_conv_module import DepthwiseSeparableConvModule -from .drop import Dropout, DropPath -from .generalized_attention import GeneralizedAttention -from .hsigmoid import HSigmoid -from .hswish import HSwish -from .non_local import NonLocal1d, NonLocal2d, NonLocal3d -from .norm import build_norm_layer, is_norm -from .padding import build_padding_layer -from .plugin import build_plugin_layer -from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS) -from .scale import Scale -from .swish import Swish -from .upsample import build_upsample_layer -from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d, - Linear, MaxPool2d, MaxPool3d) - -__all__ = [ - 'ConvModule', 'build_activation_layer', 'build_conv_layer', - 'build_norm_layer', 'build_padding_layer', 'build_upsample_layer', - 'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention', - 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS', - 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d', - 'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear', - 'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d', - 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath' -] diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py deleted file mode 100644 index 2c0da3503b75441738efe38d70352b55a210a34a..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -from torch.nn import GroupNorm, LayerNorm - -from annotator.uniformer.mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from annotator.uniformer.mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. 
- - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. - - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. - - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset - layer. So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the - offset layer in deformable convs, set ``dcn_offset_lr_mult`` - to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when - the model contains multiple DCN layers in places other than - backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. - - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - '.backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). 
- """ - - def __init__(self, optimizer_cfg, paramwise_cfg=None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self): - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group, param_group_list): - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, params, module, prefix='', is_dcn_module=None): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) - dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) - - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. 
It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - if check_ops_exist(): - from annotator.uniformer.mmcv.ops import DeformConv2d, ModulatedDeformConv2d - is_dcn_module = isinstance(module, - (DeformConv2d, ModulatedDeformConv2d)) - else: - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/channel.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/channel.go deleted file mode 100644 index 5496ae7e93166ad7e46820ee77314762983fbeee..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/channel.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/flops_counter.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/flops_counter.py deleted file mode 100644 index d10af5feca7f4b8c0ba359b7b1c826f754e048be..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,599 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in 
the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -from functools import partial - -import numpy as np -import torch -import torch.nn as nn - -import annotator.uniformer.mmcv as mmcv - - -def get_model_complexity_info(model, - input_shape, - print_per_layer_stat=True, - as_strings=True, - input_constructor=None, - flush=False, - ost=sys.stdout): - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. - - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``, - ``nn.ReLU6``. - - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. - - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. - - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. 
- """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. - batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops, units='GFLOPs', precision=2): - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. - - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params, units=None, precision=2): - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. 
- - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model, - total_flops, - total_params, - units='GFLOPs', - precision=3, - ost=sys.stdout, - flush=False): - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. - - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - '{:.3%} Params'.format(accumulated_num_params / total_params), - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops), - self.original_extra_repr() - ]) - - def 
add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model): - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. - """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module): - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self): - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self): - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self): - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self): - """Reset statistics computed so far. - - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. 
- """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module, input, output): - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module, input, output): - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module, input, output): - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module, input, output): - input = input[0] - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input.shape) * output_last_dim) - - -def pool_flops_counter_hook(module, input, output): - input = input[0] - module.__flops__ += int(np.prod(input.shape)) - - -def norm_flops_counter_hook(module, input, output): - input = input[0] - - batch_flops = np.prod(input.shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - input_height, input_width = input.shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_height - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module, input, output): - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - input = input[0] - batch_size = len(input) - else: - pass - print('Warning! 
No positional inputs found for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module): - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module): - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - print('Warning: variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module): - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping(): - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/shallow_contrastive_loss_helper.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/shallow_contrastive_loss_helper.py deleted file mode 100644 index 027fb4598529c0072f670a4776f2c825968f5caf..0000000000000000000000000000000000000000 --- 
a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/shallow_contrastive_loss_helper.py +++ /dev/null @@ -1,58 +0,0 @@ -import torch -import maskrcnn_benchmark.utils.dist as dist - - -def normalized_positive_map(positive_map): - positive_map = positive_map.float() - positive_map_num_pos = positive_map.sum(2) - positive_map_num_pos[positive_map_num_pos == 0] = 1e-6 - positive_map = positive_map / positive_map_num_pos.unsqueeze(-1) - return positive_map - - -def pad_tensor_given_dim_length(tensor, dim, length, padding_value=0, batch_first=True): - new_size = list(tensor.size()[:dim]) + [length] + list(tensor.size()[dim + 1:]) - out_tensor = tensor.data.new(*new_size).fill_(padding_value) - if batch_first: - out_tensor[:, :tensor.size(1), ...] = tensor - else: - out_tensor[:tensor.size(0), ...] = tensor - return out_tensor - - -def pad_random_negative_tensor_given_length(positive_tensor, negative_padding_tensor, length=None): - assert positive_tensor.shape[0] + negative_padding_tensor.shape[0] == length - return torch.cat((positive_tensor, negative_padding_tensor), dim=0) - - -def gather_tensors(tensor): - """ - Performs all_gather operation on the provided tensors. - *** Warning ***: torch.distributed.all_gather has no gradient. - """ - if not dist.is_dist_avail_and_initialized(): - return torch.stack([tensor], dim=0) - - total = dist.get_world_size() - rank = torch.distributed.get_rank() - # gathered_normalized_img_emb = [torch.zeros_like(normalized_img_emb) for _ in range(total)] - # torch.distributed.all_gather(gathered_normalized_img_emb, normalized_img_emb) - - tensors_gather = [ - torch.zeros_like(tensor) - for _ in range(total) - ] - torch.distributed.all_gather(tensors_gather, tensor, async_op=False) - - # need to do this to restore propagation of the gradients - tensors_gather[rank] = tensor - output = torch.stack(tensors_gather, dim=0) - return output - - -def convert_to_roi_format(boxes): - concat_boxes = boxes.bbox - device, dtype = concat_boxes.device, concat_boxes.dtype - ids = torch.full((len(boxes), 1), 0, dtype=dtype, device=device) - rois = torch.cat([ids, concat_boxes], dim=1) - return rois \ No newline at end of file diff --git a/spaces/Poupeto/RVC_Ryu7ztv/vc_infer_pipeline.py b/spaces/Poupeto/RVC_Ryu7ztv/vc_infer_pipeline.py deleted file mode 100644 index c6be666c8d980fc6da24bd5e16ac9909d9204a46..0000000000000000000000000000000000000000 --- a/spaces/Poupeto/RVC_Ryu7ztv/vc_infer_pipeline.py +++ /dev/null @@ -1,431 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), 
size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = 
feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - 
self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/helper_scripts/make_bias_AA.py b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/helper_scripts/make_bias_AA.py deleted file mode 100644 index b32fb6b69dd4754d2ebef4c3baf5d81b6573d5a2..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/helper_scripts/make_bias_AA.py +++ /dev/null @@ -1,27 +0,0 @@ -import argparse - -def main(args): - - import numpy as np - import json - - bias_list = [float(item) for item in args.bias_list.split()] - AA_list = [str(item) for item in args.AA_list.split()] - - my_dict = dict(zip(AA_list, bias_list)) - - with open(args.output_path, 'w') as f: - f.write(json.dumps(my_dict) + '\n') - - -if __name__ == "__main__": - 
argparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - argparser.add_argument("--output_path", type=str, help="Path to the output dictionary") - argparser.add_argument("--AA_list", type=str, default='', help="List of AAs to be biased") - argparser.add_argument("--bias_list", type=str, default='', help="AA bias strengths") - - args = argparser.parse_args() - main(args) - -#e.g. output -#{"A": -0.01, "G": 0.02} diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py deleted file mode 100644 index 6f6e2c2c69d25dba4d1038a2d548fbf68017f91b..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py +++ /dev/null @@ -1,156 +0,0 @@ -from __future__ import annotations - -import os -import sys -from abc import ABC, abstractmethod -from pathlib import Path - -if sys.version_info >= (3, 8): # pragma: no branch - from typing import Literal # pragma: no cover - - -class PlatformDirsABC(ABC): - """ - Abstract base class for platform directories. - """ - - def __init__( - self, - appname: str | None = None, - appauthor: str | None | Literal[False] = None, - version: str | None = None, - roaming: bool = False, - multipath: bool = False, - opinion: bool = True, - ): - """ - Create a new platform directory. - - :param appname: See `appname`. - :param appauthor: See `appauthor`. - :param version: See `version`. - :param roaming: See `roaming`. - :param multipath: See `multipath`. - :param opinion: See `opinion`. - """ - self.appname = appname #: The name of application. - self.appauthor = appauthor - """ - The name of the app author or distributing body for this application. Typically, it is the owning company name. - Defaults to `appname`. You may pass ``False`` to disable it. - """ - self.version = version - """ - An optional version path element to append to the path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this would typically be ``.``. - """ - self.roaming = roaming - """ - Whether to use the roaming appdata directory on Windows. That means that for users on a Windows network setup - for roaming profiles, this user data will be synced on login (see - `here `_). - """ - self.multipath = multipath - """ - An optional parameter only applicable to Unix/Linux which indicates that the entire list of data dirs should be - returned. By default, the first item would only be returned. - """ - self.opinion = opinion #: A flag to indicating to use opinionated values. 
- - def _append_app_name_and_version(self, *base: str) -> str: - params = list(base[1:]) - if self.appname: - params.append(self.appname) - if self.version: - params.append(self.version) - return os.path.join(base[0], *params) - - @property - @abstractmethod - def user_data_dir(self) -> str: - """:return: data directory tied to the user""" - - @property - @abstractmethod - def site_data_dir(self) -> str: - """:return: data directory shared by users""" - - @property - @abstractmethod - def user_config_dir(self) -> str: - """:return: config directory tied to the user""" - - @property - @abstractmethod - def site_config_dir(self) -> str: - """:return: config directory shared by the users""" - - @property - @abstractmethod - def user_cache_dir(self) -> str: - """:return: cache directory tied to the user""" - - @property - @abstractmethod - def user_state_dir(self) -> str: - """:return: state directory tied to the user""" - - @property - @abstractmethod - def user_log_dir(self) -> str: - """:return: log directory tied to the user""" - - @property - @abstractmethod - def user_documents_dir(self) -> str: - """:return: documents directory tied to the user""" - - @property - @abstractmethod - def user_runtime_dir(self) -> str: - """:return: runtime directory tied to the user""" - - @property - def user_data_path(self) -> Path: - """:return: data path tied to the user""" - return Path(self.user_data_dir) - - @property - def site_data_path(self) -> Path: - """:return: data path shared by users""" - return Path(self.site_data_dir) - - @property - def user_config_path(self) -> Path: - """:return: config path tied to the user""" - return Path(self.user_config_dir) - - @property - def site_config_path(self) -> Path: - """:return: config path shared by the users""" - return Path(self.site_config_dir) - - @property - def user_cache_path(self) -> Path: - """:return: cache path tied to the user""" - return Path(self.user_cache_dir) - - @property - def user_state_path(self) -> Path: - """:return: state path tied to the user""" - return Path(self.user_state_dir) - - @property - def user_log_path(self) -> Path: - """:return: log path tied to the user""" - return Path(self.user_log_dir) - - @property - def user_documents_path(self) -> Path: - """:return: documents path tied to the user""" - return Path(self.user_documents_dir) - - @property - def user_runtime_path(self) -> Path: - """:return: runtime path tied to the user""" - return Path(self.user_runtime_dir) diff --git a/spaces/Ritori/TTS_Yui/env.py b/spaces/Ritori/TTS_Yui/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/outpainting/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/outpainting/__init__.py deleted file mode 100644 index 020cb2bca599171a731582646310bb97d9bb2524..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/outpainting/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Ning Yu -''' - -import numpy as np - -class Outpainter: - def __call__(self, img, height_top_extended, height_down_extended, width_left_extended, width_right_extended): - height, width, channel = img.shape - - height_top_new = int(float(height) / 100.0 * float(height_top_extended)) - height_down_new = int(float(height) / 100.0 * float(height_down_extended)) - width_left_new = int(float(width) / 100.0 * float(width_left_extended)) - width_right_new = int(float(width) / 100.0 * float(width_right_extended)) - - new_height = height + height_top_new + height_down_new - new_width = width + width_left_new + width_right_new - img_new = np.zeros([new_height, new_width, channel]) - img_new[height_top_new: (height + height_top_new), width_left_new: (width + width_left_new), : ] = img - img_new = img_new.astype('ubyte') - return img_new diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/fpg.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/fpg.py deleted file mode 100644 index c8e0d163ccf8cef6211530ba6c1b4d558ff6403f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/necks/fpg.py +++ /dev/null @@ -1,398 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init, constant_init, is_norm - -from ..builder import NECKS - - -class Transition(nn.Module): - """Base class for transition. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - - def forward(x): - pass - - -class UpInterpolationConv(Transition): - """A transition used for up-sampling. - - Up-sample the input by interpolation then refines the feature by - a convolution layer. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - scale_factor (int): Up-sampling factor. Default: 2. - mode (int): Interpolation mode. Default: nearest. - align_corners (bool): Whether align corners when interpolation. - Default: None. - kernel_size (int): Kernel size for the conv. Default: 3. - """ - - def __init__(self, - in_channels, - out_channels, - scale_factor=2, - mode='nearest', - align_corners=None, - kernel_size=3, - **kwargs): - super().__init__(in_channels, out_channels) - self.mode = mode - self.scale_factor = scale_factor - self.align_corners = align_corners - self.conv = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, x): - x = F.interpolate( - x, - scale_factor=self.scale_factor, - mode=self.mode, - align_corners=self.align_corners) - x = self.conv(x) - return x - - -class LastConv(Transition): - """A transition used for refining the output of the last stage. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_inputs (int): Number of inputs of the FPN features. - kernel_size (int): Kernel size for the conv. Default: 3. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_inputs, - kernel_size=3, - **kwargs): - super().__init__(in_channels, out_channels) - self.num_inputs = num_inputs - self.conv_out = ConvModule( - in_channels, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - **kwargs) - - def forward(self, inputs): - assert len(inputs) == self.num_inputs - return self.conv_out(inputs[-1]) - - -@NECKS.register_module() -class FPG(nn.Module): - """FPG. - - Implementation of `Feature Pyramid Grids (FPG) - `_. - This implementation only gives the basic structure stated in the paper. - But users can implement different type of transitions to fully explore the - the potential power of the structure of FPG. - - Args: - in_channels (int): Number of input channels (feature maps of all levels - should have the same channels). - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - stack_times (int): The number of times the pyramid architecture will - be stacked. - paths (list[str]): Specify the path order of each stack level. - Each element in the list should be either 'bu' (bottom-up) or - 'td' (top-down). - inter_channels (int): Number of inter channels. - same_up_trans (dict): Transition that goes down at the same stage. - same_down_trans (dict): Transition that goes up at the same stage. - across_lateral_trans (dict): Across-pathway same-stage - across_down_trans (dict): Across-pathway bottom-up connection. - across_up_trans (dict): Across-pathway top-down connection. - across_skip_trans (dict): Across-pathway skip connection. - output_trans (dict): Transition that trans the output of the - last stage. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - norm_cfg (dict): Config dict for normalization layer. Default: None. 
- """ - - transition_types = { - 'conv': ConvModule, - 'interpolation_conv': UpInterpolationConv, - 'last_conv': LastConv, - } - - def __init__(self, - in_channels, - out_channels, - num_outs, - stack_times, - paths, - inter_channels=None, - same_down_trans=None, - same_up_trans=dict( - type='conv', kernel_size=3, stride=2, padding=1), - across_lateral_trans=dict(type='conv', kernel_size=1), - across_down_trans=dict(type='conv', kernel_size=3), - across_up_trans=None, - across_skip_trans=dict(type='identity'), - output_trans=dict(type='last_conv', kernel_size=3), - start_level=0, - end_level=-1, - add_extra_convs=False, - norm_cfg=None, - skip_inds=None): - super(FPG, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - if inter_channels is None: - self.inter_channels = [out_channels for _ in range(num_outs)] - elif isinstance(inter_channels, int): - self.inter_channels = [inter_channels for _ in range(num_outs)] - else: - assert isinstance(inter_channels, list) - assert len(inter_channels) == num_outs - self.inter_channels = inter_channels - self.stack_times = stack_times - self.paths = paths - assert isinstance(paths, list) and len(paths) == stack_times - for d in paths: - assert d in ('bu', 'td') - - self.same_down_trans = same_down_trans - self.same_up_trans = same_up_trans - self.across_lateral_trans = across_lateral_trans - self.across_down_trans = across_down_trans - self.across_up_trans = across_up_trans - self.output_trans = output_trans - self.across_skip_trans = across_skip_trans - - self.with_bias = norm_cfg is None - # skip inds must be specified if across skip trans is not None - if self.across_skip_trans is not None: - skip_inds is not None - self.skip_inds = skip_inds - assert len(self.skip_inds[0]) <= self.stack_times - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - # if end_level < inputs, no extra level is allowed - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - # build lateral 1x1 convs to reduce channels - self.lateral_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - l_conv = nn.Conv2d(self.in_channels[i], - self.inter_channels[i - self.start_level], 1) - self.lateral_convs.append(l_conv) - - extra_levels = num_outs - self.backbone_end_level + self.start_level - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - if self.add_extra_convs: - fpn_idx = self.backbone_end_level - self.start_level + i - extra_conv = nn.Conv2d( - self.inter_channels[fpn_idx - 1], - self.inter_channels[fpn_idx], - 3, - stride=2, - padding=1) - self.extra_downsamples.append(extra_conv) - else: - self.extra_downsamples.append(nn.MaxPool2d(1, stride=2)) - - self.fpn_transitions = nn.ModuleList() # stack times - for s in range(self.stack_times): - stage_trans = nn.ModuleList() # num of feature levels - for i in range(self.num_outs): - # same, across_lateral, across_down, across_up - trans = nn.ModuleDict() - if s in self.skip_inds[i]: - stage_trans.append(trans) - continue - # build same-stage down trans (used in bottom-up paths) - if i == 0 or self.same_up_trans is None: - same_up_trans = None - else: - same_up_trans = self.build_trans( - 
self.same_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['same_up'] = same_up_trans - # build same-stage up trans (used in top-down paths) - if i == self.num_outs - 1 or self.same_down_trans is None: - same_down_trans = None - else: - same_down_trans = self.build_trans( - self.same_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['same_down'] = same_down_trans - # build across lateral trans - across_lateral_trans = self.build_trans( - self.across_lateral_trans, self.inter_channels[i], - self.inter_channels[i]) - trans['across_lateral'] = across_lateral_trans - # build across down trans - if i == self.num_outs - 1 or self.across_down_trans is None: - across_down_trans = None - else: - across_down_trans = self.build_trans( - self.across_down_trans, self.inter_channels[i + 1], - self.inter_channels[i]) - trans['across_down'] = across_down_trans - # build across up trans - if i == 0 or self.across_up_trans is None: - across_up_trans = None - else: - across_up_trans = self.build_trans( - self.across_up_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_up'] = across_up_trans - if self.across_skip_trans is None: - across_skip_trans = None - else: - across_skip_trans = self.build_trans( - self.across_skip_trans, self.inter_channels[i - 1], - self.inter_channels[i]) - trans['across_skip'] = across_skip_trans - # build across_skip trans - stage_trans.append(trans) - self.fpn_transitions.append(stage_trans) - - self.output_transition = nn.ModuleList() # output levels - for i in range(self.num_outs): - trans = self.build_trans( - self.output_trans, - self.inter_channels[i], - self.out_channels, - num_inputs=self.stack_times + 1) - self.output_transition.append(trans) - - self.relu = nn.ReLU(inplace=True) - - def build_trans(self, cfg, in_channels, out_channels, **extra_args): - cfg_ = cfg.copy() - trans_type = cfg_.pop('type') - trans_cls = self.transition_types[trans_type] - return trans_cls(in_channels, out_channels, **cfg_, **extra_args) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - caffe2_xavier_init(m) - elif is_norm(m): - constant_init(m, 1.0) - - def fuse(self, fuse_dict): - out = None - for item in fuse_dict.values(): - if item is not None: - if out is None: - out = item - else: - out = out + item - return out - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - - # build all levels from original feature maps - feats = [ - lateral_conv(inputs[i + self.start_level]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - for downsample in self.extra_downsamples: - feats.append(downsample(feats[-1])) - - outs = [feats] - - for i in range(self.stack_times): - current_outs = outs[-1] - next_outs = [] - direction = self.paths[i] - for j in range(self.num_outs): - if i in self.skip_inds[j]: - next_outs.append(outs[-1][j]) - continue - # feature level - if direction == 'td': - lvl = self.num_outs - j - 1 - else: - lvl = j - # get transitions - if direction == 'td': - same_trans = self.fpn_transitions[i][lvl]['same_down'] - else: - same_trans = self.fpn_transitions[i][lvl]['same_up'] - across_lateral_trans = self.fpn_transitions[i][lvl][ - 'across_lateral'] - across_down_trans = self.fpn_transitions[i][lvl]['across_down'] - across_up_trans = self.fpn_transitions[i][lvl]['across_up'] - across_skip_trans = self.fpn_transitions[i][lvl]['across_skip'] - # init output - to_fuse = dict( - same=None, lateral=None, across_up=None, across_down=None) - # same 
downsample/upsample - if same_trans is not None: - to_fuse['same'] = same_trans(next_outs[-1]) - # across lateral - if across_lateral_trans is not None: - to_fuse['lateral'] = across_lateral_trans( - current_outs[lvl]) - # across downsample - if lvl > 0 and across_up_trans is not None: - to_fuse['across_up'] = across_up_trans(current_outs[lvl - - 1]) - # across upsample - if (lvl < self.num_outs - 1 and across_down_trans is not None): - to_fuse['across_down'] = across_down_trans( - current_outs[lvl + 1]) - if across_skip_trans is not None: - to_fuse['across_skip'] = across_skip_trans(outs[0][lvl]) - x = self.fuse(to_fuse) - next_outs.append(x) - - if direction == 'td': - outs.append(next_outs[::-1]) - else: - outs.append(next_outs) - - # output trans - final_outs = [] - for i in range(self.num_outs): - lvl_out_list = [] - for s in range(len(outs)): - lvl_out_list.append(outs[s][i]) - lvl_out = self.output_transition[i](lvl_out_list) - final_outs.append(lvl_out) - - return final_outs diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/focal_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/focal_loss.py deleted file mode 100644 index 763bc93bd2575c49ca8ccf20996bbd92d1e0d1a4..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/focal_loss.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sigmoid_focal_loss_forward', 'sigmoid_focal_loss_backward', - 'softmax_focal_loss_forward', 'softmax_focal_loss_backward' -]) - - -class SigmoidFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSigmoidFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - output = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_forward( - input, target, weight, output, gamma=ctx.gamma, alpha=ctx.alpha) - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input, target, weight) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, target, weight = ctx.saved_tensors - - grad_input = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_backward( - input, - target, - weight, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input.size(0) - return grad_input, None, None, None, None, None - - -sigmoid_focal_loss = 
SigmoidFocalLossFunction.apply - - -class SigmoidFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SigmoidFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return sigmoid_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s - - -class SoftmaxFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSoftmaxFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - channel_stats, _ = torch.max(input, dim=1) - input_softmax = input - channel_stats.unsqueeze(1).expand_as(input) - input_softmax.exp_() - - channel_stats = input_softmax.sum(dim=1) - input_softmax /= channel_stats.unsqueeze(1).expand_as(input) - - output = input.new_zeros(input.size(0)) - ext_module.softmax_focal_loss_forward( - input_softmax, - target, - weight, - output, - gamma=ctx.gamma, - alpha=ctx.alpha) - - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input_softmax, target, weight) - return output - - @staticmethod - def backward(ctx, grad_output): - input_softmax, target, weight = ctx.saved_tensors - buff = input_softmax.new_zeros(input_softmax.size(0)) - grad_input = input_softmax.new_zeros(input_softmax.size()) - - ext_module.softmax_focal_loss_backward( - input_softmax, - target, - weight, - buff, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input_softmax.size(0) - return grad_input, None, None, None, None, None - - -softmax_focal_loss = SoftmaxFocalLossFunction.apply - - -class SoftmaxFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SoftmaxFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return softmax_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/optimizer/default_constructor.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/optimizer/default_constructor.py deleted file mode 100644 index 
2c0da3503b75441738efe38d70352b55a210a34a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -from torch.nn import GroupNorm, LayerNorm - -from annotator.uniformer.mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from annotator.uniformer.mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. - - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. - - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. - - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset - layer. So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the - offset layer in deformable convs, set ``dcn_offset_lr_mult`` - to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when - the model contains multiple DCN layers in places other than - backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. 
- - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - '.backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). - """ - - def __init__(self, optimizer_cfg, paramwise_cfg=None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self): - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group, param_group_list): - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, params, module, prefix='', is_dcn_module=None): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) 
- dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) - - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - if check_ops_exist(): - from annotator.uniformer.mmcv.ops import DeformConv2d, ModulatedDeformConv2d - is_dcn_module = isinstance(module, - (DeformConv2d, ModulatedDeformConv2d)) - else: - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/emotion/audio.py b/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/emotion/audio.py deleted file mode 100644 index 2a5eed96f08995c4f7e872a0f28a02bf0f0ad4ab..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/emotion/audio.py +++ /dev/null @@ -1,107 +0,0 @@ -from scipy.ndimage.morphology import binary_dilation -from data_gen.tts.emotion.params_data import * -from pathlib import Path 
-from typing import Optional, Union -import numpy as np -import webrtcvad -import librosa -import struct - -int16_max = (2 ** 15) - 1 - - -def preprocess_wav(fpath_or_wav: Union[str, Path, np.ndarray], - source_sr: Optional[int] = None): - """ - Applies the preprocessing operations used in training the Speaker Encoder to a waveform - either on disk or in memory. The waveform will be resampled to match the data hyperparameters. - - :param fpath_or_wav: either a filepath to an audio file (many extensions are supported, not - just .wav), either the waveform as a numpy array of floats. - :param source_sr: if passing an audio waveform, the sampling rate of the waveform before - preprocessing. After preprocessing, the waveform's sampling rate will match the data - hyperparameters. If passing a filepath, the sampling rate will be automatically detected and - this argument will be ignored. - """ - # Load the wav from disk if needed - if isinstance(fpath_or_wav, str) or isinstance(fpath_or_wav, Path): - wav, source_sr = librosa.load(str(fpath_or_wav), sr=None) - else: - wav = fpath_or_wav - - # Resample the wav if needed - if source_sr is not None and source_sr != sampling_rate: - wav = librosa.resample(wav, source_sr, sampling_rate) - - # Apply the preprocessing: normalize volume and shorten long silences - wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True) - wav = trim_long_silences(wav) - - return wav - - -def wav_to_mel_spectrogram(wav): - """ - Derives a mel spectrogram ready to be used by the encoder from a preprocessed audio waveform. - Note: this not a log-mel spectrogram. - """ - frames = librosa.feature.melspectrogram( - wav, - sampling_rate, - n_fft=int(sampling_rate * mel_window_length / 1000), - hop_length=int(sampling_rate * mel_window_step / 1000), - n_mels=mel_n_channels - ) - return frames.astype(np.float32).T - - -def trim_long_silences(wav): - """ - Ensures that segments without voice in the waveform remain no longer than a - threshold determined by the VAD parameters in params.py. 
- - :param wav: the raw waveform as a numpy array of floats - :return: the same waveform with silences trimmed away (length <= original wav length) - """ - # Compute the voice detection window size - samples_per_window = (vad_window_length * sampling_rate) // 1000 - - # Trim the end of the audio to have a multiple of the window size - wav = wav[:len(wav) - (len(wav) % samples_per_window)] - - # Convert the float waveform to 16-bit mono PCM - pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16)) - - # Perform voice activation detection - voice_flags = [] - vad = webrtcvad.Vad(mode=3) - for window_start in range(0, len(wav), samples_per_window): - window_end = window_start + samples_per_window - voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2], - sample_rate=sampling_rate)) - voice_flags = np.array(voice_flags) - - # Smooth the voice detection with a moving average - def moving_average(array, width): - array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2))) - ret = np.cumsum(array_padded, dtype=float) - ret[width:] = ret[width:] - ret[:-width] - return ret[width - 1:] / width - - audio_mask = moving_average(voice_flags, vad_moving_average_width) - audio_mask = np.round(audio_mask).astype(np.bool) - - # Dilate the voiced regions - audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1)) - audio_mask = np.repeat(audio_mask, samples_per_window) - - return wav[audio_mask == True] - - -def normalize_volume(wav, target_dBFS, increase_only=False, decrease_only=False): - if increase_only and decrease_only: - raise ValueError("Both increase only and decrease only are set") - dBFS_change = target_dBFS - 10 * np.log10(np.mean(wav ** 2)) - if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only): - return wav - return wav * (10 ** (dBFS_change / 20)) diff --git a/spaces/Rongjiehuang/ProDiff/modules/FastDiff/module/FastDiff_model.py b/spaces/Rongjiehuang/ProDiff/modules/FastDiff/module/FastDiff_model.py deleted file mode 100644 index a70c4ac51c814c5c44f1bc9e48e073ac1f325175..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/modules/FastDiff/module/FastDiff_model.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch.nn as nn -import torch -import logging -from modules.FastDiff.module.modules import DiffusionDBlock, TimeAware_LVCBlock -from modules.FastDiff.module.util import calc_diffusion_step_embedding - -def swish(x): - return x * torch.sigmoid(x) - -class FastDiff(nn.Module): - """FastDiff module.""" - - def __init__(self, - audio_channels=1, - inner_channels=32, - cond_channels=80, - upsample_ratios=[8, 8, 4], - lvc_layers_each_block=4, - lvc_kernel_size=3, - kpnet_hidden_channels=64, - kpnet_conv_size=3, - dropout=0.0, - diffusion_step_embed_dim_in=128, - diffusion_step_embed_dim_mid=512, - diffusion_step_embed_dim_out=512, - use_weight_norm=True): - super().__init__() - - self.diffusion_step_embed_dim_in = diffusion_step_embed_dim_in - - self.audio_channels = audio_channels - self.cond_channels = cond_channels - self.lvc_block_nums = len(upsample_ratios) - self.first_audio_conv = nn.Conv1d(1, inner_channels, - kernel_size=7, padding=(7 - 1) // 2, - dilation=1, bias=True) - - # define residual blocks - self.lvc_blocks = nn.ModuleList() - self.downsample = nn.ModuleList() - - # the layer-specific fc for noise scale embedding - self.fc_t = nn.ModuleList() - self.fc_t1 = nn.Linear(diffusion_step_embed_dim_in, diffusion_step_embed_dim_mid) - 
self.fc_t2 = nn.Linear(diffusion_step_embed_dim_mid, diffusion_step_embed_dim_out) - - cond_hop_length = 1 - for n in range(self.lvc_block_nums): - cond_hop_length = cond_hop_length * upsample_ratios[n] - lvcb = TimeAware_LVCBlock( - in_channels=inner_channels, - cond_channels=cond_channels, - upsample_ratio=upsample_ratios[n], - conv_layers=lvc_layers_each_block, - conv_kernel_size=lvc_kernel_size, - cond_hop_length=cond_hop_length, - kpnet_hidden_channels=kpnet_hidden_channels, - kpnet_conv_size=kpnet_conv_size, - kpnet_dropout=dropout, - noise_scale_embed_dim_out=diffusion_step_embed_dim_out - ) - self.lvc_blocks += [lvcb] - self.downsample.append(DiffusionDBlock(inner_channels, inner_channels, upsample_ratios[self.lvc_block_nums-n-1])) - - - # define output layers - self.final_conv = nn.Sequential(nn.Conv1d(inner_channels, audio_channels, kernel_size=7, padding=(7 - 1) // 2, - dilation=1, bias=True)) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - def forward(self, data): - """Calculate forward propagation. - Args: - x (Tensor): Input noise signal (B, 1, T). - c (Tensor): Local conditioning auxiliary features (B, C ,T'). - Returns: - Tensor: Output tensor (B, out_channels, T) - """ - audio, c, diffusion_steps = data - - # embed diffusion step t - diffusion_step_embed = calc_diffusion_step_embedding(diffusion_steps, self.diffusion_step_embed_dim_in) - diffusion_step_embed = swish(self.fc_t1(diffusion_step_embed)) - diffusion_step_embed = swish(self.fc_t2(diffusion_step_embed)) - - audio = self.first_audio_conv(audio) - downsample = [] - for down_layer in self.downsample: - downsample.append(audio) - audio = down_layer(audio) - - x = audio - for n, audio_down in enumerate(reversed(downsample)): - x = self.lvc_blocks[n]((x, audio_down, c, diffusion_step_embed)) - - # apply final layers - x = self.final_conv(x) - - return x - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.Conv2d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/vectorstores/faiss.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/vectorstores/faiss.py deleted file mode 100644 index 1caf619aa213aef66d0bbb1fdf631b8d72c20970..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/vectorstores/faiss.py +++ /dev/null @@ -1,100 +0,0 @@ -# import hashlib - -from langchain.vectorstores.faiss import * -from langchain.vectorstores.faiss import FAISS as OriginalFAISS - -from streamlit_langchain_chat.customized_langchain.docstore.in_memory import InMemoryDocstore - - -class FAISS(OriginalFAISS): - def __add( - self, - texts: Iterable[str], - embeddings: Iterable[List[float]], - metadatas: Optional[List[dict]] = None, - **kwargs: Any, - ) -> List[str]: - if not isinstance(self.docstore, AddableMixin): - raise ValueError( - "If trying to add texts, the 
underlying docstore should support " - f"adding items, which {self.docstore} does not" - ) - documents = [] - for i, text in enumerate(texts): - metadata = metadatas[i] if metadatas else {} - documents.append(Document(page_content=text, metadata=metadata)) - # Add to the index, the index_to_id mapping, and the docstore. - starting_len = len(self.index_to_docstore_id) - self.index.add(np.array(embeddings, dtype=np.float32)) - # Get list of index, id, and docs. - full_info = [ - (starting_len + i, str(uuid.uuid4()), doc) - for i, doc in enumerate(documents) - ] - # Add information to docstore and index. - self.docstore.add({_id: doc for _, _id, doc in full_info}) - index_to_id = {index: _id for index, _id, _ in full_info} - self.index_to_docstore_id.update(index_to_id) - return [_id for _, _id, _ in full_info] - - @classmethod - def __from( - cls, - texts: List[str], - embeddings: List[List[float]], - embedding: Embeddings, - metadatas: Optional[List[dict]] = None, - **kwargs: Any, - ) -> FAISS: - faiss = dependable_faiss_import() - index = faiss.IndexFlatL2(len(embeddings[0])) - index.add(np.array(embeddings, dtype=np.float32)) - documents = [] - for i, text in enumerate(texts): - metadata = metadatas[i] if metadatas else {} - documents.append(Document(page_content=text, metadata=metadata)) - index_to_id = {i: str(uuid.uuid4()) for i in range(len(documents))} - - # # TODO: cambiar para usar el hash. Y ver donde se pondria para que no cargara el chunk en el dataset - # index_to_id_2 = dict() - # for i in range(len(documents)): - # h = hashlib.new('sha256') - # text_ = documents[i].page_content - # h.update(text_.encode()) - # index_to_id_2[i] = str(h.hexdigest()) - # # - docstore = InMemoryDocstore( - {index_to_id[i]: doc for i, doc in enumerate(documents)} - ) - return cls(embedding.embed_query, index, docstore, index_to_id) - - @classmethod - def from_texts( - cls, - texts: List[str], - embedding: Embeddings, - metadatas: Optional[List[dict]] = None, - **kwargs: Any, - ) -> FAISS: - """Construct FAISS wrapper from raw documents. - - This is a user friendly interface that: - 1. Embeds documents. - 2. Creates an in memory docstore - 3. Initializes the FAISS database - - This is intended to be a quick way to get started. - - Example: - .. 
code-block:: python - - from langchain import FAISS - from langchain.embeddings import OpenAIEmbeddings - embeddings = OpenAIEmbeddings() - faiss = FAISS.from_texts(texts, embeddings) - """ - # embeddings = embedding.embed_documents(texts) - print(f"len(texts): {len(texts)}") # TODO: borrar - embeddings = [embedding.embed_documents([text])[0] for text in texts] - print(f"len(embeddings): {len(embeddings)}") # TODO: borrar - return cls.__from(texts, embeddings, embedding, metadatas, **kwargs) diff --git a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py b/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py deleted file mode 100644 index 69b6d1c4b5724a3ef61f8bc3d64fc45c5e51e270..0000000000000000000000000000000000000000 --- a/spaces/SQSora/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - #unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - unnormalized_derivatives_ = torch.zeros((1, 1, unnormalized_derivatives.size(2), unnormalized_derivatives.size(3)+2)) - unnormalized_derivatives_[...,1:-1] = unnormalized_derivatives - unnormalized_derivatives = unnormalized_derivatives_ - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - 
unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = 
theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Salesforce/EDICT/my_diffusers/onnx_utils.py b/spaces/Salesforce/EDICT/my_diffusers/onnx_utils.py deleted file mode 100644 index e840565dd5c1b9bd17422aba5af6dc0d045c4682..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/onnx_utils.py +++ /dev/null @@ -1,189 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os -import shutil -from pathlib import Path -from typing import Optional, Union - -import numpy as np - -from huggingface_hub import hf_hub_download - -from .utils import is_onnx_available, logging - - -if is_onnx_available(): - import onnxruntime as ort - - -ONNX_WEIGHTS_NAME = "model.onnx" - - -logger = logging.get_logger(__name__) - - -class OnnxRuntimeModel: - base_model_prefix = "onnx_model" - - def __init__(self, model=None, **kwargs): - logger.info("`diffusers.OnnxRuntimeModel` is experimental and might change in the future.") - self.model = model - self.model_save_dir = kwargs.get("model_save_dir", None) - self.latest_model_name = kwargs.get("latest_model_name", "model.onnx") - - def __call__(self, **kwargs): - inputs = {k: np.array(v) for k, v in kwargs.items()} - return self.model.run(None, inputs) - - @staticmethod - def load_model(path: Union[str, Path], provider=None): - """ - Loads an ONNX Inference session with an ExecutionProvider. Default provider is `CPUExecutionProvider` - - Arguments: - path (`str` or `Path`): - Directory from which to load - provider(`str`, *optional*): - Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider` - """ - if provider is None: - logger.info("No onnxruntime provider specified, using CPUExecutionProvider") - provider = "CPUExecutionProvider" - - return ort.InferenceSession(path, providers=[provider]) - - def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs): - """ - Save a model and its configuration file to a directory, so that it can be re-loaded using the - [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the - latest_model_name. - - Arguments: - save_directory (`str` or `Path`): - Directory where to save the model file. - file_name(`str`, *optional*): - Overwrites the default model file name from `"model.onnx"` to `file_name`. 
This allows you to save the - model with a different name. - """ - model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME - - src_path = self.model_save_dir.joinpath(self.latest_model_name) - dst_path = Path(save_directory).joinpath(model_file_name) - if not src_path.samefile(dst_path): - shutil.copyfile(src_path, dst_path) - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - **kwargs, - ): - """ - Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class - method.: - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. - """ - if os.path.isfile(save_directory): - logger.error(f"Provided path ({save_directory}) should be a directory, not a file") - return - - os.makedirs(save_directory, exist_ok=True) - - # saving model weights/files - self._save_pretrained(save_directory, **kwargs) - - @classmethod - def _from_pretrained( - cls, - model_id: Union[str, Path], - use_auth_token: Optional[Union[bool, str, None]] = None, - revision: Optional[Union[str, None]] = None, - force_download: bool = False, - cache_dir: Optional[str] = None, - file_name: Optional[str] = None, - provider: Optional[str] = None, - **kwargs, - ): - """ - Load a model from a directory or the HF Hub. - - Arguments: - model_id (`str` or `Path`): - Directory from which to load - use_auth_token (`str` or `bool`): - Is needed to load models from a private or gated repository - revision (`str`): - Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id - cache_dir (`Union[str, Path]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - file_name(`str`): - Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to load - different model files from the same repository or directory. - provider(`str`): - The ONNX runtime provider, e.g. `CPUExecutionProvider` or `CUDAExecutionProvider`. 
- kwargs (`Dict`, *optional*): - kwargs will be passed to the model during initialization - """ - model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME - # load model from local directory - if os.path.isdir(model_id): - model = OnnxRuntimeModel.load_model(os.path.join(model_id, model_file_name), provider=provider) - kwargs["model_save_dir"] = Path(model_id) - # load model from hub - else: - # download model - model_cache_path = hf_hub_download( - repo_id=model_id, - filename=model_file_name, - use_auth_token=use_auth_token, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - ) - kwargs["model_save_dir"] = Path(model_cache_path).parent - kwargs["latest_model_name"] = Path(model_cache_path).name - model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider) - return cls(model=model, **kwargs) - - @classmethod - def from_pretrained( - cls, - model_id: Union[str, Path], - force_download: bool = True, - use_auth_token: Optional[str] = None, - cache_dir: Optional[str] = None, - **model_kwargs, - ): - revision = None - if len(str(model_id).split("@")) == 2: - model_id, revision = model_id.split("@") - - return cls._from_pretrained( - model_id=model_id, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - use_auth_token=use_auth_token, - **model_kwargs, - ) diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/lumpy skin disease.md b/spaces/SarthakSidhant/Go-Cattle/diseases/lumpy skin disease.md deleted file mode 100644 index 3cd088f7ffda928fc84877e28c64ee4c93b7c3e3..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/lumpy skin disease.md +++ /dev/null @@ -1,37 +0,0 @@ -## Lumpy skin disease - -**Information:** Lumpy skin disease (LSD) is a viral disease that affects cattle, water buffalo, and other cloven-hoofed animals. It is caused by a virus called lumpy skin disease virus (LSDV). LSD can cause a variety of symptoms in affected animals, including fever, nodules on the skin, and lameness. In some cases, LSD can also be fatal. - -**Symptoms:** - -* Fever -* Nodules on the skin -* Lameness -* Loss of appetite -* Depression -* Swollen lymph nodes -* Difficulty breathing -* Death - -**Remedies:** - -* There is no specific cure for lumpy skin disease. -* Treatment for lumpy skin disease is supportive care, such as fluids and antibiotics. -* Animals that have recovered from lumpy skin disease may be immune to future infection. - -**Causes:** - -* Lumpy skin disease is caused by a virus called lumpy skin disease virus (LSDV). -* This virus is found in the saliva, blood, and other bodily fluids of infected animals. -* Animals become infected with LSDV when they come into contact with the virus, such as through contact with the saliva, blood, or other bodily fluids of infected animals, or through contact with contaminated objects. - -**Prevention:** - -* The best way to prevent lumpy skin disease is to vaccinate animals against the disease. -* Vaccinations are available for cattle and water buffalo. 
-* Other preventive measures include: - * Maintaining good herd health practices - * Practicing biosecurity measures - * Testing animals for lumpy skin disease - * Disposing of infected animals and their tissues properly -* Avoiding contact with infected animals or their bodily fluids diff --git a/spaces/Sharccc92/streamlit_in_web/app.py b/spaces/Sharccc92/streamlit_in_web/app.py deleted file mode 100644 index 7cd71d9ed92bb95c2520a484ba4438a40ca5f65d..0000000000000000000000000000000000000000 --- a/spaces/Sharccc92/streamlit_in_web/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import streamlit -import requests -import json -import pandas as pd -import joblib -import numpy as np - - -df = pd.read_csv("cleaned_car_data.csv") - -# Initialize model artifacte files. -model = joblib.load("LinearRegressionModel.joblib") - - -def run(): - streamlit.title("Car Price Prediction") - name = streamlit.selectbox("Cars Model", df.name.unique()) - company = streamlit.selectbox("Company Name", df.company.unique()) - year = streamlit.number_input("Year") - kms_driven = streamlit.number_input("Kilometers driven") - fuel_type = streamlit.selectbox("Fuel type", df.fuel_type.unique()) - - data = { - 'name': name, - 'company': company, - 'year': year, - 'kms_driven': kms_driven, - 'fuel_type': fuel_type, - } - - if streamlit.button("Predict"): - prediction = model.predict(pd.DataFrame(columns=['name','company','year','kms_driven','fuel_type'],data=np.array([data["name"],data["company"],data["year"],data["kms_driven"],data["fuel_type"]]).reshape(1,5)))[0] - streamlit.success(f"The prediction from model: {prediction}") - -if __name__ == '__main__': - #by default it will run at 8501 port - run() \ No newline at end of file diff --git a/spaces/SilenWang/ReviewGPT/utils/config_sample.py b/spaces/SilenWang/ReviewGPT/utils/config_sample.py deleted file mode 100644 index b5f74635b7e66b783d9fbaa532f5a52c047c7107..0000000000000000000000000000000000000000 --- a/spaces/SilenWang/ReviewGPT/utils/config_sample.py +++ /dev/null @@ -1,77 +0,0 @@ -# 填写从openai获取的API Key -OPENAI_KEY = 'sk-YOUR_OPENAI_KEY' - -# 使用的语言模型 -REVIEW_MODEL = "gpt-3.5-turbo" - -# 进行Pubmed文献获取需要的邮箱 -EMAIL = "YOUR@MAIL.COM" - -# 预设的Promots, 目前用来实现的功能有两个 -Prompts = { - 'Summarize': ''' - {papers} - - 小结 #1-#{idx} 的内容, 并小结不同参考文献的异同 - - 如果后面还有其他问题, 请基于#1-#{idx} 的内容逐一进行回答 - - 所有问题请中文作答, 但对一些专有名词及其缩写可不翻译. - - {questions} - ''', - 'Summarize_Unit': ''' - 将编号为{paper_id}的文献标记为参考文献 #{idx}, 将后续的文段标记为 #{idx} 的摘要. - {abstract} - ''', - # 进行meta时的标准肯定是变化的, 因此这个功能的Promot主要是用于功能上的限定 - # 1. 限制回答方式为json格式的字符串, 方便解析 - # 2. 限制chatGPT逐一检查要检查的问题, 并在认为不符合标准时给出不符合哪一条 - # 3. 限制chatGPT尝试拼接语义正确但是内容不正确的回答, 让它在无法判定时直接说明难以判断, 交给人工判断 - 'Screen': ''' - 请你扮演一位研究者, 你接下来将要进行一项Meta分析, 因此需要逐一阅读并理解文献摘要, - 以判断文献是否满足Meta分析的准入标准. 在阅读时, 你需要对后面给出的每一条准入标准逐 - 一进行判断, 如果文献的摘要不满足任意一条标准, 那么需要给出你认为摘要不满足标准的原因. - 返回的结果请按照json字串的方式给出, 请不要附带任何其他的多余内容. - - 这个json字串中需要包含2个Key: - 1. "Inclusion": 其中说明你认为这篇文献是否满足准入标准, 回答可以是"Yes"或者"No", 当你认为无法判断时 - 请填写"Uknown" - 2. "Explanation": 这个部分中对所有准入标准逐一给出明确的判断结果, 以及你给出该结果的原因. 如果文字中 - 的描述内容不足, 导致你无法判断, 请不要给出结果, 直接写明"无法判断", 这部分内容请以中文给出 - - 返回结果的实例如下, 请仅仅参考实例的格式, 而不局限于Explanation中表述的方式, 但是请将所有要给出的 - 结果包含在json字符串内, 不要给出多余的内容: - {{ - "Inclusion": "Yes", - "Explanation": [ - '...' - ] - }} - - 下面是文献的准入标准, 请务必逐一检查文献是否满足标准: - {criterias} - - 下面是文献摘要的内容: - {abstract} - ''', - 'Review': ''' - 请你扮演一位研究者, 你接下来将要阅读一篇论文的部分页, 并基于这部分的内容回答一个或多个问题. - 回答问题时, 请完全基于提供的论文内容回答, 这具体指: 按照论文的思路, 用论文的表述方式进行回答. - 不要在回答中添加论文不存在的内容. 所有回答请使用中文, 但是对于一些专有名词及其缩写可以不翻译. 
- - 下面是这篇论文的部分页: - {pages} - - 下面是待回答的问题, 如果有多个问题, 请逐一回答他们, 在回答的时候请说明基于前面哪一页的内容进行了回答, - - 如: 1. 基于#1和#3的内容, 我的回答是... - - - {question} - ''', - 'Review_Unit': ''' - 将后续的文段标记为这篇文献的 #{page}页, 这一页中的内容是: - {content} - ''' -} \ No newline at end of file diff --git a/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/models/custom_resnet.py b/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/models/custom_resnet.py deleted file mode 100644 index 9bf9c56a109f5b2af44f1b635ce0839a7c3ceeeb..0000000000000000000000000000000000000000 --- a/spaces/SrikanthPhalgun/Cifar10_ERAV1_GradCam_Demo/models/custom_resnet.py +++ /dev/null @@ -1,67 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision -import torchvision.transforms as transforms - -class CustomRes_Network(nn.Module): - def __init__(self): - super(CustomRes_Network,self).__init__() - self.prep = nn.Sequential( - nn.Conv2d(3, 64 , 3 , 1 ,1), - nn.ReLU(), - nn.BatchNorm2d(64) - ) - self.layer1 = nn.Sequential( - nn.Conv2d(64, 128, 3, 1, 1), - nn.MaxPool2d(2,2), - nn.ReLU(), - nn.BatchNorm2d(128) - ) - self.residual1 = nn.Sequential( - nn.Conv2d(128, 128, 3, 1, 1), - nn.ReLU(), - nn.BatchNorm2d(128), - nn.Conv2d(128, 128, 3, 1, 1), - nn.ReLU(), - nn.BatchNorm2d(128) - ) - self.layer2 = nn.Sequential( - nn.Conv2d(128, 256, 3, 1, 1), - nn.MaxPool2d(2,2), - nn.ReLU(), - nn.BatchNorm2d(256) - ) - self.layer3 = nn.Sequential( - nn.Conv2d(256, 512, 3, 1, 1), - nn.MaxPool2d(2,2), - nn.ReLU(), - nn.BatchNorm2d(512), - nn.Dropout(0.1) - ) - self.residual2 = nn.Sequential( - nn.Conv2d(512, 512, 3, 1, 1), - nn.ReLU(), - nn.BatchNorm2d(512), - nn.Conv2d(512, 512, 3, 1, 1), - nn.ReLU(), - nn.BatchNorm2d(512) - ) - self.maxpool = nn.MaxPool2d(4,2) - self.fc = nn.Linear(512,10) - - - - def forward(self, x): - x = self.prep(x) - residual1 = self.layer1(x) - x = self.residual1(residual1) - x += residual1 - x = self.layer2(x) - residual2 = self.layer3(x) - x = self.residual2(residual2) - x += residual2 - x = self.maxpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - return F.log_softmax(x,dim=-1) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImtImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImtImagePlugin.py deleted file mode 100644 index ac267457b0682a975a1a33da475c96531c398bd7..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImtImagePlugin.py +++ /dev/null @@ -1,101 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# IM Tools support for PIL -# -# history: -# 1996-05-27 fl Created (read 8-bit images only) -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.2) -# -# Copyright (c) Secret Labs AB 1997-2001. -# Copyright (c) Fredrik Lundh 1996-2001. -# -# See the README file for information on usage and redistribution. -# - - -import re - -from . import Image, ImageFile - -# -# -------------------------------------------------------------------- - -field = re.compile(rb"([a-z]*) ([^ \r\n]*)") - - -## -# Image plugin for IM Tools images. - - -class ImtImageFile(ImageFile.ImageFile): - format = "IMT" - format_description = "IM Tools" - - def _open(self): - # Quick rejection: if there's not a LF among the first - # 100 bytes, this is (probably) not a text header. 
- - buffer = self.fp.read(100) - if b"\n" not in buffer: - msg = "not an IM file" - raise SyntaxError(msg) - - xsize = ysize = 0 - - while True: - if buffer: - s = buffer[:1] - buffer = buffer[1:] - else: - s = self.fp.read(1) - if not s: - break - - if s == b"\x0C": - # image data begins - self.tile = [ - ( - "raw", - (0, 0) + self.size, - self.fp.tell() - len(buffer), - (self.mode, 0, 1), - ) - ] - - break - - else: - # read key/value pair - if b"\n" not in buffer: - buffer += self.fp.read(100) - lines = buffer.split(b"\n") - s += lines.pop(0) - buffer = b"\n".join(lines) - if len(s) == 1 or len(s) > 100: - break - if s[0] == ord(b"*"): - continue # comment - - m = field.match(s) - if not m: - break - k, v = m.group(1, 2) - if k == b"width": - xsize = int(v) - self._size = xsize, ysize - elif k == b"height": - ysize = int(v) - self._size = xsize, ysize - elif k == b"pixel" and v == b"n8": - self.mode = "L" - - -# -# -------------------------------------------------------------------- - -Image.register_open(ImtImageFile.format, ImtImageFile) - -# -# no extension registered (".im" is simply too common) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/utils/messageid.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/utils/messageid.py deleted file mode 100644 index 9501f36c7598d575c2a2e6134213993cd9c4dbcc..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/utils/messageid.py +++ /dev/null @@ -1,80 +0,0 @@ -import pulsar - - -def pulsar_to_int(message_id: pulsar.MessageId) -> int: - ledger_id: int = message_id.ledger_id() - entry_id: int = message_id.entry_id() - batch_index: int = message_id.batch_index() - partition: int = message_id.partition() - - # Convert to offset binary encoding to preserve ordering semantics when encoded - # see https://en.wikipedia.org/wiki/Offset_binary - ledger_id = ledger_id + 2**63 - entry_id = entry_id + 2**63 - batch_index = batch_index + 2**31 - partition = partition + 2**31 - - return ledger_id << 128 | entry_id << 64 | batch_index << 32 | partition - - -def int_to_pulsar(message_id: int) -> pulsar.MessageId: - partition = message_id & 0xFFFFFFFF - batch_index = message_id >> 32 & 0xFFFFFFFF - entry_id = message_id >> 64 & 0xFFFFFFFFFFFFFFFF - ledger_id = message_id >> 128 & 0xFFFFFFFFFFFFFFFF - - partition = partition - 2**31 - batch_index = batch_index - 2**31 - entry_id = entry_id - 2**63 - ledger_id = ledger_id - 2**63 - - return pulsar.MessageId(partition, ledger_id, entry_id, batch_index) - - -def int_to_bytes(int: int) -> bytes: - """Convert int to a 24 byte big endian byte string""" - return int.to_bytes(24, "big") - - -def bytes_to_int(bytes: bytes) -> int: - """Convert a 24 byte big endian byte string to an int""" - return int.from_bytes(bytes, "big") - - -# Sorted in lexographic order -base85 = ( - "!#$%&()*+-0123456789;<=>?@ABCDEFGHIJKLMNOP" - + "QRSTUVWXYZ^_`abcdefghijklmnopqrstuvwxyz{|}~" -) - - -# not the most efficient way to do this, see benchmark function below -def _int_to_str(n: int) -> str: - if n < 85: - return base85[n] - else: - return _int_to_str(n // 85) + base85[n % 85] - - -def int_to_str(n: int) -> str: - return _int_to_str(n).rjust(36, "!") # left pad with '!' 
to 36 chars - - -def str_to_int(s: str) -> int: - return sum(base85.index(c) * 85**i for i, c in enumerate(s[::-1])) - - -# 1m in 5 seconds on a M1 Pro -# Not fast, but not likely to be a bottleneck either -def _benchmark() -> None: - import random - import time - - t0 = time.time() - for i in range(1000000): - x = random.randint(0, 2**192 - 1) - s = int_to_str(x) - if s == "!": # prevent compiler from optimizing out - print("oops") - t1 = time.time() - print(t1 - t0) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/termui.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/termui.py deleted file mode 100644 index bfb2f5ae67d19fc1b659b7167825ee5111952b89..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/click/termui.py +++ /dev/null @@ -1,787 +0,0 @@ -import inspect -import io -import itertools -import os -import sys -import typing as t -from gettext import gettext as _ - -from ._compat import isatty -from ._compat import strip_ansi -from ._compat import WIN -from .exceptions import Abort -from .exceptions import UsageError -from .globals import resolve_color_default -from .types import Choice -from .types import convert_type -from .types import ParamType -from .utils import echo -from .utils import LazyFile - -if t.TYPE_CHECKING: - from ._termui_impl import ProgressBar - -V = t.TypeVar("V") - -# The prompt functions to use. The doc tools currently override these -# functions to customize how they work. -visible_prompt_func: t.Callable[[str], str] = input - -_ansi_colors = { - "black": 30, - "red": 31, - "green": 32, - "yellow": 33, - "blue": 34, - "magenta": 35, - "cyan": 36, - "white": 37, - "reset": 39, - "bright_black": 90, - "bright_red": 91, - "bright_green": 92, - "bright_yellow": 93, - "bright_blue": 94, - "bright_magenta": 95, - "bright_cyan": 96, - "bright_white": 97, -} -_ansi_reset_all = "\033[0m" - - -def hidden_prompt_func(prompt: str) -> str: - import getpass - - return getpass.getpass(prompt) - - -def _build_prompt( - text: str, - suffix: str, - show_default: bool = False, - default: t.Optional[t.Any] = None, - show_choices: bool = True, - type: t.Optional[ParamType] = None, -) -> str: - prompt = text - if type is not None and show_choices and isinstance(type, Choice): - prompt += f" ({', '.join(map(str, type.choices))})" - if default is not None and show_default: - prompt = f"{prompt} [{_format_default(default)}]" - return f"{prompt}{suffix}" - - -def _format_default(default: t.Any) -> t.Any: - if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"): - return default.name # type: ignore - - return default - - -def prompt( - text: str, - default: t.Optional[t.Any] = None, - hide_input: bool = False, - confirmation_prompt: t.Union[bool, str] = False, - type: t.Optional[t.Union[ParamType, t.Any]] = None, - value_proc: t.Optional[t.Callable[[str], t.Any]] = None, - prompt_suffix: str = ": ", - show_default: bool = True, - err: bool = False, - show_choices: bool = True, -) -> t.Any: - """Prompts a user for input. This is a convenience function that can - be used to prompt a user for input later. - - If the user aborts the input by sending an interrupt signal, this - function will catch it and raise a :exc:`Abort` exception. - - :param text: the text to show for the prompt. - :param default: the default value to use if no input happens. If this - is not given it will prompt until it's aborted. 
- :param hide_input: if this is set to true then the input value will - be hidden. - :param confirmation_prompt: Prompt a second time to confirm the - value. Can be set to a string instead of ``True`` to customize - the message. - :param type: the type to use to check the value against. - :param value_proc: if this parameter is provided it's a function that - is invoked instead of the type conversion to - convert a value. - :param prompt_suffix: a suffix that should be added to the prompt. - :param show_default: shows or hides the default value in the prompt. - :param err: if set to true the file defaults to ``stderr`` instead of - ``stdout``, the same as with echo. - :param show_choices: Show or hide choices if the passed type is a Choice. - For example if type is a Choice of either day or week, - show_choices is true and text is "Group by" then the - prompt will be "Group by (day, week): ". - - .. versionadded:: 8.0 - ``confirmation_prompt`` can be a custom string. - - .. versionadded:: 7.0 - Added the ``show_choices`` parameter. - - .. versionadded:: 6.0 - Added unicode support for cmd.exe on Windows. - - .. versionadded:: 4.0 - Added the `err` parameter. - - """ - - def prompt_func(text: str) -> str: - f = hidden_prompt_func if hide_input else visible_prompt_func - try: - # Write the prompt separately so that we get nice - # coloring through colorama on Windows - echo(text.rstrip(" "), nl=False, err=err) - # Echo a space to stdout to work around an issue where - # readline causes backspace to clear the whole line. - return f(" ") - except (KeyboardInterrupt, EOFError): - # getpass doesn't print a newline if the user aborts input with ^C. - # Allegedly this behavior is inherited from getpass(3). - # A doc bug has been filed at https://bugs.python.org/issue24711 - if hide_input: - echo(None, err=err) - raise Abort() from None - - if value_proc is None: - value_proc = convert_type(type, default) - - prompt = _build_prompt( - text, prompt_suffix, show_default, default, show_choices, type - ) - - if confirmation_prompt: - if confirmation_prompt is True: - confirmation_prompt = _("Repeat for confirmation") - - confirmation_prompt = _build_prompt(confirmation_prompt, prompt_suffix) - - while True: - while True: - value = prompt_func(prompt) - if value: - break - elif default is not None: - value = default - break - try: - result = value_proc(value) - except UsageError as e: - if hide_input: - echo(_("Error: The value you entered was invalid."), err=err) - else: - echo(_("Error: {e.message}").format(e=e), err=err) # noqa: B306 - continue - if not confirmation_prompt: - return result - while True: - value2 = prompt_func(confirmation_prompt) - is_empty = not value and not value2 - if value2 or is_empty: - break - if value == value2: - return result - echo(_("Error: The two entered values do not match."), err=err) - - -def confirm( - text: str, - default: t.Optional[bool] = False, - abort: bool = False, - prompt_suffix: str = ": ", - show_default: bool = True, - err: bool = False, -) -> bool: - """Prompts for confirmation (yes/no question). - - If the user aborts the input by sending a interrupt signal this - function will catch it and raise a :exc:`Abort` exception. - - :param text: the question to ask. - :param default: The default value to use when no input is given. If - ``None``, repeat until input is given. - :param abort: if this is set to `True` a negative answer aborts the - exception by raising :exc:`Abort`. - :param prompt_suffix: a suffix that should be added to the prompt. 
- :param show_default: shows or hides the default value in the prompt. - :param err: if set to true the file defaults to ``stderr`` instead of - ``stdout``, the same as with echo. - - .. versionchanged:: 8.0 - Repeat until input is given if ``default`` is ``None``. - - .. versionadded:: 4.0 - Added the ``err`` parameter. - """ - prompt = _build_prompt( - text, - prompt_suffix, - show_default, - "y/n" if default is None else ("Y/n" if default else "y/N"), - ) - - while True: - try: - # Write the prompt separately so that we get nice - # coloring through colorama on Windows - echo(prompt.rstrip(" "), nl=False, err=err) - # Echo a space to stdout to work around an issue where - # readline causes backspace to clear the whole line. - value = visible_prompt_func(" ").lower().strip() - except (KeyboardInterrupt, EOFError): - raise Abort() from None - if value in ("y", "yes"): - rv = True - elif value in ("n", "no"): - rv = False - elif default is not None and value == "": - rv = default - else: - echo(_("Error: invalid input"), err=err) - continue - break - if abort and not rv: - raise Abort() - return rv - - -def echo_via_pager( - text_or_generator: t.Union[t.Iterable[str], t.Callable[[], t.Iterable[str]], str], - color: t.Optional[bool] = None, -) -> None: - """This function takes a text and shows it via an environment specific - pager on stdout. - - .. versionchanged:: 3.0 - Added the `color` flag. - - :param text_or_generator: the text to page, or alternatively, a - generator emitting the text to page. - :param color: controls if the pager supports ANSI colors or not. The - default is autodetection. - """ - color = resolve_color_default(color) - - if inspect.isgeneratorfunction(text_or_generator): - i = t.cast(t.Callable[[], t.Iterable[str]], text_or_generator)() - elif isinstance(text_or_generator, str): - i = [text_or_generator] - else: - i = iter(t.cast(t.Iterable[str], text_or_generator)) - - # convert every element of i to a text type if necessary - text_generator = (el if isinstance(el, str) else str(el) for el in i) - - from ._termui_impl import pager - - return pager(itertools.chain(text_generator, "\n"), color) - - -def progressbar( - iterable: t.Optional[t.Iterable[V]] = None, - length: t.Optional[int] = None, - label: t.Optional[str] = None, - show_eta: bool = True, - show_percent: t.Optional[bool] = None, - show_pos: bool = False, - item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None, - fill_char: str = "#", - empty_char: str = "-", - bar_template: str = "%(label)s [%(bar)s] %(info)s", - info_sep: str = " ", - width: int = 36, - file: t.Optional[t.TextIO] = None, - color: t.Optional[bool] = None, - update_min_steps: int = 1, -) -> "ProgressBar[V]": - """This function creates an iterable context manager that can be used - to iterate over something while showing a progress bar. It will - either iterate over the `iterable` or `length` items (that are counted - up). While iteration happens, this function will print a rendered - progress bar to the given `file` (defaults to stdout) and will attempt - to calculate remaining time and more. By default, this progress bar - will not be rendered if the file is not a terminal. - - The context manager creates the progress bar. When the context - manager is entered the progress bar is already created. With every - iteration over the progress bar, the iterable passed to the bar is - advanced and the bar is updated. When the context manager exits, - a newline is printed and the progress bar is finalized on screen. 
- - Note: The progress bar is currently designed for use cases where the - total progress can be expected to take at least several seconds. - Because of this, the ProgressBar class object won't display - progress that is considered too fast, and progress where the time - between steps is less than a second. - - No printing must happen or the progress bar will be unintentionally - destroyed. - - Example usage:: - - with progressbar(items) as bar: - for item in bar: - do_something_with(item) - - Alternatively, if no iterable is specified, one can manually update the - progress bar through the `update()` method instead of directly - iterating over the progress bar. The update method accepts the number - of steps to increment the bar with:: - - with progressbar(length=chunks.total_bytes) as bar: - for chunk in chunks: - process_chunk(chunk) - bar.update(chunks.bytes) - - The ``update()`` method also takes an optional value specifying the - ``current_item`` at the new position. This is useful when used - together with ``item_show_func`` to customize the output for each - manual step:: - - with click.progressbar( - length=total_size, - label='Unzipping archive', - item_show_func=lambda a: a.filename - ) as bar: - for archive in zip_file: - archive.extract() - bar.update(archive.size, archive) - - :param iterable: an iterable to iterate over. If not provided the length - is required. - :param length: the number of items to iterate over. By default the - progressbar will attempt to ask the iterator about its - length, which might or might not work. If an iterable is - also provided this parameter can be used to override the - length. If an iterable is not provided the progress bar - will iterate over a range of that length. - :param label: the label to show next to the progress bar. - :param show_eta: enables or disables the estimated time display. This is - automatically disabled if the length cannot be - determined. - :param show_percent: enables or disables the percentage display. The - default is `True` if the iterable has a length or - `False` if not. - :param show_pos: enables or disables the absolute position display. The - default is `False`. - :param item_show_func: A function called with the current item which - can return a string to show next to the progress bar. If the - function returns ``None`` nothing is shown. The current item can - be ``None``, such as when entering and exiting the bar. - :param fill_char: the character to use to show the filled part of the - progress bar. - :param empty_char: the character to use to show the non-filled part of - the progress bar. - :param bar_template: the format string to use as template for the bar. - The parameters in it are ``label`` for the label, - ``bar`` for the progress bar and ``info`` for the - info section. - :param info_sep: the separator between multiple info items (eta etc.) - :param width: the width of the progress bar in characters, 0 means full - terminal width - :param file: The file to write to. If this is not a terminal then - only the label is printed. - :param color: controls if the terminal supports ANSI colors or not. The - default is autodetection. This is only needed if ANSI - codes are included anywhere in the progress bar output - which is not the case by default. - :param update_min_steps: Render only when this many updates have - completed. This allows tuning for very fast iterators. - - .. versionchanged:: 8.0 - Output is shown even if execution time is less than 0.5 seconds. - - .. 
versionchanged:: 8.0 - ``item_show_func`` shows the current item, not the previous one. - - .. versionchanged:: 8.0 - Labels are echoed if the output is not a TTY. Reverts a change - in 7.0 that removed all output. - - .. versionadded:: 8.0 - Added the ``update_min_steps`` parameter. - - .. versionchanged:: 4.0 - Added the ``color`` parameter. Added the ``update`` method to - the object. - - .. versionadded:: 2.0 - """ - from ._termui_impl import ProgressBar - - color = resolve_color_default(color) - return ProgressBar( - iterable=iterable, - length=length, - show_eta=show_eta, - show_percent=show_percent, - show_pos=show_pos, - item_show_func=item_show_func, - fill_char=fill_char, - empty_char=empty_char, - bar_template=bar_template, - info_sep=info_sep, - file=file, - label=label, - width=width, - color=color, - update_min_steps=update_min_steps, - ) - - -def clear() -> None: - """Clears the terminal screen. This will have the effect of clearing - the whole visible space of the terminal and moving the cursor to the - top left. This does not do anything if not connected to a terminal. - - .. versionadded:: 2.0 - """ - if not isatty(sys.stdout): - return - if WIN: - os.system("cls") - else: - sys.stdout.write("\033[2J\033[1;1H") - - -def _interpret_color( - color: t.Union[int, t.Tuple[int, int, int], str], offset: int = 0 -) -> str: - if isinstance(color, int): - return f"{38 + offset};5;{color:d}" - - if isinstance(color, (tuple, list)): - r, g, b = color - return f"{38 + offset};2;{r:d};{g:d};{b:d}" - - return str(_ansi_colors[color] + offset) - - -def style( - text: t.Any, - fg: t.Optional[t.Union[int, t.Tuple[int, int, int], str]] = None, - bg: t.Optional[t.Union[int, t.Tuple[int, int, int], str]] = None, - bold: t.Optional[bool] = None, - dim: t.Optional[bool] = None, - underline: t.Optional[bool] = None, - overline: t.Optional[bool] = None, - italic: t.Optional[bool] = None, - blink: t.Optional[bool] = None, - reverse: t.Optional[bool] = None, - strikethrough: t.Optional[bool] = None, - reset: bool = True, -) -> str: - """Styles a text with ANSI styles and returns the new string. By - default the styling is self contained which means that at the end - of the string a reset code is issued. This can be prevented by - passing ``reset=False``. - - Examples:: - - click.echo(click.style('Hello World!', fg='green')) - click.echo(click.style('ATTENTION!', blink=True)) - click.echo(click.style('Some things', reverse=True, fg='cyan')) - click.echo(click.style('More colors', fg=(255, 12, 128), bg=117)) - - Supported color names: - - * ``black`` (might be a gray) - * ``red`` - * ``green`` - * ``yellow`` (might be an orange) - * ``blue`` - * ``magenta`` - * ``cyan`` - * ``white`` (might be light gray) - * ``bright_black`` - * ``bright_red`` - * ``bright_green`` - * ``bright_yellow`` - * ``bright_blue`` - * ``bright_magenta`` - * ``bright_cyan`` - * ``bright_white`` - * ``reset`` (reset the color code only) - - If the terminal supports it, color may also be specified as: - - - An integer in the interval [0, 255]. The terminal must support - 8-bit/256-color mode. - - An RGB tuple of three integers in [0, 255]. The terminal must - support 24-bit/true-color mode. - - See https://en.wikipedia.org/wiki/ANSI_color and - https://gist.github.com/XVilka/8346728 for more information. - - :param text: the string to style with ansi codes. - :param fg: if provided this will become the foreground color. - :param bg: if provided this will become the background color. 
- :param bold: if provided this will enable or disable bold mode. - :param dim: if provided this will enable or disable dim mode. This is - badly supported. - :param underline: if provided this will enable or disable underline. - :param overline: if provided this will enable or disable overline. - :param italic: if provided this will enable or disable italic. - :param blink: if provided this will enable or disable blinking. - :param reverse: if provided this will enable or disable inverse - rendering (foreground becomes background and the - other way round). - :param strikethrough: if provided this will enable or disable - striking through text. - :param reset: by default a reset-all code is added at the end of the - string which means that styles do not carry over. This - can be disabled to compose styles. - - .. versionchanged:: 8.0 - A non-string ``message`` is converted to a string. - - .. versionchanged:: 8.0 - Added support for 256 and RGB color codes. - - .. versionchanged:: 8.0 - Added the ``strikethrough``, ``italic``, and ``overline`` - parameters. - - .. versionchanged:: 7.0 - Added support for bright colors. - - .. versionadded:: 2.0 - """ - if not isinstance(text, str): - text = str(text) - - bits = [] - - if fg: - try: - bits.append(f"\033[{_interpret_color(fg)}m") - except KeyError: - raise TypeError(f"Unknown color {fg!r}") from None - - if bg: - try: - bits.append(f"\033[{_interpret_color(bg, 10)}m") - except KeyError: - raise TypeError(f"Unknown color {bg!r}") from None - - if bold is not None: - bits.append(f"\033[{1 if bold else 22}m") - if dim is not None: - bits.append(f"\033[{2 if dim else 22}m") - if underline is not None: - bits.append(f"\033[{4 if underline else 24}m") - if overline is not None: - bits.append(f"\033[{53 if overline else 55}m") - if italic is not None: - bits.append(f"\033[{3 if italic else 23}m") - if blink is not None: - bits.append(f"\033[{5 if blink else 25}m") - if reverse is not None: - bits.append(f"\033[{7 if reverse else 27}m") - if strikethrough is not None: - bits.append(f"\033[{9 if strikethrough else 29}m") - bits.append(text) - if reset: - bits.append(_ansi_reset_all) - return "".join(bits) - - -def unstyle(text: str) -> str: - """Removes ANSI styling information from a string. Usually it's not - necessary to use this function as Click's echo function will - automatically remove styling if necessary. - - .. versionadded:: 2.0 - - :param text: the text to remove style information from. - """ - return strip_ansi(text) - - -def secho( - message: t.Optional[t.Any] = None, - file: t.Optional[t.IO[t.AnyStr]] = None, - nl: bool = True, - err: bool = False, - color: t.Optional[bool] = None, - **styles: t.Any, -) -> None: - """This function combines :func:`echo` and :func:`style` into one - call. As such the following two calls are the same:: - - click.secho('Hello World!', fg='green') - click.echo(click.style('Hello World!', fg='green')) - - All keyword arguments are forwarded to the underlying functions - depending on which one they go with. - - Non-string types will be converted to :class:`str`. However, - :class:`bytes` are passed directly to :meth:`echo` without applying - style. If you want to style bytes that represent text, call - :meth:`bytes.decode` first. - - .. versionchanged:: 8.0 - A non-string ``message`` is converted to a string. Bytes are - passed through without style applied. - - .. 
versionadded:: 2.0 - """ - if message is not None and not isinstance(message, (bytes, bytearray)): - message = style(message, **styles) - - return echo(message, file=file, nl=nl, err=err, color=color) - - -def edit( - text: t.Optional[t.AnyStr] = None, - editor: t.Optional[str] = None, - env: t.Optional[t.Mapping[str, str]] = None, - require_save: bool = True, - extension: str = ".txt", - filename: t.Optional[str] = None, -) -> t.Optional[t.AnyStr]: - r"""Edits the given text in the defined editor. If an editor is given - (should be the full path to the executable but the regular operating - system search path is used for finding the executable) it overrides - the detected editor. Optionally, some environment variables can be - used. If the editor is closed without changes, `None` is returned. In - case a file is edited directly the return value is always `None` and - `require_save` and `extension` are ignored. - - If the editor cannot be opened a :exc:`UsageError` is raised. - - Note for Windows: to simplify cross-platform usage, the newlines are - automatically converted from POSIX to Windows and vice versa. As such, - the message here will have ``\n`` as newline markers. - - :param text: the text to edit. - :param editor: optionally the editor to use. Defaults to automatic - detection. - :param env: environment variables to forward to the editor. - :param require_save: if this is true, then not saving in the editor - will make the return value become `None`. - :param extension: the extension to tell the editor about. This defaults - to `.txt` but changing this might change syntax - highlighting. - :param filename: if provided it will edit this file instead of the - provided text contents. It will not use a temporary - file as an indirection in that case. - """ - from ._termui_impl import Editor - - ed = Editor(editor=editor, env=env, require_save=require_save, extension=extension) - - if filename is None: - return ed.edit(text) - - ed.edit_file(filename) - return None - - -def launch(url: str, wait: bool = False, locate: bool = False) -> int: - """This function launches the given URL (or filename) in the default - viewer application for this file type. If this is an executable, it - might launch the executable in a new session. The return value is - the exit code of the launched application. Usually, ``0`` indicates - success. - - Examples:: - - click.launch('https://click.palletsprojects.com/') - click.launch('/my/downloaded/file', locate=True) - - .. versionadded:: 2.0 - - :param url: URL or filename of the thing to launch. - :param wait: Wait for the program to exit before returning. This - only works if the launched program blocks. In particular, - ``xdg-open`` on Linux does not block. - :param locate: if this is set to `True` then instead of launching the - application associated with the URL it will attempt to - launch a file manager with the file located. This - might have weird effects if the URL does not point to - the filesystem. - """ - from ._termui_impl import open_url - - return open_url(url, wait=wait, locate=locate) - - -# If this is provided, getchar() calls into this instead. This is used -# for unittesting purposes. -_getchar: t.Optional[t.Callable[[bool], str]] = None - - -def getchar(echo: bool = False) -> str: - """Fetches a single character from the terminal and returns it. This - will always return a unicode character and under certain rare - circumstances this might return more than one character. 
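Taken together, ``style``, ``secho``, ``edit`` and ``launch`` cover most of Click's terminal I/O helpers. The sketch below is not part of the deleted module; the file path and messages are invented purely for illustration. It shows how ``secho`` collapses an ``echo``/``style`` pair and how ``launch`` can point the user at a result file:

```python
import click


def notify(path: str) -> None:
    # secho() is shorthand for echo(style(...)); the two lines below print the same thing.
    click.secho("Build finished!", fg="green", bold=True)
    click.echo(click.style("Build finished!", fg="green", bold=True))

    # Open a file manager with the file highlighted rather than launching the file itself.
    exit_code = click.launch(path, locate=True)
    if exit_code != 0:
        click.secho(f"Could not open {path}", fg="red", err=True)


if __name__ == "__main__":
    notify("report.txt")  # hypothetical path, for illustration only
```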
The - situations which more than one character is returned is when for - whatever reason multiple characters end up in the terminal buffer or - standard input was not actually a terminal. - - Note that this will always read from the terminal, even if something - is piped into the standard input. - - Note for Windows: in rare cases when typing non-ASCII characters, this - function might wait for a second character and then return both at once. - This is because certain Unicode characters look like special-key markers. - - .. versionadded:: 2.0 - - :param echo: if set to `True`, the character read will also show up on - the terminal. The default is to not show it. - """ - global _getchar - - if _getchar is None: - from ._termui_impl import getchar as f - - _getchar = f - - return _getchar(echo) - - -def raw_terminal() -> t.ContextManager[int]: - from ._termui_impl import raw_terminal as f - - return f() - - -def pause(info: t.Optional[str] = None, err: bool = False) -> None: - """This command stops execution and waits for the user to press any - key to continue. This is similar to the Windows batch "pause" - command. If the program is not run through a terminal, this command - will instead do nothing. - - .. versionadded:: 2.0 - - .. versionadded:: 4.0 - Added the `err` parameter. - - :param info: The message to print before pausing. Defaults to - ``"Press any key to continue..."``. - :param err: if set to message goes to ``stderr`` instead of - ``stdout``, the same as with echo. - """ - if not isatty(sys.stdin) or not isatty(sys.stdout): - return - - if info is None: - info = _("Press any key to continue...") - - try: - if info: - echo(info, nl=False, err=err) - try: - getchar() - except (KeyboardInterrupt, EOFError): - pass - finally: - if info: - echo(err=err) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/_debug_adapter/__main__pydevd_gen_debug_adapter_protocol.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/_debug_adapter/__main__pydevd_gen_debug_adapter_protocol.py deleted file mode 100644 index b45fa5f9db7868b6990f23448b49bd08f3da1377..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/_debug_adapter/__main__pydevd_gen_debug_adapter_protocol.py +++ /dev/null @@ -1,592 +0,0 @@ -''' -Run this module to regenerate the `pydevd_schema.py` file. - -Note that it'll generate it based on the current debugProtocol.json. Erase it and rerun -to download the latest version. -''' - - -def is_variable_to_translate(cls_name, var_name): - if var_name in ('variablesReference', 'frameId', 'threadId'): - return True - - if cls_name == 'StackFrame' and var_name == 'id': - # It's frameId everywhere except on StackFrame. - return True - - if cls_name == 'Thread' and var_name == 'id': - # It's threadId everywhere except on Thread. 
- return True - - return False - - -def _get_noqa_for_var(prop_name): - return ' # noqa (assign to builtin)' if prop_name in ('type', 'format', 'id', 'hex', 'breakpoint', 'filter') else '' - - -class _OrderedSet(object): - # Not a good ordered set (just something to be small without adding any deps) - - def __init__(self, initial_contents=None): - self._contents = [] - self._contents_as_set = set() - if initial_contents is not None: - for x in initial_contents: - self.add(x) - - def add(self, x): - if x not in self._contents_as_set: - self._contents_as_set.add(x) - self._contents.append(x) - - def discard(self, x): - if x in self._contents_as_set: - self._contents_as_set.remove(x) - self._contents.remove(x) - - def copy(self): - return _OrderedSet(self._contents) - - def update(self, contents): - for x in contents: - self.add(x) - - def __iter__(self): - return iter(self._contents) - - def __contains__(self, item): - return item in self._contents_as_set - - def __len__(self): - return len(self._contents) - - def set_repr(self): - if len(self) == 0: - return 'set()' - - lst = [repr(x) for x in self] - return 'set([' + ', '.join(lst) + '])' - - -class Ref(object): - - def __init__(self, ref, ref_data): - self.ref = ref - self.ref_data = ref_data - - def __str__(self): - return self.ref - - -def load_schema_data(): - import os.path - import json - - json_file = os.path.join(os.path.dirname(__file__), 'debugProtocol.json') - if not os.path.exists(json_file): - import requests - req = requests.get('https://raw.githubusercontent.com/microsoft/debug-adapter-protocol/gh-pages/debugAdapterProtocol.json') - assert req.status_code == 200 - with open(json_file, 'wb') as stream: - stream.write(req.content) - - with open(json_file, 'rb') as json_contents: - json_schema_data = json.loads(json_contents.read()) - return json_schema_data - - -def load_custom_schema_data(): - import os.path - import json - - json_file = os.path.join(os.path.dirname(__file__), 'debugProtocolCustom.json') - - with open(json_file, 'rb') as json_contents: - json_schema_data = json.loads(json_contents.read()) - return json_schema_data - - -def create_classes_to_generate_structure(json_schema_data): - definitions = json_schema_data['definitions'] - - class_to_generatees = {} - - for name, definition in definitions.items(): - all_of = definition.get('allOf') - description = definition.get('description') - is_enum = definition.get('type') == 'string' and 'enum' in definition - enum_values = None - if is_enum: - enum_values = definition['enum'] - properties = {} - properties.update(definition.get('properties', {})) - required = _OrderedSet(definition.get('required', _OrderedSet())) - base_definitions = [] - - if all_of is not None: - for definition in all_of: - ref = definition.get('$ref') - if ref is not None: - assert ref.startswith('#/definitions/') - ref = ref[len('#/definitions/'):] - base_definitions.append(ref) - else: - if not description: - description = definition.get('description') - properties.update(definition.get('properties', {})) - required.update(_OrderedSet(definition.get('required', _OrderedSet()))) - - if isinstance(description, (list, tuple)): - description = '\n'.join(description) - - if name == 'ModulesRequest': # Hack to accept modules request without arguments (ptvsd: 2050). 
- required.discard('arguments') - class_to_generatees[name] = dict( - name=name, - properties=properties, - base_definitions=base_definitions, - description=description, - required=required, - is_enum=is_enum, - enum_values=enum_values - ) - return class_to_generatees - - -def collect_bases(curr_class, classes_to_generate, memo=None): - ret = [] - if memo is None: - memo = {} - - base_definitions = curr_class['base_definitions'] - for base_definition in base_definitions: - if base_definition not in memo: - ret.append(base_definition) - ret.extend(collect_bases(classes_to_generate[base_definition], classes_to_generate, memo)) - - return ret - - -def fill_properties_and_required_from_base(classes_to_generate): - # Now, resolve properties based on refs - for class_to_generate in classes_to_generate.values(): - dct = {} - s = _OrderedSet() - - for base_definition in reversed(collect_bases(class_to_generate, classes_to_generate)): - # Note: go from base to current so that the initial order of the properties has that - # same order. - dct.update(classes_to_generate[base_definition].get('properties', {})) - s.update(classes_to_generate[base_definition].get('required', _OrderedSet())) - - dct.update(class_to_generate['properties']) - class_to_generate['properties'] = dct - - s.update(class_to_generate['required']) - class_to_generate['required'] = s - - return class_to_generate - - -def update_class_to_generate_description(class_to_generate): - import textwrap - description = class_to_generate['description'] - lines = [] - for line in description.splitlines(): - wrapped = textwrap.wrap(line.strip(), 100) - lines.extend(wrapped) - lines.append('') - - while lines and lines[-1] == '': - lines = lines[:-1] - - class_to_generate['description'] = ' ' + ('\n '.join(lines)) - - -def update_class_to_generate_type(classes_to_generate, class_to_generate): - properties = class_to_generate.get('properties') - for _prop_name, prop_val in properties.items(): - prop_type = prop_val.get('type', '') - if not prop_type: - prop_type = prop_val.pop('$ref', '') - if prop_type: - assert prop_type.startswith('#/definitions/') - prop_type = prop_type[len('#/definitions/'):] - prop_val['type'] = Ref(prop_type, classes_to_generate[prop_type]) - - -def update_class_to_generate_register_dec(classes_to_generate, class_to_generate): - # Default - class_to_generate['register_request'] = '' - class_to_generate['register_dec'] = '@register' - - properties = class_to_generate.get('properties') - enum_type = properties.get('type', {}).get('enum') - command = None - event = None - if enum_type and len(enum_type) == 1 and next(iter(enum_type)) in ("request", "response", "event"): - msg_type = next(iter(enum_type)) - if msg_type == 'response': - # The actual command is typed in the request - response_name = class_to_generate['name'] - request_name = response_name[:-len('Response')] + 'Request' - if request_name in classes_to_generate: - command = classes_to_generate[request_name]['properties'].get('command') - else: - if response_name == 'ErrorResponse': - command = {'enum': ['error']} - else: - raise AssertionError('Unhandled: %s' % (response_name,)) - - elif msg_type == 'request': - command = properties.get('command') - - elif msg_type == 'event': - command = properties.get('event') - - else: - raise AssertionError('Unexpected condition.') - - if command: - enum = command.get('enum') - if enum and len(enum) == 1: - class_to_generate['register_request'] = '@register_%s(%r)\n' % (msg_type, enum[0]) - - -def 
extract_prop_name_and_prop(class_to_generate): - properties = class_to_generate.get('properties') - required = _OrderedSet(class_to_generate.get('required', _OrderedSet())) - - # Sort so that required come first - prop_name_and_prop = list(properties.items()) - - def compute_sort_key(x): - key = x[0] - if key in required: - if key == 'seq': - return 0.5 # seq when required is after the other required keys (to have a default of -1). - return 0 - return 1 - - prop_name_and_prop.sort(key=compute_sort_key) - - return prop_name_and_prop - - -def update_class_to_generate_to_json(class_to_generate): - required = _OrderedSet(class_to_generate.get('required', _OrderedSet())) - prop_name_and_prop = extract_prop_name_and_prop(class_to_generate) - - to_dict_body = ['def to_dict(self, update_ids_to_dap=False): # noqa (update_ids_to_dap may be unused)'] - - translate_prop_names = [] - for prop_name, prop in prop_name_and_prop: - if is_variable_to_translate(class_to_generate['name'], prop_name): - translate_prop_names.append(prop_name) - - for prop_name, prop in prop_name_and_prop: - namespace = dict(prop_name=prop_name, noqa=_get_noqa_for_var(prop_name)) - to_dict_body.append(' %(prop_name)s = self.%(prop_name)s%(noqa)s' % namespace) - - if prop.get('type') == 'array': - to_dict_body.append(' if %(prop_name)s and hasattr(%(prop_name)s[0], "to_dict"):' % namespace) - to_dict_body.append(' %(prop_name)s = [x.to_dict() for x in %(prop_name)s]' % namespace) - - if translate_prop_names: - to_dict_body.append(' if update_ids_to_dap:') - for prop_name in translate_prop_names: - namespace = dict(prop_name=prop_name, noqa=_get_noqa_for_var(prop_name)) - to_dict_body.append(' if %(prop_name)s is not None:' % namespace) - to_dict_body.append(' %(prop_name)s = self._translate_id_to_dap(%(prop_name)s)%(noqa)s' % namespace) - - if not translate_prop_names: - update_dict_ids_from_dap_body = [] - else: - update_dict_ids_from_dap_body = ['', '', '@classmethod', 'def update_dict_ids_from_dap(cls, dct):'] - for prop_name in translate_prop_names: - namespace = dict(prop_name=prop_name) - update_dict_ids_from_dap_body.append(' if %(prop_name)r in dct:' % namespace) - update_dict_ids_from_dap_body.append(' dct[%(prop_name)r] = cls._translate_id_from_dap(dct[%(prop_name)r])' % namespace) - update_dict_ids_from_dap_body.append(' return dct') - - class_to_generate['update_dict_ids_from_dap'] = _indent_lines('\n'.join(update_dict_ids_from_dap_body)) - - to_dict_body.append(' dct = {') - first_not_required = False - - for prop_name, prop in prop_name_and_prop: - use_to_dict = prop['type'].__class__ == Ref and not prop['type'].ref_data.get('is_enum', False) - is_array = prop['type'] == 'array' - ref_array_cls_name = '' - if is_array: - ref = prop['items'].get('$ref') - if ref is not None: - ref_array_cls_name = ref.split('/')[-1] - - namespace = dict(prop_name=prop_name, ref_array_cls_name=ref_array_cls_name) - if prop_name in required: - if use_to_dict: - to_dict_body.append(' %(prop_name)r: %(prop_name)s.to_dict(update_ids_to_dap=update_ids_to_dap),' % namespace) - else: - if ref_array_cls_name: - to_dict_body.append(' %(prop_name)r: [%(ref_array_cls_name)s.update_dict_ids_to_dap(o) for o in %(prop_name)s] if (update_ids_to_dap and %(prop_name)s) else %(prop_name)s,' % namespace) - else: - to_dict_body.append(' %(prop_name)r: %(prop_name)s,' % namespace) - else: - if not first_not_required: - first_not_required = True - to_dict_body.append(' }') - - to_dict_body.append(' if %(prop_name)s is not None:' % namespace) - if 
use_to_dict: - to_dict_body.append(' dct[%(prop_name)r] = %(prop_name)s.to_dict(update_ids_to_dap=update_ids_to_dap)' % namespace) - else: - if ref_array_cls_name: - to_dict_body.append(' dct[%(prop_name)r] = [%(ref_array_cls_name)s.update_dict_ids_to_dap(o) for o in %(prop_name)s] if (update_ids_to_dap and %(prop_name)s) else %(prop_name)s' % namespace) - else: - to_dict_body.append(' dct[%(prop_name)r] = %(prop_name)s' % namespace) - - if not first_not_required: - first_not_required = True - to_dict_body.append(' }') - - to_dict_body.append(' dct.update(self.kwargs)') - to_dict_body.append(' return dct') - - class_to_generate['to_dict'] = _indent_lines('\n'.join(to_dict_body)) - - if not translate_prop_names: - update_dict_ids_to_dap_body = [] - else: - update_dict_ids_to_dap_body = ['', '', '@classmethod', 'def update_dict_ids_to_dap(cls, dct):'] - for prop_name in translate_prop_names: - namespace = dict(prop_name=prop_name) - update_dict_ids_to_dap_body.append(' if %(prop_name)r in dct:' % namespace) - update_dict_ids_to_dap_body.append(' dct[%(prop_name)r] = cls._translate_id_to_dap(dct[%(prop_name)r])' % namespace) - update_dict_ids_to_dap_body.append(' return dct') - - class_to_generate['update_dict_ids_to_dap'] = _indent_lines('\n'.join(update_dict_ids_to_dap_body)) - - -def update_class_to_generate_init(class_to_generate): - args = [] - init_body = [] - docstring = [] - - required = _OrderedSet(class_to_generate.get('required', _OrderedSet())) - prop_name_and_prop = extract_prop_name_and_prop(class_to_generate) - - translate_prop_names = [] - for prop_name, prop in prop_name_and_prop: - if is_variable_to_translate(class_to_generate['name'], prop_name): - translate_prop_names.append(prop_name) - - enum = prop.get('enum') - if enum and len(enum) == 1: - init_body.append(' self.%(prop_name)s = %(enum)r' % dict(prop_name=prop_name, enum=next(iter(enum)))) - else: - if prop_name in required: - if prop_name == 'seq': - args.append(prop_name + '=-1') - else: - args.append(prop_name) - else: - args.append(prop_name + '=None') - - if prop['type'].__class__ == Ref: - ref = prop['type'] - ref_data = ref.ref_data - if ref_data.get('is_enum', False): - init_body.append(' if %s is not None:' % (prop_name,)) - init_body.append(' assert %s in %s.VALID_VALUES' % (prop_name, str(ref))) - init_body.append(' self.%(prop_name)s = %(prop_name)s' % dict( - prop_name=prop_name)) - else: - namespace = dict( - prop_name=prop_name, - ref_name=str(ref) - ) - init_body.append(' if %(prop_name)s is None:' % namespace) - init_body.append(' self.%(prop_name)s = %(ref_name)s()' % namespace) - init_body.append(' else:') - init_body.append(' self.%(prop_name)s = %(ref_name)s(update_ids_from_dap=update_ids_from_dap, **%(prop_name)s) if %(prop_name)s.__class__ != %(ref_name)s else %(prop_name)s' % namespace - ) - - else: - init_body.append(' self.%(prop_name)s = %(prop_name)s' % dict(prop_name=prop_name)) - - if prop['type'] == 'array': - ref = prop['items'].get('$ref') - if ref is not None: - ref_array_cls_name = ref.split('/')[-1] - init_body.append(' if update_ids_from_dap and self.%(prop_name)s:' % dict(prop_name=prop_name)) - init_body.append(' for o in self.%(prop_name)s:' % dict(prop_name=prop_name)) - init_body.append(' %(ref_array_cls_name)s.update_dict_ids_from_dap(o)' % dict(ref_array_cls_name=ref_array_cls_name)) - - prop_type = prop['type'] - prop_description = prop.get('description', '') - - if isinstance(prop_description, (list, tuple)): - prop_description = '\n '.join(prop_description) - - 
docstring.append(':param %(prop_type)s %(prop_name)s: %(prop_description)s' % dict( - prop_type=prop_type, prop_name=prop_name, prop_description=prop_description)) - - if translate_prop_names: - init_body.append(' if update_ids_from_dap:') - for prop_name in translate_prop_names: - init_body.append(' self.%(prop_name)s = self._translate_id_from_dap(self.%(prop_name)s)' % dict(prop_name=prop_name)) - - docstring = _indent_lines('\n'.join(docstring)) - init_body = '\n'.join(init_body) - - # Actually bundle the whole __init__ from the parts. - args = ', '.join(args) - if args: - args = ', ' + args - - # Note: added kwargs because some messages are expected to be extended by the user (so, we'll actually - # make all extendable so that we don't have to worry about which ones -- we loose a little on typing, - # but may be better than doing a allow list based on something only pointed out in the documentation). - class_to_generate['init'] = '''def __init__(self%(args)s, update_ids_from_dap=False, **kwargs): # noqa (update_ids_from_dap may be unused) - """ -%(docstring)s - """ -%(init_body)s - self.kwargs = kwargs -''' % dict(args=args, init_body=init_body, docstring=docstring) - - class_to_generate['init'] = _indent_lines(class_to_generate['init']) - - -def update_class_to_generate_props(class_to_generate): - import json - - def default(o): - if isinstance(o, Ref): - return o.ref - raise AssertionError('Unhandled: %s' % (o,)) - - properties = class_to_generate['properties'] - class_to_generate['props'] = ' __props__ = %s' % _indent_lines( - json.dumps(properties, indent=4, default=default)).strip() - - -def update_class_to_generate_refs(class_to_generate): - properties = class_to_generate['properties'] - class_to_generate['refs'] = ' __refs__ = %s' % _OrderedSet( - key for (key, val) in properties.items() if val['type'].__class__ == Ref).set_repr() - - -def update_class_to_generate_enums(class_to_generate): - class_to_generate['enums'] = '' - if class_to_generate.get('is_enum', False): - enums = '' - for enum in class_to_generate['enum_values']: - enums += ' %s = %r\n' % (enum.upper(), enum) - enums += '\n' - enums += ' VALID_VALUES = %s\n\n' % _OrderedSet(class_to_generate['enum_values']).set_repr() - class_to_generate['enums'] = enums - - -def update_class_to_generate_objects(classes_to_generate, class_to_generate): - properties = class_to_generate['properties'] - for key, val in properties.items(): - if 'type' not in val: - val['type'] = 'TypeNA' - continue - - if val['type'] == 'object': - create_new = val.copy() - create_new.update({ - 'name': '%s%s' % (class_to_generate['name'], key.title()), - 'description': ' "%s" of %s' % (key, class_to_generate['name']) - }) - if 'properties' not in create_new: - create_new['properties'] = {} - - assert create_new['name'] not in classes_to_generate - classes_to_generate[create_new['name']] = create_new - - update_class_to_generate_type(classes_to_generate, create_new) - update_class_to_generate_props(create_new) - - # Update nested object types - update_class_to_generate_objects(classes_to_generate, create_new) - - val['type'] = Ref(create_new['name'], classes_to_generate[create_new['name']]) - val.pop('properties', None) - - -def gen_debugger_protocol(): - import os.path - import sys - - if sys.version_info[:2] < (3, 6): - raise AssertionError('Must be run with Python 3.6 onwards (to keep dict order).') - - classes_to_generate = create_classes_to_generate_structure(load_schema_data()) - 
classes_to_generate.update(create_classes_to_generate_structure(load_custom_schema_data())) - - class_to_generate = fill_properties_and_required_from_base(classes_to_generate) - - for class_to_generate in list(classes_to_generate.values()): - update_class_to_generate_description(class_to_generate) - update_class_to_generate_type(classes_to_generate, class_to_generate) - update_class_to_generate_props(class_to_generate) - update_class_to_generate_objects(classes_to_generate, class_to_generate) - - for class_to_generate in classes_to_generate.values(): - update_class_to_generate_refs(class_to_generate) - update_class_to_generate_init(class_to_generate) - update_class_to_generate_enums(class_to_generate) - update_class_to_generate_to_json(class_to_generate) - update_class_to_generate_register_dec(classes_to_generate, class_to_generate) - - class_template = ''' -%(register_request)s%(register_dec)s -class %(name)s(BaseSchema): - """ -%(description)s - - Note: automatically generated code. Do not edit manually. - """ - -%(enums)s%(props)s -%(refs)s - - __slots__ = list(__props__.keys()) + ['kwargs'] - -%(init)s%(update_dict_ids_from_dap)s - -%(to_dict)s%(update_dict_ids_to_dap)s -''' - - contents = [] - contents.append('# coding: utf-8') - contents.append('# Automatically generated code.') - contents.append('# Do not edit manually.') - contents.append('# Generated by running: %s' % os.path.basename(__file__)) - contents.append('from .pydevd_base_schema import BaseSchema, register, register_request, register_response, register_event') - contents.append('') - for class_to_generate in classes_to_generate.values(): - contents.append(class_template % class_to_generate) - - parent_dir = os.path.dirname(__file__) - schema = os.path.join(parent_dir, 'pydevd_schema.py') - with open(schema, 'w', encoding='utf-8') as stream: - stream.write('\n'.join(contents)) - - -def _indent_lines(lines, indent=' '): - out_lines = [] - for line in lines.splitlines(keepends=True): - out_lines.append(indent + line) - - return ''.join(out_lines) - - -if __name__ == '__main__': - - gen_debugger_protocol() diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/builtin_meta.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/builtin_meta.py deleted file mode 100644 index 63c7a1a31b31dd89b82011effee26471faccacf5..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/datasets/builtin_meta.py +++ /dev/null @@ -1,350 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -Note: -For your custom dataset, there is no need to hard-code metadata anywhere in the code. -For example, for COCO-format dataset, metadata will be obtained automatically -when calling `load_coco_json`. For other dataset, metadata may also be obtained in other ways -during loading. - -However, we hard-coded metadata for a few common dataset here. -The only goal is to allow users who don't have these dataset to use pre-trained models. -Users don't have to download a COCO json (which contains metadata), in order to visualize a -COCO model (with correct class names and colors). 
-""" - - -# All coco categories, together with their nice-looking visualization colors -# It's from https://github.com/cocodataset/panopticapi/blob/master/panoptic_coco_categories.json -COCO_CATEGORIES = [ - {"color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}, - {"color": [119, 11, 32], "isthing": 1, "id": 2, "name": "bicycle"}, - {"color": [0, 0, 142], "isthing": 1, "id": 3, "name": "car"}, - {"color": [0, 0, 230], "isthing": 1, "id": 4, "name": "motorcycle"}, - {"color": [106, 0, 228], "isthing": 1, "id": 5, "name": "airplane"}, - {"color": [0, 60, 100], "isthing": 1, "id": 6, "name": "bus"}, - {"color": [0, 80, 100], "isthing": 1, "id": 7, "name": "train"}, - {"color": [0, 0, 70], "isthing": 1, "id": 8, "name": "truck"}, - {"color": [0, 0, 192], "isthing": 1, "id": 9, "name": "boat"}, - {"color": [250, 170, 30], "isthing": 1, "id": 10, "name": "traffic light"}, - {"color": [100, 170, 30], "isthing": 1, "id": 11, "name": "fire hydrant"}, - {"color": [220, 220, 0], "isthing": 1, "id": 13, "name": "stop sign"}, - {"color": [175, 116, 175], "isthing": 1, "id": 14, "name": "parking meter"}, - {"color": [250, 0, 30], "isthing": 1, "id": 15, "name": "bench"}, - {"color": [165, 42, 42], "isthing": 1, "id": 16, "name": "bird"}, - {"color": [255, 77, 255], "isthing": 1, "id": 17, "name": "cat"}, - {"color": [0, 226, 252], "isthing": 1, "id": 18, "name": "dog"}, - {"color": [182, 182, 255], "isthing": 1, "id": 19, "name": "horse"}, - {"color": [0, 82, 0], "isthing": 1, "id": 20, "name": "sheep"}, - {"color": [120, 166, 157], "isthing": 1, "id": 21, "name": "cow"}, - {"color": [110, 76, 0], "isthing": 1, "id": 22, "name": "elephant"}, - {"color": [174, 57, 255], "isthing": 1, "id": 23, "name": "bear"}, - {"color": [199, 100, 0], "isthing": 1, "id": 24, "name": "zebra"}, - {"color": [72, 0, 118], "isthing": 1, "id": 25, "name": "giraffe"}, - {"color": [255, 179, 240], "isthing": 1, "id": 27, "name": "backpack"}, - {"color": [0, 125, 92], "isthing": 1, "id": 28, "name": "umbrella"}, - {"color": [209, 0, 151], "isthing": 1, "id": 31, "name": "handbag"}, - {"color": [188, 208, 182], "isthing": 1, "id": 32, "name": "tie"}, - {"color": [0, 220, 176], "isthing": 1, "id": 33, "name": "suitcase"}, - {"color": [255, 99, 164], "isthing": 1, "id": 34, "name": "frisbee"}, - {"color": [92, 0, 73], "isthing": 1, "id": 35, "name": "skis"}, - {"color": [133, 129, 255], "isthing": 1, "id": 36, "name": "snowboard"}, - {"color": [78, 180, 255], "isthing": 1, "id": 37, "name": "sports ball"}, - {"color": [0, 228, 0], "isthing": 1, "id": 38, "name": "kite"}, - {"color": [174, 255, 243], "isthing": 1, "id": 39, "name": "baseball bat"}, - {"color": [45, 89, 255], "isthing": 1, "id": 40, "name": "baseball glove"}, - {"color": [134, 134, 103], "isthing": 1, "id": 41, "name": "skateboard"}, - {"color": [145, 148, 174], "isthing": 1, "id": 42, "name": "surfboard"}, - {"color": [255, 208, 186], "isthing": 1, "id": 43, "name": "tennis racket"}, - {"color": [197, 226, 255], "isthing": 1, "id": 44, "name": "bottle"}, - {"color": [171, 134, 1], "isthing": 1, "id": 46, "name": "wine glass"}, - {"color": [109, 63, 54], "isthing": 1, "id": 47, "name": "cup"}, - {"color": [207, 138, 255], "isthing": 1, "id": 48, "name": "fork"}, - {"color": [151, 0, 95], "isthing": 1, "id": 49, "name": "knife"}, - {"color": [9, 80, 61], "isthing": 1, "id": 50, "name": "spoon"}, - {"color": [84, 105, 51], "isthing": 1, "id": 51, "name": "bowl"}, - {"color": [74, 65, 105], "isthing": 1, "id": 52, "name": "banana"}, - {"color": [166, 196, 
102], "isthing": 1, "id": 53, "name": "apple"}, - {"color": [208, 195, 210], "isthing": 1, "id": 54, "name": "sandwich"}, - {"color": [255, 109, 65], "isthing": 1, "id": 55, "name": "orange"}, - {"color": [0, 143, 149], "isthing": 1, "id": 56, "name": "broccoli"}, - {"color": [179, 0, 194], "isthing": 1, "id": 57, "name": "carrot"}, - {"color": [209, 99, 106], "isthing": 1, "id": 58, "name": "hot dog"}, - {"color": [5, 121, 0], "isthing": 1, "id": 59, "name": "pizza"}, - {"color": [227, 255, 205], "isthing": 1, "id": 60, "name": "donut"}, - {"color": [147, 186, 208], "isthing": 1, "id": 61, "name": "cake"}, - {"color": [153, 69, 1], "isthing": 1, "id": 62, "name": "chair"}, - {"color": [3, 95, 161], "isthing": 1, "id": 63, "name": "couch"}, - {"color": [163, 255, 0], "isthing": 1, "id": 64, "name": "potted plant"}, - {"color": [119, 0, 170], "isthing": 1, "id": 65, "name": "bed"}, - {"color": [0, 182, 199], "isthing": 1, "id": 67, "name": "dining table"}, - {"color": [0, 165, 120], "isthing": 1, "id": 70, "name": "toilet"}, - {"color": [183, 130, 88], "isthing": 1, "id": 72, "name": "tv"}, - {"color": [95, 32, 0], "isthing": 1, "id": 73, "name": "laptop"}, - {"color": [130, 114, 135], "isthing": 1, "id": 74, "name": "mouse"}, - {"color": [110, 129, 133], "isthing": 1, "id": 75, "name": "remote"}, - {"color": [166, 74, 118], "isthing": 1, "id": 76, "name": "keyboard"}, - {"color": [219, 142, 185], "isthing": 1, "id": 77, "name": "cell phone"}, - {"color": [79, 210, 114], "isthing": 1, "id": 78, "name": "microwave"}, - {"color": [178, 90, 62], "isthing": 1, "id": 79, "name": "oven"}, - {"color": [65, 70, 15], "isthing": 1, "id": 80, "name": "toaster"}, - {"color": [127, 167, 115], "isthing": 1, "id": 81, "name": "sink"}, - {"color": [59, 105, 106], "isthing": 1, "id": 82, "name": "refrigerator"}, - {"color": [142, 108, 45], "isthing": 1, "id": 84, "name": "book"}, - {"color": [196, 172, 0], "isthing": 1, "id": 85, "name": "clock"}, - {"color": [95, 54, 80], "isthing": 1, "id": 86, "name": "vase"}, - {"color": [128, 76, 255], "isthing": 1, "id": 87, "name": "scissors"}, - {"color": [201, 57, 1], "isthing": 1, "id": 88, "name": "teddy bear"}, - {"color": [246, 0, 122], "isthing": 1, "id": 89, "name": "hair drier"}, - {"color": [191, 162, 208], "isthing": 1, "id": 90, "name": "toothbrush"}, - {"color": [255, 255, 128], "isthing": 0, "id": 92, "name": "banner"}, - {"color": [147, 211, 203], "isthing": 0, "id": 93, "name": "blanket"}, - {"color": [150, 100, 100], "isthing": 0, "id": 95, "name": "bridge"}, - {"color": [168, 171, 172], "isthing": 0, "id": 100, "name": "cardboard"}, - {"color": [146, 112, 198], "isthing": 0, "id": 107, "name": "counter"}, - {"color": [210, 170, 100], "isthing": 0, "id": 109, "name": "curtain"}, - {"color": [92, 136, 89], "isthing": 0, "id": 112, "name": "door-stuff"}, - {"color": [218, 88, 184], "isthing": 0, "id": 118, "name": "floor-wood"}, - {"color": [241, 129, 0], "isthing": 0, "id": 119, "name": "flower"}, - {"color": [217, 17, 255], "isthing": 0, "id": 122, "name": "fruit"}, - {"color": [124, 74, 181], "isthing": 0, "id": 125, "name": "gravel"}, - {"color": [70, 70, 70], "isthing": 0, "id": 128, "name": "house"}, - {"color": [255, 228, 255], "isthing": 0, "id": 130, "name": "light"}, - {"color": [154, 208, 0], "isthing": 0, "id": 133, "name": "mirror-stuff"}, - {"color": [193, 0, 92], "isthing": 0, "id": 138, "name": "net"}, - {"color": [76, 91, 113], "isthing": 0, "id": 141, "name": "pillow"}, - {"color": [255, 180, 195], "isthing": 0, "id": 144, "name": 
"platform"}, - {"color": [106, 154, 176], "isthing": 0, "id": 145, "name": "playingfield"}, - {"color": [230, 150, 140], "isthing": 0, "id": 147, "name": "railroad"}, - {"color": [60, 143, 255], "isthing": 0, "id": 148, "name": "river"}, - {"color": [128, 64, 128], "isthing": 0, "id": 149, "name": "road"}, - {"color": [92, 82, 55], "isthing": 0, "id": 151, "name": "roof"}, - {"color": [254, 212, 124], "isthing": 0, "id": 154, "name": "sand"}, - {"color": [73, 77, 174], "isthing": 0, "id": 155, "name": "sea"}, - {"color": [255, 160, 98], "isthing": 0, "id": 156, "name": "shelf"}, - {"color": [255, 255, 255], "isthing": 0, "id": 159, "name": "snow"}, - {"color": [104, 84, 109], "isthing": 0, "id": 161, "name": "stairs"}, - {"color": [169, 164, 131], "isthing": 0, "id": 166, "name": "tent"}, - {"color": [225, 199, 255], "isthing": 0, "id": 168, "name": "towel"}, - {"color": [137, 54, 74], "isthing": 0, "id": 171, "name": "wall-brick"}, - {"color": [135, 158, 223], "isthing": 0, "id": 175, "name": "wall-stone"}, - {"color": [7, 246, 231], "isthing": 0, "id": 176, "name": "wall-tile"}, - {"color": [107, 255, 200], "isthing": 0, "id": 177, "name": "wall-wood"}, - {"color": [58, 41, 149], "isthing": 0, "id": 178, "name": "water-other"}, - {"color": [183, 121, 142], "isthing": 0, "id": 180, "name": "window-blind"}, - {"color": [255, 73, 97], "isthing": 0, "id": 181, "name": "window-other"}, - {"color": [107, 142, 35], "isthing": 0, "id": 184, "name": "tree-merged"}, - {"color": [190, 153, 153], "isthing": 0, "id": 185, "name": "fence-merged"}, - {"color": [146, 139, 141], "isthing": 0, "id": 186, "name": "ceiling-merged"}, - {"color": [70, 130, 180], "isthing": 0, "id": 187, "name": "sky-other-merged"}, - {"color": [134, 199, 156], "isthing": 0, "id": 188, "name": "cabinet-merged"}, - {"color": [209, 226, 140], "isthing": 0, "id": 189, "name": "table-merged"}, - {"color": [96, 36, 108], "isthing": 0, "id": 190, "name": "floor-other-merged"}, - {"color": [96, 96, 96], "isthing": 0, "id": 191, "name": "pavement-merged"}, - {"color": [64, 170, 64], "isthing": 0, "id": 192, "name": "mountain-merged"}, - {"color": [152, 251, 152], "isthing": 0, "id": 193, "name": "grass-merged"}, - {"color": [208, 229, 228], "isthing": 0, "id": 194, "name": "dirt-merged"}, - {"color": [206, 186, 171], "isthing": 0, "id": 195, "name": "paper-merged"}, - {"color": [152, 161, 64], "isthing": 0, "id": 196, "name": "food-other-merged"}, - {"color": [116, 112, 0], "isthing": 0, "id": 197, "name": "building-other-merged"}, - {"color": [0, 114, 143], "isthing": 0, "id": 198, "name": "rock-merged"}, - {"color": [102, 102, 156], "isthing": 0, "id": 199, "name": "wall-other-merged"}, - {"color": [250, 141, 255], "isthing": 0, "id": 200, "name": "rug-merged"}, -] - -# fmt: off -COCO_PERSON_KEYPOINT_NAMES = ( - "nose", - "left_eye", "right_eye", - "left_ear", "right_ear", - "left_shoulder", "right_shoulder", - "left_elbow", "right_elbow", - "left_wrist", "right_wrist", - "left_hip", "right_hip", - "left_knee", "right_knee", - "left_ankle", "right_ankle", -) -# fmt: on - -# Pairs of keypoints that should be exchanged under horizontal flipping -COCO_PERSON_KEYPOINT_FLIP_MAP = ( - ("left_eye", "right_eye"), - ("left_ear", "right_ear"), - ("left_shoulder", "right_shoulder"), - ("left_elbow", "right_elbow"), - ("left_wrist", "right_wrist"), - ("left_hip", "right_hip"), - ("left_knee", "right_knee"), - ("left_ankle", "right_ankle"), -) - -# rules for pairs of keypoints to draw a line between, and the line color to use. 
-KEYPOINT_CONNECTION_RULES = [ - # face - ("left_ear", "left_eye", (102, 204, 255)), - ("right_ear", "right_eye", (51, 153, 255)), - ("left_eye", "nose", (102, 0, 204)), - ("nose", "right_eye", (51, 102, 255)), - # upper-body - ("left_shoulder", "right_shoulder", (255, 128, 0)), - ("left_shoulder", "left_elbow", (153, 255, 204)), - ("right_shoulder", "right_elbow", (128, 229, 255)), - ("left_elbow", "left_wrist", (153, 255, 153)), - ("right_elbow", "right_wrist", (102, 255, 224)), - # lower-body - ("left_hip", "right_hip", (255, 102, 0)), - ("left_hip", "left_knee", (255, 255, 77)), - ("right_hip", "right_knee", (153, 255, 204)), - ("left_knee", "left_ankle", (191, 255, 128)), - ("right_knee", "right_ankle", (255, 195, 77)), -] - -# All Cityscapes categories, together with their nice-looking visualization colors -# It's from https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py # noqa -CITYSCAPES_CATEGORIES = [ - {"color": (128, 64, 128), "isthing": 0, "id": 7, "trainId": 0, "name": "road"}, - {"color": (244, 35, 232), "isthing": 0, "id": 8, "trainId": 1, "name": "sidewalk"}, - {"color": (70, 70, 70), "isthing": 0, "id": 11, "trainId": 2, "name": "building"}, - {"color": (102, 102, 156), "isthing": 0, "id": 12, "trainId": 3, "name": "wall"}, - {"color": (190, 153, 153), "isthing": 0, "id": 13, "trainId": 4, "name": "fence"}, - {"color": (153, 153, 153), "isthing": 0, "id": 17, "trainId": 5, "name": "pole"}, - {"color": (250, 170, 30), "isthing": 0, "id": 19, "trainId": 6, "name": "traffic light"}, - {"color": (220, 220, 0), "isthing": 0, "id": 20, "trainId": 7, "name": "traffic sign"}, - {"color": (107, 142, 35), "isthing": 0, "id": 21, "trainId": 8, "name": "vegetation"}, - {"color": (152, 251, 152), "isthing": 0, "id": 22, "trainId": 9, "name": "terrain"}, - {"color": (70, 130, 180), "isthing": 0, "id": 23, "trainId": 10, "name": "sky"}, - {"color": (220, 20, 60), "isthing": 1, "id": 24, "trainId": 11, "name": "person"}, - {"color": (255, 0, 0), "isthing": 1, "id": 25, "trainId": 12, "name": "rider"}, - {"color": (0, 0, 142), "isthing": 1, "id": 26, "trainId": 13, "name": "car"}, - {"color": (0, 0, 70), "isthing": 1, "id": 27, "trainId": 14, "name": "truck"}, - {"color": (0, 60, 100), "isthing": 1, "id": 28, "trainId": 15, "name": "bus"}, - {"color": (0, 80, 100), "isthing": 1, "id": 31, "trainId": 16, "name": "train"}, - {"color": (0, 0, 230), "isthing": 1, "id": 32, "trainId": 17, "name": "motorcycle"}, - {"color": (119, 11, 32), "isthing": 1, "id": 33, "trainId": 18, "name": "bicycle"}, -] - -# fmt: off -ADE20K_SEM_SEG_CATEGORIES = [ - "wall", "building", "sky", "floor", "tree", "ceiling", "road, route", "bed", "window ", "grass", "cabinet", "sidewalk, pavement", "person", "earth, ground", "door", "table", "mountain, mount", "plant", "curtain", "chair", "car", "water", "painting, picture", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock, stone", "wardrobe, closet, press", "lamp", "tub", "rail", "cushion", "base, pedestal, stand", "box", "column, pillar", "signboard, sign", "chest of drawers, chest, bureau, dresser", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator, icebox", "grandstand, covered stand", "path", "stairs", "runway", "case, display case, showcase, vitrine", "pool table, billiard table, snooker table", "pillow", "screen door, screen", "stairway, staircase", "river", "bridge, span", "bookcase", "blind, screen", "coffee table", "toilet, can, commode, crapper, pot, 
potty, stool, throne", "flower", "book", "hill", "bench", "countertop", "stove", "palm, palm tree", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel, hut, hutch, shack, shanty", "bus", "towel", "light", "truck", "tower", "chandelier", "awning, sunshade, sunblind", "street lamp", "booth", "tv", "plane", "dirt track", "clothes", "pole", "land, ground, soil", "bannister, banister, balustrade, balusters, handrail", "escalator, moving staircase, moving stairway", "ottoman, pouf, pouffe, puff, hassock", "bottle", "buffet, counter, sideboard", "poster, posting, placard, notice, bill, card", "stage", "van", "ship", "fountain", "conveyer belt, conveyor belt, conveyer, conveyor, transporter", "canopy", "washer, automatic washer, washing machine", "plaything, toy", "pool", "stool", "barrel, cask", "basket, handbasket", "falls", "tent", "bag", "minibike, motorbike", "cradle", "oven", "ball", "food, solid food", "step, stair", "tank, storage tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket, cover", "sculpture", "hood, exhaust hood", "sconce", "vase", "traffic light", "tray", "trash can", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass, drinking glass", "clock", "flag", # noqa -] -# After processed by `prepare_ade20k_sem_seg.py`, id 255 means ignore -# fmt: on - - -def _get_coco_instances_meta(): - thing_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1] - assert len(thing_ids) == 80, len(thing_ids) - # Mapping from the incontiguous COCO category id to an id in [0, 79] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - "thing_colors": thing_colors, - } - return ret - - -def _get_coco_panoptic_separated_meta(): - """ - Returns metadata for "separated" version of the panoptic segmentation dataset. - """ - stuff_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 0] - assert len(stuff_ids) == 53, len(stuff_ids) - - # For semantic segmentation, this mapping maps from contiguous stuff id - # (in [0, 53], used in models) to ids in the dataset (used for processing results) - # The id 0 is mapped to an extra category "thing". 
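The mapping built by ``_get_coco_instances_meta`` above exists because COCO category ids have gaps, while the classifier expects labels in ``[0, #classes)``. A minimal sketch of that idea, using a made-up three-category list rather than the full 80-category table:

```python
# Made-up sample; real COCO has 80 "thing" categories with gaps in the ids.
SAMPLE_CATEGORIES = [
    {"id": 1, "isthing": 1, "name": "person"},
    {"id": 3, "isthing": 1, "name": "car"},
    {"id": 13, "isthing": 1, "name": "stop sign"},
]

thing_ids = [k["id"] for k in SAMPLE_CATEGORIES if k["isthing"] == 1]
# Dataset ids 1, 3, 13 become contiguous model ids 0, 1, 2.
thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)}
assert thing_dataset_id_to_contiguous_id == {1: 0, 3: 1, 13: 2}
```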
- stuff_dataset_id_to_contiguous_id = {k: i + 1 for i, k in enumerate(stuff_ids)} - # When converting COCO panoptic annotations to semantic annotations - # We label the "thing" category to 0 - stuff_dataset_id_to_contiguous_id[0] = 0 - - # 54 names for COCO stuff categories (including "things") - stuff_classes = ["things"] + [ - k["name"].replace("-other", "").replace("-merged", "") - for k in COCO_CATEGORIES - if k["isthing"] == 0 - ] - - # NOTE: I randomly picked a color for things - stuff_colors = [[82, 18, 128]] + [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 0] - ret = { - "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id, - "stuff_classes": stuff_classes, - "stuff_colors": stuff_colors, - } - ret.update(_get_coco_instances_meta()) - return ret - - -def _get_builtin_metadata(dataset_name): - if dataset_name == "coco": - return _get_coco_instances_meta() - if dataset_name == "coco_panoptic_separated": - return _get_coco_panoptic_separated_meta() - elif dataset_name == "coco_panoptic_standard": - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in COCO_CATEGORIES] - thing_colors = [k["color"] for k in COCO_CATEGORIES] - stuff_classes = [k["name"] for k in COCO_CATEGORIES] - stuff_colors = [k["color"] for k in COCO_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. 
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(COCO_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - else: - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - elif dataset_name == "coco_person": - return { - "thing_classes": ["person"], - "keypoint_names": COCO_PERSON_KEYPOINT_NAMES, - "keypoint_flip_map": COCO_PERSON_KEYPOINT_FLIP_MAP, - "keypoint_connection_rules": KEYPOINT_CONNECTION_RULES, - } - elif dataset_name == "cityscapes": - # fmt: off - CITYSCAPES_THING_CLASSES = [ - "person", "rider", "car", "truck", - "bus", "train", "motorcycle", "bicycle", - ] - CITYSCAPES_STUFF_CLASSES = [ - "road", "sidewalk", "building", "wall", "fence", "pole", "traffic light", - "traffic sign", "vegetation", "terrain", "sky", "person", "rider", "car", - "truck", "bus", "train", "motorcycle", "bicycle", - ] - # fmt: on - return { - "thing_classes": CITYSCAPES_THING_CLASSES, - "stuff_classes": CITYSCAPES_STUFF_CLASSES, - } - raise KeyError("No built-in metadata for dataset {}".format(dataset_name)) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_cityscapes_panoptic.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_cityscapes_panoptic.py deleted file mode 100644 index 5f2c2a69e8c396b4b6fa8eb4125d76b9d1f3a101..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/data/datasets/register_cityscapes_panoptic.py +++ /dev/null @@ -1,199 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/datasets/cityscapes_panoptic.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import json -import logging -import os - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from annotator.oneformer.detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. 
-""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa - - - with open(gt_json) as f: - json_info = json.load(f) - - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" 
- assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. 
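The comment above describes how a panoptic id packs a category id and an instance id together using a divisor of 1000 (the same ``label_divisor`` registered in the metadata below). A tiny worked example of that encoding and its inverse, using the Cityscapes "car" id listed earlier:

```python
LABEL_DIVISOR = 1000  # matches label_divisor in the registered metadata


def encode_panoptic_id(category_id, instance_id):
    # panoptic id = category_id * 1000 + instance_id, as the comment states
    return category_id * LABEL_DIVISOR + instance_id


def decode_panoptic_id(panoptic_id):
    # returns (category_id, instance_id)
    return divmod(panoptic_id, LABEL_DIVISOR)


# Cityscapes id 26 is "car"; its third instance becomes 26003 and decodes back.
assert encode_panoptic_id(26, 3) == 26003
assert decode_panoptic_id(26003) == (26, 3)
```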
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - if key in DatasetCatalog.list(): - DatasetCatalog.remove(key) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_cityscapes_panoptic(_root) \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/scripts.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/scripts.py deleted file mode 100644 index d2706242b8aac125a66450d5ce8dcd3395336182..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/scripts.py +++ /dev/null @@ -1,437 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013-2015 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -from io import BytesIO -import logging -import os -import re -import struct -import sys -import time -from zipfile import ZipInfo - -from .compat import sysconfig, detect_encoding, ZipFile -from .resources import finder -from .util import (FileOperator, get_export_entry, convert_path, - get_executable, get_platform, in_venv) - -logger = logging.getLogger(__name__) - -_DEFAULT_MANIFEST = ''' - - - - - - - - - - - - -'''.strip() - -# check if Python is called on the first line with this expression -FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$') -SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*- -import re -import sys -from %(module)s import %(import_name)s -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(%(func)s()) -''' - - -def enquote_executable(executable): - if ' ' in executable: - # make sure we quote only the executable in case of env - # for example /usr/bin/env "/dir with spaces/bin/jython" - # instead of "/usr/bin/env /dir with spaces/bin/jython" - # otherwise whole - if executable.startswith('/usr/bin/env '): - env, _executable = executable.split(' ', 1) - if ' ' in _executable and not _executable.startswith('"'): - executable = '%s "%s"' % (env, _executable) - else: - if not executable.startswith('"'): - executable = '"%s"' % executable - return executable - -# Keep the old name around (for now), as there is at least one project using it! -_enquote_executable = enquote_executable - -class ScriptMaker(object): - """ - A class to copy or create scripts from source scripts or callable - specifications. 
- """ - script_template = SCRIPT_TEMPLATE - - executable = None # for shebangs - - def __init__(self, source_dir, target_dir, add_launchers=True, - dry_run=False, fileop=None): - self.source_dir = source_dir - self.target_dir = target_dir - self.add_launchers = add_launchers - self.force = False - self.clobber = False - # It only makes sense to set mode bits on POSIX. - self.set_mode = (os.name == 'posix') or (os.name == 'java' and - os._name == 'posix') - self.variants = set(('', 'X.Y')) - self._fileop = fileop or FileOperator(dry_run) - - self._is_nt = os.name == 'nt' or ( - os.name == 'java' and os._name == 'nt') - self.version_info = sys.version_info - - def _get_alternate_executable(self, executable, options): - if options.get('gui', False) and self._is_nt: # pragma: no cover - dn, fn = os.path.split(executable) - fn = fn.replace('python', 'pythonw') - executable = os.path.join(dn, fn) - return executable - - if sys.platform.startswith('java'): # pragma: no cover - def _is_shell(self, executable): - """ - Determine if the specified executable is a script - (contains a #! line) - """ - try: - with open(executable) as fp: - return fp.read(2) == '#!' - except (OSError, IOError): - logger.warning('Failed to open %s', executable) - return False - - def _fix_jython_executable(self, executable): - if self._is_shell(executable): - # Workaround for Jython is not needed on Linux systems. - import java - - if java.lang.System.getProperty('os.name') == 'Linux': - return executable - elif executable.lower().endswith('jython.exe'): - # Use wrapper exe for Jython on Windows - return executable - return '/usr/bin/env %s' % executable - - def _build_shebang(self, executable, post_interp): - """ - Build a shebang line. In the simple case (on Windows, or a shebang line - which is not too long or contains spaces) use a simple formulation for - the shebang. Otherwise, use /bin/sh as the executable, with a contrived - shebang which allows the script to run either under Python or sh, using - suitable quoting. Thanks to Harald Nordgren for his input. - - See also: http://www.in-ulm.de/~mascheck/various/shebang/#length - https://hg.mozilla.org/mozilla-central/file/tip/mach - """ - if os.name != 'posix': - simple_shebang = True - else: - # Add 3 for '#!' prefix and newline suffix. - shebang_length = len(executable) + len(post_interp) + 3 - if sys.platform == 'darwin': - max_shebang_length = 512 - else: - max_shebang_length = 127 - simple_shebang = ((b' ' not in executable) and - (shebang_length <= max_shebang_length)) - - if simple_shebang: - result = b'#!' 
+ executable + post_interp + b'\n' - else: - result = b'#!/bin/sh\n' - result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n' - result += b"' '''" - return result - - def _get_shebang(self, encoding, post_interp=b'', options=None): - enquote = True - if self.executable: - executable = self.executable - enquote = False # assume this will be taken care of - elif not sysconfig.is_python_build(): - executable = get_executable() - elif in_venv(): # pragma: no cover - executable = os.path.join(sysconfig.get_path('scripts'), - 'python%s' % sysconfig.get_config_var('EXE')) - else: # pragma: no cover - executable = os.path.join( - sysconfig.get_config_var('BINDIR'), - 'python%s%s' % (sysconfig.get_config_var('VERSION'), - sysconfig.get_config_var('EXE'))) - if not os.path.isfile(executable): - # for Python builds from source on Windows, no Python executables with - # a version suffix are created, so we use python.exe - executable = os.path.join(sysconfig.get_config_var('BINDIR'), - 'python%s' % (sysconfig.get_config_var('EXE'))) - if options: - executable = self._get_alternate_executable(executable, options) - - if sys.platform.startswith('java'): # pragma: no cover - executable = self._fix_jython_executable(executable) - - # Normalise case for Windows - COMMENTED OUT - # executable = os.path.normcase(executable) - # N.B. The normalising operation above has been commented out: See - # issue #124. Although paths in Windows are generally case-insensitive, - # they aren't always. For example, a path containing a ẞ (which is a - # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a - # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by - # Windows as equivalent in path names. - - # If the user didn't specify an executable, it may be necessary to - # cater for executable paths with spaces (not uncommon on Windows) - if enquote: - executable = enquote_executable(executable) - # Issue #51: don't use fsencode, since we later try to - # check that the shebang is decodable using utf-8. - executable = executable.encode('utf-8') - # in case of IronPython, play safe and enable frames support - if (sys.platform == 'cli' and '-X:Frames' not in post_interp - and '-X:FullFrames' not in post_interp): # pragma: no cover - post_interp += b' -X:Frames' - shebang = self._build_shebang(executable, post_interp) - # Python parser starts to read a script using UTF-8 until - # it gets a #coding:xxx cookie. The shebang has to be the - # first line of a file, the #coding:xxx cookie cannot be - # written before. So the shebang has to be decodable from - # UTF-8. - try: - shebang.decode('utf-8') - except UnicodeDecodeError: # pragma: no cover - raise ValueError( - 'The shebang (%r) is not decodable from utf-8' % shebang) - # If the script is encoded to a custom encoding (use a - # #coding:xxx cookie), the shebang has to be decodable from - # the script encoding too. 
- if encoding != 'utf-8': - try: - shebang.decode(encoding) - except UnicodeDecodeError: # pragma: no cover - raise ValueError( - 'The shebang (%r) is not decodable ' - 'from the script encoding (%r)' % (shebang, encoding)) - return shebang - - def _get_script_text(self, entry): - return self.script_template % dict(module=entry.prefix, - import_name=entry.suffix.split('.')[0], - func=entry.suffix) - - manifest = _DEFAULT_MANIFEST - - def get_manifest(self, exename): - base = os.path.basename(exename) - return self.manifest % base - - def _write_script(self, names, shebang, script_bytes, filenames, ext): - use_launcher = self.add_launchers and self._is_nt - linesep = os.linesep.encode('utf-8') - if not shebang.endswith(linesep): - shebang += linesep - if not use_launcher: - script_bytes = shebang + script_bytes - else: # pragma: no cover - if ext == 'py': - launcher = self._get_launcher('t') - else: - launcher = self._get_launcher('w') - stream = BytesIO() - with ZipFile(stream, 'w') as zf: - source_date_epoch = os.environ.get('SOURCE_DATE_EPOCH') - if source_date_epoch: - date_time = time.gmtime(int(source_date_epoch))[:6] - zinfo = ZipInfo(filename='__main__.py', date_time=date_time) - zf.writestr(zinfo, script_bytes) - else: - zf.writestr('__main__.py', script_bytes) - zip_data = stream.getvalue() - script_bytes = launcher + shebang + zip_data - for name in names: - outname = os.path.join(self.target_dir, name) - if use_launcher: # pragma: no cover - n, e = os.path.splitext(outname) - if e.startswith('.py'): - outname = n - outname = '%s.exe' % outname - try: - self._fileop.write_binary_file(outname, script_bytes) - except Exception: - # Failed writing an executable - it might be in use. - logger.warning('Failed to write executable - trying to ' - 'use .deleteme logic') - dfname = '%s.deleteme' % outname - if os.path.exists(dfname): - os.remove(dfname) # Not allowed to fail here - os.rename(outname, dfname) # nor here - self._fileop.write_binary_file(outname, script_bytes) - logger.debug('Able to replace executable using ' - '.deleteme logic') - try: - os.remove(dfname) - except Exception: - pass # still in use - ignore error - else: - if self._is_nt and not outname.endswith('.' 
+ ext): # pragma: no cover - outname = '%s.%s' % (outname, ext) - if os.path.exists(outname) and not self.clobber: - logger.warning('Skipping existing file %s', outname) - continue - self._fileop.write_binary_file(outname, script_bytes) - if self.set_mode: - self._fileop.set_executable_mode([outname]) - filenames.append(outname) - - variant_separator = '-' - - def get_script_filenames(self, name): - result = set() - if '' in self.variants: - result.add(name) - if 'X' in self.variants: - result.add('%s%s' % (name, self.version_info[0])) - if 'X.Y' in self.variants: - result.add('%s%s%s.%s' % (name, self.variant_separator, - self.version_info[0], self.version_info[1])) - return result - - def _make_script(self, entry, filenames, options=None): - post_interp = b'' - if options: - args = options.get('interpreter_args', []) - if args: - args = ' %s' % ' '.join(args) - post_interp = args.encode('utf-8') - shebang = self._get_shebang('utf-8', post_interp, options=options) - script = self._get_script_text(entry).encode('utf-8') - scriptnames = self.get_script_filenames(entry.name) - if options and options.get('gui', False): - ext = 'pyw' - else: - ext = 'py' - self._write_script(scriptnames, shebang, script, filenames, ext) - - def _copy_script(self, script, filenames): - adjust = False - script = os.path.join(self.source_dir, convert_path(script)) - outname = os.path.join(self.target_dir, os.path.basename(script)) - if not self.force and not self._fileop.newer(script, outname): - logger.debug('not copying %s (up-to-date)', script) - return - - # Always open the file, but ignore failures in dry-run mode -- - # that way, we'll get accurate feedback if we can read the - # script. - try: - f = open(script, 'rb') - except IOError: # pragma: no cover - if not self.dry_run: - raise - f = None - else: - first_line = f.readline() - if not first_line: # pragma: no cover - logger.warning('%s is an empty file (skipping)', script) - return - - match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n')) - if match: - adjust = True - post_interp = match.group(1) or b'' - - if not adjust: - if f: - f.close() - self._fileop.copy_file(script, outname) - if self.set_mode: - self._fileop.set_executable_mode([outname]) - filenames.append(outname) - else: - logger.info('copying and adjusting %s -> %s', script, - self.target_dir) - if not self._fileop.dry_run: - encoding, lines = detect_encoding(f.readline) - f.seek(0) - shebang = self._get_shebang(encoding, post_interp) - if b'pythonw' in first_line: # pragma: no cover - ext = 'pyw' - else: - ext = 'py' - n = os.path.basename(outname) - self._write_script([n], shebang, f.read(), filenames, ext) - if f: - f.close() - - @property - def dry_run(self): - return self._fileop.dry_run - - @dry_run.setter - def dry_run(self, value): - self._fileop.dry_run = value - - if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'): # pragma: no cover - # Executable launcher support. 
- # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/ - - def _get_launcher(self, kind): - if struct.calcsize('P') == 8: # 64-bit - bits = '64' - else: - bits = '32' - platform_suffix = '-arm' if get_platform() == 'win-arm64' else '' - name = '%s%s%s.exe' % (kind, bits, platform_suffix) - # Issue 31: don't hardcode an absolute package name, but - # determine it relative to the current package - distlib_package = __name__.rsplit('.', 1)[0] - resource = finder(distlib_package).find(name) - if not resource: - msg = ('Unable to find resource %s in package %s' % (name, - distlib_package)) - raise ValueError(msg) - return resource.bytes - - # Public API follows - - def make(self, specification, options=None): - """ - Make a script. - - :param specification: The specification, which is either a valid export - entry specification (to make a script from a - callable) or a filename (to make a script by - copying from a source location). - :param options: A dictionary of options controlling script generation. - :return: A list of all absolute pathnames written to. - """ - filenames = [] - entry = get_export_entry(specification) - if entry is None: - self._copy_script(specification, filenames) - else: - self._make_script(entry, filenames, options=options) - return filenames - - def make_multiple(self, specifications, options=None): - """ - Take a list of specifications and make scripts from them, - :param specifications: A list of specifications. - :return: A list of all absolute pathnames written to, - """ - filenames = [] - for specification in specifications: - filenames.extend(self.make(specification, options)) - return filenames diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/typing_extensions.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/typing_extensions.py deleted file mode 100644 index ef42417c208e93c55d704728d3e88dfe46250d92..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/typing_extensions.py +++ /dev/null @@ -1,2209 +0,0 @@ -import abc -import collections -import collections.abc -import functools -import operator -import sys -import types as _types -import typing - - -__all__ = [ - # Super-special typing primitives. - 'Any', - 'ClassVar', - 'Concatenate', - 'Final', - 'LiteralString', - 'ParamSpec', - 'ParamSpecArgs', - 'ParamSpecKwargs', - 'Self', - 'Type', - 'TypeVar', - 'TypeVarTuple', - 'Unpack', - - # ABCs (from collections.abc). - 'Awaitable', - 'AsyncIterator', - 'AsyncIterable', - 'Coroutine', - 'AsyncGenerator', - 'AsyncContextManager', - 'ChainMap', - - # Concrete collection types. - 'ContextManager', - 'Counter', - 'Deque', - 'DefaultDict', - 'NamedTuple', - 'OrderedDict', - 'TypedDict', - - # Structural checks, a.k.a. protocols. - 'SupportsIndex', - - # One-off things. - 'Annotated', - 'assert_never', - 'assert_type', - 'clear_overloads', - 'dataclass_transform', - 'get_overloads', - 'final', - 'get_args', - 'get_origin', - 'get_type_hints', - 'IntVar', - 'is_typeddict', - 'Literal', - 'NewType', - 'overload', - 'override', - 'Protocol', - 'reveal_type', - 'runtime', - 'runtime_checkable', - 'Text', - 'TypeAlias', - 'TypeGuard', - 'TYPE_CHECKING', - 'Never', - 'NoReturn', - 'Required', - 'NotRequired', -] - -# for backward compatibility -PEP_560 = True -GenericMeta = type - -# The functions below are modified copies of typing internal helpers. 
-# They are needed by _ProtocolMeta and they provide support for PEP 646. - -_marker = object() - - -def _check_generic(cls, parameters, elen=_marker): - """Check correct count for parameters of a generic cls (internal helper). - This gives a nice error message in case of count mismatch. - """ - if not elen: - raise TypeError(f"{cls} is not a generic class") - if elen is _marker: - if not hasattr(cls, "__parameters__") or not cls.__parameters__: - raise TypeError(f"{cls} is not a generic class") - elen = len(cls.__parameters__) - alen = len(parameters) - if alen != elen: - if hasattr(cls, "__parameters__"): - parameters = [p for p in cls.__parameters__ if not _is_unpack(p)] - num_tv_tuples = sum(isinstance(p, TypeVarTuple) for p in parameters) - if (num_tv_tuples > 0) and (alen >= elen - num_tv_tuples): - return - raise TypeError(f"Too {'many' if alen > elen else 'few'} parameters for {cls};" - f" actual {alen}, expected {elen}") - - -if sys.version_info >= (3, 10): - def _should_collect_from_parameters(t): - return isinstance( - t, (typing._GenericAlias, _types.GenericAlias, _types.UnionType) - ) -elif sys.version_info >= (3, 9): - def _should_collect_from_parameters(t): - return isinstance(t, (typing._GenericAlias, _types.GenericAlias)) -else: - def _should_collect_from_parameters(t): - return isinstance(t, typing._GenericAlias) and not t._special - - -def _collect_type_vars(types, typevar_types=None): - """Collect all type variable contained in types in order of - first appearance (lexicographic order). For example:: - - _collect_type_vars((T, List[S, T])) == (T, S) - """ - if typevar_types is None: - typevar_types = typing.TypeVar - tvars = [] - for t in types: - if ( - isinstance(t, typevar_types) and - t not in tvars and - not _is_unpack(t) - ): - tvars.append(t) - if _should_collect_from_parameters(t): - tvars.extend([t for t in t.__parameters__ if t not in tvars]) - return tuple(tvars) - - -NoReturn = typing.NoReturn - -# Some unconstrained type variables. These are used by the container types. -# (These are not for export.) -T = typing.TypeVar('T') # Any type. -KT = typing.TypeVar('KT') # Key type. -VT = typing.TypeVar('VT') # Value type. -T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers. -T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant. - - -if sys.version_info >= (3, 11): - from typing import Any -else: - - class _AnyMeta(type): - def __instancecheck__(self, obj): - if self is Any: - raise TypeError("typing_extensions.Any cannot be used with isinstance()") - return super().__instancecheck__(obj) - - def __repr__(self): - if self is Any: - return "typing_extensions.Any" - return super().__repr__() - - class Any(metaclass=_AnyMeta): - """Special type indicating an unconstrained type. - - Any is compatible with every type. - - Any assumed to have all methods. - - All values assumed to be instances of Any. - Note that all the above statements are true from the point of view of - static type checkers. At runtime, Any should not be used with instance - checks. - """ - def __new__(cls, *args, **kwargs): - if cls is Any: - raise TypeError("Any cannot be instantiated") - return super().__new__(cls, *args, **kwargs) - - -ClassVar = typing.ClassVar - -# On older versions of typing there is an internal class named "Final". 
-# 3.8+ -if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7): - Final = typing.Final -# 3.7 -else: - class _FinalForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Final = _FinalForm('Final', - doc="""A special typing construct to indicate that a name - cannot be re-assigned or overridden in a subclass. - For example: - - MAX_SIZE: Final = 9000 - MAX_SIZE += 1 # Error reported by type checker - - class Connection: - TIMEOUT: Final[int] = 10 - class FastConnector(Connection): - TIMEOUT = 1 # Error reported by type checker - - There is no runtime checking of these properties.""") - -if sys.version_info >= (3, 11): - final = typing.final -else: - # @final exists in 3.8+, but we backport it for all versions - # before 3.11 to keep support for the __final__ attribute. - # See https://bugs.python.org/issue46342 - def final(f): - """This decorator can be used to indicate to type checkers that - the decorated method cannot be overridden, and decorated class - cannot be subclassed. For example: - - class Base: - @final - def done(self) -> None: - ... - class Sub(Base): - def done(self) -> None: # Error reported by type checker - ... - @final - class Leaf: - ... - class Other(Leaf): # Error reported by type checker - ... - - There is no runtime checking of these properties. The decorator - sets the ``__final__`` attribute to ``True`` on the decorated object - to allow runtime introspection. - """ - try: - f.__final__ = True - except (AttributeError, TypeError): - # Skip the attribute silently if it is not writable. - # AttributeError happens if the object has __slots__ or a - # read-only property, TypeError if it's a builtin class. - pass - return f - - -def IntVar(name): - return typing.TypeVar(name) - - -# 3.8+: -if hasattr(typing, 'Literal'): - Literal = typing.Literal -# 3.7: -else: - class _LiteralForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - return typing._GenericAlias(self, parameters) - - Literal = _LiteralForm('Literal', - doc="""A type that can be used to indicate to type checkers - that the corresponding value has a value literally equivalent - to the provided parameter. For example: - - var: Literal[4] = 4 - - The type checker understands that 'var' is literally equal to - the value 4 and no other value. - - Literal[...] cannot be subclassed. There is no runtime - checking verifying that the parameter is actually a value - instead of a type.""") - - -_overload_dummy = typing._overload_dummy # noqa - - -if hasattr(typing, "get_overloads"): # 3.11+ - overload = typing.overload - get_overloads = typing.get_overloads - clear_overloads = typing.clear_overloads -else: - # {module: {qualname: {firstlineno: func}}} - _overload_registry = collections.defaultdict( - functools.partial(collections.defaultdict, dict) - ) - - def overload(func): - """Decorator for overloaded functions/methods. - - In a stub file, place two or more stub definitions for the same - function in a row, each decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... - @overload - def utf8(value: str) -> bytes: ... - - In a non-stub file (i.e. 
a regular .py file), do the same but - follow it with an implementation. The implementation should *not* - be decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... - @overload - def utf8(value: str) -> bytes: ... - def utf8(value): - # implementation goes here - - The overloads for a function can be retrieved at runtime using the - get_overloads() function. - """ - # classmethod and staticmethod - f = getattr(func, "__func__", func) - try: - _overload_registry[f.__module__][f.__qualname__][ - f.__code__.co_firstlineno - ] = func - except AttributeError: - # Not a normal function; ignore. - pass - return _overload_dummy - - def get_overloads(func): - """Return all defined overloads for *func* as a sequence.""" - # classmethod and staticmethod - f = getattr(func, "__func__", func) - if f.__module__ not in _overload_registry: - return [] - mod_dict = _overload_registry[f.__module__] - if f.__qualname__ not in mod_dict: - return [] - return list(mod_dict[f.__qualname__].values()) - - def clear_overloads(): - """Clear all overloads in the registry.""" - _overload_registry.clear() - - -# This is not a real generic class. Don't use outside annotations. -Type = typing.Type - -# Various ABCs mimicking those in collections.abc. -# A few are simply re-exported for completeness. - - -Awaitable = typing.Awaitable -Coroutine = typing.Coroutine -AsyncIterable = typing.AsyncIterable -AsyncIterator = typing.AsyncIterator -Deque = typing.Deque -ContextManager = typing.ContextManager -AsyncContextManager = typing.AsyncContextManager -DefaultDict = typing.DefaultDict - -# 3.7.2+ -if hasattr(typing, 'OrderedDict'): - OrderedDict = typing.OrderedDict -# 3.7.0-3.7.2 -else: - OrderedDict = typing._alias(collections.OrderedDict, (KT, VT)) - -Counter = typing.Counter -ChainMap = typing.ChainMap -AsyncGenerator = typing.AsyncGenerator -NewType = typing.NewType -Text = typing.Text -TYPE_CHECKING = typing.TYPE_CHECKING - - -_PROTO_WHITELIST = ['Callable', 'Awaitable', - 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator', - 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible', - 'ContextManager', 'AsyncContextManager'] - - -def _get_protocol_attrs(cls): - attrs = set() - for base in cls.__mro__[:-1]: # without object - if base.__name__ in ('Protocol', 'Generic'): - continue - annotations = getattr(base, '__annotations__', {}) - for attr in list(base.__dict__.keys()) + list(annotations.keys()): - if (not attr.startswith('_abc_') and attr not in ( - '__abstractmethods__', '__annotations__', '__weakref__', - '_is_protocol', '_is_runtime_protocol', '__dict__', - '__args__', '__slots__', - '__next_in_mro__', '__parameters__', '__origin__', - '__orig_bases__', '__extra__', '__tree_hash__', - '__doc__', '__subclasshook__', '__init__', '__new__', - '__module__', '_MutableMapping__marker', '_gorg')): - attrs.add(attr) - return attrs - - -def _is_callable_members_only(cls): - return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls)) - - -def _maybe_adjust_parameters(cls): - """Helper function used in Protocol.__init_subclass__ and _TypedDictMeta.__new__. - - The contents of this function are very similar - to logic found in typing.Generic.__init_subclass__ - on the CPython main branch. - """ - tvars = [] - if '__orig_bases__' in cls.__dict__: - tvars = typing._collect_type_vars(cls.__orig_bases__) - # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn]. - # If found, tvars must be a subset of it. 
- # If not found, tvars is it. - # Also check for and reject plain Generic, - # and reject multiple Generic[...] and/or Protocol[...]. - gvars = None - for base in cls.__orig_bases__: - if (isinstance(base, typing._GenericAlias) and - base.__origin__ in (typing.Generic, Protocol)): - # for error messages - the_base = base.__origin__.__name__ - if gvars is not None: - raise TypeError( - "Cannot inherit from Generic[...]" - " and/or Protocol[...] multiple types.") - gvars = base.__parameters__ - if gvars is None: - gvars = tvars - else: - tvarset = set(tvars) - gvarset = set(gvars) - if not tvarset <= gvarset: - s_vars = ', '.join(str(t) for t in tvars if t not in gvarset) - s_args = ', '.join(str(g) for g in gvars) - raise TypeError(f"Some type variables ({s_vars}) are" - f" not listed in {the_base}[{s_args}]") - tvars = gvars - cls.__parameters__ = tuple(tvars) - - -# 3.8+ -if hasattr(typing, 'Protocol'): - Protocol = typing.Protocol -# 3.7 -else: - - def _no_init(self, *args, **kwargs): - if type(self)._is_protocol: - raise TypeError('Protocols cannot be instantiated') - - class _ProtocolMeta(abc.ABCMeta): # noqa: B024 - # This metaclass is a bit unfortunate and exists only because of the lack - # of __instancehook__. - def __instancecheck__(cls, instance): - # We need this method for situations where attributes are - # assigned in __init__. - if ((not getattr(cls, '_is_protocol', False) or - _is_callable_members_only(cls)) and - issubclass(instance.__class__, cls)): - return True - if cls._is_protocol: - if all(hasattr(instance, attr) and - (not callable(getattr(cls, attr, None)) or - getattr(instance, attr) is not None) - for attr in _get_protocol_attrs(cls)): - return True - return super().__instancecheck__(instance) - - class Protocol(metaclass=_ProtocolMeta): - # There is quite a lot of overlapping code with typing.Generic. - # Unfortunately it is hard to avoid this while these live in two different - # modules. The duplicated code will be removed when Protocol is moved to typing. - """Base class for protocol classes. Protocol classes are defined as:: - - class Proto(Protocol): - def meth(self) -> int: - ... - - Such classes are primarily used with static type checkers that recognize - structural subtyping (static duck-typing), for example:: - - class C: - def meth(self) -> int: - return 0 - - def func(x: Proto) -> int: - return x.meth() - - func(C()) # Passes static type check - - See PEP 544 for details. Protocol classes decorated with - @typing_extensions.runtime act as simple-minded runtime protocol that checks - only the presence of given attributes, ignoring their type signatures. - - Protocol classes can be generic, they are defined as:: - - class GenProto(Protocol[T]): - def meth(self) -> T: - ... - """ - __slots__ = () - _is_protocol = True - - def __new__(cls, *args, **kwds): - if cls is Protocol: - raise TypeError("Type Protocol cannot be instantiated; " - "it can only be used as a base class") - return super().__new__(cls) - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple): - params = (params,) - if not params and cls is not typing.Tuple: - raise TypeError( - f"Parameter list to {cls.__qualname__}[...] cannot be empty") - msg = "Parameters to generic types must be types." - params = tuple(typing._type_check(p, msg) for p in params) # noqa - if cls is Protocol: - # Generic can only be subscripted with unique type variables. 
- if not all(isinstance(p, typing.TypeVar) for p in params): - i = 0 - while isinstance(params[i], typing.TypeVar): - i += 1 - raise TypeError( - "Parameters to Protocol[...] must all be type variables." - f" Parameter {i + 1} is {params[i]}") - if len(set(params)) != len(params): - raise TypeError( - "Parameters to Protocol[...] must all be unique") - else: - # Subscripting a regular Generic subclass. - _check_generic(cls, params, len(cls.__parameters__)) - return typing._GenericAlias(cls, params) - - def __init_subclass__(cls, *args, **kwargs): - if '__orig_bases__' in cls.__dict__: - error = typing.Generic in cls.__orig_bases__ - else: - error = typing.Generic in cls.__bases__ - if error: - raise TypeError("Cannot inherit from plain Generic") - _maybe_adjust_parameters(cls) - - # Determine if this is a protocol or a concrete subclass. - if not cls.__dict__.get('_is_protocol', None): - cls._is_protocol = any(b is Protocol for b in cls.__bases__) - - # Set (or override) the protocol subclass hook. - def _proto_hook(other): - if not cls.__dict__.get('_is_protocol', None): - return NotImplemented - if not getattr(cls, '_is_runtime_protocol', False): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Instance and class checks can only be used with" - " @runtime protocols") - if not _is_callable_members_only(cls): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Protocols with non-method members" - " don't support issubclass()") - if not isinstance(other, type): - # Same error as for issubclass(1, int) - raise TypeError('issubclass() arg 1 must be a class') - for attr in _get_protocol_attrs(cls): - for base in other.__mro__: - if attr in base.__dict__: - if base.__dict__[attr] is None: - return NotImplemented - break - annotations = getattr(base, '__annotations__', {}) - if (isinstance(annotations, typing.Mapping) and - attr in annotations and - isinstance(other, _ProtocolMeta) and - other._is_protocol): - break - else: - return NotImplemented - return True - if '__subclasshook__' not in cls.__dict__: - cls.__subclasshook__ = _proto_hook - - # We have nothing more to do for non-protocols. - if not cls._is_protocol: - return - - # Check consistency of bases. - for base in cls.__bases__: - if not (base in (object, typing.Generic) or - base.__module__ == 'collections.abc' and - base.__name__ in _PROTO_WHITELIST or - isinstance(base, _ProtocolMeta) and base._is_protocol): - raise TypeError('Protocols can only inherit from other' - f' protocols, got {repr(base)}') - cls.__init__ = _no_init - - -# 3.8+ -if hasattr(typing, 'runtime_checkable'): - runtime_checkable = typing.runtime_checkable -# 3.7 -else: - def runtime_checkable(cls): - """Mark a protocol class as a runtime protocol, so that it - can be used with isinstance() and issubclass(). Raise TypeError - if applied to a non-protocol class. - - This allows a simple-minded structural check very similar to the - one-offs in collections.abc such as Hashable. - """ - if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol: - raise TypeError('@runtime_checkable can be only applied to protocol classes,' - f' got {cls!r}') - cls._is_runtime_protocol = True - return cls - - -# Exists for backwards compatibility. 
-runtime = runtime_checkable - - -# 3.8+ -if hasattr(typing, 'SupportsIndex'): - SupportsIndex = typing.SupportsIndex -# 3.7 -else: - @runtime_checkable - class SupportsIndex(Protocol): - __slots__ = () - - @abc.abstractmethod - def __index__(self) -> int: - pass - - -if hasattr(typing, "Required"): - # The standard library TypedDict in Python 3.8 does not store runtime information - # about which (if any) keys are optional. See https://bugs.python.org/issue38834 - # The standard library TypedDict in Python 3.9.0/1 does not honour the "total" - # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059 - # The standard library TypedDict below Python 3.11 does not store runtime - # information about optional and required keys when using Required or NotRequired. - # Generic TypedDicts are also impossible using typing.TypedDict on Python <3.11. - TypedDict = typing.TypedDict - _TypedDictMeta = typing._TypedDictMeta - is_typeddict = typing.is_typeddict -else: - def _check_fails(cls, other): - try: - if sys._getframe(1).f_globals['__name__'] not in ['abc', - 'functools', - 'typing']: - # Typed dicts are only for static structural subtyping. - raise TypeError('TypedDict does not support instance and class checks') - except (AttributeError, ValueError): - pass - return False - - def _dict_new(*args, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - return dict(*args, **kwargs) - - _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)' - - def _typeddict_new(*args, total=True, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - if args: - typename, args = args[0], args[1:] # allow the "_typename" keyword be passed - elif '_typename' in kwargs: - typename = kwargs.pop('_typename') - import warnings - warnings.warn("Passing '_typename' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - raise TypeError("TypedDict.__new__() missing 1 required positional " - "argument: '_typename'") - if args: - try: - fields, = args # allow the "_fields" keyword be passed - except ValueError: - raise TypeError('TypedDict.__new__() takes from 2 to 3 ' - f'positional arguments but {len(args) + 2} ' - 'were given') - elif '_fields' in kwargs and len(kwargs) == 1: - fields = kwargs.pop('_fields') - import warnings - warnings.warn("Passing '_fields' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - fields = None - - if fields is None: - fields = kwargs - elif kwargs: - raise TypeError("TypedDict takes either a dict or keyword arguments," - " but not both") - - ns = {'__annotations__': dict(fields)} - try: - # Setting correct module is necessary to make typed dict classes pickleable. - ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - pass - - return _TypedDictMeta(typename, (), ns, total=total) - - _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,' - ' /, *, total=True, **kwargs)') - - class _TypedDictMeta(type): - def __init__(cls, name, bases, ns, total=True): - super().__init__(name, bases, ns) - - def __new__(cls, name, bases, ns, total=True): - # Create new typed dict class object. - # This method is called directly when TypedDict is subclassed, - # or via _typeddict_new when TypedDict is instantiated. 
This way - # TypedDict supports all three syntaxes described in its docstring. - # Subclasses and instances of TypedDict return actual dictionaries - # via _dict_new. - ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new - # Don't insert typing.Generic into __bases__ here, - # or Generic.__init_subclass__ will raise TypeError - # in the super().__new__() call. - # Instead, monkey-patch __bases__ onto the class after it's been created. - tp_dict = super().__new__(cls, name, (dict,), ns) - - if any(issubclass(base, typing.Generic) for base in bases): - tp_dict.__bases__ = (typing.Generic, dict) - _maybe_adjust_parameters(tp_dict) - - annotations = {} - own_annotations = ns.get('__annotations__', {}) - msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type" - own_annotations = { - n: typing._type_check(tp, msg) for n, tp in own_annotations.items() - } - required_keys = set() - optional_keys = set() - - for base in bases: - annotations.update(base.__dict__.get('__annotations__', {})) - required_keys.update(base.__dict__.get('__required_keys__', ())) - optional_keys.update(base.__dict__.get('__optional_keys__', ())) - - annotations.update(own_annotations) - for annotation_key, annotation_type in own_annotations.items(): - annotation_origin = get_origin(annotation_type) - if annotation_origin is Annotated: - annotation_args = get_args(annotation_type) - if annotation_args: - annotation_type = annotation_args[0] - annotation_origin = get_origin(annotation_type) - - if annotation_origin is Required: - required_keys.add(annotation_key) - elif annotation_origin is NotRequired: - optional_keys.add(annotation_key) - elif total: - required_keys.add(annotation_key) - else: - optional_keys.add(annotation_key) - - tp_dict.__annotations__ = annotations - tp_dict.__required_keys__ = frozenset(required_keys) - tp_dict.__optional_keys__ = frozenset(optional_keys) - if not hasattr(tp_dict, '__total__'): - tp_dict.__total__ = total - return tp_dict - - __instancecheck__ = __subclasscheck__ = _check_fails - - TypedDict = _TypedDictMeta('TypedDict', (dict,), {}) - TypedDict.__module__ = __name__ - TypedDict.__doc__ = \ - """A simple typed name space. At runtime it is equivalent to a plain dict. - - TypedDict creates a dictionary type that expects all of its - instances to have a certain set of keys, with each key - associated with a value of a consistent type. This expectation - is not checked at runtime but is only enforced by type checkers. - Usage:: - - class Point2D(TypedDict): - x: int - y: int - label: str - - a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK - b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check - - assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first') - - The type info can be accessed via the Point2D.__annotations__ dict, and - the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets. 
- TypedDict supports two additional equivalent forms:: - - Point2D = TypedDict('Point2D', x=int, y=int, label=str) - Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str}) - - The class syntax is only supported in Python 3.6+, while two other - syntax forms work for Python 2.7 and 3.2+ - """ - - if hasattr(typing, "_TypedDictMeta"): - _TYPEDDICT_TYPES = (typing._TypedDictMeta, _TypedDictMeta) - else: - _TYPEDDICT_TYPES = (_TypedDictMeta,) - - def is_typeddict(tp): - """Check if an annotation is a TypedDict class - - For example:: - class Film(TypedDict): - title: str - year: int - - is_typeddict(Film) # => True - is_typeddict(Union[list, str]) # => False - """ - return isinstance(tp, tuple(_TYPEDDICT_TYPES)) - - -if hasattr(typing, "assert_type"): - assert_type = typing.assert_type - -else: - def assert_type(__val, __typ): - """Assert (to the type checker) that the value is of the given type. - - When the type checker encounters a call to assert_type(), it - emits an error if the value is not of the specified type:: - - def greet(name: str) -> None: - assert_type(name, str) # ok - assert_type(name, int) # type checker error - - At runtime this returns the first argument unchanged and otherwise - does nothing. - """ - return __val - - -if hasattr(typing, "Required"): - get_type_hints = typing.get_type_hints -else: - import functools - import types - - # replaces _strip_annotations() - def _strip_extras(t): - """Strips Annotated, Required and NotRequired from a given type.""" - if isinstance(t, _AnnotatedAlias): - return _strip_extras(t.__origin__) - if hasattr(t, "__origin__") and t.__origin__ in (Required, NotRequired): - return _strip_extras(t.__args__[0]) - if isinstance(t, typing._GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return t.copy_with(stripped_args) - if hasattr(types, "GenericAlias") and isinstance(t, types.GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return types.GenericAlias(t.__origin__, stripped_args) - if hasattr(types, "UnionType") and isinstance(t, types.UnionType): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return functools.reduce(operator.or_, stripped_args) - - return t - - def get_type_hints(obj, globalns=None, localns=None, include_extras=False): - """Return type hints for an object. - - This is often the same as obj.__annotations__, but it handles - forward references encoded as string literals, adds Optional[t] if a - default value equal to None is set and recursively replaces all - 'Annotated[T, ...]', 'Required[T]' or 'NotRequired[T]' with 'T' - (unless 'include_extras=True'). - - The argument may be a module, class, method, or function. The annotations - are returned as a dictionary. For classes, annotations include also - inherited members. - - TypeError is raised if the argument is not of a type that can contain - annotations, and an empty dictionary is returned if no annotations are - present. - - BEWARE -- the behavior of globalns and localns is counterintuitive - (unless you are familiar with how eval() and exec() work). The - search order is locals first, then globals. - - - If no dict arguments are passed, an attempt is made to use the - globals from obj (or the respective module's globals for classes), - and these are also used as the locals. If the object does not appear - to have globals, an empty dictionary is used. 
- - - If one dict argument is passed, it is used for both globals and - locals. - - - If two dict arguments are passed, they specify globals and - locals, respectively. - """ - if hasattr(typing, "Annotated"): - hint = typing.get_type_hints( - obj, globalns=globalns, localns=localns, include_extras=True - ) - else: - hint = typing.get_type_hints(obj, globalns=globalns, localns=localns) - if include_extras: - return hint - return {k: _strip_extras(t) for k, t in hint.items()} - - -# Python 3.9+ has PEP 593 (Annotated) -if hasattr(typing, 'Annotated'): - Annotated = typing.Annotated - # Not exported and not a public API, but needed for get_origin() and get_args() - # to work. - _AnnotatedAlias = typing._AnnotatedAlias -# 3.7-3.8 -else: - class _AnnotatedAlias(typing._GenericAlias, _root=True): - """Runtime representation of an annotated type. - - At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't' - with extra annotations. The alias behaves like a normal typing alias, - instantiating is the same as instantiating the underlying type, binding - it to types is also the same. - """ - def __init__(self, origin, metadata): - if isinstance(origin, _AnnotatedAlias): - metadata = origin.__metadata__ + metadata - origin = origin.__origin__ - super().__init__(origin, origin) - self.__metadata__ = metadata - - def copy_with(self, params): - assert len(params) == 1 - new_type = params[0] - return _AnnotatedAlias(new_type, self.__metadata__) - - def __repr__(self): - return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, " - f"{', '.join(repr(a) for a in self.__metadata__)}]") - - def __reduce__(self): - return operator.getitem, ( - Annotated, (self.__origin__,) + self.__metadata__ - ) - - def __eq__(self, other): - if not isinstance(other, _AnnotatedAlias): - return NotImplemented - if self.__origin__ != other.__origin__: - return False - return self.__metadata__ == other.__metadata__ - - def __hash__(self): - return hash((self.__origin__, self.__metadata__)) - - class Annotated: - """Add context specific metadata to a type. - - Example: Annotated[int, runtime_check.Unsigned] indicates to the - hypothetical runtime_check module that this type is an unsigned int. - Every other consumer of this type can ignore this metadata and treat - this type as int. - - The first argument to Annotated must be a valid type (and will be in - the __origin__ field), the remaining arguments are kept as a tuple in - the __extra__ field. - - Details: - - - It's an error to call `Annotated` with less than two arguments. - - Nested Annotated are flattened:: - - Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3] - - - Instantiating an annotated type is equivalent to instantiating the - underlying type:: - - Annotated[C, Ann1](5) == C(5) - - - Annotated can be used as a generic type alias:: - - Optimized = Annotated[T, runtime.Optimize()] - Optimized[int] == Annotated[int, runtime.Optimize()] - - OptimizedList = Annotated[List[T], runtime.Optimize()] - OptimizedList[int] == Annotated[List[int], runtime.Optimize()] - """ - - __slots__ = () - - def __new__(cls, *args, **kwargs): - raise TypeError("Type Annotated cannot be instantiated.") - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple) or len(params) < 2: - raise TypeError("Annotated[...] 
should be used " - "with at least two arguments (a type and an " - "annotation).") - allowed_special_forms = (ClassVar, Final) - if get_origin(params[0]) in allowed_special_forms: - origin = params[0] - else: - msg = "Annotated[t, ...]: t must be a type." - origin = typing._type_check(params[0], msg) - metadata = tuple(params[1:]) - return _AnnotatedAlias(origin, metadata) - - def __init_subclass__(cls, *args, **kwargs): - raise TypeError( - f"Cannot subclass {cls.__module__}.Annotated" - ) - -# Python 3.8 has get_origin() and get_args() but those implementations aren't -# Annotated-aware, so we can't use those. Python 3.9's versions don't support -# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do. -if sys.version_info[:2] >= (3, 10): - get_origin = typing.get_origin - get_args = typing.get_args -# 3.7-3.9 -else: - try: - # 3.9+ - from typing import _BaseGenericAlias - except ImportError: - _BaseGenericAlias = typing._GenericAlias - try: - # 3.9+ - from typing import GenericAlias as _typing_GenericAlias - except ImportError: - _typing_GenericAlias = typing._GenericAlias - - def get_origin(tp): - """Get the unsubscripted version of a type. - - This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar - and Annotated. Return None for unsupported types. Examples:: - - get_origin(Literal[42]) is Literal - get_origin(int) is None - get_origin(ClassVar[int]) is ClassVar - get_origin(Generic) is Generic - get_origin(Generic[T]) is Generic - get_origin(Union[T, int]) is Union - get_origin(List[Tuple[T, T]][int]) == list - get_origin(P.args) is P - """ - if isinstance(tp, _AnnotatedAlias): - return Annotated - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias, _BaseGenericAlias, - ParamSpecArgs, ParamSpecKwargs)): - return tp.__origin__ - if tp is typing.Generic: - return typing.Generic - return None - - def get_args(tp): - """Get type arguments with all substitutions performed. - - For unions, basic simplifications used by Union constructor are performed. - Examples:: - get_args(Dict[str, int]) == (str, int) - get_args(int) == () - get_args(Union[int, Union[T, int], str][int]) == (int, str) - get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int]) - get_args(Callable[[], T][int]) == ([], int) - """ - if isinstance(tp, _AnnotatedAlias): - return (tp.__origin__,) + tp.__metadata__ - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias)): - if getattr(tp, "_special", False): - return () - res = tp.__args__ - if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis: - res = (list(res[:-1]), res[-1]) - return res - return () - - -# 3.10+ -if hasattr(typing, 'TypeAlias'): - TypeAlias = typing.TypeAlias -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeAliasForm - def TypeAlias(self, parameters): - """Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example above. - """ - raise TypeError(f"{self} is not subscriptable") -# 3.7-3.8 -else: - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - TypeAlias = _TypeAliasForm('TypeAlias', - doc="""Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example - above.""") - - -class _DefaultMixin: - """Mixin for TypeVarLike defaults.""" - - __slots__ = () - - def __init__(self, default): - if isinstance(default, (tuple, list)): - self.__default__ = tuple((typing._type_check(d, "Default must be a type") - for d in default)) - elif default: - self.__default__ = typing._type_check(default, "Default must be a type") - else: - self.__default__ = None - - -# Add default and infer_variance parameters from PEP 696 and 695 -class TypeVar(typing.TypeVar, _DefaultMixin, _root=True): - """Type variable.""" - - __module__ = 'typing' - - def __init__(self, name, *constraints, bound=None, - covariant=False, contravariant=False, - default=None, infer_variance=False): - super().__init__(name, *constraints, bound=bound, covariant=covariant, - contravariant=contravariant) - _DefaultMixin.__init__(self, default) - self.__infer_variance__ = infer_variance - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - -# Python 3.10+ has PEP 612 -if hasattr(typing, 'ParamSpecArgs'): - ParamSpecArgs = typing.ParamSpecArgs - ParamSpecKwargs = typing.ParamSpecKwargs -# 3.7-3.9 -else: - class _Immutable: - """Mixin to indicate that object should not be copied.""" - __slots__ = () - - def __copy__(self): - return self - - def __deepcopy__(self, memo): - return self - - class ParamSpecArgs(_Immutable): - """The args for a ParamSpec object. - - Given a ParamSpec object P, P.args is an instance of ParamSpecArgs. - - ParamSpecArgs objects have a reference back to their ParamSpec: - - P.args.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. - """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.args" - - def __eq__(self, other): - if not isinstance(other, ParamSpecArgs): - return NotImplemented - return self.__origin__ == other.__origin__ - - class ParamSpecKwargs(_Immutable): - """The kwargs for a ParamSpec object. - - Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs. - - ParamSpecKwargs objects have a reference back to their ParamSpec: - - P.kwargs.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. 
- """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.kwargs" - - def __eq__(self, other): - if not isinstance(other, ParamSpecKwargs): - return NotImplemented - return self.__origin__ == other.__origin__ - -# 3.10+ -if hasattr(typing, 'ParamSpec'): - - # Add default Parameter - PEP 696 - class ParamSpec(typing.ParamSpec, _DefaultMixin, _root=True): - """Parameter specification variable.""" - - __module__ = 'typing' - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False, - default=None): - super().__init__(name, bound=bound, covariant=covariant, - contravariant=contravariant) - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - -# 3.7-3.9 -else: - - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class ParamSpec(list, _DefaultMixin): - """Parameter specification variable. - - Usage:: - - P = ParamSpec('P') - - Parameter specification variables exist primarily for the benefit of static - type checkers. They are used to forward the parameter types of one - callable to another callable, a pattern commonly found in higher order - functions and decorators. They are only valid when used in ``Concatenate``, - or s the first argument to ``Callable``. In Python 3.10 and higher, - they are also supported in user-defined Generics at runtime. - See class Generic for more information on generic types. An - example for annotating a decorator:: - - T = TypeVar('T') - P = ParamSpec('P') - - def add_logging(f: Callable[P, T]) -> Callable[P, T]: - '''A type-safe decorator to add logging to a function.''' - def inner(*args: P.args, **kwargs: P.kwargs) -> T: - logging.info(f'{f.__name__} was called') - return f(*args, **kwargs) - return inner - - @add_logging - def add_two(x: float, y: float) -> float: - '''Add two numbers together.''' - return x + y - - Parameter specification variables defined with covariant=True or - contravariant=True can be used to declare covariant or contravariant - generic types. These keyword arguments are valid, but their actual semantics - are yet to be decided. See PEP 612 for details. - - Parameter specification variables can be introspected. e.g.: - - P.__name__ == 'T' - P.__bound__ == None - P.__covariant__ == False - P.__contravariant__ == False - - Note that only parameter specification variables defined in global scope can - be pickled. - """ - - # Trick Generic __parameters__. 
- __class__ = typing.TypeVar - - @property - def args(self): - return ParamSpecArgs(self) - - @property - def kwargs(self): - return ParamSpecKwargs(self) - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False, - default=None): - super().__init__([self]) - self.__name__ = name - self.__covariant__ = bool(covariant) - self.__contravariant__ = bool(contravariant) - if bound: - self.__bound__ = typing._type_check(bound, 'Bound must be a type.') - else: - self.__bound__ = None - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - def __repr__(self): - if self.__covariant__: - prefix = '+' - elif self.__contravariant__: - prefix = '-' - else: - prefix = '~' - return prefix + self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - # Hack to get typing._type_check to pass. - def __call__(self, *args, **kwargs): - pass - - -# 3.7-3.9 -if not hasattr(typing, 'Concatenate'): - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class _ConcatenateGenericAlias(list): - - # Trick Generic into looking into this for __parameters__. - __class__ = typing._GenericAlias - - # Flag in 3.8. - _special = False - - def __init__(self, origin, args): - super().__init__(args) - self.__origin__ = origin - self.__args__ = args - - def __repr__(self): - _type_repr = typing._type_repr - return (f'{_type_repr(self.__origin__)}' - f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]') - - def __hash__(self): - return hash((self.__origin__, self.__args__)) - - # Hack to get typing._type_check to pass in Generic. - def __call__(self, *args, **kwargs): - pass - - @property - def __parameters__(self): - return tuple( - tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec)) - ) - - -# 3.7-3.9 -@typing._tp_cache -def _concatenate_getitem(self, parameters): - if parameters == (): - raise TypeError("Cannot take a Concatenate of no types.") - if not isinstance(parameters, tuple): - parameters = (parameters,) - if not isinstance(parameters[-1], ParamSpec): - raise TypeError("The last parameter to Concatenate should be a " - "ParamSpec variable.") - msg = "Concatenate[arg, ...]: each arg must be a type." - parameters = tuple(typing._type_check(p, msg) for p in parameters) - return _ConcatenateGenericAlias(self, parameters) - - -# 3.10+ -if hasattr(typing, 'Concatenate'): - Concatenate = typing.Concatenate - _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa -# 3.9 -elif sys.version_info[:2] >= (3, 9): - @_TypeAliasForm - def Concatenate(self, parameters): - """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """ - return _concatenate_getitem(self, parameters) -# 3.7-8 -else: - class _ConcatenateForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - def __getitem__(self, parameters): - return _concatenate_getitem(self, parameters) - - Concatenate = _ConcatenateForm( - 'Concatenate', - doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """) - -# 3.10+ -if hasattr(typing, 'TypeGuard'): - TypeGuard = typing.TypeGuard -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeGuardForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeGuardForm - def TypeGuard(self, parameters): - """Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """ - item = typing._type_check(parameters, f'{self} accepts only a single type.') - return typing._GenericAlias(self, (item,)) -# 3.7-3.8 -else: - class _TypeGuardForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type') - return typing._GenericAlias(self, (item,)) - - TypeGuard = _TypeGuardForm( - 'TypeGuard', - doc="""Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. 
The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """) - - -# Vendored from cpython typing._SpecialFrom -class _SpecialForm(typing._Final, _root=True): - __slots__ = ('_name', '__doc__', '_getitem') - - def __init__(self, getitem): - self._getitem = getitem - self._name = getitem.__name__ - self.__doc__ = getitem.__doc__ - - def __getattr__(self, item): - if item in {'__name__', '__qualname__'}: - return self._name - - raise AttributeError(item) - - def __mro_entries__(self, bases): - raise TypeError(f"Cannot subclass {self!r}") - - def __repr__(self): - return f'typing_extensions.{self._name}' - - def __reduce__(self): - return self._name - - def __call__(self, *args, **kwds): - raise TypeError(f"Cannot instantiate {self!r}") - - def __or__(self, other): - return typing.Union[self, other] - - def __ror__(self, other): - return typing.Union[other, self] - - def __instancecheck__(self, obj): - raise TypeError(f"{self} cannot be used with isinstance()") - - def __subclasscheck__(self, cls): - raise TypeError(f"{self} cannot be used with issubclass()") - - @typing._tp_cache - def __getitem__(self, parameters): - return self._getitem(self, parameters) - - -if hasattr(typing, "LiteralString"): - LiteralString = typing.LiteralString -else: - @_SpecialForm - def LiteralString(self, params): - """Represents an arbitrary literal string. - - Example:: - - from typing_extensions import LiteralString - - def query(sql: LiteralString) -> ...: - ... - - query("SELECT * FROM table") # ok - query(f"SELECT * FROM {input()}") # not ok - - See PEP 675 for details. - - """ - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Self"): - Self = typing.Self -else: - @_SpecialForm - def Self(self, params): - """Used to spell the type of "self" in classes. - - Example:: - - from typing import Self - - class ReturnsSelf: - def parse(self, data: bytes) -> Self: - ... - return self - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Never"): - Never = typing.Never -else: - @_SpecialForm - def Never(self, params): - """The bottom type, a type that has no members. 
- - This can be used to define a function that should never be - called, or a function that never returns:: - - from typing_extensions import Never - - def never_call_me(arg: Never) -> None: - pass - - def int_or_str(arg: int | str) -> None: - never_call_me(arg) # type checker error - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - never_call_me(arg) # ok, arg is of type Never - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, 'Required'): - Required = typing.Required - NotRequired = typing.NotRequired -elif sys.version_info[:2] >= (3, 9): - class _ExtensionsSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_ExtensionsSpecialForm - def Required(self, parameters): - """A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - @_ExtensionsSpecialForm - def NotRequired(self, parameters): - """A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - -else: - class _RequiredForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Required = _RequiredForm( - 'Required', - doc="""A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """) - NotRequired = _RequiredForm( - 'NotRequired', - doc="""A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """) - - -if hasattr(typing, "Unpack"): # 3.11+ - Unpack = typing.Unpack -elif sys.version_info[:2] >= (3, 9): - class _UnpackSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - @_UnpackSpecialForm - def Unpack(self, parameters): - """A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... 
- - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - -else: - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - class _UnpackForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - Unpack = _UnpackForm( - 'Unpack', - doc="""A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... - - """) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - - -if hasattr(typing, "TypeVarTuple"): # 3.11+ - - # Add default Parameter - PEP 696 - class TypeVarTuple(typing.TypeVarTuple, _DefaultMixin, _root=True): - """Type variable tuple.""" - - def __init__(self, name, *, default=None): - super().__init__(name) - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - -else: - class TypeVarTuple(_DefaultMixin): - """Type variable tuple. - - Usage:: - - Ts = TypeVarTuple('Ts') - - In the same way that a normal type variable is a stand-in for a single - type such as ``int``, a type variable *tuple* is a stand-in for a *tuple* - type such as ``Tuple[int, str]``. - - Type variable tuples can be used in ``Generic`` declarations. - Consider the following example:: - - class Array(Generic[*Ts]): ... - - The ``Ts`` type variable tuple here behaves like ``tuple[T1, T2]``, - where ``T1`` and ``T2`` are type variables. To use these type variables - as type parameters of ``Array``, we must *unpack* the type variable tuple using - the star operator: ``*Ts``. The signature of ``Array`` then behaves - as if we had simply written ``class Array(Generic[T1, T2]): ...``. - In contrast to ``Generic[T1, T2]``, however, ``Generic[*Shape]`` allows - us to parameterise the class with an *arbitrary* number of type parameters. - - Type variable tuples can be used anywhere a normal ``TypeVar`` can. - This includes class definitions, as shown above, as well as function - signatures and variable annotations:: - - class Array(Generic[*Ts]): - - def __init__(self, shape: Tuple[*Ts]): - self._shape: Tuple[*Ts] = shape - - def get_shape(self) -> Tuple[*Ts]: - return self._shape - - shape = (Height(480), Width(640)) - x: Array[Height, Width] = Array(shape) - y = abs(x) # Inferred type is Array[Height, Width] - z = x + x # ... is Array[Height, Width] - x.get_shape() # ... is tuple[Height, Width] - - """ - - # Trick Generic __parameters__. 
- __class__ = typing.TypeVar - - def __iter__(self): - yield self.__unpacked__ - - def __init__(self, name, *, default=None): - self.__name__ = name - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - self.__unpacked__ = Unpack[self] - - def __repr__(self): - return self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - def __init_subclass__(self, *args, **kwds): - if '_root' not in kwds: - raise TypeError("Cannot subclass special typing classes") - - -if hasattr(typing, "reveal_type"): - reveal_type = typing.reveal_type -else: - def reveal_type(__obj: T) -> T: - """Reveal the inferred type of a variable. - - When a static type checker encounters a call to ``reveal_type()``, - it will emit the inferred type of the argument:: - - x: int = 1 - reveal_type(x) - - Running a static type checker (e.g., ``mypy``) on this example - will produce output similar to 'Revealed type is "builtins.int"'. - - At runtime, the function prints the runtime type of the - argument and returns it unchanged. - - """ - print(f"Runtime type is {type(__obj).__name__!r}", file=sys.stderr) - return __obj - - -if hasattr(typing, "assert_never"): - assert_never = typing.assert_never -else: - def assert_never(__arg: Never) -> Never: - """Assert to the type checker that a line of code is unreachable. - - Example:: - - def int_or_str(arg: int | str) -> None: - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - assert_never(arg) - - If a type checker finds that a call to assert_never() is - reachable, it will emit an error. - - At runtime, this throws an exception when called. - - """ - raise AssertionError("Expected code to be unreachable") - - -if hasattr(typing, 'dataclass_transform'): - dataclass_transform = typing.dataclass_transform -else: - def dataclass_transform( - *, - eq_default: bool = True, - order_default: bool = False, - kw_only_default: bool = False, - field_specifiers: typing.Tuple[ - typing.Union[typing.Type[typing.Any], typing.Callable[..., typing.Any]], - ... - ] = (), - **kwargs: typing.Any, - ) -> typing.Callable[[T], T]: - """Decorator that marks a function, class, or metaclass as providing - dataclass-like behavior. - - Example: - - from typing_extensions import dataclass_transform - - _T = TypeVar("_T") - - # Used on a decorator function - @dataclass_transform() - def create_model(cls: type[_T]) -> type[_T]: - ... - return cls - - @create_model - class CustomerModel: - id: int - name: str - - # Used on a base class - @dataclass_transform() - class ModelBase: ... - - class CustomerModel(ModelBase): - id: int - name: str - - # Used on a metaclass - @dataclass_transform() - class ModelMeta(type): ... - - class ModelBase(metaclass=ModelMeta): ... - - class CustomerModel(ModelBase): - id: int - name: str - - Each of the ``CustomerModel`` classes defined in this example will now - behave similarly to a dataclass created with the ``@dataclasses.dataclass`` - decorator. For example, the type checker will synthesize an ``__init__`` - method. - - The arguments to this decorator can be used to customize this behavior: - - ``eq_default`` indicates whether the ``eq`` parameter is assumed to be - True or False if it is omitted by the caller. 
- - ``order_default`` indicates whether the ``order`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``kw_only_default`` indicates whether the ``kw_only`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``field_specifiers`` specifies a static list of supported classes - or functions that describe fields, similar to ``dataclasses.field()``. - - At runtime, this decorator records its arguments in the - ``__dataclass_transform__`` attribute on the decorated object. - - See PEP 681 for details. - - """ - def decorator(cls_or_fn): - cls_or_fn.__dataclass_transform__ = { - "eq_default": eq_default, - "order_default": order_default, - "kw_only_default": kw_only_default, - "field_specifiers": field_specifiers, - "kwargs": kwargs, - } - return cls_or_fn - return decorator - - -if hasattr(typing, "override"): - override = typing.override -else: - _F = typing.TypeVar("_F", bound=typing.Callable[..., typing.Any]) - - def override(__arg: _F) -> _F: - """Indicate that a method is intended to override a method in a base class. - - Usage: - - class Base: - def method(self) -> None: ... - pass - - class Child(Base): - @override - def method(self) -> None: - super().method() - - When this decorator is applied to a method, the type checker will - validate that it overrides a method with the same name on a base class. - This helps prevent bugs that may occur when a base class is changed - without an equivalent change to a child class. - - See PEP 698 for details. - - """ - return __arg - - -# We have to do some monkey patching to deal with the dual nature of -# Unpack/TypeVarTuple: -# - We want Unpack to be a kind of TypeVar so it gets accepted in -# Generic[Unpack[Ts]] -# - We want it to *not* be treated as a TypeVar for the purposes of -# counting generic parameters, so that when we subscript a generic, -# the runtime doesn't try to substitute the Unpack with the subscripted type. -if not hasattr(typing, "TypeVarTuple"): - typing._collect_type_vars = _collect_type_vars - typing._check_generic = _check_generic - - -# Backport typing.NamedTuple as it exists in Python 3.11. -# In 3.11, the ability to define generic `NamedTuple`s was supported. -# This was explicitly disallowed in 3.9-3.10, and only half-worked in <=3.8. 
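A minimal sketch of the generic NamedTuple usage that this backport enables on Python versions before 3.11 (illustrative only: it assumes the backported class is importable as typing_extensions.NamedTuple, and the Pair class and its fields are made up, not part of the deleted module):

# Hypothetical usage sketch, not taken from the deleted file.
from typing import Generic, TypeVar
from typing_extensions import NamedTuple  # the backport defined below

T = TypeVar("T")

class Pair(NamedTuple, Generic[T]):
    first: T
    second: T

ints = Pair(1, 2)      # a type checker infers Pair[int]
strs = Pair("a", "b")  # a type checker infers Pair[str]

With plain typing.NamedTuple on 3.9-3.10 the Generic base raises a TypeError, which is the limitation the comment above describes and the metaclass below works around.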
-if sys.version_info >= (3, 11): - NamedTuple = typing.NamedTuple -else: - def _caller(): - try: - return sys._getframe(2).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): # For platforms without _getframe() - return None - - def _make_nmtuple(name, types, module, defaults=()): - fields = [n for n, t in types] - annotations = {n: typing._type_check(t, f"field {n} annotation must be a type") - for n, t in types} - nm_tpl = collections.namedtuple(name, fields, - defaults=defaults, module=module) - nm_tpl.__annotations__ = nm_tpl.__new__.__annotations__ = annotations - # The `_field_types` attribute was removed in 3.9; - # in earlier versions, it is the same as the `__annotations__` attribute - if sys.version_info < (3, 9): - nm_tpl._field_types = annotations - return nm_tpl - - _prohibited_namedtuple_fields = typing._prohibited - _special_namedtuple_fields = frozenset({'__module__', '__name__', '__annotations__'}) - - class _NamedTupleMeta(type): - def __new__(cls, typename, bases, ns): - assert _NamedTuple in bases - for base in bases: - if base is not _NamedTuple and base is not typing.Generic: - raise TypeError( - 'can only inherit from a NamedTuple type and Generic') - bases = tuple(tuple if base is _NamedTuple else base for base in bases) - types = ns.get('__annotations__', {}) - default_names = [] - for field_name in types: - if field_name in ns: - default_names.append(field_name) - elif default_names: - raise TypeError(f"Non-default namedtuple field {field_name} " - f"cannot follow default field" - f"{'s' if len(default_names) > 1 else ''} " - f"{', '.join(default_names)}") - nm_tpl = _make_nmtuple( - typename, types.items(), - defaults=[ns[n] for n in default_names], - module=ns['__module__'] - ) - nm_tpl.__bases__ = bases - if typing.Generic in bases: - class_getitem = typing.Generic.__class_getitem__.__func__ - nm_tpl.__class_getitem__ = classmethod(class_getitem) - # update from user namespace without overriding special namedtuple attributes - for key in ns: - if key in _prohibited_namedtuple_fields: - raise AttributeError("Cannot overwrite NamedTuple attribute " + key) - elif key not in _special_namedtuple_fields and key not in nm_tpl._fields: - setattr(nm_tpl, key, ns[key]) - if typing.Generic in bases: - nm_tpl.__init_subclass__() - return nm_tpl - - def NamedTuple(__typename, __fields=None, **kwargs): - if __fields is None: - __fields = kwargs.items() - elif kwargs: - raise TypeError("Either list of fields or keywords" - " can be provided to NamedTuple, not both") - return _make_nmtuple(__typename, __fields, module=_caller()) - - NamedTuple.__doc__ = typing.NamedTuple.__doc__ - _NamedTuple = type.__new__(_NamedTupleMeta, 'NamedTuple', (), {}) - - # On 3.8+, alter the signature so that it matches typing.NamedTuple. - # The signature of typing.NamedTuple on >=3.8 is invalid syntax in Python 3.7, - # so just leave the signature as it is on 3.7. - if sys.version_info >= (3, 8): - NamedTuple.__text_signature__ = '(typename, fields=None, /, **kwargs)' - - def _namedtuple_mro_entries(bases): - assert NamedTuple in bases - return (_NamedTuple,) - - NamedTuple.__mro_entries__ = _namedtuple_mro_entries diff --git a/spaces/Testys/diabetes-app/main.py b/spaces/Testys/diabetes-app/main.py deleted file mode 100644 index 608926bd41caeac09157b3bc822398dfe710c9d1..0000000000000000000000000000000000000000 --- a/spaces/Testys/diabetes-app/main.py +++ /dev/null @@ -1,79 +0,0 @@ -# importing python modules. 
-import streamlit as st -import joblib -import numpy as np -import time - -# loading pickle files gotten from model -lightgbm_pickle = open(r"./lightgbm.pickle", "rb") -lgbm_model = joblib.load(lightgbm_pickle) - -# column name for each column in the diabetes dataset. -column_names = ['cholesterol', 'glucose', 'hdl_chol', 'chol_hdl_ratio', 'age', - 'gender', 'weight', 'height', 'bmi', 'systolic_bp', 'diastolic_bp', 'waist', 'hip', - 'waist_hip_ratio', 'diabetes'] - - -# function to receive user information. -def inputs(): - # creating form for data inputs. - with st.form(key="diabetes_data"): - name = st.text_input("Patient's Name: ") - gender_obj = st.selectbox(label="Patient's Gender: ", options=["Male", "Female"]) - if gender_obj == "Male": - gender = 1 - else: - gender = 0 - - age = st.slider(label="Patient's Age: ", min_value=0, max_value=100) - chol = st.slider(label="Patient's Cholesterol Level(mg/dL): ", min_value=40, max_value=400) - glucose = st.slider(label="Patient's Sugar Level(mg/dL): ", min_value=40, max_value=250) - height_cm = st.number_input(label="Patient's Height(cm): ") - height = height_cm * 0.393701 - weight_kg = st.number_input("Patient's Weight in(kg): ") - weight = weight_kg * 2.205 - hdl_chol = st.slider(label="Patient's HDL Cholesterol(mg/dL): ", min_value=0, max_value=100) - waist = st.number_input("Patient's Waist Size(inches): ", step=1) - hip = st.number_input("Patient's Hip Size(inches): ", step=1) - systolic_bp = st.number_input(label="Patient's Systolic Blood Pressure(mmHg): ", step=1) - diastolic_bp = st.number_input(label="Patient's Diastolic Blood Pressure(mmHg): ", step=1) - submit = st.form_submit_button("Submit Test") - if submit: - bmi = weight_kg / ((height_cm / 100)**2) - chol_hdl_ratio = chol / hdl_chol - waist_hip_ratio = waist / hip - patient_data = [chol, glucose, hdl_chol, chol_hdl_ratio, age, gender, weight, height, bmi, - systolic_bp, diastolic_bp, waist, hip, waist_hip_ratio] - else: - patient_data = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] - return patient_data - - -# function to create a dataframe and carry out prediction. -def predict(var_name): - pred = [var_name] - np_pred = np.array(pred) - score = lgbm_model.predict(np_pred) - return score - - -# function to run streamlit app -def run(): - st.title("Diabetes Test App") - st.write("Diabetes is known as a very deadly disease if not diagnosed early. To make it easier for health " - "practitioners to diagnose this disease early, previous data have been accumulated to predict an accurate " - "result for new patients. " - "The Doctor is to retrieve necessary information from the patients to carry out this test." - " A diabetic patient should be notified early and should commence treatment immediately.") - info = inputs() - dia_score = predict(info) - with st.spinner(text="Diagnosing....."): - time.sleep(5) - if dia_score == 1: - st.error("Positive. Diabetes Diagnosed.") - else: - st.success("Negative. 
Diabetes not diagnosed.") - - -if __name__ == "__main__": - run() diff --git a/spaces/Theivaprakasham/yolov6/yolov6/utils/torch_utils.py b/spaces/Theivaprakasham/yolov6/yolov6/utils/torch_utils.py deleted file mode 100644 index 9e67c2ab18e048f0d05ddd61750843ebd73669de..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/yolov6/utils/torch_utils.py +++ /dev/null @@ -1,109 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- - -import time -from contextlib import contextmanager -from copy import deepcopy -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from yolov6.utils.events import LOGGER - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - """ - Decorator to make all processes in distributed training wait for each local_master to do something. - """ - if local_rank not in [-1, 0]: - dist.barrier(device_ids=[local_rank]) - yield - if local_rank == 0: - dist.barrier(device_ids=[0]) - - -def time_sync(): - # Waits for all kernels in all streams on a CUDA device to complete if cuda is available. - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True - - -def fuse_conv_and_bn(conv, bn): - # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = ( - nn.Conv2d( - conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - groups=conv.groups, - bias=True, - ) - .requires_grad_(False) - .to(conv.weight.device) - ) - - # prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # prepare spatial bias - b_conv = ( - torch.zeros(conv.weight.size(0), device=conv.weight.device) - if conv.bias is None - else conv.bias - ) - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div( - torch.sqrt(bn.running_var + bn.eps) - ) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def fuse_model(model): - from yolov6.layers.common import Conv - - for m in model.modules(): - if type(m) is Conv and hasattr(m, "bn"): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, "bn") # remove batchnorm - m.forward = m.forward_fuse # update forward - return model - - -def get_model_info(model, img_size=640): - """Get model Params and GFlops. 
- Code base on https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/utils/model_utils.py - """ - from thop import profile - stride = 32 - img = torch.zeros((1, 3, stride, stride), device=next(model.parameters()).device) - flops, params = profile(deepcopy(model), inputs=(img,), verbose=False) - params /= 1e6 - flops /= 1e9 - img_size = img_size if isinstance(img_size, list) else [img_size, img_size] - flops *= img_size[0] * img_size[1] / stride / stride * 2 # Gflops - info = "Params: {:.2f}M, Gflops: {:.2f}".format(params, flops) - return info diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/visit_history.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/visit_history.py deleted file mode 100644 index ede839337459209730d68c45dc59867b6d72a81e..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/visit_history.py +++ /dev/null @@ -1,113 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[ ]: - - -import os -import sys -from random import randint -import time -import uuid -import argparse -sys.path.append(os.path.abspath("../supv")) -from matumizi.util import * -from mcclf import * - -def genVisitHistory(numUsers, convRate, label): - for i in range(numUsers): - userID = genID(12) - userSess = [] - userSess.append(userID) - - conv = randint(0, 100) - if (conv < convRate): - #converted - if (label): - if (randint(0,100) < 90): - userSess.append("T") - else: - userSess.append("F") - - - numSession = randint(2, 20) - for j in range(numSession): - sess = randint(0, 100) - if (sess <= 15): - elapsed = "H" - elif (sess > 15 and sess <= 40): - elapsed = "M" - else: - elapsed = "L" - - sess = randint(0, 100) - if (sess <= 15): - duration = "L" - elif (sess > 15 and sess <= 40): - duration = "M" - else: - duration = "H" - - sessSummary = elapsed + duration - userSess.append(sessSummary) - - - else: - #not converted - if (label): - if (randint(0,100) < 90): - userSess.append("F") - else: - userSess.append("T") - - numSession = randint(2, 12) - for j in range(numSession): - sess = randint(0, 100) - if (sess <= 20): - elapsed = "L" - elif (sess > 20 and sess <= 45): - elapsed = "M" - else: - elapsed = "H" - - sess = randint(0, 100) - if (sess <= 20): - duration = "H" - elif (sess > 20 and sess <= 45): - duration = "M" - else: - duration = "L" - - sessSummary = elapsed + duration - userSess.append(sessSummary) - - print(",".join(userSess)) - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument('--op', type=str, default = "none", help = "operation") - parser.add_argument('--nuser', type=int, default = 100, help = "num of users") - parser.add_argument('--crate', type=int, default = 10, help = "concersion rate") - parser.add_argument('--label', type=str, default = "false", help = "whether to add label") - parser.add_argument('--mlfpath', type=str, default = "false", help = "ml config file") - args = parser.parse_args() - op = args.op - - if op == "gen": - numUsers = args.nuser - convRate = args.crate - - label = args.label == "true" - genVisitHistory(numUsers, convRate, label) - - elif op == "train": - model = MarkovChainClassifier(args.mlfpath) - model.train() - - elif op == "pred": - model = MarkovChainClassifier(args.mlfpath) - model.predict() - - else: - exitWithMsg("invalid command)") - diff --git a/spaces/TotoB12/llama2-7b-chat-ggml/run-app.sh b/spaces/TotoB12/llama2-7b-chat-ggml/run-app.sh deleted file mode 100644 index 
a63a8e06f941fc702fd223ea3d4de2e28692824e..0000000000000000000000000000000000000000 --- a/spaces/TotoB12/llama2-7b-chat-ggml/run-app.sh +++ /dev/null @@ -1 +0,0 @@ -nodemon -w app.py -x python app.py diff --git a/spaces/Ukrania/RVC-Models/app.py b/spaces/Ukrania/RVC-Models/app.py deleted file mode 100644 index ece0dba34bfe96f562950393d3c059f01da04693..0000000000000000000000000000000000000000 --- a/spaces/Ukrania/RVC-Models/app.py +++ /dev/null @@ -1,505 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] -f0method_info = "" -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe effect is good but requires GPU (Default: PM)" - -def create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_title} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - 
os.mkdir("dl_audio") - if audio_provider == "Youtube": - ydl_opts = { - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - else: - # Spotify doesnt work. - # Need to find other solution soon. - ''' - command = f"spotdl download {url} --output dl_audio/.wav" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - audio_path = "dl_audio/spotify_audio.wav" - ''' - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - 
gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks(theme=gr.themes.Default()) as app: - gr.Markdown( - "#
<center> Multi Model RVC Inference\n" - "### <center> Support v2 Model\n" - "### <center> Este proyecto fue basado en [kevinwang676](https://huggingface.co/spaces/kevinwang676/rvc-anigames-v2)\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/Ukran1a/1f202b29cb5efbf69354e11ef95797b0/webones-rvc-models.ipynb)\n\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"### <center> {description}") - with gr.Tabs(): - if not models: - gr.Markdown("# <center> No Model Loaded.") - gr.Markdown("## <center> Please add model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '<div align="center">' - f'<div>{title}</div>\n'+ - f'<div>RVC {model_version} Model</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+ - '</div>
      ' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input and Upload - vc_input = gr.Textbox(label="Input audio path", visible=False) - vc_upload = gr.Audio(label="Upload audio file", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.6)", - value=0.6, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.4, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/VIOD/Real-CUGAN/app.py b/spaces/VIOD/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/VIOD/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, 
label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
' - '感谢b站开源的项目,图片过大会导致内存不足,所有我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。<br>
      ' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/Vegecken/sovits4dzl/flask_api.py b/spaces/Vegecken/sovits4dzl/flask_api.py deleted file mode 100644 index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/flask_api.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # 变调信息 - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # DAW所需的采样率 - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # http获得wav文件并转换 - input_wav_path = io.BytesIO(wave_file.read()) - - # 模型推理 - if raw_infer: - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # 返回音频 - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", as_attachment=True) - - -if __name__ == '__main__': - # 启用则为直接切片合成,False为交叉淡化方式 - # vst插件调整0.3-0.5s切片时间可以降低延迟,直接切片方法会有连接处爆音、交叉淡化会有轻微重叠声音 - # 自行选择能接受的方法,或将vst最大切片时间调整为1s,此处设为Ture,延迟大音质稳定一些 - raw_infer = True - # 每个模型和config是唯一对应的 - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - svc_model = Svc(model_name, config_name) - svc = RealTimeVC() - # 此处与vst插件对应,不建议更改 - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/WZT/DigiProj/dataset.py b/spaces/WZT/DigiProj/dataset.py deleted file mode 100644 index 7f313681a09b1c2c9e882316fd155439e010411c..0000000000000000000000000000000000000000 --- a/spaces/WZT/DigiProj/dataset.py +++ /dev/null @@ -1,167 +0,0 @@ -import torch.utils.data as data - -from PIL import Image - -import os -import os.path -from io import BytesIO - -import lmdb -from torch.utils.data import Dataset - -class MultiResolutionDataset(Dataset): - def __init__(self, path, transform, resolution=256): - self.env = lmdb.open( - path, - max_readers=32, - readonly=True, - lock=False, - readahead=False, - meminit=False, - ) - - if not self.env: - raise IOError('Cannot open lmdb dataset', path) - - with self.env.begin(write=False) as txn: - self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8')) - - self.resolution = resolution - self.transform = transform - - def __len__(self): - return self.length - - def __getitem__(self, index): - with self.env.begin(write=False) as txn: - key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8') - img_bytes = txn.get(key) - - buffer = BytesIO(img_bytes) - img = Image.open(buffer) - img = self.transform(img) - - return img - - -def has_file_allowed_extension(filename, extensions): - 
"""Checks if a file is an allowed extension. - - Args: - filename (string): path to a file - - Returns: - bool: True if the filename ends with a known image extension - """ - filename_lower = filename.lower() - return any(filename_lower.endswith(ext) for ext in extensions) - - -def find_classes(dir): - classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))] - classes.sort() - class_to_idx = {classes[i]: i for i in range(len(classes))} - return classes, class_to_idx - - -def make_dataset(dir, extensions): - images = [] - for root, _, fnames in sorted(os.walk(dir)): - for fname in sorted(fnames): - if has_file_allowed_extension(fname, extensions): - path = os.path.join(root, fname) - item = (path, 0) - images.append(item) - - return images - - -class DatasetFolder(data.Dataset): - def __init__(self, root, loader, extensions, transform=None, target_transform=None): - # classes, class_to_idx = find_classes(root) - samples = make_dataset(root, extensions) - if len(samples) == 0: - raise(RuntimeError("Found 0 files in subfolders of: " + root + "\n" - "Supported extensions are: " + ",".join(extensions))) - - self.root = root - self.loader = loader - self.extensions = extensions - self.samples = samples - - self.transform = transform - self.target_transform = target_transform - - def __getitem__(self, index): - """ - Args: - index (int): Index - - Returns: - tuple: (sample, target) where target is class_index of the target class. - """ - path, target = self.samples[index] - sample = self.loader(path) - if self.transform is not None: - sample = self.transform(sample) - if self.target_transform is not None: - target = self.target_transform(target) - - return sample - - def __len__(self): - return len(self.samples) - - def __repr__(self): - fmt_str = 'Dataset ' + self.__class__.__name__ + '\n' - fmt_str += ' Number of datapoints: {}\n'.format(self.__len__()) - fmt_str += ' Root Location: {}\n'.format(self.root) - tmp = ' Transforms (if any): ' - fmt_str += '{0}{1}\n'.format(tmp, self.transform.__repr__().replace('\n', '\n' + ' ' * len(tmp))) - tmp = ' Target Transforms (if any): ' - fmt_str += '{0}{1}'.format(tmp, self.target_transform.__repr__().replace('\n', '\n' + ' ' * len(tmp))) - return fmt_str - - -IMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif'] - - -def pil_loader(path): - # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835) - with open(path, 'rb') as f: - img = Image.open(f) - return img.convert('RGB') - - -def default_loader(path): - return pil_loader(path) - - -class ImageFolder(DatasetFolder): - def __init__(self, root, transform1=None, transform2=None, target_transform=None, - loader=default_loader): - super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS, - transform=transform1, - target_transform=target_transform) - self.imgs = self.samples - self.transform2 = transform2 - - def set_stage(self, stage): - if stage == 'last': - self.transform = self.transform2 - -class ListFolder(Dataset): - def __init__(self, txt, transform): - with open(txt) as f: - imgpaths= f.readlines() - self.imgpaths = [x.strip() for x in imgpaths] - self.transform = transform - - def __getitem__(self, idx): - path = self.imgpaths[idx] - image = Image.open(path) - return self.transform(image) - - def __len__(self): - return len(self.imgpaths) - diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/imports/__init__.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/imports/__init__.py deleted file mode 100644 
index d61cb768acb79ed49c20afb7d0957110a8d8769f..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/imports/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .core import * -from .torch import * diff --git a/spaces/XzJosh/nanami-Bert-VITS2/text/__init__.py b/spaces/XzJosh/nanami-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nanami-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/YE01/saya-vits/text/korean.py b/spaces/YE01/saya-vits/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/YE01/saya-vits/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name 
= digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/YUANAI/DiffspeechResearch/utils/metrics/ssim.py b/spaces/YUANAI/DiffspeechResearch/utils/metrics/ssim.py deleted file mode 100644 index cb8c6a47b14fbd450a6717a21236906d6de9679f..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/utils/metrics/ssim.py +++ /dev/null @@ -1,84 +0,0 @@ -""" -Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim -""" - -import torch -import torch.nn.functional as F -from torch.autograd import Variable -import numpy as np -from math import exp - - -def gaussian(window_size, sigma): - gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)]) - return gauss / gauss.sum() - - -def create_window(window_size, channel): - _1D_window = gaussian(window_size, 1.5).unsqueeze(1) - _2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0) - window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous()) - return window - - -def _ssim(img1, img2, window, window_size, channel, size_average=True): - mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel) - mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel) - - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - - sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq - sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq - sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2 - - C1 = 0.01 ** 2 - C2 = 0.03 ** 2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2)) - - if size_average: - return ssim_map.mean() - else: - return ssim_map.mean(1) - - -class SSIM(torch.nn.Module): - def __init__(self, window_size=11, size_average=True): - super(SSIM, self).__init__() - self.window_size = window_size - self.size_average = size_average - self.channel = 1 - self.window = create_window(window_size, self.channel) - - def forward(self, img1, img2): - (_, channel, _, _) = img1.size() - - if channel == self.channel and self.window.data.type() == img1.data.type(): - window = self.window - 
else: - window = create_window(self.window_size, channel) - - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - - self.window = window - self.channel = channel - - return _ssim(img1, img2, window, self.window_size, channel, self.size_average) - - -window = None - - -def ssim(img1, img2, window_size=11, size_average=True): - (_, channel, _, _) = img1.size() - global window - if window is None: - window = create_window(window_size, channel) - if img1.is_cuda: - window = window.cuda(img1.get_device()) - window = window.type_as(img1) - return _ssim(img1, img2, window, window_size, channel, size_average) diff --git a/spaces/Yeshwant123/mcc/tests.py b/spaces/Yeshwant123/mcc/tests.py deleted file mode 100644 index 601ed757507caebec67493462d11eb4c8901c2a1..0000000000000000000000000000000000000000 --- a/spaces/Yeshwant123/mcc/tests.py +++ /dev/null @@ -1,17 +0,0 @@ -test_cases = [ - { - "predictions": [0, 0], - "references": [1, 1], - "result": {"metric_score": 0} - }, - { - "predictions": [1, 1], - "references": [1, 1], - "result": {"metric_score": 1} - }, - { - "predictions": [1, 0], - "references": [1, 1], - "result": {"metric_score": 0.5} - } -] \ No newline at end of file diff --git a/spaces/Yuliang/ICON/lib/pymaf/core/train_options.py b/spaces/Yuliang/ICON/lib/pymaf/core/train_options.py deleted file mode 100644 index 43daeb4486e8751b903b3852b066e5b3e13bd9de..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/pymaf/core/train_options.py +++ /dev/null @@ -1,135 +0,0 @@ -import argparse - - -class TrainOptions(): - def __init__(self): - self.parser = argparse.ArgumentParser() - - gen = self.parser.add_argument_group('General') - gen.add_argument( - '--resume', - dest='resume', - default=False, - action='store_true', - help='Resume from checkpoint (Use latest checkpoint by default') - - io = self.parser.add_argument_group('io') - io.add_argument('--log_dir', - default='logs', - help='Directory to store logs') - io.add_argument( - '--pretrained_checkpoint', - default=None, - help='Load a pretrained checkpoint at the beginning training') - - train = self.parser.add_argument_group('Training Options') - train.add_argument('--num_epochs', - type=int, - default=200, - help='Total number of training epochs') - train.add_argument('--regressor', - type=str, - choices=['hmr', 'pymaf_net'], - default='pymaf_net', - help='Name of the SMPL regressor.') - train.add_argument('--cfg_file', - type=str, - default='./configs/pymaf_config.yaml', - help='config file path for PyMAF.') - train.add_argument( - '--img_res', - type=int, - default=224, - help='Rescale bounding boxes to size [img_res, img_res] before feeding them in the network' - ) - train.add_argument( - '--rot_factor', - type=float, - default=30, - help='Random rotation in the range [-rot_factor, rot_factor]') - train.add_argument( - '--noise_factor', - type=float, - default=0.4, - help='Randomly multiply pixel values with factor in the range [1-noise_factor, 1+noise_factor]' - ) - train.add_argument( - '--scale_factor', - type=float, - default=0.25, - help='Rescale bounding boxes by a factor of [1-scale_factor,1+scale_factor]' - ) - train.add_argument( - '--openpose_train_weight', - default=0., - help='Weight for OpenPose keypoints during training') - train.add_argument('--gt_train_weight', - default=1., - help='Weight for GT keypoints during training') - train.add_argument('--eval_dataset', - type=str, - default='h36m-p2-mosh', - help='Name of the evaluation dataset.') - 
train.add_argument('--single_dataset', - default=False, - action='store_true', - help='Use a single dataset') - train.add_argument('--single_dataname', - type=str, - default='h36m', - help='Name of the single dataset.') - train.add_argument('--eval_pve', - default=False, - action='store_true', - help='evaluate PVE') - train.add_argument('--overwrite', - default=False, - action='store_true', - help='overwrite the latest checkpoint') - - train.add_argument('--distributed', - action='store_true', - help='Use distributed training') - train.add_argument('--dist_backend', - default='nccl', - type=str, - help='distributed backend') - train.add_argument('--dist_url', - default='tcp://127.0.0.1:10356', - type=str, - help='url used to set up distributed training') - train.add_argument('--world_size', - default=1, - type=int, - help='number of nodes for distributed training') - train.add_argument("--local_rank", default=0, type=int) - train.add_argument('--rank', - default=0, - type=int, - help='node rank for distributed training') - train.add_argument( - '--multiprocessing_distributed', - action='store_true', - help='Use multi-processing distributed training to launch ' - 'N processes per node, which has N GPUs. This is the ' - 'fastest way to use PyTorch for either single node or ' - 'multi node data parallel training') - - misc = self.parser.add_argument_group('Misc Options') - misc.add_argument('--misc', - help="Modify config options using the command-line", - default=None, - nargs=argparse.REMAINDER) - return - - def parse_args(self): - """Parse input arguments.""" - self.args = self.parser.parse_args() - self.save_dump() - return self.args - - def save_dump(self): - """Store all argument values to a json file. - The default location is logs/expname/args.json. - """ - pass diff --git a/spaces/ZenXir/FreeVC/speaker_encoder/visualizations.py b/spaces/ZenXir/FreeVC/speaker_encoder/visualizations.py deleted file mode 100644 index ec00fc64d6e9fda2bb8e613531066ac824df1451..0000000000000000000000000000000000000000 --- a/spaces/ZenXir/FreeVC/speaker_encoder/visualizations.py +++ /dev/null @@ -1,178 +0,0 @@ -from speaker_encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from datetime import datetime -from time import perf_counter as timer -import matplotlib.pyplot as plt -import numpy as np -# import webbrowser -import visdom -import umap - -colormap = np.array([ - [76, 255, 0], - [0, 127, 70], - [255, 0, 0], - [255, 217, 38], - [0, 135, 255], - [165, 0, 165], - [255, 167, 255], - [0, 255, 255], - [255, 96, 38], - [142, 76, 0], - [33, 0, 127], - [0, 0, 0], - [183, 183, 183], -], dtype=np.float) / 255 - - -class Visualizations: - def __init__(self, env_name=None, update_every=10, server="http://localhost", disabled=False): - # Tracking data - self.last_update_timestamp = timer() - self.update_every = update_every - self.step_times = [] - self.losses = [] - self.eers = [] - print("Updating the visualizations every %d steps." % update_every) - - # If visdom is disabled TODO: use a better paradigm for that - self.disabled = disabled - if self.disabled: - return - - # Set the environment name - now = str(datetime.now().strftime("%d-%m %Hh%M")) - if env_name is None: - self.env_name = now - else: - self.env_name = "%s (%s)" % (env_name, now) - - # Connect to visdom and open the corresponding window in the browser - try: - self.vis = visdom.Visdom(server, env=self.env_name, raise_exceptions=True) - except ConnectionError: - raise Exception("No visdom server detected. 
Run the command \"visdom\" in your CLI to " - "start it.") - # webbrowser.open("http://localhost:8097/env/" + self.env_name) - - # Create the windows - self.loss_win = None - self.eer_win = None - # self.lr_win = None - self.implementation_win = None - self.projection_win = None - self.implementation_string = "" - - def log_params(self): - if self.disabled: - return - from speaker_encoder import params_data - from speaker_encoder import params_model - param_string = "Model parameters:
      " - for param_name in (p for p in dir(params_model) if not p.startswith("__")): - value = getattr(params_model, param_name) - param_string += "\t%s: %s
      " % (param_name, value) - param_string += "Data parameters:
      " - for param_name in (p for p in dir(params_data) if not p.startswith("__")): - value = getattr(params_data, param_name) - param_string += "\t%s: %s
      " % (param_name, value) - self.vis.text(param_string, opts={"title": "Parameters"}) - - def log_dataset(self, dataset: SpeakerVerificationDataset): - if self.disabled: - return - dataset_string = "" - dataset_string += "Speakers: %s\n" % len(dataset.speakers) - dataset_string += "\n" + dataset.get_logs() - dataset_string = dataset_string.replace("\n", "
      ") - self.vis.text(dataset_string, opts={"title": "Dataset"}) - - def log_implementation(self, params): - if self.disabled: - return - implementation_string = "" - for param, value in params.items(): - implementation_string += "%s: %s\n" % (param, value) - implementation_string = implementation_string.replace("\n", "
      ") - self.implementation_string = implementation_string - self.implementation_win = self.vis.text( - implementation_string, - opts={"title": "Training implementation"} - ) - - def update(self, loss, eer, step): - # Update the tracking data - now = timer() - self.step_times.append(1000 * (now - self.last_update_timestamp)) - self.last_update_timestamp = now - self.losses.append(loss) - self.eers.append(eer) - print(".", end="") - - # Update the plots every steps - if step % self.update_every != 0: - return - time_string = "Step time: mean: %5dms std: %5dms" % \ - (int(np.mean(self.step_times)), int(np.std(self.step_times))) - print("\nStep %6d Loss: %.4f EER: %.4f %s" % - (step, np.mean(self.losses), np.mean(self.eers), time_string)) - if not self.disabled: - self.loss_win = self.vis.line( - [np.mean(self.losses)], - [step], - win=self.loss_win, - update="append" if self.loss_win else None, - opts=dict( - legend=["Avg. loss"], - xlabel="Step", - ylabel="Loss", - title="Loss", - ) - ) - self.eer_win = self.vis.line( - [np.mean(self.eers)], - [step], - win=self.eer_win, - update="append" if self.eer_win else None, - opts=dict( - legend=["Avg. EER"], - xlabel="Step", - ylabel="EER", - title="Equal error rate" - ) - ) - if self.implementation_win is not None: - self.vis.text( - self.implementation_string + ("%s" % time_string), - win=self.implementation_win, - opts={"title": "Training implementation"}, - ) - - # Reset the tracking - self.losses.clear() - self.eers.clear() - self.step_times.clear() - - def draw_projections(self, embeds, utterances_per_speaker, step, out_fpath=None, - max_speakers=10): - max_speakers = min(max_speakers, len(colormap)) - embeds = embeds[:max_speakers * utterances_per_speaker] - - n_speakers = len(embeds) // utterances_per_speaker - ground_truth = np.repeat(np.arange(n_speakers), utterances_per_speaker) - colors = [colormap[i] for i in ground_truth] - - reducer = umap.UMAP() - projected = reducer.fit_transform(embeds) - plt.scatter(projected[:, 0], projected[:, 1], c=colors) - plt.gca().set_aspect("equal", "datalim") - plt.title("UMAP projection (step %d)" % step) - if not self.disabled: - self.projection_win = self.vis.matplot(plt, win=self.projection_win) - if out_fpath is not None: - plt.savefig(out_fpath) - plt.clf() - - def save(self): - if not self.disabled: - self.vis.save([self.env_name]) - \ No newline at end of file diff --git a/spaces/ZilliaxOfficial/nyaru-svc-3.0/resample.py b/spaces/ZilliaxOfficial/nyaru-svc-3.0/resample.py deleted file mode 100644 index fabae4afbb330cccad1681b7941a63547c93c640..0000000000000000000000000000000000000000 --- a/spaces/ZilliaxOfficial/nyaru-svc-3.0/resample.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - # speaker 's5', 'p280', 'p315' are excluded, - speaker = spkdir.split(os.sep)[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, None) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2) - save_name = wav_name - save_path2 = os.path.join(args.out_dir2, speaker, save_name) - wavfile.write( - save_path2, - args.sr2, - (wav2 * 
np.iinfo(np.int16).max).astype(np.int16) - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr2", type=int, default=32000, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir") - parser.add_argument("--out_dir2", type=str, default="./dataset/32k", help="path to target dir") - args = parser.parse_args() - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/__init__.py deleted file mode 100644 index 999e090a458ee148ceca0649f1e3806a40e909bd..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/__init__.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .assign_score_withk import assign_score_withk -from .ball_query import ball_query -from .bbox import bbox_overlaps -from .border_align import BorderAlign, border_align -from .box_iou_rotated import box_iou_rotated -from .carafe import CARAFE, CARAFENaive, CARAFEPack, carafe, carafe_naive -from .cc_attention import CrissCrossAttention -from .contour_expand import contour_expand -from .corner_pool import CornerPool -from .correlation import Correlation -from .deform_conv import DeformConv2d, DeformConv2dPack, deform_conv2d -from .deform_roi_pool import (DeformRoIPool, DeformRoIPoolPack, - ModulatedDeformRoIPoolPack, deform_roi_pool) -from .deprecated_wrappers import Conv2d_deprecated as Conv2d -from .deprecated_wrappers import ConvTranspose2d_deprecated as ConvTranspose2d -from .deprecated_wrappers import Linear_deprecated as Linear -from .deprecated_wrappers import MaxPool2d_deprecated as MaxPool2d -from .focal_loss import (SigmoidFocalLoss, SoftmaxFocalLoss, - sigmoid_focal_loss, softmax_focal_loss) -from .furthest_point_sample import (furthest_point_sample, - furthest_point_sample_with_dist) -from .fused_bias_leakyrelu import FusedBiasLeakyReLU, fused_bias_leakyrelu -from .gather_points import gather_points -from .group_points import GroupAll, QueryAndGroup, grouping_operation -from .info import (get_compiler_version, get_compiling_cuda_version, - get_onnxruntime_op_path) -from .iou3d import boxes_iou_bev, nms_bev, nms_normal_bev -from .knn import knn -from .masked_conv import MaskedConv2d, masked_conv2d -from .modulated_deform_conv import (ModulatedDeformConv2d, - ModulatedDeformConv2dPack, - modulated_deform_conv2d) -from .multi_scale_deform_attn import MultiScaleDeformableAttention -from .nms import batched_nms, nms, nms_match, nms_rotated, soft_nms -from .pixel_group import pixel_group -from .point_sample import (SimpleRoIAlign, point_sample, - rel_roi_point_to_rel_img_point) -from .points_in_boxes import (points_in_boxes_all, points_in_boxes_cpu, - points_in_boxes_part) -from .points_sampler import PointsSampler -from .psa_mask import PSAMask -from .roi_align import RoIAlign, roi_align -from .roi_align_rotated import RoIAlignRotated, roi_align_rotated -from .roi_pool import RoIPool, roi_pool -from .roiaware_pool3d import RoIAwarePool3d -from .roipoint_pool3d import RoIPointPool3d -from .saconv import 
SAConv2d -from .scatter_points import DynamicScatter, dynamic_scatter -from .sync_bn import SyncBatchNorm -from .three_interpolate import three_interpolate -from .three_nn import three_nn -from .tin_shift import TINShift, tin_shift -from .upfirdn2d import upfirdn2d -from .voxelize import Voxelization, voxelization - -__all__ = [ - 'bbox_overlaps', 'CARAFE', 'CARAFENaive', 'CARAFEPack', 'carafe', - 'carafe_naive', 'CornerPool', 'DeformConv2d', 'DeformConv2dPack', - 'deform_conv2d', 'DeformRoIPool', 'DeformRoIPoolPack', - 'ModulatedDeformRoIPoolPack', 'deform_roi_pool', 'SigmoidFocalLoss', - 'SoftmaxFocalLoss', 'sigmoid_focal_loss', 'softmax_focal_loss', - 'get_compiler_version', 'get_compiling_cuda_version', - 'get_onnxruntime_op_path', 'MaskedConv2d', 'masked_conv2d', - 'ModulatedDeformConv2d', 'ModulatedDeformConv2dPack', - 'modulated_deform_conv2d', 'batched_nms', 'nms', 'soft_nms', 'nms_match', - 'RoIAlign', 'roi_align', 'RoIPool', 'roi_pool', 'SyncBatchNorm', 'Conv2d', - 'ConvTranspose2d', 'Linear', 'MaxPool2d', 'CrissCrossAttention', 'PSAMask', - 'point_sample', 'rel_roi_point_to_rel_img_point', 'SimpleRoIAlign', - 'SAConv2d', 'TINShift', 'tin_shift', 'assign_score_withk', - 'box_iou_rotated', 'RoIPointPool3d', 'nms_rotated', 'knn', 'ball_query', - 'upfirdn2d', 'FusedBiasLeakyReLU', 'fused_bias_leakyrelu', - 'RoIAlignRotated', 'roi_align_rotated', 'pixel_group', 'QueryAndGroup', - 'GroupAll', 'grouping_operation', 'contour_expand', 'three_nn', - 'three_interpolate', 'MultiScaleDeformableAttention', 'BorderAlign', - 'border_align', 'gather_points', 'furthest_point_sample', - 'furthest_point_sample_with_dist', 'PointsSampler', 'Correlation', - 'boxes_iou_bev', 'nms_bev', 'nms_normal_bev', 'Voxelization', - 'voxelization', 'dynamic_scatter', 'DynamicScatter', 'RoIAwarePool3d', - 'points_in_boxes_part', 'points_in_boxes_cpu', 'points_in_boxes_all' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/point_rend.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/point_rend.py deleted file mode 100644 index 808ef2258ae88301d349db3aaa2711f223e5c971..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/point_rend.py +++ /dev/null @@ -1,29 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class PointRend(TwoStageDetector): - """PointRend: Image Segmentation as Rendering - - This detector is the implementation of - `PointRend `_. 
- - """ - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(PointRend, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) diff --git a/spaces/adhisetiawan/anime-voice-generator/mel_processing.py b/spaces/adhisetiawan/anime-voice-generator/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/adhisetiawan/anime-voice-generator/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = 
torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/adirik/stylemc-demo/encoder4editing/models/__init__.py b/spaces/adirik/stylemc-demo/encoder4editing/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/MT3/app.py b/spaces/akhaliq/MT3/app.py deleted file mode 100644 index 8227395fcb2f3edf60bf52e301dcb85a3386ef97..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/MT3/app.py +++ /dev/null @@ -1,295 +0,0 @@ -import os -os.system("pip install gradio==2.4.6") -import gradio as gr -from pathlib import Path -os.system("pip install gsutil") - - -os.system("git clone --branch=main https://github.com/google-research/t5x") -os.system("mv t5x t5x_tmp; mv t5x_tmp/* .; rm -r t5x_tmp") -os.system("sed -i 's:jax\[tpu\]:jax:' setup.py") -os.system("python3 -m pip install -e .") - - -# install mt3 -os.system("git clone --branch=main https://github.com/magenta/mt3") -os.system("mv mt3 mt3_tmp; mv mt3_tmp/* .; rm -r mt3_tmp") -os.system("python3 -m pip install -e .") - -# copy checkpoints -os.system("gsutil -q -m cp -r gs://mt3/checkpoints .") - -# copy soundfont (originally from https://sites.google.com/site/soundfonts4u) -os.system("gsutil -q -m cp gs://magentadata/soundfonts/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2 .") - -#@title Imports and Definitions - -import functools -import os - -import numpy as np -import tensorflow.compat.v2 as tf - -import functools -import gin -import jax -import librosa -import note_seq -import seqio -import t5 -import t5x - -from mt3 import metrics_utils -from mt3 import models -from mt3 import network -from mt3 import note_sequences -from mt3 import preprocessors -from mt3 import spectrograms -from mt3 import vocabularies - - -import nest_asyncio -nest_asyncio.apply() - -SAMPLE_RATE = 16000 -SF2_PATH = 'SGM-v2.01-Sal-Guit-Bass-V1.3.sf2' - -def upload_audio(audio, sample_rate): - return note_seq.audio_io.wav_data_to_samples_librosa( - audio, sample_rate=sample_rate) - - - -class InferenceModel(object): - """Wrapper of T5X model for music transcription.""" - - def __init__(self, checkpoint_path, model_type='mt3'): - - # Model Constants. - if model_type == 'ismir2021': - num_velocity_bins = 127 - self.encoding_spec = note_sequences.NoteEncodingSpec - self.inputs_length = 512 - elif model_type == 'mt3': - num_velocity_bins = 1 - self.encoding_spec = note_sequences.NoteEncodingWithTiesSpec - self.inputs_length = 256 - else: - raise ValueError('unknown model_type: %s' % model_type) - - gin_files = ['/home/user/app/mt3/gin/model.gin', - '/home/user/app/mt3/gin/mt3.gin'] - - self.batch_size = 8 - self.outputs_length = 1024 - self.sequence_length = {'inputs': self.inputs_length, - 'targets': self.outputs_length} - - self.partitioner = t5x.partitioning.ModelBasedPjitPartitioner( - model_parallel_submesh=(1, 1, 1, 1), num_partitions=1) - - # Build Codecs and Vocabularies. 
- self.spectrogram_config = spectrograms.SpectrogramConfig() - self.codec = vocabularies.build_codec( - vocab_config=vocabularies.VocabularyConfig( - num_velocity_bins=num_velocity_bins)) - self.vocabulary = vocabularies.vocabulary_from_codec(self.codec) - self.output_features = { - 'inputs': seqio.ContinuousFeature(dtype=tf.float32, rank=2), - 'targets': seqio.Feature(vocabulary=self.vocabulary), - } - - # Create a T5X model. - self._parse_gin(gin_files) - self.model = self._load_model() - - # Restore from checkpoint. - self.restore_from_checkpoint(checkpoint_path) - - @property - def input_shapes(self): - return { - 'encoder_input_tokens': (self.batch_size, self.inputs_length), - 'decoder_input_tokens': (self.batch_size, self.outputs_length) - } - - def _parse_gin(self, gin_files): - """Parse gin files used to train the model.""" - gin_bindings = [ - 'from __gin__ import dynamic_registration', - 'from mt3 import vocabularies', - 'VOCAB_CONFIG=@vocabularies.VocabularyConfig()', - 'vocabularies.VocabularyConfig.num_velocity_bins=%NUM_VELOCITY_BINS' - ] - with gin.unlock_config(): - gin.parse_config_files_and_bindings( - gin_files, gin_bindings, finalize_config=False) - - def _load_model(self): - """Load up a T5X `Model` after parsing training gin config.""" - model_config = gin.get_configurable(network.T5Config)() - module = network.Transformer(config=model_config) - return models.ContinuousInputsEncoderDecoderModel( - module=module, - input_vocabulary=self.output_features['inputs'].vocabulary, - output_vocabulary=self.output_features['targets'].vocabulary, - optimizer_def=t5x.adafactor.Adafactor(decay_rate=0.8, step_offset=0), - input_depth=spectrograms.input_depth(self.spectrogram_config)) - - - def restore_from_checkpoint(self, checkpoint_path): - """Restore training state from checkpoint, resets self._predict_fn().""" - train_state_initializer = t5x.utils.TrainStateInitializer( - optimizer_def=self.model.optimizer_def, - init_fn=self.model.get_initial_variables, - input_shapes=self.input_shapes, - partitioner=self.partitioner) - - restore_checkpoint_cfg = t5x.utils.RestoreCheckpointConfig( - path=checkpoint_path, mode='specific', dtype='float32') - - train_state_axes = train_state_initializer.train_state_axes - self._predict_fn = self._get_predict_fn(train_state_axes) - self._train_state = train_state_initializer.from_checkpoint_or_scratch( - [restore_checkpoint_cfg], init_rng=jax.random.PRNGKey(0)) - - @functools.lru_cache() - def _get_predict_fn(self, train_state_axes): - """Generate a partitioned prediction function for decoding.""" - def partial_predict_fn(params, batch, decode_rng): - return self.model.predict_batch_with_aux( - params, batch, decoder_params={'decode_rng': None}) - return self.partitioner.partition( - partial_predict_fn, - in_axis_resources=( - train_state_axes.params, - t5x.partitioning.PartitionSpec('data',), None), - out_axis_resources=t5x.partitioning.PartitionSpec('data',) - ) - - def predict_tokens(self, batch, seed=0): - """Predict tokens from preprocessed dataset batch.""" - prediction, _ = self._predict_fn( - self._train_state.params, batch, jax.random.PRNGKey(seed)) - return self.vocabulary.decode_tf(prediction).numpy() - - def __call__(self, audio): - """Infer note sequence from audio samples. - - Args: - audio: 1-d numpy array of audio samples (16kHz) for a single example. - - Returns: - A note_sequence of the transcribed audio. 
- """ - ds = self.audio_to_dataset(audio) - ds = self.preprocess(ds) - - model_ds = self.model.FEATURE_CONVERTER_CLS(pack=False)( - ds, task_feature_lengths=self.sequence_length) - model_ds = model_ds.batch(self.batch_size) - - inferences = (tokens for batch in model_ds.as_numpy_iterator() - for tokens in self.predict_tokens(batch)) - - predictions = [] - for example, tokens in zip(ds.as_numpy_iterator(), inferences): - predictions.append(self.postprocess(tokens, example)) - - result = metrics_utils.event_predictions_to_ns( - predictions, codec=self.codec, encoding_spec=self.encoding_spec) - return result['est_ns'] - - def audio_to_dataset(self, audio): - """Create a TF Dataset of spectrograms from input audio.""" - frames, frame_times = self._audio_to_frames(audio) - return tf.data.Dataset.from_tensors({ - 'inputs': frames, - 'input_times': frame_times, - }) - - def _audio_to_frames(self, audio): - """Compute spectrogram frames from audio.""" - frame_size = self.spectrogram_config.hop_width - padding = [0, frame_size - len(audio) % frame_size] - audio = np.pad(audio, padding, mode='constant') - frames = spectrograms.split_audio(audio, self.spectrogram_config) - num_frames = len(audio) // frame_size - times = np.arange(num_frames) / self.spectrogram_config.frames_per_second - return frames, times - - def preprocess(self, ds): - pp_chain = [ - functools.partial( - t5.data.preprocessors.split_tokens_to_inputs_length, - sequence_length=self.sequence_length, - output_features=self.output_features, - feature_key='inputs', - additional_feature_keys=['input_times']), - # Cache occurs here during training. - preprocessors.add_dummy_targets, - functools.partial( - preprocessors.compute_spectrograms, - spectrogram_config=self.spectrogram_config) - ] - for pp in pp_chain: - ds = pp(ds) - return ds - - def postprocess(self, tokens, example): - tokens = self._trim_eos(tokens) - start_time = example['input_times'][0] - # Round down to nearest symbolic token step. - start_time -= start_time % (1 / self.codec.steps_per_second) - return { - 'est_tokens': tokens, - 'start_time': start_time, - # Internal MT3 code expects raw inputs, not used here. - 'raw_inputs': [] - } - - @staticmethod - def _trim_eos(tokens): - tokens = np.array(tokens, np.int32) - if vocabularies.DECODED_EOS_ID in tokens: - tokens = tokens[:np.argmax(tokens == vocabularies.DECODED_EOS_ID)] - return tokens - - - - - - -inference_model = InferenceModel('/home/user/app/checkpoints/mt3/', 'mt3') - - -def inference(audio): - with open(audio, 'rb') as fd: - contents = fd.read() - audio = upload_audio(contents,sample_rate=16000) - - est_ns = inference_model(audio) - - note_seq.sequence_proto_to_midi_file(est_ns, './transcribed.mid') - - return './transcribed.mid' - -title = "MT3" -description = "Gradio demo for MT3: Multi-Task Multitrack Music Transcription. To use it, simply upload your audio file, or click one of the examples to load them. Read more at the links below." - -article = "
      MT3: Multi-Task Multitrack Music Transcription | Github Repo
      " - -examples=[['download.wav']] - -gr.Interface( - inference, - gr.inputs.Audio(type="filepath", label="Input"), - [gr.outputs.File(label="Output")], - title=title, - description=description, - article=article, - examples=examples, - allow_flagging=False, - allow_screenshot=False, - enable_queue=True - ).launch() \ No newline at end of file diff --git a/spaces/akhaliq/deeplab2/model/encoder/axial_resnet_instances.py b/spaces/akhaliq/deeplab2/model/encoder/axial_resnet_instances.py deleted file mode 100644 index a110c11cd9a97aec27be98b85b5136af291004ef..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/encoder/axial_resnet_instances.py +++ /dev/null @@ -1,493 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Contains Axial-ResNet model instances for Axial-DeepLab and MaX-DeepLab. - -Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - MaX-DeepLab: "End-to-End Panoptic Segmentation with Mask Transformers", - CVPR 2021. https://arxiv.org/abs/2012.00759 - Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen. -""" - -import abc -import collections.abc -import copy - -from absl import logging -import tensorflow as tf - -from deeplab2.model.encoder import axial_resnet - - -def _get_default_config(): - """Gets the default config for Axial-ResNets.""" - # The default config dictionary for an Axial-ResNet is the MaX-DeepLab-S - # architecture for panoptic segmentation. This default config dictionary also - # exactly matches the default arguments of the functions. 
- default_config = { - 'num_blocks': [3, 4, 6, 3], - 'backbone_layer_multiplier': 1.0, - 'width_multiplier': 1.0, - 'stem_width_multiplier': 1.0, - 'output_stride': 16, - 'classification_mode': False, - 'backbone_type': 'resnet_beta', - 'use_axial_beyond_stride': 16, - 'backbone_use_transformer_beyond_stride': 32, - 'extra_decoder_use_transformer_beyond_stride': 32, - 'backbone_decoder_num_stacks': 0, - 'backbone_decoder_blocks_per_stage': 1, - 'extra_decoder_num_stacks': 0, - 'extra_decoder_blocks_per_stage': 1, - 'max_num_mask_slots': 128, - 'num_mask_slots': 128, - 'memory_channels': 256, - 'base_transformer_expansion': 1.0, - 'global_feed_forward_network_channels': 256, - 'high_resolution_output_stride': 4, - 'activation': 'relu', - 'block_group_config': { - 'attention_bottleneck_expansion': 2, - 'drop_path_keep_prob': 0.8, - 'drop_path_beyond_stride': 16, - 'drop_path_schedule': 'constant', - 'positional_encoding_type': None, - 'use_global_beyond_stride': 0, - 'use_sac_beyond_stride': 0, - 'use_squeeze_and_excite': False, - 'conv_use_recompute_grad': False, - 'axial_use_recompute_grad': True, - 'recompute_within_stride': 0, - 'transformer_use_recompute_grad': False, - 'axial_layer_config': { - 'query_shape': (129, 129), - 'key_expansion': 1, - 'value_expansion': 2, - 'memory_flange': (32, 32), - 'double_global_attention': False, - 'num_heads': 8, - 'use_query_rpe_similarity': True, - 'use_key_rpe_similarity': True, - 'use_content_similarity': True, - 'retrieve_value_rpe': True, - 'retrieve_value_content': True, - 'initialization_std_for_query_key_rpe': 1.0, - 'initialization_std_for_value_rpe': 1.0, - 'self_attention_activation': 'softmax', - }, - 'dual_path_transformer_layer_config': { - 'num_heads': 8, - 'bottleneck_expansion': 2, - 'key_expansion': 1, - 'value_expansion': 2, - 'feed_forward_network_channels': 2048, - 'use_memory_self_attention': True, - 'use_pixel2memory_feedback_attention': True, - 'transformer_activation': 'softmax', - }, - }, - 'bn_layer': tf.keras.layers.BatchNormalization, - 'conv_kernel_weight_decay': 0.0, - } - return default_config - - -def override(config_dict, override_dict): - """Recursively overrides a config dict with another.""" - output_dict = copy.deepcopy(config_dict) - for key, value in override_dict.items(): - if isinstance(value, collections.abc.Mapping): - output_dict[key] = override(config_dict.get(key, {}), value) - else: - output_dict[key] = value - return output_dict - - -class AxialResNetInstance(axial_resnet.AxialResNet): - """A base Axial-ResNet model.""" - - @classmethod - @abc.abstractmethod - def _get_config(cls): - pass - - def __init__(self, name, **kwargs): - """Builds an Axial-ResNet model.""" - # Get the config of the current model. - current_config = self._get_config() - - # Override the default config with the current config. This line can be - # omitted because the default config equals the default arguments of the - # functions that build the model. But we make all the configs explicit here. - current_config = override(_get_default_config(), current_config) - - # Finally, override the current model config with keyword arguments. In this - # way, we still respect arguments passed as keyword arguments, such as - # classification_mode, output_stride, etc. - current_config = override(current_config, kwargs) - logging.info('Axial-ResNet final config: %s', current_config) - super(AxialResNetInstance, self).__init__(name, **current_config) - - -class MaXDeepLabS(AxialResNetInstance): - """MaX-DeepLab-S for panoptic segmentation. 
- - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - MaX-DeepLab: "End-to-End Panoptic Segmentation with Mask Transformers", - CVPR 2021. https://arxiv.org/abs/2012.00759 - Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - # Return an empty dictionary as the default values are all set for - # MaX-DeepLab-S. - return {} - - -class MaXDeepLabL(AxialResNetInstance): - """MaX-DeepLab-L for panoptic segmentation. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - MaX-DeepLab: "End-to-End Panoptic Segmentation with Mask Transformers", - CVPR 2021. https://arxiv.org/abs/2012.00759 - Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - return { - 'num_blocks': [3, 6, 3, 3], - 'backbone_type': 'wider_resnet', - 'backbone_use_transformer_beyond_stride': 16, - 'extra_decoder_use_transformer_beyond_stride': 16, - 'backbone_decoder_num_stacks': 1, - 'extra_decoder_num_stacks': 1, - 'extra_decoder_blocks_per_stage': 3, - 'memory_channels': 512, - 'base_transformer_expansion': 2.0, - 'global_feed_forward_network_channels': 512, - 'block_group_config': { - 'attention_bottleneck_expansion': 4, - 'drop_path_beyond_stride': 4, - 'axial_layer_config': { - 'key_expansion': 2, - 'value_expansion': 4, - }, - }, - } - - -class MaXDeepLabSBackbone(MaXDeepLabS): - """MaX-DeepLab-S backbone for image classification pretraining. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - MaX-DeepLab: "End-to-End Panoptic Segmentation with Mask Transformers", - CVPR 2021. https://arxiv.org/abs/2012.00759 - Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - base_config = super(MaXDeepLabSBackbone, cls)._get_config() - # Override the config of MaXDeepLabS. - override_config = { - 'classification_mode': True, - # The transformer blocks are not ImageNet pretrained. They are randomly - # initialized and trained from scratch for panoptic segmentation. - 'backbone_use_transformer_beyond_stride': 0, - } - return override(base_config, override_config) - - -class MaXDeepLabLBackbone(MaXDeepLabL): - """MaX-DeepLab-L backbone for image classification pretraining. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - MaX-DeepLab: "End-to-End Panoptic Segmentation with Mask Transformers", - CVPR 2021. https://arxiv.org/abs/2012.00759 - Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - base_config = super(MaXDeepLabLBackbone, cls)._get_config() - # Override the config of MaXDeepLabL. - override_config = { - 'classification_mode': True, - # The transformer blocks are not ImageNet pretrained. They are randomly - # initialized and trained from scratch for panoptic segmentation. 
- 'backbone_use_transformer_beyond_stride': 0, - } - return override(base_config, override_config) - - -class ResNet50(AxialResNetInstance): - """A ResNet-50 instance. - - Note that the implementation is different from the original ResNet-50 in: - (1) We apply strided convolutions in the first 3x3 convolution of the first - residual block of a stage. - (2) We replace the strided max pooling layer in the stem by applying strided - convolution in the immediate next residual block. - """ - - @classmethod - def _get_config(cls): - return { - 'classification_mode': True, - 'backbone_type': 'resnet', - 'use_axial_beyond_stride': 0, - 'backbone_use_transformer_beyond_stride': 0, - 'block_group_config': { - 'drop_path_keep_prob': 1.0, - }, - } - - -class ResNet50Beta(ResNet50): - """A ResNet-50 but with inception stem. - - Note that the implementation is different from the original ResNet-50 in: - (1) We apply strided convolutions in the first 3x3 convolution of the first - residual block of a stage. - (2) We replace the strided max pooling layer in the stem by applying strided - convolution in the immediate next residual block. - """ - - @classmethod - def _get_config(cls): - base_config = super(ResNet50Beta, cls)._get_config() - # Override the config of ResNet50. - override_config = { - 'backbone_type': 'resnet_beta', - } - return override(base_config, override_config) - - -class AxialResNetL(ResNet50): - """Axial-ResNet-L for image classification only. - - Axial-ResNet-L is a ResNet50 with use_axial_beyond_stride = 2. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - base_config = super(AxialResNetL, cls)._get_config() - # Override the config of ResNet50. - override_config = { - 'use_axial_beyond_stride': 2, - } - return override(base_config, override_config) - - -class AxialResNetS(ResNet50): - """Axial-ResNet-S for image classification only. - - Axial-ResNet-S is a ResNet50 with use_axial_beyond_stride = 2 and - width_multiplier = 0.5. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - base_config = super(AxialResNetS, cls)._get_config() - # Override the config of ResNet50. - override_config = { - 'width_multiplier': 0.5, - 'use_axial_beyond_stride': 2, - } - return override(base_config, override_config) - - -class AxialDeepLabL(ResNet50Beta): - """Axial-DeepLab-L for panoptic segmentation. - - Axial-DeepLab-L is a ResNet50Beta with use_axial_beyond_stride = 2. - Axial-DeepLab-L is also equivalent to Axial-ResNet-L with an inception stem. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - base_config = super(AxialDeepLabL, cls)._get_config() - override_config = { - 'use_axial_beyond_stride': 2, - } - return override(base_config, override_config) - - -class AxialDeepLabS(ResNet50Beta): - """Axial-DeepLab-S for panoptic segmentation. 
- - Axial-DeepLab-S is a ResNet50Beta with use_axial_beyond_stride = 2 and - width_multiplier = 0.5. - Axial-DeepLab-S is also equivalent to Axial-ResNet-S with an inception stem. - - Reference: - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - base_config = super(AxialDeepLabS, cls)._get_config() - override_config = { - 'width_multiplier': 0.5, - 'use_axial_beyond_stride': 2, - } - return override(base_config, override_config) - - -class SWideRNet(AxialResNetInstance): - """A SWideRNet instance. - - Note that the implementation is different from the original SWideRNet in: - (1) We apply strided convolutions in the first residual block of a stage, - instead of the last residual block. - (2) We replace the strided max pooling layer in the stem by applying strided - convolution in the immediate next residual block. - (3) We (optionally) use squeeze and excitation in all five stages, instead - of the last four stages only. - - Reference: - Scaling Wide Residual Networks for Panoptic Segmentation, - https://arxiv.org/abs/2011.11675 - Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao. - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. - """ - - @classmethod - def _get_config(cls): - return { - 'num_blocks': [3, 6, 3, 3], - 'classification_mode': True, - 'backbone_type': 'wider_resnet', - 'use_axial_beyond_stride': 0, - 'backbone_use_transformer_beyond_stride': 0, - 'block_group_config': { - 'drop_path_beyond_stride': 4, - 'conv_use_recompute_grad': True, - }, - } - - -class AxialSWideRNet(SWideRNet): - """SWideRNet with axial attention blocks in the last two stages. - - Note that the implementation is different from the original SWideRNet in: - (1) We apply strided convolutions in the first residual block of a stage, - instead of the last residual block. - (2) We replace the strided max pooling layer in the stem by applying strided - convolution in the immediate next residual block. - (3) We (optionally) use squeeze and excitation in all five stages, instead - of the last four stages only. - - Reference: - Scaling Wide Residual Networks for Panoptic Segmentation, - https://arxiv.org/abs/2011.11675 - Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao. - Axial-Deeplab: Stand-Alone Axial-Attention for Panoptic Segmentation, - ECCV 2020 Spotlight. https://arxiv.org/abs/2003.07853 - Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, - Liang-Chieh Chen. 
- """ - - @classmethod - def _get_config(cls): - base_config = super(AxialSWideRNet, cls)._get_config() - override_config = { - 'use_axial_beyond_stride': 16, - 'block_group_config': { - 'attention_bottleneck_expansion': 4, - 'axial_layer_config': { - 'key_expansion': 2, - 'value_expansion': 4, - }, - }, - } - return override(base_config, override_config) - - -def get_model(name, **kwargs): - """Gets the model instance given the model name.""" - name_lower = name.lower() - if name_lower == 'max_deeplab_s': - return MaXDeepLabS(name_lower, **kwargs) - elif name_lower == 'max_deeplab_l': - return MaXDeepLabL(name_lower, **kwargs) - elif name_lower == 'max_deeplab_s_backbone': - return MaXDeepLabSBackbone(name_lower, **kwargs) - elif name_lower == 'max_deeplab_l_backbone': - return MaXDeepLabLBackbone(name_lower, **kwargs) - elif name_lower == 'resnet50': - return ResNet50(name_lower, **kwargs) - elif name_lower == 'resnet50_beta': - return ResNet50Beta(name_lower, **kwargs) - elif name_lower == 'swidernet' or name_lower == 'wide_resnet41': - return SWideRNet(name_lower, **kwargs) - elif name_lower == 'axial_swidernet': - return AxialSWideRNet(name_lower, **kwargs) - elif name_lower == 'axial_resnet_s': - return AxialResNetS(name_lower, **kwargs) - elif name_lower == 'axial_resnet_l': - return AxialResNetL(name_lower, **kwargs) - elif name_lower == 'axial_deeplab_s': - return AxialDeepLabS(name_lower, **kwargs) - elif name_lower == 'axial_deeplab_l': - return AxialDeepLabL(name_lower, **kwargs) - else: - raise ValueError(name_lower + ' is not supported.') diff --git a/spaces/akhaliq/lama/saicinpainting/evaluation/masks/mask.py b/spaces/akhaliq/lama/saicinpainting/evaluation/masks/mask.py deleted file mode 100644 index 3e34d0675a781fba983cb542f18390255aaf2609..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/evaluation/masks/mask.py +++ /dev/null @@ -1,429 +0,0 @@ -import enum -from copy import deepcopy - -import numpy as np -from skimage import img_as_ubyte -from skimage.transform import rescale, resize -try: - from detectron2 import model_zoo - from detectron2.config import get_cfg - from detectron2.engine import DefaultPredictor - DETECTRON_INSTALLED = True -except: - print("Detectron v2 is not installed") - DETECTRON_INSTALLED = False - -from .countless.countless2d import zero_corrected_countless - - -class ObjectMask(): - def __init__(self, mask): - self.height, self.width = mask.shape - (self.up, self.down), (self.left, self.right) = self._get_limits(mask) - self.mask = mask[self.up:self.down, self.left:self.right].copy() - - @staticmethod - def _get_limits(mask): - def indicator_limits(indicator): - lower = indicator.argmax() - upper = len(indicator) - indicator[::-1].argmax() - return lower, upper - - vertical_indicator = mask.any(axis=1) - vertical_limits = indicator_limits(vertical_indicator) - - horizontal_indicator = mask.any(axis=0) - horizontal_limits = indicator_limits(horizontal_indicator) - - return vertical_limits, horizontal_limits - - def _clean(self): - self.up, self.down, self.left, self.right = 0, 0, 0, 0 - self.mask = np.empty((0, 0)) - - def horizontal_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.horizontal_flip(inplace=True) - - self.mask = self.mask[:, ::-1] - return self - - def vertical_flip(self, inplace=False): - if not inplace: - flipped = deepcopy(self) - return flipped.vertical_flip(inplace=True) - - self.mask = self.mask[::-1, :] - return self - - def image_center(self): - y_center = 
self.up + (self.down - self.up) / 2 - x_center = self.left + (self.right - self.left) / 2 - return y_center, x_center - - def rescale(self, scaling_factor, inplace=False): - if not inplace: - scaled = deepcopy(self) - return scaled.rescale(scaling_factor, inplace=True) - - scaled_mask = rescale(self.mask.astype(float), scaling_factor, order=0) > 0.5 - (up, down), (left, right) = self._get_limits(scaled_mask) - self.mask = scaled_mask[up:down, left:right] - - y_center, x_center = self.image_center() - mask_height, mask_width = self.mask.shape - self.up = int(round(y_center - mask_height / 2)) - self.down = self.up + mask_height - self.left = int(round(x_center - mask_width / 2)) - self.right = self.left + mask_width - return self - - def crop_to_canvas(self, vertical=True, horizontal=True, inplace=False): - if not inplace: - cropped = deepcopy(self) - cropped.crop_to_canvas(vertical=vertical, horizontal=horizontal, inplace=True) - return cropped - - if vertical: - if self.up >= self.height or self.down <= 0: - self._clean() - else: - cut_up, cut_down = max(-self.up, 0), max(self.down - self.height, 0) - if cut_up != 0: - self.mask = self.mask[cut_up:] - self.up = 0 - if cut_down != 0: - self.mask = self.mask[:-cut_down] - self.down = self.height - - if horizontal: - if self.left >= self.width or self.right <= 0: - self._clean() - else: - cut_left, cut_right = max(-self.left, 0), max(self.right - self.width, 0) - if cut_left != 0: - self.mask = self.mask[:, cut_left:] - self.left = 0 - if cut_right != 0: - self.mask = self.mask[:, :-cut_right] - self.right = self.width - - return self - - def restore_full_mask(self, allow_crop=False): - cropped = self.crop_to_canvas(inplace=allow_crop) - mask = np.zeros((cropped.height, cropped.width), dtype=bool) - mask[cropped.up:cropped.down, cropped.left:cropped.right] = cropped.mask - return mask - - def shift(self, vertical=0, horizontal=0, inplace=False): - if not inplace: - shifted = deepcopy(self) - return shifted.shift(vertical=vertical, horizontal=horizontal, inplace=True) - - self.up += vertical - self.down += vertical - self.left += horizontal - self.right += horizontal - return self - - def area(self): - return self.mask.sum() - - -class RigidnessMode(enum.Enum): - soft = 0 - rigid = 1 - - -class SegmentationMask: - def __init__(self, confidence_threshold=0.5, rigidness_mode=RigidnessMode.rigid, - max_object_area=0.3, min_mask_area=0.02, downsample_levels=6, num_variants_per_mask=4, - max_mask_intersection=0.5, max_foreground_coverage=0.5, max_foreground_intersection=0.5, - max_hidden_area=0.2, max_scale_change=0.25, horizontal_flip=True, - max_vertical_shift=0.1, position_shuffle=True): - """ - :param confidence_threshold: float; threshold for confidence of the panoptic segmentator to allow for - the instance. - :param rigidness_mode: RigidnessMode object - when soft, checks intersection only with the object from which the mask_object was produced - when rigid, checks intersection with any foreground class object - :param max_object_area: float; allowed upper bound for to be considered as mask_object. 
- :param min_mask_area: float; lower bound for mask to be considered valid - :param downsample_levels: int; defines width of the resized segmentation to obtain shifted masks; - :param num_variants_per_mask: int; maximal number of the masks for the same object; - :param max_mask_intersection: float; maximum allowed area fraction of intersection for 2 masks - produced by horizontal shift of the same mask_object; higher value -> more diversity - :param max_foreground_coverage: float; maximum allowed area fraction of intersection for foreground object to be - covered by mask; lower value -> less the objects are covered - :param max_foreground_intersection: float; maximum allowed area of intersection for the mask with foreground - object; lower value -> mask is more on the background than on the objects - :param max_hidden_area: upper bound on part of the object hidden by shifting object outside the screen area; - :param max_scale_change: allowed scale change for the mask_object; - :param horizontal_flip: if horizontal flips are allowed; - :param max_vertical_shift: amount of vertical movement allowed; - :param position_shuffle: shuffle - """ - - assert DETECTRON_INSTALLED, 'Cannot use SegmentationMask without detectron2' - self.cfg = get_cfg() - self.cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")) - self.cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml") - self.cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = confidence_threshold - self.predictor = DefaultPredictor(self.cfg) - - self.rigidness_mode = RigidnessMode(rigidness_mode) - self.max_object_area = max_object_area - self.min_mask_area = min_mask_area - self.downsample_levels = downsample_levels - self.num_variants_per_mask = num_variants_per_mask - self.max_mask_intersection = max_mask_intersection - self.max_foreground_coverage = max_foreground_coverage - self.max_foreground_intersection = max_foreground_intersection - self.max_hidden_area = max_hidden_area - self.position_shuffle = position_shuffle - - self.max_scale_change = max_scale_change - self.horizontal_flip = horizontal_flip - self.max_vertical_shift = max_vertical_shift - - def get_segmentation(self, img): - im = img_as_ubyte(img) - panoptic_seg, segment_info = self.predictor(im)["panoptic_seg"] - return panoptic_seg, segment_info - - @staticmethod - def _is_power_of_two(n): - return (n != 0) and (n & (n-1) == 0) - - def identify_candidates(self, panoptic_seg, segments_info): - potential_mask_ids = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = (panoptic_seg == segment["id"]).int().detach().cpu().numpy() - area = mask.sum().item() / np.prod(panoptic_seg.shape) - if area >= self.max_object_area: - continue - potential_mask_ids.append(segment["id"]) - return potential_mask_ids - - def downsample_mask(self, mask): - height, width = mask.shape - if not (self._is_power_of_two(height) and self._is_power_of_two(width)): - raise ValueError("Image sides are not power of 2.") - - num_iterations = width.bit_length() - 1 - self.downsample_levels - if num_iterations < 0: - raise ValueError(f"Width is lower than 2^{self.downsample_levels}.") - - if height.bit_length() - 1 < num_iterations: - raise ValueError("Height is too low to perform downsampling") - - downsampled = mask - for _ in range(num_iterations): - downsampled = zero_corrected_countless(downsampled) - - return downsampled - - def _augmentation_params(self): - scaling_factor 
= np.random.uniform(1 - self.max_scale_change, 1 + self.max_scale_change) - if self.horizontal_flip: - horizontal_flip = bool(np.random.choice(2)) - else: - horizontal_flip = False - vertical_shift = np.random.uniform(-self.max_vertical_shift, self.max_vertical_shift) - - return { - "scaling_factor": scaling_factor, - "horizontal_flip": horizontal_flip, - "vertical_shift": vertical_shift - } - - def _get_intersection(self, mask_array, mask_object): - intersection = mask_array[ - mask_object.up:mask_object.down, mask_object.left:mask_object.right - ] & mask_object.mask - return intersection - - def _check_masks_intersection(self, aug_mask, total_mask_area, prev_masks): - for existing_mask in prev_masks: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - intersection_current = 1 - (aug_mask.area() - intersection_area) / total_mask_area - if (intersection_existing > self.max_mask_intersection) or \ - (intersection_current > self.max_mask_intersection): - return False - return True - - def _check_foreground_intersection(self, aug_mask, foreground): - for existing_mask in foreground: - intersection_area = self._get_intersection(existing_mask, aug_mask).sum() - intersection_existing = intersection_area / existing_mask.sum() - if intersection_existing > self.max_foreground_coverage: - return False - intersection_mask = intersection_area / aug_mask.area() - if intersection_mask > self.max_foreground_intersection: - return False - return True - - def _move_mask(self, mask, foreground): - # Obtaining properties of the original mask_object: - orig_mask = ObjectMask(mask) - - chosen_masks = [] - chosen_parameters = [] - # to fix the case when resizing gives mask_object consisting only of False - scaling_factor_lower_bound = 0. - - for var_idx in range(self.num_variants_per_mask): - # Obtaining augmentation parameters and applying them to the downscaled mask_object - augmentation_params = self._augmentation_params() - augmentation_params["scaling_factor"] = min([ - augmentation_params["scaling_factor"], - 2 * min(orig_mask.up, orig_mask.height - orig_mask.down) / orig_mask.height + 1., - 2 * min(orig_mask.left, orig_mask.width - orig_mask.right) / orig_mask.width + 1. - ]) - augmentation_params["scaling_factor"] = max([ - augmentation_params["scaling_factor"], scaling_factor_lower_bound - ]) - - aug_mask = deepcopy(orig_mask) - aug_mask.rescale(augmentation_params["scaling_factor"], inplace=True) - if augmentation_params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - total_aug_area = aug_mask.area() - if total_aug_area == 0: - scaling_factor_lower_bound = 1. 
- continue - - # Fix if the element vertical shift is too strong and shown area is too small: - vertical_area = aug_mask.mask.sum(axis=1) / total_aug_area # share of area taken by rows - # number of rows which are allowed to be hidden from upper and lower parts of image respectively - max_hidden_up = np.searchsorted(vertical_area.cumsum(), self.max_hidden_area) - max_hidden_down = np.searchsorted(vertical_area[::-1].cumsum(), self.max_hidden_area) - # correcting vertical shift, so not too much area will be hidden - augmentation_params["vertical_shift"] = np.clip( - augmentation_params["vertical_shift"], - -(aug_mask.up + max_hidden_up) / aug_mask.height, - (aug_mask.height - aug_mask.down + max_hidden_down) / aug_mask.height - ) - # Applying vertical shift: - vertical_shift = int(round(aug_mask.height * augmentation_params["vertical_shift"])) - aug_mask.shift(vertical=vertical_shift, inplace=True) - aug_mask.crop_to_canvas(vertical=True, horizontal=False, inplace=True) - - # Choosing horizontal shift: - max_hidden_area = self.max_hidden_area - (1 - aug_mask.area() / total_aug_area) - horizontal_area = aug_mask.mask.sum(axis=0) / total_aug_area - max_hidden_left = np.searchsorted(horizontal_area.cumsum(), max_hidden_area) - max_hidden_right = np.searchsorted(horizontal_area[::-1].cumsum(), max_hidden_area) - allowed_shifts = np.arange(-max_hidden_left, aug_mask.width - - (aug_mask.right - aug_mask.left) + max_hidden_right + 1) - allowed_shifts = - (aug_mask.left - allowed_shifts) - - if self.position_shuffle: - np.random.shuffle(allowed_shifts) - - mask_is_found = False - for horizontal_shift in allowed_shifts: - aug_mask_left = deepcopy(aug_mask) - aug_mask_left.shift(horizontal=horizontal_shift, inplace=True) - aug_mask_left.crop_to_canvas(inplace=True) - - prev_masks = [mask] + chosen_masks - is_mask_suitable = self._check_masks_intersection(aug_mask_left, total_aug_area, prev_masks) & \ - self._check_foreground_intersection(aug_mask_left, foreground) - if is_mask_suitable: - aug_draw = aug_mask_left.restore_full_mask() - chosen_masks.append(aug_draw) - augmentation_params["horizontal_shift"] = horizontal_shift / aug_mask_left.width - chosen_parameters.append(augmentation_params) - mask_is_found = True - break - - if not mask_is_found: - break - - return chosen_parameters - - def _prepare_mask(self, mask): - height, width = mask.shape - target_width = width if self._is_power_of_two(width) else (1 << width.bit_length()) - target_height = height if self._is_power_of_two(height) else (1 << height.bit_length()) - - return resize(mask.astype('float32'), (target_height, target_width), order=0, mode='edge').round().astype('int32') - - def get_masks(self, im, return_panoptic=False): - panoptic_seg, segments_info = self.get_segmentation(im) - potential_mask_ids = self.identify_candidates(panoptic_seg, segments_info) - - panoptic_seg_scaled = self._prepare_mask(panoptic_seg.detach().cpu().numpy()) - downsampled = self.downsample_mask(panoptic_seg_scaled) - scene_objects = [] - for segment in segments_info: - if not segment["isthing"]: - continue - mask = downsampled == segment["id"] - if not np.any(mask): - continue - scene_objects.append(mask) - - mask_set = [] - for mask_id in potential_mask_ids: - mask = downsampled == mask_id - if not np.any(mask): - continue - - if self.rigidness_mode is RigidnessMode.soft: - foreground = [mask] - elif self.rigidness_mode is RigidnessMode.rigid: - foreground = scene_objects - else: - raise ValueError(f'Unexpected rigidness_mode: {rigidness_mode}') - - 
masks_params = self._move_mask(mask, foreground) - - full_mask = ObjectMask((panoptic_seg == mask_id).detach().cpu().numpy()) - - for params in masks_params: - aug_mask = deepcopy(full_mask) - aug_mask.rescale(params["scaling_factor"], inplace=True) - if params["horizontal_flip"]: - aug_mask.horizontal_flip(inplace=True) - - vertical_shift = int(round(aug_mask.height * params["vertical_shift"])) - horizontal_shift = int(round(aug_mask.width * params["horizontal_shift"])) - aug_mask.shift(vertical=vertical_shift, horizontal=horizontal_shift, inplace=True) - aug_mask = aug_mask.restore_full_mask().astype('uint8') - if aug_mask.mean() <= self.min_mask_area: - continue - mask_set.append(aug_mask) - - if return_panoptic: - return mask_set, panoptic_seg.detach().cpu().numpy() - else: - return mask_set - - -def propose_random_square_crop(mask, min_overlap=0.5): - height, width = mask.shape - mask_ys, mask_xs = np.where(mask > 0.5) # mask==0 is known fragment and mask==1 is missing - - if height < width: - crop_size = height - obj_left, obj_right = mask_xs.min(), mask_xs.max() - obj_width = obj_right - obj_left - left_border = max(0, min(width - crop_size - 1, obj_left + obj_width * min_overlap - crop_size)) - right_border = max(left_border + 1, min(width - crop_size, obj_left + obj_width * min_overlap)) - start_x = np.random.randint(left_border, right_border) - return start_x, 0, start_x + crop_size, height - else: - crop_size = width - obj_top, obj_bottom = mask_ys.min(), mask_ys.max() - obj_height = obj_bottom - obj_top - top_border = max(0, min(height - crop_size - 1, obj_top + obj_height * min_overlap - crop_size)) - bottom_border = max(top_border + 1, min(height - crop_size, obj_top + obj_height * min_overlap)) - start_y = np.random.randint(top_border, bottom_border) - return 0, start_y, width, start_y + crop_size diff --git a/spaces/akhaliq/mae/README.md b/spaces/akhaliq/mae/README.md deleted file mode 100644 index 87ee25262ec0dbc8bf80d61317a6d3aaf8b89028..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/mae/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Mae -emoji: 🚀 -colorFrom: green -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
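For context on the mask-generation module deleted earlier in this diff (spaces/akhaliq/lama/saicinpainting/evaluation/masks/mask.py), which documents its constructor parameters but includes no caller, here is a hypothetical usage sketch. It is not code from any of these Spaces: the import path simply mirrors the file's location, the image path and constructor arguments are illustrative, and detectron2 plus its COCO panoptic checkpoint must be installed for SegmentationMask to load.

import numpy as np
from skimage.io import imread

# Hypothetical usage sketch only; assumes the module is importable under the
# path it had in the deleted Space and that detectron2 is installed.
from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop

image = imread('example.jpg') / 255.0            # any RGB test image, scaled to [0, 1]
mask_generator = SegmentationMask(num_variants_per_mask=2)

# get_masks() runs panoptic segmentation and returns full-size uint8 masks.
for i, mask in enumerate(mask_generator.get_masks(image)):
    # Optionally cut a square crop that still overlaps the masked object.
    left, top, right, bottom = propose_random_square_crop(mask, min_overlap=0.5)
    np.save(f'mask_{i:02d}.npy', mask[top:bottom, left:right])

The sketch follows the code above: get_masks() filters candidate objects by max_object_area and min_mask_area, and propose_random_square_crop() returns (left, top, right, bottom) coordinates of a square crop overlapping the masked region.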
diff --git a/spaces/akhooli/poetry2023/app.py b/spaces/akhooli/poetry2023/app.py deleted file mode 100644 index 5b6654d5a405778ddbc9ca5fa5d041aff535f3b5..0000000000000000000000000000000000000000 --- a/spaces/akhooli/poetry2023/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import gc -import gradio as gr -from transformers import pipeline, set_seed - -pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023') -#gc.collect() -samples = [['أنت' - ,1.0, 50, 1.0, 1.0, 114],['هل غادر' - ,1.0, 50, 1.0, 1.0, 114 ],['ألا ليت' - ,1.0, 50, 1.0, 1.0, 114 ],['يا قدس' - ,1.0, 50, 1.0, 1.0, 114],['عيد بأية حال' - ,1.0, 50, 1.0, 1.0, 114],['لكل شيء إذا ما' - ,1.0, 50, 1.0, 1.0, 114 ],['.' - ,1.0, 50, 1.0, 1.0, 114]] - -notes = """ -- Enter a short prompt or select (click) one of the examples and click SEND -- Adjust parameters (temperture, top k, top p and penalty) through the slider (keep close to default values). -- For the same seed (randomness), the same output is regenerated if other parameters are fixed -- Clear and enter new prompt or select another example and SEND to regenerate -- The '.' means start a new line from no prompt (your prompt need not be long) -- Be patient: this runs on CPU (free tier) -- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859) -- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk. -""" -def sayPoetry(prompt, temp=1.0, topk = 50, topp = 1.0, penalty=1.0, seed=114): - if not int(seed) >= 0: seed=114 - set_seed(seed) - gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty, - min_length = 64, no_repeat_ngram_size = 3, return_full_text=True, - num_beams=5, num_return_sequences=1)[0]["generated_text"] - poetry ="" - for line in gen.split('.')[:-1]: - poetry += line #+ "\n" - return poetry -poetry = gr.Interface(fn=sayPoetry, - inputs=[ - gr.Textbox(label="Enter short prompt or select from examples:"), - gr.Slider(0.70, 1.2, step=0.01,value=1.0, label='control temperature'), - gr.Slider(25, 100, step=1,value=50, label='control top k'), - gr.Slider(0.80, 1.0, step=0.01,value=1.0, label='control top p'), - gr.Slider(0.90, 1.50, step=0.01,value=1.0, label='control penalty'), - gr.Number(value=139750, precision=0, label='Seed'), - ], - outputs=[gr.Textbox(label="Generated Poetry:")], - - allow_flagging='never', - title='Arabic Poetry Generation Demo (updated Jan. 2023)', - description = "A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)", - examples=samples, - cache_examples=False, - article = notes) -poetry.launch() # show_error = True, debug=True \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/__main__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/__main__.py deleted file mode 100644 index 010896b88ff684c7a73a71ca23af5e76503cd0c2..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/__main__.py +++ /dev/null @@ -1,17 +0,0 @@ -""" - pygments.__main__ - ~~~~~~~~~~~~~~~~~ - - Main entry point for ``python -m pygments``. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import sys -from pip._vendor.pygments.cmdline import main - -try: - sys.exit(main(sys.argv)) -except KeyboardInterrupt: - sys.exit(1) diff --git a/spaces/aliabid94/AutoGPT/autogpt/memory/no_memory.py b/spaces/aliabid94/AutoGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. - """ - return {} diff --git a/spaces/allandclive/Uganda_MMS/uroman/lib/NLP/UTF8.pm b/spaces/allandclive/Uganda_MMS/uroman/lib/NLP/UTF8.pm deleted file mode 100644 index b28cb4dede3b84f45aeade2e24f240e3a39e7cc1..0000000000000000000000000000000000000000 --- a/spaces/allandclive/Uganda_MMS/uroman/lib/NLP/UTF8.pm +++ /dev/null @@ -1,1404 +0,0 @@ -################################################################ -# # -# UTF8 # -# # -################################################################ - -package NLP::UTF8; - -use NLP::utilities; -$util = NLP::utilities; - -%empty_ht = (); - -sub new { - local($caller) = @_; - - my $object = {}; - my $class = ref( $caller ) || $caller; - bless($object, $class); - return $object; -} - -sub unicode_string2string { -# input: string that might contain unicode sequences such as "U+0627" -# output: string in pure utf-8 - local($caller,$s) = @_; - - my $pre; - my $unicode; - my $post; - my $r1; - my $r2; - my $r3; - - ($pre,$unicode,$post) = ($s =~ /^(.*)(?:U\+|\\u)([0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f])(.*)$/); - return $s unless defined($post); - $r1 = $caller->unicode_string2string($pre); - $r2 = $caller->unicode_hex_string2string($unicode); - $r3 = $caller->unicode_string2string($post); - $result = $r1 . $r2 . $r3; - return $result; -} - -sub unicode_hex_string2string { -# input: "0627" (interpreted as hex code) -# output: utf-8 string for Arabic letter alef - local($caller,$unicode) = @_; - return "" unless defined($unicode); - my $d = hex($unicode); - return $caller->unicode2string($d); -} - -sub unicode2string { -# input: non-neg integer, e.g. 
0x627 -# output: utf-8 string for Arabic letter alef - local($caller,$d) = @_; - return "" unless defined($d) && $d >= 0; - return sprintf("%c",$d) if $d <= 0x7F; - - my $lastbyte1 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c",$d | 0xC0, $lastbyte1) if $d <= 0x1F; - - my $lastbyte2 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c",$d | 0xE0, $lastbyte2, $lastbyte1) if $d <= 0xF; - - my $lastbyte3 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c",$d | 0xF0, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x7; - - my $lastbyte4 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c%c",$d | 0xF8, $lastbyte4, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x3; - - my $lastbyte5 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c%c%c",$d | 0xFC, $lastbyte5, $lastbyte4, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x1; - return ""; # bad input -} - -sub html2utf8 { - local($caller, $string) = @_; - - return $string unless $string =~ /\&\#\d{3,5};/; - - my $prev = ""; - my $s = $string; - while ($s ne $prev) { - $prev = $s; - ($pre,$d,$post) = ($s =~ /^(.*)\&\#(\d+);(.*)$/); - if (defined($d) && ((($d >= 160) && ($d <= 255)) - || (($d >= 1500) && ($d <= 1699)) - || (($d >= 19968) && ($d <= 40879)))) { - $html_code = "\&\#" . $d . ";"; - $utf8_code = $caller->unicode2string($d); - $s =~ s/$html_code/$utf8_code/; - } - } - return $s; -} - -sub xhtml2utf8 { - local($caller, $string) = @_; - - return $string unless $string =~ /\&\#x[0-9a-fA-F]{2,5};/; - - my $prev = ""; - my $s = $string; - while ($s ne $prev) { - $prev = $s; - if (($pre, $html_code, $x, $post) = ($s =~ /^(.*)(\&\#x([0-9a-fA-F]{2,5});)(.*)$/)) { - $utf8_code = $caller->unicode_hex_string2string($x); - $s =~ s/$html_code/$utf8_code/; - } - } - return $s; -} - -sub utf8_marker { - return sprintf("%c%c%c\n", 0xEF, 0xBB, 0xBF); -} - -sub enforcer { -# input: string that might not conform to utf-8 -# output: string in pure utf-8, with a few "smart replacements" and possibly "?" - local($caller,$s,$no_repair) = @_; - - my $ascii; - my $utf8; - my $rest; - - return $s if $s =~ /^[\x00-\x7F]*$/; - - $no_repair = 0 unless defined($no_repair); - $orig = $s; - $result = ""; - - while ($s ne "") { - ($ascii,$rest) = ($s =~ /^([\x00-\x7F]+)(.*)$/); - if (defined($ascii)) { - $result .= $ascii; - $s = $rest; - next; - } - ($utf8,$rest) = ($s =~ /^([\xC0-\xDF][\x80-\xBF])(.*)$/); - ($utf8,$rest) = ($s =~ /^([\xE0-\xEF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - ($utf8,$rest) = ($s =~ /^([\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - ($utf8,$rest) = ($s =~ /^([\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - if (defined($utf8)) { - $result .= $utf8; - $s = $rest; - next; - } - ($c,$rest) = ($s =~ /^(.)(.*)$/); - if (defined($c)) { - if ($no_repair) { $result .= "?"; } - elsif ($c =~ /\x85/) { $result .= "..."; } - elsif ($c =~ /\x91/) { $result .= "'"; } - elsif ($c =~ /\x92/) { $result .= "'"; } - elsif ($c =~ /\x93/) { $result .= $caller->unicode2string(0x201C); } - elsif ($c =~ /\x94/) { $result .= $caller->unicode2string(0x201D); } - elsif ($c =~ /[\xC0-\xFF]/) { - $c2 = $c; - $c2 =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c2"; - } else { - $result .= "?"; - } - $s = $rest; - next; - } - $s = ""; - } - $result .= "\n" if ($orig =~ /\n$/) && ! 
($result =~ /\n$/); - return $result; -} - -sub split_into_utf8_characters { -# input: utf8 string -# output: list of sub-strings, each representing a utf8 character - local($caller,$string,$group_control, *ht) = @_; - - @characters = (); - $end_of_token_p_string = ""; - $skipped_bytes = ""; - $group_control = "" unless defined($group_control); - $group_ascii_numbers = ($group_control =~ /ASCII numbers/); - $group_ascii_spaces = ($group_control =~ /ASCII spaces/); - $group_ascii_punct = ($group_control =~ /ASCII punct/); - $group_ascii_chars = ($group_control =~ /ASCII chars/); - $group_xml_chars = ($group_control =~ /XML chars/); - $group_xml_tags = ($group_control =~ /XML tags/); - $return_only_chars = ($group_control =~ /return only chars/); - $return_trailing_whitespaces = ($group_control =~ /return trailing whitespaces/); - if ($group_control =~ /ASCII all/) { - $group_ascii_numbers = 1; - $group_ascii_spaces = 1; - $group_ascii_chars = 1; - $group_ascii_punct = 1; - } - if ($group_control =~ /(XML chars and tags|XML tags and chars)/) { - $group_xml_chars = 1; - $group_xml_tags = 1; - } - $orig_string = $string; - $string .= " "; - while ($string =~ /\S/) { - # one-character UTF-8 = ASCII - if ($string =~ /^[\x00-\x7F]/) { - if ($group_xml_chars - && (($dec_unicode, $rest) = ($string =~ /^&#(\d+);(.*)$/s)) - && ($utf8_char = $caller->unicode2string($dec_unicode))) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_chars - && (($hex_unicode, $rest) = ($string =~ /^&#x([0-9a-f]{1,6});(.*)$/is)) - && ($utf8_char = $caller->unicode_hex_string2string($hex_unicode))) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_chars - && (($html_entity_name, $rest) = ($string =~ /^&([a-z]{1,6});(.*)$/is)) - && ($dec_unicode = $ht{HTML_ENTITY_NAME_TO_DECUNICODE}->{$html_entity_name}) - && ($utf8_char = $caller->unicode2string($dec_unicode)) - ) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_tags - && (($tag, $rest) = ($string =~ /^(<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>)(.*)$/s))) { - push(@characters, $tag); - $string = $rest; - } elsif ($group_ascii_numbers && ($string =~ /^[12]\d\d\d\.[01]?\d.[0-3]?\d([^0-9].*)?$/)) { - ($date) = ($string =~ /^(\d\d\d\d\.\d?\d.\d?\d)([^0-9].*)?$/); - push(@characters,$date); - $string = substr($string, length($date)); - } elsif ($group_ascii_numbers && ($string =~ /^\d/)) { - ($number) = ($string =~ /^(\d+(,\d\d\d)*(\.\d+)?)/); - push(@characters,$number); - $string = substr($string, length($number)); - } elsif ($group_ascii_spaces && ($string =~ /^(\s+)/)) { - ($space) = ($string =~ /^(\s+)/); - $string = substr($string, length($space)); - } elsif ($group_ascii_punct && (($punct_seq) = ($string =~ /^(-+|\.+|[:,%()"])/))) { - push(@characters,$punct_seq); - $string = substr($string, length($punct_seq)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(\$[A-Z]*|[A-Z]{1,3}\$)/))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($abbrev) = ($string =~ /^((?:Jan|Feb|Febr|Mar|Apr|Jun|Jul|Aug|Sep|Sept|Oct|Nov|Dec|Mr|Mrs|Dr|a.m|p.m)\.)/))) { - push(@characters,$abbrev); - $string = substr($string, length($abbrev)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(second|minute|hour|day|week|month|year|inch|foot|yard|meter|kilometer|mile)-(?:long|old)/i))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($word) = ($string =~ 
/^(zero|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|ninety|hundred|thousand|million|billion|trillion)-/i))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^([a-zA-Z]+)(?:[ ,;%?|()"]|'s |' |\. |\d+[:hms][0-9 ])/))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && ($string =~ /^([\x21-\x27\x2A-\x7E]+)/)) { # exclude () - ($ascii) = ($string =~ /^([\x21-\x27\x2A-\x7E]+)/); # ASCII black-characters - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } elsif ($group_ascii_chars && ($string =~ /^([\x21-\x7E]+)/)) { - ($ascii) = ($string =~ /^([\x21-\x7E]+)/); # ASCII black-characters - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } elsif ($group_ascii_chars && ($string =~ /^([\x00-\x7F]+)/)) { - ($ascii) = ($string =~ /^([\x00-\x7F]+)/); - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } else { - push(@characters,substr($string, 0, 1)); - $string = substr($string, 1); - } - - # two-character UTF-8 - } elsif ($string =~ /^[\xC0-\xDF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 2)); - $string = substr($string, 2); - - # three-character UTF-8 - } elsif ($string =~ /^[\xE0-\xEF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 3)); - $string = substr($string, 3); - - # four-character UTF-8 - } elsif ($string =~ /^[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 4)); - $string = substr($string, 4); - - # five-character UTF-8 - } elsif ($string =~ /^[\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 5)); - $string = substr($string, 5); - - # six-character UTF-8 - } elsif ($string =~ /^[\xFC-\xFD][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 6)); - $string = substr($string, 6); - - # not a UTF-8 character - } else { - $skipped_bytes .= substr($string, 0, 1); - $string = substr($string, 1); - } - - $end_of_token_p_string .= ($string =~ /^\S/) ? "0" : "1" - if $#characters >= length($end_of_token_p_string); - } - $string =~ s/ $//; # remove previously added space, but keep original spaces - if ($return_trailing_whitespaces) { - while ($string =~ /^[ \t]/) { - push(@characters,substr($string, 0, 1)); - $string = substr($string, 1); - } - push(@characters, "\n") if $orig_string =~ /\n$/; - } - return ($return_only_chars) ? @characters : ($skipped_bytes, $end_of_token_p_string, @characters); -} - -sub max_substring_info { - local($caller,$s1,$s2,$info_type) = @_; - - ($skipped_bytes1, $end_of_token_p_string1, @char_list1) = $caller->split_into_utf8_characters($s1, "", *empty_ht); - ($skipped_bytes2, $end_of_token_p_string2, @char_list2) = $caller->split_into_utf8_characters($s2, "", *empty_ht); - return 0 if $skipped_bytes1 || $skipped_bytes2; - - $best_substring_start1 = 0; - $best_substring_start2 = 0; - $best_substring_length = 0; - - foreach $start_pos2 ((0 .. $#char_list2)) { - last if $start_pos2 + $best_substring_length > $#char_list2; - foreach $start_pos1 ((0 .. 
$#char_list1)) { - last if $start_pos1 + $best_substring_length > $#char_list1; - $matching_length = 0; - while (($start_pos1 + $matching_length <= $#char_list1) - && ($start_pos2 + $matching_length <= $#char_list2) - && ($char_list1[$start_pos1+$matching_length] eq $char_list2[$start_pos2+$matching_length])) { - $matching_length++; - } - if ($matching_length > $best_substring_length) { - $best_substring_length = $matching_length; - $best_substring_start1 = $start_pos1; - $best_substring_start2 = $start_pos2; - } - } - } - if ($info_type =~ /^max-ratio1$/) { - $length1 = $#char_list1 + 1; - return ($length1 > 0) ? ($best_substring_length / $length1) : 0; - } elsif ($info_type =~ /^max-ratio2$/) { - $length2 = $#char_list2 + 1; - return ($length2 > 0) ? ($best_substring_length / $length2) : 0; - } elsif ($info_type =~ /^substring$/) { - return join("", @char_list1[$best_substring_start1 .. $best_substring_start1+$best_substring_length-1]); - } else { - $length1 = $#char_list1 + 1; - $length2 = $#char_list2 + 1; - $info = "s1=$s1;s2=$s2"; - $info .= ";best_substring_length=$best_substring_length"; - $info .= ";best_substring_start1=$best_substring_start1"; - $info .= ";best_substring_start2=$best_substring_start2"; - $info .= ";length1=$length1"; - $info .= ";length2=$length2"; - return $info; - } -} - -sub n_shared_chars_at_start { - local($caller,$s1,$s2) = @_; - - my $n = 0; - while (($s1 ne "") && ($s2 ne "")) { - ($c1, $rest1) = ($s1 =~ /^(.[\x80-\xBF]*)(.*)$/); - ($c2, $rest2) = ($s2 =~ /^(.[\x80-\xBF]*)(.*)$/); - if ($c1 eq $c2) { - $n++; - $s1 = $rest1; - $s2 = $rest2; - } else { - last; - } - } - return $n; -} - -sub char_length { - local($caller,$string,$byte_offset) = @_; - - my $char = ($byte_offset) ? substr($string, $byte_offset) : $string; - return 1 if $char =~ /^[\x00-\x7F]/; - return 2 if $char =~ /^[\xC0-\xDF]/; - return 3 if $char =~ /^[\xE0-\xEF]/; - return 4 if $char =~ /^[\xF0-\xF7]/; - return 5 if $char =~ /^[\xF8-\xFB]/; - return 6 if $char =~ /^[\xFC-\xFD]/; - return 0; -} - -sub length_in_utf8_chars { - local($caller,$s) = @_; - - $s =~ s/[\x80-\xBF]//g; - $s =~ s/[\x00-\x7F\xC0-\xFF]/c/g; - return length($s); -} - -sub byte_length_of_n_chars { - local($caller,$char_length,$string,$byte_offset,$undef_return_value) = @_; - - $byte_offset = 0 unless defined($byte_offset); - $undef_return_value = -1 unless defined($undef_return_value); - my $result = 0; - my $len; - foreach $i ((1 .. $char_length)) { - $len = $caller->char_length($string,($byte_offset+$result)); - return $undef_return_value unless $len; - $result += $len; - } - return $result; -} - -sub replace_non_ASCII_bytes { - local($caller,$string,$replacement) = @_; - - $replacement = "HEX" unless defined($replacement); - if ($replacement =~ /^(Unicode|U\+4|\\u|HEX)$/) { - $new_string = ""; - while (($pre,$utf8_char, $post) = ($string =~ /^([\x09\x0A\x20-\x7E]*)([\x00-\x08\x0B-\x1F\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]|[\xF8-\xFF][\x80-\xBF]+|[\x80-\xBF])(.*)$/s)) { - if ($replacement =~ /Unicode/) { - $new_string .= $pre . "<U" . ($caller->utf8_to_unicode($utf8_char)) . ">"; - } elsif ($replacement =~ /\\u/) { - $new_string .= $pre . "\\u" . (uc sprintf("%04x", $caller->utf8_to_unicode($utf8_char))); - } elsif ($replacement =~ /U\+4/) { - $new_string .= $pre . "<U+" . ($caller->utf8_to_4hex_unicode($utf8_char)) . ">"; - } else { - $new_string .= $pre . "<HEX-" . $caller->utf8_to_hex($utf8_char) . 
">"; - } - $string = $post; - } - $new_string .= $string; - } else { - $new_string = $string; - $new_string =~ s/[\x80-\xFF]/$replacement/g; - } - return $new_string; -} - -sub valid_utf8_string_p { - local($caller,$string) = @_; - - return $string =~ /^(?:[\x09\x0A\x20-\x7E]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])*$/; -} - -sub valid_utf8_string_incl_ascii_control_p { - local($caller,$string) = @_; - - return $string =~ /^(?:[\x00-\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])*$/; -} - -sub utf8_to_hex { - local($caller,$s) = @_; - - $hex = ""; - foreach $i ((0 .. length($s)-1)) { - $hex .= uc sprintf("%2.2x",ord(substr($s, $i, 1))); - } - return $hex; -} - -sub hex_to_utf8 { - local($caller,$s) = @_; - # surface string \xE2\x80\xBA to UTF8 - - my $utf8 = ""; - while (($hex, $rest) = ($s =~ /^(?:\\x)?([0-9A-Fa-f]{2,2})(.*)$/)) { - $utf8 .= sprintf("%c", hex($hex)); - $s = $rest; - } - return $utf8; -} - -sub utf8_to_4hex_unicode { - local($caller,$s) = @_; - - return sprintf("%4.4x", $caller->utf8_to_unicode($s)); -} - -sub utf8_to_unicode { - local($caller,$s) = @_; - - $unicode = 0; - foreach $i ((0 .. length($s)-1)) { - $c = substr($s, $i, 1); - if ($c =~ /^[\x80-\xBF]$/) { - $unicode = $unicode * 64 + (ord($c) & 0x3F); - } elsif ($c =~ /^[\xC0-\xDF]$/) { - $unicode = $unicode * 32 + (ord($c) & 0x1F); - } elsif ($c =~ /^[\xE0-\xEF]$/) { - $unicode = $unicode * 16 + (ord($c) & 0x0F); - } elsif ($c =~ /^[\xF0-\xF7]$/) { - $unicode = $unicode * 8 + (ord($c) & 0x07); - } elsif ($c =~ /^[\xF8-\xFB]$/) { - $unicode = $unicode * 4 + (ord($c) & 0x03); - } elsif ($c =~ /^[\xFC-\xFD]$/) { - $unicode = $unicode * 2 + (ord($c) & 0x01); - } - } - return $unicode; -} - -sub charhex { - local($caller,$string) = @_; - - my $result = ""; - while ($string ne "") { - $char = substr($string, 0, 1); - $string = substr($string, 1); - if ($char =~ /^[ -~]$/) { - $result .= $char; - } else { - $hex = sprintf("%2.2x",ord($char)); - $hex =~ tr/a-f/A-F/; - $result .= ""; - } - } - return $result; -} - -sub windows1252_to_utf8 { - local($caller,$s, $norm_to_ascii_p, $preserve_potential_utf8s_p) = @_; - - return $s if $s =~ /^[\x00-\x7F]*$/; # all ASCII - - $norm_to_ascii_p = 1 unless defined($norm_to_ascii_p); - $preserve_potential_utf8s_p = 1 unless defined($preserve_potential_utf8s_p); - my $result = ""; - my $c = ""; - while ($s ne "") { - $n_bytes = 1; - if ($s =~ /^[\x00-\x7F]/) { - $result .= substr($s, 0, 1); # ASCII - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xC0-\xDF][\x80-\xBF]/)) { - $result .= substr($s, 0, 2); # valid 2-byte UTF8 - $n_bytes = 2; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xE0-\xEF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 3); # valid 3-byte UTF8 - $n_bytes = 3; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 4); # valid 4-byte UTF8 - $n_bytes = 4; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 5); # valid 5-byte UTF8 - $n_bytes = 5; - } elsif ($s =~ /^[\xA0-\xBF]/) { - $c = substr($s, 0, 1); - $result .= "\xC2$c"; - } elsif ($s =~ /^[\xC0-\xFF]/) { - $c = substr($s, 0, 1); - $c =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c"; - } elsif ($s =~ /^\x80/) { - $result .= "\xE2\x82\xAC"; # Euro sign - } elsif ($s =~ /^\x82/) { - $result .= "\xE2\x80\x9A"; # 
single low quotation mark - } elsif ($s =~ /^\x83/) { - $result .= "\xC6\x92"; # Latin small letter f with hook - } elsif ($s =~ /^\x84/) { - $result .= "\xE2\x80\x9E"; # double low quotation mark - } elsif ($s =~ /^\x85/) { - $result .= ($norm_to_ascii_p) ? "..." : "\xE2\x80\xA6"; # horizontal ellipsis (three dots) - } elsif ($s =~ /^\x86/) { - $result .= "\xE2\x80\xA0"; # dagger - } elsif ($s =~ /^\x87/) { - $result .= "\xE2\x80\xA1"; # double dagger - } elsif ($s =~ /^\x88/) { - $result .= "\xCB\x86"; # circumflex - } elsif ($s =~ /^\x89/) { - $result .= "\xE2\x80\xB0"; # per mille sign - } elsif ($s =~ /^\x8A/) { - $result .= "\xC5\xA0"; # Latin capital letter S with caron - } elsif ($s =~ /^\x8B/) { - $result .= "\xE2\x80\xB9"; # single left-pointing angle quotation mark - } elsif ($s =~ /^\x8C/) { - $result .= "\xC5\x92"; # OE ligature - } elsif ($s =~ /^\x8E/) { - $result .= "\xC5\xBD"; # Latin capital letter Z with caron - } elsif ($s =~ /^\x91/) { - $result .= ($norm_to_ascii_p) ? "`" : "\xE2\x80\x98"; # left single quotation mark - } elsif ($s =~ /^\x92/) { - $result .= ($norm_to_ascii_p) ? "'" : "\xE2\x80\x99"; # right single quotation mark - } elsif ($s =~ /^\x93/) { - $result .= "\xE2\x80\x9C"; # left double quotation mark - } elsif ($s =~ /^\x94/) { - $result .= "\xE2\x80\x9D"; # right double quotation mark - } elsif ($s =~ /^\x95/) { - $result .= "\xE2\x80\xA2"; # bullet - } elsif ($s =~ /^\x96/) { - $result .= ($norm_to_ascii_p) ? "-" : "\xE2\x80\x93"; # n dash - } elsif ($s =~ /^\x97/) { - $result .= ($norm_to_ascii_p) ? "-" : "\xE2\x80\x94"; # m dash - } elsif ($s =~ /^\x98/) { - $result .= ($norm_to_ascii_p) ? "~" : "\xCB\x9C"; # small tilde - } elsif ($s =~ /^\x99/) { - $result .= "\xE2\x84\xA2"; # trade mark sign - } elsif ($s =~ /^\x9A/) { - $result .= "\xC5\xA1"; # Latin small letter s with caron - } elsif ($s =~ /^\x9B/) { - $result .= "\xE2\x80\xBA"; # single right-pointing angle quotation mark - } elsif ($s =~ /^\x9C/) { - $result .= "\xC5\x93"; # oe ligature - } elsif ($s =~ /^\x9E/) { - $result .= "\xC5\xBE"; # Latin small letter z with caron - } elsif ($s =~ /^\x9F/) { - $result .= "\xC5\xB8"; # Latin capital letter Y with diaeresis - } else { - $result .= "?"; - } - $s = substr($s, $n_bytes); - } - return $result; -} - -sub delete_weird_stuff { - local($caller, $s) = @_; - - # delete control chacters (except tab and linefeed), zero-width characters, byte order mark, - # directional marks, join marks, variation selectors, Arabic tatweel - $s =~ s/([\x00-\x08\x0B-\x1F\x7F]|\xC2[\x80-\x9F]|\xD9\x80|\xE2\x80[\x8B-\x8F]|\xEF\xB8[\x80-\x8F]|\xEF\xBB\xBF|\xF3\xA0[\x84-\x87][\x80-\xBF])//g; - return $s; -} - -sub number_of_utf8_character { - local($caller, $s) = @_; - - $s2 = $s; - $s2 =~ s/[\x80-\xBF]//g; - return length($s2); -} - -sub cap_letter_reg_exp { - # includes A-Z and other Latin-based capital letters with accents, umlauts and other decorations etc. 
- return "[A-Z]|\xC3[\x80-\x96\x98-\x9E]|\xC4[\x80\x82\x84\x86\x88\x8A\x8C\x8E\x90\x94\x964\x98\x9A\x9C\x9E\xA0\xA2\xA4\xA6\xA8\xAA\xAC\xAE\xB0\xB2\xB4\xB6\xB9\xBB\xBD\xBF]|\xC5[\x81\x83\x85\x87\x8A\x8C\x8E\x90\x92\x96\x98\x9A\x9C\x9E\xA0\xA2\xA4\xA6\xA8\xAA\xAC\xB0\xB2\xB4\xB6\xB8\xB9\xBB\xBD]"; -} - -sub regex_extended_case_expansion { - local($caller, $s) = @_; - - if ($s =~ /\xC3/) { - $s =~ s/\xC3\xA0/\xC3\[\x80\xA0\]/g; - $s =~ s/\xC3\xA1/\xC3\[\x81\xA1\]/g; - $s =~ s/\xC3\xA2/\xC3\[\x82\xA2\]/g; - $s =~ s/\xC3\xA3/\xC3\[\x83\xA3\]/g; - $s =~ s/\xC3\xA4/\xC3\[\x84\xA4\]/g; - $s =~ s/\xC3\xA5/\xC3\[\x85\xA5\]/g; - $s =~ s/\xC3\xA6/\xC3\[\x86\xA6\]/g; - $s =~ s/\xC3\xA7/\xC3\[\x87\xA7\]/g; - $s =~ s/\xC3\xA8/\xC3\[\x88\xA8\]/g; - $s =~ s/\xC3\xA9/\xC3\[\x89\xA9\]/g; - $s =~ s/\xC3\xAA/\xC3\[\x8A\xAA\]/g; - $s =~ s/\xC3\xAB/\xC3\[\x8B\xAB\]/g; - $s =~ s/\xC3\xAC/\xC3\[\x8C\xAC\]/g; - $s =~ s/\xC3\xAD/\xC3\[\x8D\xAD\]/g; - $s =~ s/\xC3\xAE/\xC3\[\x8E\xAE\]/g; - $s =~ s/\xC3\xAF/\xC3\[\x8F\xAF\]/g; - $s =~ s/\xC3\xB0/\xC3\[\x90\xB0\]/g; - $s =~ s/\xC3\xB1/\xC3\[\x91\xB1\]/g; - $s =~ s/\xC3\xB2/\xC3\[\x92\xB2\]/g; - $s =~ s/\xC3\xB3/\xC3\[\x93\xB3\]/g; - $s =~ s/\xC3\xB4/\xC3\[\x94\xB4\]/g; - $s =~ s/\xC3\xB5/\xC3\[\x95\xB5\]/g; - $s =~ s/\xC3\xB6/\xC3\[\x96\xB6\]/g; - $s =~ s/\xC3\xB8/\xC3\[\x98\xB8\]/g; - $s =~ s/\xC3\xB9/\xC3\[\x99\xB9\]/g; - $s =~ s/\xC3\xBA/\xC3\[\x9A\xBA\]/g; - $s =~ s/\xC3\xBB/\xC3\[\x9B\xBB\]/g; - $s =~ s/\xC3\xBC/\xC3\[\x9C\xBC\]/g; - $s =~ s/\xC3\xBD/\xC3\[\x9D\xBD\]/g; - $s =~ s/\xC3\xBE/\xC3\[\x9E\xBE\]/g; - } - if ($s =~ /\xC5/) { - $s =~ s/\xC5\x91/\xC5\[\x90\x91\]/g; - $s =~ s/\xC5\xA1/\xC5\[\xA0\xA1\]/g; - $s =~ s/\xC5\xB1/\xC5\[\xB0\xB1\]/g; - } - - return $s; -} - -sub extended_lower_case { - local($caller, $s) = @_; - - $s =~ tr/A-Z/a-z/; - - # Latin-1 - if ($s =~ /\xC3[\x80-\x9F]/) { - $s =~ s/À/à/g; - $s =~ s/Á/á/g; - $s =~ s/Â/â/g; - $s =~ s/Ã/ã/g; - $s =~ s/Ä/ä/g; - $s =~ s/Å/å/g; - $s =~ s/Æ/æ/g; - $s =~ s/Ç/ç/g; - $s =~ s/È/è/g; - $s =~ s/É/é/g; - $s =~ s/Ê/ê/g; - $s =~ s/Ë/ë/g; - $s =~ s/Ì/ì/g; - $s =~ s/Í/í/g; - $s =~ s/Î/î/g; - $s =~ s/Ï/ï/g; - $s =~ s/Ð/ð/g; - $s =~ s/Ñ/ñ/g; - $s =~ s/Ò/ò/g; - $s =~ s/Ó/ó/g; - $s =~ s/Ô/ô/g; - $s =~ s/Õ/õ/g; - $s =~ s/Ö/ö/g; - $s =~ s/Ø/ø/g; - $s =~ s/Ù/ù/g; - $s =~ s/Ú/ú/g; - $s =~ s/Û/û/g; - $s =~ s/Ü/ü/g; - $s =~ s/Ý/ý/g; - $s =~ s/Þ/þ/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/Ā/ā/g; - $s =~ s/Ă/ă/g; - $s =~ s/Ą/ą/g; - $s =~ s/Ć/ć/g; - $s =~ s/Ĉ/ĉ/g; - $s =~ s/Ċ/ċ/g; - $s =~ s/Č/č/g; - $s =~ s/Ď/ď/g; - $s =~ s/Đ/đ/g; - $s =~ s/Ē/ē/g; - $s =~ s/Ĕ/ĕ/g; - $s =~ s/Ė/ė/g; - $s =~ s/Ę/ę/g; - $s =~ s/Ě/ě/g; - $s =~ s/Ĝ/ĝ/g; - $s =~ s/Ğ/ğ/g; - $s =~ s/Ġ/ġ/g; - $s =~ s/Ģ/ģ/g; - $s =~ s/Ĥ/ĥ/g; - $s =~ s/Ħ/ħ/g; - $s =~ s/Ĩ/ĩ/g; - $s =~ s/Ī/ī/g; - $s =~ s/Ĭ/ĭ/g; - $s =~ s/Į/į/g; - $s =~ s/İ/ı/g; - $s =~ s/IJ/ij/g; - $s =~ s/Ĵ/ĵ/g; - $s =~ s/Ķ/ķ/g; - $s =~ s/Ĺ/ĺ/g; - $s =~ s/Ļ/ļ/g; - $s =~ s/Ľ/ľ/g; - $s =~ s/Ŀ/ŀ/g; - $s =~ s/Ł/ł/g; - $s =~ s/Ń/ń/g; - $s =~ s/Ņ/ņ/g; - $s =~ s/Ň/ň/g; - $s =~ s/Ŋ/ŋ/g; - $s =~ s/Ō/ō/g; - $s =~ s/Ŏ/ŏ/g; - $s =~ s/Ő/ő/g; - $s =~ s/Œ/œ/g; - $s =~ s/Ŕ/ŕ/g; - $s =~ s/Ŗ/ŗ/g; - $s =~ s/Ř/ř/g; - $s =~ s/Ś/ś/g; - $s =~ s/Ŝ/ŝ/g; - $s =~ s/Ş/ş/g; - $s =~ s/Š/š/g; - $s =~ s/Ţ/ţ/g; - $s =~ s/Ť/ť/g; - $s =~ s/Ŧ/ŧ/g; - $s =~ s/Ũ/ũ/g; - $s =~ s/Ū/ū/g; - $s =~ s/Ŭ/ŭ/g; - $s =~ s/Ů/ů/g; - $s =~ s/Ű/ű/g; - $s =~ s/Ų/ų/g; - $s =~ s/Ŵ/ŵ/g; - $s =~ s/Ŷ/ŷ/g; - $s =~ s/Ź/ź/g; - $s =~ s/Ż/ż/g; - $s =~ s/Ž/ž/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/Α/α/g; 
- $s =~ s/Β/β/g; - $s =~ s/Γ/γ/g; - $s =~ s/Δ/δ/g; - $s =~ s/Ε/ε/g; - $s =~ s/Ζ/ζ/g; - $s =~ s/Η/η/g; - $s =~ s/Θ/θ/g; - $s =~ s/Ι/ι/g; - $s =~ s/Κ/κ/g; - $s =~ s/Λ/λ/g; - $s =~ s/Μ/μ/g; - $s =~ s/Ν/ν/g; - $s =~ s/Ξ/ξ/g; - $s =~ s/Ο/ο/g; - $s =~ s/Π/π/g; - $s =~ s/Ρ/ρ/g; - $s =~ s/Σ/σ/g; - $s =~ s/Τ/τ/g; - $s =~ s/Υ/υ/g; - $s =~ s/Φ/φ/g; - $s =~ s/Χ/χ/g; - $s =~ s/Ψ/ψ/g; - $s =~ s/Ω/ω/g; - $s =~ s/Ϊ/ϊ/g; - $s =~ s/Ϋ/ϋ/g; - $s =~ s/Ά/ά/g; - $s =~ s/Έ/έ/g; - $s =~ s/Ή/ή/g; - $s =~ s/Ί/ί/g; - $s =~ s/Ό/ό/g; - $s =~ s/Ύ/ύ/g; - $s =~ s/Ώ/ώ/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/А/а/g; - $s =~ s/Б/б/g; - $s =~ s/В/в/g; - $s =~ s/Г/г/g; - $s =~ s/Д/д/g; - $s =~ s/Е/е/g; - $s =~ s/Ж/ж/g; - $s =~ s/З/з/g; - $s =~ s/И/и/g; - $s =~ s/Й/й/g; - $s =~ s/К/к/g; - $s =~ s/Л/л/g; - $s =~ s/М/м/g; - $s =~ s/Н/н/g; - $s =~ s/О/о/g; - $s =~ s/П/п/g; - $s =~ s/Р/р/g; - $s =~ s/С/с/g; - $s =~ s/Т/т/g; - $s =~ s/У/у/g; - $s =~ s/Ф/ф/g; - $s =~ s/Х/х/g; - $s =~ s/Ц/ц/g; - $s =~ s/Ч/ч/g; - $s =~ s/Ш/ш/g; - $s =~ s/Щ/щ/g; - $s =~ s/Ъ/ъ/g; - $s =~ s/Ы/ы/g; - $s =~ s/Ь/ь/g; - $s =~ s/Э/э/g; - $s =~ s/Ю/ю/g; - $s =~ s/Я/я/g; - $s =~ s/Ѐ/ѐ/g; - $s =~ s/Ё/ё/g; - $s =~ s/Ђ/ђ/g; - $s =~ s/Ѓ/ѓ/g; - $s =~ s/Є/є/g; - $s =~ s/Ѕ/ѕ/g; - $s =~ s/І/і/g; - $s =~ s/Ї/ї/g; - $s =~ s/Ј/ј/g; - $s =~ s/Љ/љ/g; - $s =~ s/Њ/њ/g; - $s =~ s/Ћ/ћ/g; - $s =~ s/Ќ/ќ/g; - $s =~ s/Ѝ/ѝ/g; - $s =~ s/Ў/ў/g; - $s =~ s/Џ/џ/g; - } - # Fullwidth A-Z - if ($s =~ /\xEF\xBC[\xA1-\xBA]/) { - $s =~ s/A/a/g; - $s =~ s/B/b/g; - $s =~ s/C/c/g; - $s =~ s/D/d/g; - $s =~ s/E/e/g; - $s =~ s/F/f/g; - $s =~ s/G/g/g; - $s =~ s/H/h/g; - $s =~ s/I/i/g; - $s =~ s/J/j/g; - $s =~ s/K/k/g; - $s =~ s/L/l/g; - $s =~ s/M/m/g; - $s =~ s/N/n/g; - $s =~ s/O/o/g; - $s =~ s/P/p/g; - $s =~ s/Q/q/g; - $s =~ s/R/r/g; - $s =~ s/S/s/g; - $s =~ s/T/t/g; - $s =~ s/U/u/g; - $s =~ s/V/v/g; - $s =~ s/W/w/g; - $s =~ s/X/x/g; - $s =~ s/Y/y/g; - $s =~ s/Z/z/g; - } - - return $s; -} - -sub extended_upper_case { - local($caller, $s) = @_; - - $s =~ tr/a-z/A-Z/; - return $s unless $s =~ /[\xC3-\xC5][\x80-\xBF]/; - - $s =~ s/\xC3\xA0/\xC3\x80/g; - $s =~ s/\xC3\xA1/\xC3\x81/g; - $s =~ s/\xC3\xA2/\xC3\x82/g; - $s =~ s/\xC3\xA3/\xC3\x83/g; - $s =~ s/\xC3\xA4/\xC3\x84/g; - $s =~ s/\xC3\xA5/\xC3\x85/g; - $s =~ s/\xC3\xA6/\xC3\x86/g; - $s =~ s/\xC3\xA7/\xC3\x87/g; - $s =~ s/\xC3\xA8/\xC3\x88/g; - $s =~ s/\xC3\xA9/\xC3\x89/g; - $s =~ s/\xC3\xAA/\xC3\x8A/g; - $s =~ s/\xC3\xAB/\xC3\x8B/g; - $s =~ s/\xC3\xAC/\xC3\x8C/g; - $s =~ s/\xC3\xAD/\xC3\x8D/g; - $s =~ s/\xC3\xAE/\xC3\x8E/g; - $s =~ s/\xC3\xAF/\xC3\x8F/g; - $s =~ s/\xC3\xB0/\xC3\x90/g; - $s =~ s/\xC3\xB1/\xC3\x91/g; - $s =~ s/\xC3\xB2/\xC3\x92/g; - $s =~ s/\xC3\xB3/\xC3\x93/g; - $s =~ s/\xC3\xB4/\xC3\x94/g; - $s =~ s/\xC3\xB5/\xC3\x95/g; - $s =~ s/\xC3\xB6/\xC3\x96/g; - $s =~ s/\xC3\xB8/\xC3\x98/g; - $s =~ s/\xC3\xB9/\xC3\x99/g; - $s =~ s/\xC3\xBA/\xC3\x9A/g; - $s =~ s/\xC3\xBB/\xC3\x9B/g; - $s =~ s/\xC3\xBC/\xC3\x9C/g; - $s =~ s/\xC3\xBD/\xC3\x9D/g; - $s =~ s/\xC3\xBE/\xC3\x9E/g; - - $s =~ s/\xC5\x91/\xC5\x90/g; - $s =~ s/\xC5\xA1/\xC5\xA0/g; - $s =~ s/\xC5\xB1/\xC5\xB0/g; - return $s unless $s =~ /[\xC3-\xC5][\x80-\xBF]/; - - return $s; -} - -sub extended_first_upper_case { - local($caller, $s) = @_; - - if (($first_char, $rest) = ($s =~ /^([\x00-\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF])(.*)$/)) { - return $caller->extended_upper_case($first_char) . 
$rest; - } else { - return $s; - } -} - -sub repair_doubly_converted_utf8_strings { - local($caller, $s) = @_; - - if ($s =~ /\xC3[\x82-\x85]\xC2[\x80-\xBF]/) { - $s =~ s/\xC3\x82\xC2([\x80-\xBF])/\xC2$1/g; - $s =~ s/\xC3\x83\xC2([\x80-\xBF])/\xC3$1/g; - $s =~ s/\xC3\x84\xC2([\x80-\xBF])/\xC4$1/g; - $s =~ s/\xC3\x85\xC2([\x80-\xBF])/\xC5$1/g; - } - return $s; -} - -sub repair_misconverted_windows_to_utf8_strings { - local($caller, $s) = @_; - - # correcting conversions of UTF8 using Latin1-to-UTF converter - if ($s =~ /\xC3\xA2\xC2\x80\xC2[\x90-\xEF]/) { - my $result = ""; - while (($pre,$last_c,$post) = ($s =~ /^(.*?)\xC3\xA2\xC2\x80\xC2([\x90-\xEF])(.*)$/s)) { - $result .= "$pre\xE2\x80$last_c"; - $s = $post; - } - $result .= $s; - $s = $result; - } - # correcting conversions of Windows1252-to-UTF8 using Latin1-to-UTF converter - if ($s =~ /\xC2[\x80-\x9F]/) { - my $result = ""; - while (($pre,$c_windows,$post) = ($s =~ /^(.*?)\xC2([\x80-\x9F])(.*)$/s)) { - $c_utf8 = $caller->windows1252_to_utf8($c_windows, 0); - $result .= ($c_utf8 eq "?") ? ($pre . "\xC2" . $c_windows) : "$pre$c_utf8"; - $s = $post; - } - $result .= $s; - $s = $result; - } - if ($s =~ /\xC3/) { - $s =~ s/\xC3\xA2\xE2\x80\x9A\xC2\xAC/\xE2\x82\xAC/g; # x80 -> Euro sign - # x81 codepoint undefined in Windows 1252 - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\xA1/\xE2\x80\x9A/g; # x82 -> single low-9 quotation mark - $s =~ s/\xC3\x86\xE2\x80\x99/\xC6\x92/g; # x83 -> Latin small letter f with hook - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\xBE/\xE2\x80\x9E/g; # x84 -> double low-9 quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA6/\xE2\x80\xA6/g; # x85 -> horizontal ellipsis - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA0/\xE2\x80\xA0/g; # x86 -> dagger - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA1/\xE2\x80\xA1/g; # x87 -> double dagger - $s =~ s/\xC3\x8B\xE2\x80\xA0/\xCB\x86/g; # x88 -> modifier letter circumflex accent - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xB0/\xE2\x80\xB0/g; # x89 -> per mille sign - $s =~ s/\xC3\x85\xC2\xA0/\xC5\xA0/g; # x8A -> Latin capital letter S with caron - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xB9/\xE2\x80\xB9/g; # x8B -> single left-pointing angle quotation mark - $s =~ s/\xC3\x85\xE2\x80\x99/\xC5\x92/g; # x8C -> Latin capital ligature OE - # x8D codepoint undefined in Windows 1252 - $s =~ s/\xC3\x85\xC2\xBD/\xC5\xBD/g; # x8E -> Latin capital letter Z with caron - # x8F codepoint undefined in Windows 1252 - # x90 codepoint undefined in Windows 1252 - $s =~ s/\xC3\xA2\xE2\x82\xAC\xCB\x9C/\xE2\x80\x98/g; # x91 a-circumflex+euro+small tilde -> left single quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x84\xA2/\xE2\x80\x99/g; # x92 a-circumflex+euro+trademark -> right single quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\x93/\xE2\x80\x9C/g; # x93 a-circumflex+euro+Latin small ligature oe -> left double quotation mark - # x94 maps through undefined intermediate code point - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA2/\xE2\x80\xA2/g; # x95 a-circumflex+euro+cent sign -> bullet - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x80\x9C/\xE2\x80\x93/g; # x96 a-circumflex+euro+left double quotation mark -> en dash - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x80\x9D/\xE2\x80\x94/g; # x97 a-circumflex+euro+right double quotation mark -> em dash - $s =~ s/\xC3\x8B\xC5\x93/\xCB\x9C/g; # x98 Latin capital e diaeresis+Latin small ligature oe -> small tilde - $s =~ s/\xC3\xA2\xE2\x80\x9E\xC2\xA2/\xE2\x84\xA2/g; # x99 -> trade mark sign - $s =~ s/\xC3\x85\xC2\xA1/\xC5\xA1/g; # x9A -> Latin small letter s with caron - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xBA/\xE2\x80\xBA/g; # x9B 
-> single right-pointing angle quotation mark - $s =~ s/\xC3\x85\xE2\x80\x9C/\xC5\x93/g; # x9C -> Latin small ligature oe - # x9D codepoint undefined in Windows 1252 - $s =~ s/\xC3\x85\xC2\xBE/\xC5\xBE/g; # x9E -> Latin small letter z with caron - $s =~ s/\xC3\x85\xC2\xB8/\xC5\xB8/g; # x9F -> Latin capital letter Y with diaeresis - $s =~ s/\xC3\xAF\xC2\xBF\xC2\xBD/\xEF\xBF\xBD/g; # replacement character - } - - return $s; -} - -sub latin1_to_utf { - local($caller, $s) = @_; - - my $result = ""; - while (($pre,$c,$post) = ($s =~ /^(.*?)([\x80-\xFF])(.*)$/s)) { - $result .= $pre; - if ($c =~ /^[\x80-\xBF]$/) { - $result .= "\xC2$c"; - } elsif ($c =~ /^[\xC0-\xFF]$/) { - $c =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c"; - } - $s = $post; - } - $result .= $s; - return $result; -} - -sub character_type_is_letter_type { - local($caller, $char_type) = @_; - - return ($char_type =~ /\b((CJK|hiragana|kana|katakana)\s+character|diacritic|letter|syllable)\b/); -} - -sub character_type { - local($caller, $c) = @_; - - if ($c =~ /^[\x00-\x7F]/) { - return "XML tag" if $c =~ /^<.*>$/; - return "ASCII Latin letter" if $c =~ /^[a-z]$/i; - return "ASCII digit" if $c =~ /^[0-9]$/i; - return "ASCII whitespace" if $c =~ /^[\x09-\x0D\x20]$/; - return "ASCII control-character" if $c =~ /^[\x00-\x1F\x7F]$/; - return "ASCII currency" if $c eq "\$"; - return "ASCII punctuation"; - } elsif ($c =~ /^[\xC0-\xDF]/) { - return "non-UTF8 (invalid)" unless $c =~ /^[\xC0-\xDF][\x80-\xBF]$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /[\xC0-\xC1]/; - return "non-ASCII control-character" if $c =~ /\xC2[\x80-\x9F]/; - return "non-ASCII whitespace" if $c =~ /\xC2\xA0/; - return "non-ASCII currency" if $c =~ /\xC2[\xA2-\xA5]/; - return "fraction" if $c =~ /\xC2[\xBC-\xBE]/; # NEW - return "superscript digit" if $c =~ /\xC2[\xB2\xB3\xB9]/; - return "non-ASCII Latin letter" if $c =~ /\xC2\xB5/; # micro sign - return "non-ASCII punctuation" if $c =~ /\xC2[\xA0-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xC3[\x97\xB7]/; - return "non-ASCII Latin letter" if $c =~ /\xC3[\x80-\xBF]/; - return "Latin ligature letter" if $c =~ /\xC4[\xB2\xB3]/; - return "Latin ligature letter" if $c =~ /\xC5[\x92\x93]/; - return "non-ASCII Latin letter" if $c =~ /[\xC4-\xC8]/; - return "non-ASCII Latin letter" if $c =~ /\xC9[\x80-\x8F]/; - return "IPA" if $c =~ /\xC9[\x90-\xBF]/; - return "IPA" if $c =~ /\xCA[\x80-\xBF]/; - return "IPA" if $c =~ /\xCB[\x80-\xBF]/; - return "combining-diacritic" if $c =~ /\xCC[\x80-\xBF]/; - return "combining-diacritic" if $c =~ /\xCD[\x80-\xAF]/; - return "Greek punctuation" if $c =~ /\xCD[\xBE]/; # Greek question mark - return "Greek punctuation" if $c =~ /\xCE[\x87]/; # Greek semicolon - return "Greek letter" if $c =~ /\xCD[\xB0-\xBF]/; - return "Greek letter" if $c =~ /\xCE/; - return "Greek letter" if $c =~ /\xCF[\x80-\xA1\xB3\xB7\xB8\xBA\xBB]/; - return "Coptic letter" if $c =~ /\xCF[\xA2-\xAF]/; - return "Cyrillic letter" if $c =~ /[\xD0-\xD3]/; - return "Cyrillic letter" if $c =~ /\xD4[\x80-\xAF]/; - return "Armenian punctuation" if $c =~ /\xD5[\x9A-\x9F]/; - return "Armenian punctuation" if $c =~ /\xD6[\x89-\x8F]/; - return "Armenian letter" if $c =~ /\xD4[\xB0-\xBF]/; - return "Armenian letter" if $c =~ /\xD5/; - return "Armenian letter" if $c =~ /\xD6[\x80-\x8F]/; - return "Hebrew accent" if $c =~ /\xD6[\x91-\xAE]/; - return "Hebrew punctuation" if $c =~ /\xD6\xBE/; - return "Hebrew punctuation" if $c =~ /\xD7[\x80\x83\x86\xB3\xB4]/; - return "Hebrew point" if $c =~ /\xD6[\xB0-\xBF]/; - 
return "Hebrew point" if $c =~ /\xD7[\x81\x82\x87]/; - return "Hebrew letter" if $c =~ /\xD7[\x90-\xB2]/; - return "other Hebrew" if $c =~ /\xD6[\x90-\xBF]/; - return "other Hebrew" if $c =~ /\xD7/; - return "Arabic currency" if $c =~ /\xD8\x8B/; # Afghani sign - return "Arabic punctuation" if $c =~ /\xD8[\x89-\x8D\x9B\x9E\x9F]/; - return "Arabic punctuation" if $c =~ /\xD9[\xAA-\xAD]/; - return "Arabic punctuation" if $c =~ /\xDB[\x94]/; - return "Arabic tatweel" if $c =~ /\xD9\x80/; - return "Arabic letter" if $c =~ /\xD8[\xA0-\xBF]/; - return "Arabic letter" if $c =~ /\xD9[\x81-\x9F]/; - return "Arabic letter" if $c =~ /\xD9[\xAE-\xBF]/; - return "Arabic letter" if $c =~ /\xDA[\x80-\xBF]/; - return "Arabic letter" if $c =~ /\xDB[\x80-\x95]/; - return "Arabic Indic digit" if $c =~ /\xD9[\xA0-\xA9]/; - return "Arabic Indic digit" if $c =~ /\xDB[\xB0-\xB9]/; - return "other Arabic" if $c =~ /[\xD8-\xDB]/; - return "Syriac punctuation" if $c =~ /\xDC[\x80-\x8F]/; - return "Syriac letter" if $c =~ /\xDC[\x90-\xAF]/; - return "Syriac diacritic" if $c =~ /\xDC[\xB0-\xBF]/; - return "Syriac diacritic" if $c =~ /\xDD[\x80-\x8A]/; - return "Thaana letter" if $c =~ /\xDE/; - } elsif ($c =~ /^[\xE0-\xEF]/) { - return "non-UTF8 (invalid)" unless $c =~ /^[\xE0-\xEF][\x80-\xBF]{2,2}$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /\xE0[\x80-\x9F]/; - return "Arabic letter" if $c =~ /\xE0\xA2[\xA0-\xBF]/; # extended letters - return "other Arabic" if $c =~ /\xE0\xA3/; # extended characters - return "Devanagari punctuation" if $c =~ /\xE0\xA5[\xA4\xA5]/; # danda, double danda - return "Devanagari digit" if $c =~ /\xE0\xA5[\xA6-\xAF]/; - return "Devanagari letter" if $c =~ /\xE0[\xA4-\xA5]/; - return "Bengali digit" if $c =~ /\xE0\xA7[\xA6-\xAF]/; - return "Bengali currency" if $c =~ /\xE0\xA7[\xB2-\xB9]/; - return "Bengali letter" if $c =~ /\xE0[\xA6-\xA7]/; - return "Gurmukhi digit" if $c =~ /\xE0\xA9[\xA6-\xAF]/; - return "Gurmukhi letter" if $c =~ /\xE0[\xA8-\xA9]/; - return "Gujarati digit" if $c =~ /\xE0\xAB[\xA6-\xAF]/; - return "Gujarati letter" if $c =~ /\xE0[\xAA-\xAB]/; - return "Oriya digit" if $c =~ /\xE0\xAD[\xA6-\xAF]/; - return "Oriya fraction" if $c =~ /\xE0\xAD[\xB2-\xB7]/; - return "Oriya letter" if $c =~ /\xE0[\xAC-\xAD]/; - return "Tamil digit" if $c =~ /\xE0\xAF[\xA6-\xAF]/; - return "Tamil number" if $c =~ /\xE0\xAF[\xB0-\xB2]/; # number (10, 100, 1000) - return "Tamil letter" if $c =~ /\xE0[\xAE-\xAF]/; - return "Telegu digit" if $c =~ /\xE0\xB1[\xA6-\xAF]/; - return "Telegu fraction" if $c =~ /\xE0\xB1[\xB8-\xBE]/; - return "Telegu letter" if $c =~ /\xE0[\xB0-\xB1]/; - return "Kannada digit" if $c =~ /\xE0\xB3[\xA6-\xAF]/; - return "Kannada letter" if $c =~ /\xE0[\xB2-\xB3]/; - return "Malayalam digit" if $c =~ /\xE0\xB5[\x98-\x9E\xA6-\xB8]/; - return "Malayalam punctuation" if $c =~ /\xE0\xB5\xB9/; # date mark - return "Malayalam letter" if $c =~ /\xE0[\xB4-\xB5]/; - return "Sinhala digit" if $c =~ /\xE0\xB7[\xA6-\xAF]/; - return "Sinhala punctuation" if $c =~ /\xE0\xB7\xB4/; - return "Sinhala letter" if $c =~ /\xE0[\xB6-\xB7]/; - return "Thai currency" if $c =~ /\xE0\xB8\xBF/; - return "Thai digit" if $c =~ /\xE0\xB9[\x90-\x99]/; - return "Thai character" if $c =~ /\xE0[\xB8-\xB9]/; - return "Lao punctuation" if $c =~ /\xE0\xBA\xAF/; # Lao ellipsis - return "Lao digit" if $c =~ /\xE0\xBB[\x90-\x99]/; - return "Lao character" if $c =~ /\xE0[\xBA-\xBB]/; - return "Tibetan punctuation" if $c =~ /\xE0\xBC[\x81-\x94]/; - return "Tibetan sign" if $c =~ /\xE0\xBC[\x95-\x9F]/; - 
return "Tibetan digit" if $c =~ /\xE0\xBC[\xA0-\xB3]/; - return "Tibetan punctuation" if $c =~ /\xE0\xBC[\xB4-\xBD]/; - return "Tibetan letter" if $c =~ /\xE0[\xBC-\xBF]/; - return "Myanmar digit" if $c =~ /\xE1\x81[\x80-\x89]/; - return "Myanmar digit" if $c =~ /\xE1\x82[\x90-\x99]/; # Myanmar Shan digits - return "Myanmar punctuation" if $c =~ /\xE1\x81[\x8A-\x8B]/; - return "Myanmar letter" if $c =~ /\xE1[\x80-\x81]/; - return "Myanmar letter" if $c =~ /\xE1\x82[\x80-\x9F]/; - return "Georgian punctuation" if $c =~ /\xE1\x83\xBB/; - return "Georgian letter" if $c =~ /\xE1\x82[\xA0-\xBF]/; - return "Georgian letter" if $c =~ /\xE1\x83/; - return "Georgian letter" if $c =~ /\xE1\xB2[\x90-\xBF]/; # Georgian Mtavruli capital letters - return "Georgian letter" if $c =~ /\xE2\xB4[\x80-\xAF]/; # Georgian small letters (Khutsuri) - return "Korean Hangul letter" if $c =~ /\xE1[\x84-\x87]/; - return "Ethiopic punctuation" if $c =~ /\xE1\x8D[\xA0-\xA8]/; - return "Ethiopic digit" if $c =~ /\xE1\x8D[\xA9-\xB1]/; - return "Ethiopic number" if $c =~ /\xE1\x8D[\xB2-\xBC]/; - return "Ethiopic syllable" if $c =~ /\xE1[\x88-\x8D]/; - return "Cherokee letter" if $c =~ /\xE1\x8E[\xA0-\xBF]/; - return "Cherokee letter" if $c =~ /\xE1\x8F/; - return "Canadian punctuation" if $c =~ /\xE1\x90\x80/; # Canadian Syllabics hyphen - return "Canadian punctuation" if $c =~ /\xE1\x99\xAE/; # Canadian Syllabics full stop - return "Canadian syllable" if $c =~ /\xE1[\x90-\x99]/; - return "Canadian syllable" if $c =~ /\xE1\xA2[\xB0-\xBF]/; - return "Canadian syllable" if $c =~ /\xE1\xA3/; - return "Ogham whitespace" if $c =~ /\xE1\x9A\x80/; - return "Ogham letter" if $c =~ /\xE1\x9A[\x81-\x9A]/; - return "Ogham punctuation" if $c =~ /\xE1\x9A[\x9B-\x9C]/; - return "Runic punctuation" if $c =~ /\xE1\x9B[\xAB-\xAD]/; - return "Runic letter" if $c =~ /\xE1\x9A[\xA0-\xBF]/; - return "Runic letter" if $c =~ /\xE1\x9B/; - return "Khmer currency" if $c =~ /\xE1\x9F\x9B/; - return "Khmer digit" if $c =~ /\xE1\x9F[\xA0-\xA9]/; - return "Khmer letter" if $c =~ /\xE1[\x9E-\x9F]/; - return "Mongolian punctuation" if $c =~ /\xE1\xA0[\x80-\x8A]/; - return "Mongolian digit" if $c =~ /\xE1\xA0[\x90-\x99]/; - return "Mongolian letter" if $c =~ /\xE1[\xA0-\xA1]/; - return "Mongolian letter" if $c =~ /\xE1\xA2[\x80-\xAF]/; - return "Buginese letter" if $c =~ /\xE1\xA8[\x80-\x9B]/; - return "Buginese punctuation" if $c =~ /\xE1\xA8[\x9E-\x9F]/; - return "Balinese letter" if $c =~ /\xE1\xAC/; - return "Balinese letter" if $c =~ /\xE1\xAD[\x80-\x8F]/; - return "Balinese digit" if $c =~ /\xE1\xAD[\x90-\x99]/; - return "Balinese puncutation" if $c =~ /\xE1\xAD[\x9A-\xA0]/; - return "Balinese symbol" if $c =~ /\xE1\xAD[\xA1-\xBF]/; - return "Sundanese digit" if $c =~ /\xE1\xAE[\xB0-\xB9]/; - return "Sundanese letter" if $c =~ /\xE1\xAE/; - return "Cyrillic letter" if $c =~ /\xE1\xB2[\x80-\x8F]/; - return "Sundanese punctuation" if $c =~ /\xE1\xB3[\x80-\x8F]/; - return "IPA" if $c =~ /\xE1[\xB4-\xB6]/; - return "non-ASCII Latin letter" if $c =~ /\xE1[\xB8-\xBB]/; - return "Greek letter" if $c =~ /\xE1[\xBC-\xBF]/; - return "non-ASCII whitespace" if $c =~ /\xE2\x80[\x80-\x8A\xAF]/; - return "zero-width space" if $c =~ /\xE2\x80\x8B/; - return "zero-width non-space" if $c =~ /\xE2\x80\x8C/; - return "zero-width joiner" if $c =~ /\xE2\x80\x8D/; - return "directional mark" if $c =~ /\xE2\x80[\x8E-\x8F\xAA-\xAE]/; - return "non-ASCII punctuation" if $c =~ /\xE2\x80[\x90-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xE2\x81[\x80-\x9E]/; - return 
"superscript letter" if $c =~ /\xE2\x81[\xB1\xBF]/; - return "superscript digit" if $c =~ /\xE2\x81[\xB0-\xB9]/; - return "superscript punctuation" if $c =~ /\xE2\x81[\xBA-\xBE]/; - return "subscript digit" if $c =~ /\xE2\x82[\x80-\x89]/; - return "subscript punctuation" if $c =~ /\xE2\x82[\x8A-\x8E]/; - return "non-ASCII currency" if $c =~ /\xE2\x82[\xA0-\xBF]/; - return "letterlike symbol" if $c =~ /\xE2\x84/; - return "letterlike symbol" if $c =~ /\xE2\x85[\x80-\x8F]/; - return "fraction" if $c =~ /\xE2\x85[\x90-\x9E]/; # NEW - return "Roman number" if $c =~ /\xE2\x85[\xA0-\xBF]/; # NEW - return "arrow symbol" if $c =~ /\xE2\x86[\x90-\xBF]/; - return "arrow symbol" if $c =~ /\xE2\x87/; - return "mathematical operator" if $c =~ /\xE2[\x88-\x8B]/; - return "technical symbol" if $c =~ /\xE2[\x8C-\x8F]/; - return "enclosed alphanumeric" if $c =~ /\xE2\x91[\xA0-\xBF]/; - return "enclosed alphanumeric" if $c =~ /\xE2[\x92-\x93]/; - return "box drawing" if $c =~ /\xE2[\x94-\x95]/; - return "geometric shape" if $c =~ /\xE2\x96[\xA0-\xBF]/; - return "geometric shape" if $c =~ /\xE2\x97/; - return "pictograph" if $c =~ /\xE2[\x98-\x9E]/; - return "arrow symbol" if $c =~ /\xE2\xAC[\x80-\x91\xB0-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAC[\x92-\xAF]/; - return "arrow symbol" if $c =~ /\xE2\xAD[\x80-\x8F\x9A-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAD[\x90-\x99]/; - return "arrow symbol" if $c =~ /\xE2\xAE[\x80-\xB9]/; - return "geometric shape" if $c =~ /\xE2\xAE[\xBA-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAF[\x80-\x88\x8A-\x8F]/; - return "symbol" if $c =~ /\xE2[\xAC-\xAF]/; - return "Coptic fraction" if $c =~ /\xE2\xB3\xBD/; - return "Coptic punctuation" if $c =~ /\xE2\xB3[\xB9-\xBF]/; - return "Coptic letter" if $c =~ /\xE2[\xB2-\xB3]/; - return "Georgian letter" if $c =~ /\xE2\xB4[\x80-\xAF]/; - return "Tifinagh punctuation" if $c =~ /\xE2\xB5\xB0/; - return "Tifinagh letter" if $c =~ /\xE2\xB4[\xB0-\xBF]/; - return "Tifinagh letter" if $c =~ /\xE2\xB5/; - return "Ethiopic syllable" if $c =~ /\xE2\xB6/; - return "Ethiopic syllable" if $c =~ /\xE2\xB7[\x80-\x9F]/; - return "non-ASCII punctuation" if $c =~ /\xE3\x80[\x80-\x91\x94-\x9F\xB0\xBB-\xBD]/; - return "symbol" if $c =~ /\xE3\x80[\x91\x92\xA0\xB6\xB7]/; - return "Japanese hiragana character" if $c =~ /\xE3\x81/; - return "Japanese hiragana character" if $c =~ /\xE3\x82[\x80-\x9F]/; - return "Japanese katakana character" if $c =~ /\xE3\x82[\xA0-\xBF]/; - return "Japanese katakana character" if $c =~ /\xE3\x83/; - return "Bopomofo letter" if $c =~ /\xE3\x84[\x80-\xAF]/; - return "Korean Hangul letter" if $c =~ /\xE3\x84[\xB0-\xBF]/; - return "Korean Hangul letter" if $c =~ /\xE3\x85/; - return "Korean Hangul letter" if $c =~ /\xE3\x86[\x80-\x8F]/; - return "Bopomofo letter" if $c =~ /\xE3\x86[\xA0-\xBF]/; - return "CJK stroke" if $c =~ /\xE3\x87[\x80-\xAF]/; - return "Japanese kana character" if $c =~ /\xE3\x87[\xB0-\xBF]/; - return "CJK symbol" if $c =~ /\xE3[\x88-\x8B]/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8D[\xB1-\xBA]/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8E/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8F[\x80-\x9F\xBF]/; - return "CJK character" if $c =~ /\xE4[\xB8-\xBF]/; - return "CJK character" if $c =~ /[\xE5-\xE9]/; - return "Yi syllable" if $c =~ /\xEA[\x80-\x92]/; - return "Lisu letter" if $c =~ /\xEA\x93[\x90-\xBD]/; - return "Lisu punctuation" if $c =~ /\xEA\x93[\xBE-\xBF]/; - return "Cyrillic letter" if $c =~ /\xEA\x99/; - return "Cyrillic 
letter" if $c =~ /\xEA\x9A[\x80-\x9F]/; - return "modifier tone" if $c =~ /\xEA\x9C[\x80-\xA1]/; - return "Javanese punctuation" if $c =~ /\xEA\xA7[\x81-\x8D\x9E-\x9F]/; - return "Javanese digit" if $c =~ /\xEA\xA7[\x90-\x99]/; - return "Javanese letter" if $c =~ /\xEA\xA6/; - return "Javanese letter" if $c =~ /\xEA\xA7[\x80-\x9F]/; - return "Ethiopic syllable" if $c =~ /\xEA\xAC[\x80-\xAF]/; - return "Cherokee letter" if $c =~ /\xEA\xAD[\xB0-\xBF]/; - return "Cherokee letter" if $c =~ /\xEA\xAE/; - return "Meetai Mayek digit" if $c =~ /\xEA\xAF[\xB0-\xB9]/; - return "Meetai Mayek letter" if $c =~ /\xEA\xAF/; - return "Korean Hangul syllable" if $c =~ /\xEA[\xB0-\xBF]/; - return "Korean Hangul syllable" if $c =~ /[\xEB-\xEC]/; - return "Korean Hangul syllable" if $c =~ /\xED[\x80-\x9E]/; - return "Klingon letter" if $c =~ /\xEF\xA3[\x90-\xA9]/; - return "Klingon digit" if $c =~ /\xEF\xA3[\xB0-\xB9]/; - return "Klingon punctuation" if $c =~ /\xEF\xA3[\xBD-\xBE]/; - return "Klingon symbol" if $c =~ /\xEF\xA3\xBF/; - return "private use character" if $c =~ /\xEE/; - return "Latin typographic ligature" if $c =~ /\xEF\xAC[\x80-\x86]/; - return "Hebrew presentation letter" if $c =~ /\xEF\xAC[\x9D-\xBF]/; - return "Hebrew presentation letter" if $c =~ /\xEF\xAD[\x80-\x8F]/; - return "Arabic presentation letter" if $c =~ /\xEF\xAD[\x90-\xBF]/; - return "Arabic presentation letter" if $c =~ /\xEF[\xAE-\xB7]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB8[\x90-\x99]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB8[\xB0-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB9[\x80-\xAB]/; - return "Arabic presentation letter" if $c =~ /\xEF\xB9[\xB0-\xBF]/; - return "Arabic presentation letter" if $c =~ /\xEF\xBA/; - return "Arabic presentation letter" if $c =~ /\xEF\xBB[\x80-\xBC]/; - return "byte-order mark/zero-width no-break space" if $c eq "\xEF\xBB\xBF"; - return "fullwidth currency" if $c =~ /\xEF\xBC\x84/; - return "fullwidth digit" if $c =~ /\xEF\xBC[\x90-\x99]/; - return "fullwidth Latin letter" if $c =~ /\xEF\xBC[\xA1-\xBA]/; - return "fullwidth Latin letter" if $c =~ /\xEF\xBD[\x81-\x9A]/; - return "fullwidth punctuation" if $c =~ /\xEF\xBC/; - return "fullwidth punctuation" if $c =~ /\xEF\xBD[\x9B-\xA4]/; - return "halfwidth Japanese punctuation" if $c =~ /\xEF\xBD[\xA1-\xA4]/; - return "halfwidth Japanese katakana character" if $c =~ /\xEF\xBD[\xA5-\xBF]/; - return "halfwidth Japanese katakana character" if $c =~ /\xEF\xBE[\x80-\x9F]/; - return "fullwidth currency" if $c =~ /\xEF\xBF[\xA0-\xA6]/; - return "replacement character" if $c eq "\xEF\xBF\xBD"; - } elsif ($c =~ /[\xF0-\xF7]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xF0-\xF7][\x80-\xBF]{3,3}$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /\xF0[\x80-\x8F]/; - return "Linear B syllable" if $c =~ /\xF0\x90\x80/; - return "Linear B syllable" if $c =~ /\xF0\x90\x81[\x80-\x8F]/; - return "Linear B symbol" if $c =~ /\xF0\x90\x81[\x90-\x9F]/; - return "Linear B ideogram" if $c =~ /\xF0\x90[\x82-\x83]/; - return "Gothic letter" if $c =~ /\xF0\x90\x8C[\xB0-\xBF]/; - return "Gothic letter" if $c =~ /\xF0\x90\x8D[\x80-\x8F]/; - return "Phoenician letter" if $c =~ /\xF0\x90\xA4[\x80-\x95]/; - return "Phoenician number" if $c =~ /\xF0\x90\xA4[\x96-\x9B]/; - return "Phoenician punctuation" if $c =~ /\xF0\x90\xA4\x9F/; # word separator - return "Old Hungarian number" if $c =~ /\xF0\x90\xB3[\xBA-\xBF]/; - return "Old Hungarian letter" if $c =~ /\xF0\x90[\xB2-\xB3]/; - return "Cuneiform digit" if $c =~ /\xF0\x92\x90/; # 
numberic sign - return "Cuneiform digit" if $c =~ /\xF0\x92\x91[\x80-\xAF]/; # numberic sign - return "Cuneiform punctuation" if $c =~ /\xF0\x92\x91[\xB0-\xBF]/; - return "Cuneiform sign" if $c =~ /\xF0\x92[\x80-\x95]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x81\xA8/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x82[\xAD-\xB6]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x86[\x90\xBC-\xBF]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x87[\x80-\x84]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8D[\xA2-\xAB]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8E[\x86-\x92]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8F[\xBA-\xBF]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x90[\x80-\x83]/; - return "Egyptian hieroglyph" if $c =~ /\xF0\x93[\x80-\x90]/; - return "enclosed alphanumeric" if $c =~ /\xF0\x9F[\x84-\x87]/; - return "Mahjong symbol" if $c =~ /\xF0\x9F\x80[\x80-\xAF]/; - return "Domino symbol" if $c =~ /\xF0\x9F\x80[\xB0-\xBF]/; - return "Domino symbol" if $c =~ /\xF0\x9F\x81/; - return "Domino symbol" if $c =~ /\xF0\x9F\x82[\x80-\x9F]/; - return "Playing card symbol" if $c =~ /\xF0\x9F\x82[\xA0-\xBF]/; - return "Playing card symbol" if $c =~ /\xF0\x9F\x83/; - return "CJK symbol" if $c =~ /\xF0\x9F[\x88-\x8B]/; - return "pictograph" if $c =~ /\xF0\x9F[\x8C-\x9B]/; - return "geometric shape" if $c =~ /\xF0\x9F[\x9E-\x9F]/; - return "non-ASCII punctuation" if $c =~ /\xF0\x9F[\xA0-\xA3]/; - return "pictograph" if $c =~ /\xF0\x9F[\xA4-\xAB]/; - return "CJK character" if $c =~ /\xF0[\xA0-\xAF]/; - return "tag" if $c =~ /\xF3\xA0[\x80-\x81]/; - return "variation selector" if $c =~ /\xF3\xA0[\x84-\x87]/; - return "private use character" if $c =~ /\xF3[\xB0-\xBF]/; - return "private use character" if $c =~ /\xF4[\x80-\x8F]/; - # ... 
- } elsif ($c =~ /[\xF8-\xFB]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xF8-\xFB][\x80-\xBF]{4,4}$/; - } elsif ($c =~ /[\xFC-\xFD]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xFC-\xFD][\x80-\xBF]{5,5}$/; - } elsif ($c =~ /\xFE/) { - return "non-UTF8 (invalid)" unless $c =~ /\xFE][\x80-\xBF]{6,6}$/; - } else { - return "non-UTF8 (invalid)"; - } - return "other character"; -} - -1; - - diff --git a/spaces/allknowingroger/Image-Models-Test104/app.py b/spaces/allknowingroger/Image-Models-Test104/app.py deleted file mode 100644 index baf1e8dd09922570348da0ba960a48e5ebf2ee1a..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test104/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "kingrabbit911/lora-trained-xl", - "digiplay/VoidnoiseCore_R0829", - "artificialguybr/StorybookRedmondUnbound", - "jagonzalr/stable-diffusion-v1-5", - "digiplay/NextPhoto_v3", - "mahendra0203/lora-trained-xl-colab-5k-steps", - "plasmo/woolitize", - "digiplay/NextGenMix_R2.8VAE", - "Shashi598/ouka_star", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - 
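    # How the hidden timer widgets work: submitting a prompt (or clicking Run) calls
    # all_task_start(), which stamps start_box/end_box with the current time.
    # The start_box.change handler above then re-runs all_task_end() once per
    # second (every=1); after 60 seconds it flips tog_box to 1, and the
    # tog_box.change handler defined further below resets it and cancels any
    # image-generation calls still queued, acting as a simple per-prompt watchdog.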
primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test89/app.py b/spaces/allknowingroger/Image-Models-Test89/app.py deleted file mode 100644 index e16f66e5867542ebb6f4db89005111f9af67445c..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test89/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "bulu/whiskey_textual_inversion", - "MakAttack/653b8221e806b310f8b8d12d", - "matteo1222/lora-trained-xl-colab", - "LinoyTsaban/lora-trained-xl-colab-woman-0.0001-1000", - "kyujinpy/KO-stable-diffusion-disney", - "kyujinpy/KO-anything-v4.5", - "AVIIAX/ds8", - "digiplay/OldFish_fix1.1.997_diffusers", - "matteo1222/lora-trained-xl-colab-cheeto", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - 
with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/almontalvao/GenAds-AI/app.py b/spaces/almontalvao/GenAds-AI/app.py deleted file mode 100644 index 509b8bdc3a684a6e3c7c6448a8949ceb5f02cf2b..0000000000000000000000000000000000000000 --- a/spaces/almontalvao/GenAds-AI/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -from peft import PeftModel, PeftConfig -from transformers import AutoModelForCausalLM, AutoTokenizer - -#Load model - -peft_model_id = f"monta135/gen_ads_bloom" -config = PeftConfig.from_pretrained(peft_model_id) -model = AutoModelForCausalLM.from_pretrained( - config.base_model_name_or_path, - return_dict=True, - load_in_8bit=True, - device_map="auto", -) -tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) - -# Load the Lora model -model = PeftModel.from_pretrained(model, peft_model_id) - - -def make_inference(product_name, product_description): - batch = tokenizer( - f"### Product and Description:\n{product_name}: {product_description}\n\n### Ad:", - return_tensors="pt", - ) - - with torch.cuda.amp.autocast(): - output_tokens = model.generate(**batch, max_new_tokens=50) - - return tokenizer.decode(output_tokens[0], skip_special_tokens=True) - - -if __name__ == "__main__": - - # gradio interface - import gradio as gr - - gr.Interface( - make_inference, - [ - gr.inputs.Textbox(lines=2, label="Product Name"), - gr.inputs.Textbox(lines=5, label="Product Description"), - ], - gr.outputs.Textbox(label="Ad"), - title="GenAds-AI", - description="GenAds-AI is a generative model that creates ads for home products.", - ).launch() \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/docs/Using-LoRAs.md b/spaces/antonovmaxim/text-generation-webui-space/docs/Using-LoRAs.md deleted file mode 100644 index fafd6cde2d87bfdf46d942ab841a74bf50facdb5..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/docs/Using-LoRAs.md +++ /dev/null @@ -1,55 +0,0 @@ -Based on https://github.com/tloen/alpaca-lora - -## Instructions - -1. Download a LoRA, for instance: - -``` -python download-model.py tloen/alpaca-lora-7b -``` - -2. Load the LoRA. 
16-bit, 8-bit, and CPU modes work: - -``` -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu -``` - -* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode). - -* Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface. - -## Prompt -For the Alpaca LoRA in particular, the prompt must be formatted like this: - -``` -Below is an instruction that describes a task. Write a response that appropriately completes the request. -### Instruction: -Write a Python script that generates text using the transformers library. -### Response: -``` - -Sample output: - -``` -Below is an instruction that describes a task. Write a response that appropriately completes the request. -### Instruction: -Write a Python script that generates text using the transformers library. -### Response: - -import transformers -from transformers import AutoTokenizer, AutoModelForCausalLM -tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") -model = AutoModelForCausalLM.from_pretrained("bert-base-uncased") -texts = ["Hello world", "How are you"] -for sentence in texts: -sentence = tokenizer(sentence) -print(f"Generated {len(sentence)} tokens from '{sentence}'") -output = model(sentences=sentence).predict() -print(f"Predicted {len(output)} tokens for '{sentence}':\n{output}") -``` - -## Training a LoRA - -You can train your own LoRAs from the `Training` tab. See [Training LoRAs](Training-LoRAs.md) for details. diff --git a/spaces/argilla/argilla-streamlit-customs/my_app/pages/__init__.py b/spaces/argilla/argilla-streamlit-customs/my_app/pages/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA384.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA384.py deleted file mode 100644 index a98fa9a3c9c9e49127f880d30a3568109c4f1bd7..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA384.py +++ /dev/null @@ -1,186 +0,0 @@ -# -*- coding: utf-8 -*- -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -from Crypto.Util.py3compat import bord - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr) - -_raw_sha384_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA384", - """ - int SHA384_init(void **shaState); - int SHA384_destroy(void *shaState); - int SHA384_update(void *hs, - const uint8_t *buf, - size_t len); - int SHA384_digest(const void *shaState, - uint8_t *digest, - size_t digest_size); - int SHA384_copy(const void *src, void *dst); - - int SHA384_pbkdf2_hmac_assist(const void *inner, - const void *outer, - const uint8_t *first_digest, - uint8_t *final_digest, - size_t iterations, - size_t digest_size); - """) - -class SHA384Hash(object): - """A SHA-384 hash object. - Do not instantiate directly. Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - - :ivar block_size: the size in bytes of the internal message block, - input to the compression function - :vartype block_size: integer - - :ivar digest_size: the size in bytes of the resulting hash - :vartype digest_size: integer - """ - - # The size of the resulting hash in bytes. - digest_size = 48 - # The internal block size of the hash algorithm in bytes. - block_size = 128 - # ASN.1 Object ID - oid = '2.16.840.1.101.3.4.2.2' - - def __init__(self, data=None): - state = VoidPointer() - result = _raw_sha384_lib.SHA384_init(state.address_of()) - if result: - raise ValueError("Error %d while instantiating SHA384" - % result) - self._state = SmartPointer(state.get(), - _raw_sha384_lib.SHA384_destroy) - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. - """ - - result = _raw_sha384_lib.SHA384_update(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while hashing data with SHA384" - % result) - - def digest(self): - """Return the **binary** (non-printable) digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - bfr = create_string_buffer(self.digest_size) - result = _raw_sha384_lib.SHA384_digest(self._state.get(), - bfr, - c_size_t(self.digest_size)) - if result: - raise ValueError("Error %d while making SHA384 digest" - % result) - - return get_raw_buffer(bfr) - - def hexdigest(self): - """Return the **printable** digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def copy(self): - """Return a copy ("clone") of the hash object. - - The copy will have the same internal state as the original hash - object. - This can be used to efficiently compute the digests of strings that - share a common initial substring. - - :return: A hash object of the same type - """ - - clone = SHA384Hash() - result = _raw_sha384_lib.SHA384_copy(self._state.get(), - clone._state.get()) - if result: - raise ValueError("Error %d while copying SHA384" % result) - return clone - - def new(self, data=None): - """Create a fresh SHA-384 hash object.""" - - return SHA384Hash(data) - - -def new(data=None): - """Create a new hash object. 
- - :parameter data: - Optional. The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`SHA384Hash.update`. - :type data: byte string/byte array/memoryview - - :Return: A :class:`SHA384Hash` hash object - """ - - return SHA384Hash().new(data) - - -# The size of the resulting hash in bytes. -digest_size = SHA384Hash.digest_size - -# The internal block size of the hash algorithm in bytes. -block_size = SHA384Hash.block_size - - -def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): - """Compute the expensive inner loop in PBKDF-HMAC.""" - - assert iterations > 0 - - bfr = create_string_buffer(len(first_digest)); - result = _raw_sha384_lib.SHA384_pbkdf2_hmac_assist( - inner._state.get(), - outer._state.get(), - first_digest, - bfr, - c_size_t(iterations), - c_size_t(len(first_digest))) - - if result: - raise ValueError("Error %d with PBKDF2-HMAC assist for SHA384" % result) - - return get_raw_buffer(bfr) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/__init__.py deleted file mode 100644 index c67625df6547855044f7fffdfe687bfa55c35b07..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .v4 import * diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Olga Milkovska.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Olga Milkovska.html deleted file mode 100644 index ad0b7f72c0065387f461455d31526764b6c89561..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Olga Milkovska.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Olga Milkovska - - - - -
      -

      Olga Milkovska

      - -
      -
      How did you hear about SM?
      • Was looking for coaching opportunities and SM stood out (liked the quality of the mentors)

      Brief background
      • In Canada for 8 years
      • Worked as a DA (data analyst) in marketing
      • Currently full-time at LinkedIn - Insights Analyst (basically a DA)
      • Team lead
      • also part-time instructor at Brain Station
      • MS in statistics

      Mentorship experience
      • At LinkedIn, team lead, technical coaching
      • Previously at Ubisoft, managing interns
      • Mentorship energizes me, love seeing people grow

      What do beginners need and how can you help?
      • Really depends on the technical expertise of each person
      • Developing a structure/approach
      • Helping them figure out a learning strategy, and being someone they can ask smaller ad-hoc questions
      • Someone willing to commit and open to constructive feedback
      • Passionate, motivated, willing to do the work and digest feedback
      -
      -
      Questions about SM:
      • Personal experience as a mentor
      • And how does it work in general
      • What does the average mentee look like?
      • What does the contract look like?
      -
      - -
      - - - \ No newline at end of file diff --git a/spaces/auto-academic/auto-draft/latex_templates/Default/conclusion.tex b/spaces/auto-academic/auto-draft/latex_templates/Default/conclusion.tex deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/avivdm1/AutoGPT/ui/app.py b/spaces/avivdm1/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
      {utils.format_directory(OUTPUT_DIR)}
      - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/awacke1/AW-04-GR-Seq-2-Seq-QA-Auto-Gen/qasrl_model_pipeline.py b/spaces/awacke1/AW-04-GR-Seq-2-Seq-QA-Auto-Gen/qasrl_model_pipeline.py deleted file mode 100644 index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AW-04-GR-Seq-2-Seq-QA-Auto-Gen/qasrl_model_pipeline.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Optional -import json -from argparse import Namespace -from pathlib import Path -from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer - -def get_markers_for_model(is_t5_model: bool) -> Namespace: - special_tokens_constants = Namespace() - if is_t5_model: - # T5 model have 100 special tokens by default - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - - 
else: - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - return special_tokens_constants - -def load_trained_model(name_or_path): - import huggingface_hub as HFhub - tokenizer = AutoTokenizer.from_pretrained(name_or_path) - model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path) - # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory - kwargs_filename = None - if name_or_path.startswith("kleinay/"): # and 'preprocessing_kwargs.json' in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files - kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json") - elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists(): - kwargs_filename = Path(name_or_path) / "experiment_kwargs.json" - - if kwargs_filename: - preprocessing_kwargs = json.load(open(kwargs_filename)) - # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing - model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs) - model.config.update(preprocessing_kwargs) - return model, tokenizer - - -class QASRL_Pipeline(Text2TextGenerationPipeline): - def __init__(self, model_repo: str, **kwargs): - model, tokenizer = load_trained_model(model_repo) - super().__init__(model, tokenizer, framework="pt") - self.is_t5_model = "t5" in model.config.model_type - self.special_tokens = get_markers_for_model(self.is_t5_model) - self.data_args = model.config.preprocessing_kwargs - # backward compatibility - default keyword values implemeted in `run_summarization`, thus not saved in `preprocessing_kwargs` - if "predicate_marker_type" not in vars(self.data_args): - self.data_args.predicate_marker_type = "generic" - if "use_bilateral_predicate_marker" not in vars(self.data_args): - self.data_args.use_bilateral_predicate_marker = True - if "append_verb_form" not in vars(self.data_args): - self.data_args.append_verb_form = True - self._update_config(**kwargs) - - def _update_config(self, **kwargs): - " Update self.model.config with initialization parameters and necessary defaults. 
" - # set default values that will always override model.config, but can overriden by __init__ kwargs - kwargs["max_length"] = kwargs.get("max_length", 80) - # override model.config with kwargs - for k,v in kwargs.items(): - self.model.config.__dict__[k] = v - - def _sanitize_parameters(self, **kwargs): - preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {} - if "predicate_marker" in kwargs: - preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"] - if "predicate_type" in kwargs: - preprocess_kwargs["predicate_type"] = kwargs["predicate_type"] - if "verb_form" in kwargs: - preprocess_kwargs["verb_form"] = kwargs["verb_form"] - return preprocess_kwargs, forward_kwargs, postprocess_kwargs - - def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None): - # Here, inputs is string or list of strings; apply string postprocessing - if isinstance(inputs, str): - processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form) - elif hasattr(inputs, "__iter__"): - processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs] - else: - raise ValueError("inputs must be str or Iterable[str]") - # Now pass to super.preprocess for tokenization - return super().preprocess(processed_inputs) - - def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str: - sent_tokens = seq.split(" ") - assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word" - predicate_idx = sent_tokens.index(predicate_marker) - sent_tokens.remove(predicate_marker) - sentence_before_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx)]) - predicate = sent_tokens[predicate_idx] - sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))]) - - if self.data_args.predicate_marker_type == "generic": - predicate_marker = self.special_tokens.predicate_generic_marker - # In case we want special marker for each predicate type: """ - elif self.data_args.predicate_marker_type == "pred_type": - assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) 
on it" - assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'" - predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker , - "nominal": self.special_tokens.predicate_nominalization_marker - }[predicate_type] - - if self.data_args.use_bilateral_predicate_marker: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}" - else: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}" - - # embed also verb_form - if self.data_args.append_verb_form and verb_form is None: - raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)") - elif self.data_args.append_verb_form: - seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} " - else: - seq = f"{seq} " - - # append source prefix (for t5 models) - prefix = self._get_source_prefix(predicate_type) - - return prefix + seq - - def _get_source_prefix(self, predicate_type: Optional[str]): - if not self.is_t5_model or self.data_args.source_prefix is None: - return '' - if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x - return self.data_args.source_prefix - if self.data_args.source_prefix == "": - if predicate_type is None: - raise ValueError("source_prefix is '' but input no `predicate_type`.") - else: - return f"Generate QAs for {predicate_type} QASRL: " - - def _forward(self, *args, **kwargs): - outputs = super()._forward(*args, **kwargs) - return outputs - - - def postprocess(self, model_outputs): - output_seq = self.tokenizer.decode( - model_outputs["output_ids"].squeeze(), - skip_special_tokens=False, - clean_up_tokenization_spaces=False, - ) - output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip() - qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs) - qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs] - return {"generated_text": output_seq, - "QAs": qas} - - def _postrocess_qa(self, seq: str) -> str: - # split question and answers - if self.special_tokens.separator_output_question_answer in seq: - question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2] - else: - print("invalid format: no separator between question and answer found...") - return None - # question, answer = seq, '' # Or: backoff to only question - # skip "_" slots in questions - question = ' '.join(t for t in question.split(' ') if t != '_') - answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)] - return {"question": question, "answers": answers} - - -if __name__ == "__main__": - pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline") - res1 = pipe("The student was interested in Luke 's research about sea animals .", verb_form="research", predicate_type="nominal") - res2 = pipe(["The doctor was interested in Luke 's treatment .", - "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10) - res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal") - print(res1) - print(res2) - print(res3) - \ No newline at end of file diff --git a/spaces/awacke1/Gamification-Word-Search/README.md b/spaces/awacke1/Gamification-Word-Search/README.md 
deleted file mode 100644 index db512583da74ef4911bdbea1a5e54e8483ae4e73..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Gamification-Word-Search/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: 📱Word Search to Gamify 🤓Language Learning🕹️ -emoji: 🤓🕹️📱 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -🎉 Introducing Gamification Word Search! 🧐 - -We are thrilled to announce the launch of our latest program - Gamification Word Search! 🚀 - -Gamification Word Search is a fun and exciting way to learn new vocabulary words. - -🤓 The program uses the classic word search game to help you learn new words and improve your vocabulary. 📚 - -With Gamification Word Search, you can choose from a variety of word categories, including animals, food, sports, and more! 🐶🍔🏀 - -You can also adjust the difficulty level to match your skill level and challenge yourself. 💪 - -Gamification Word Search is perfect for students, language learners, or anyone looking to expand their vocabulary. - -Plus, it's a great way to have fun while learning! 🎉 - -Key features of Gamification Word Search include: - -A wide variety of word categories to choose from 🗂️ -Adjustable difficulty levels to match your skill level 📈 -Engaging and interactive gameplay 🕹️ -Beautiful and colorful graphics 🌈 - -Gamification Word Search is available now on HuggingFace to play! -📱 Try it today and start improving your vocabulary the fun way! 🤩 - -#GamificationWordSearch #LearnVocabulary #WordSearchGame #FunLearning #Gamification #Education #LanguageLearners \ No newline at end of file diff --git a/spaces/awacke1/Slot-Machine-HTML5/style.css b/spaces/awacke1/Slot-Machine-HTML5/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Slot-Machine-HTML5/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awen666/web-ui/_next/static/chunks/framework-43665103d101a22d.js b/spaces/awen666/web-ui/_next/static/chunks/framework-43665103d101a22d.js deleted file mode 100644 index ef9e52f3ad47f2c60e0236bb728b3b7d602ebe5a..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/_next/static/chunks/framework-43665103d101a22d.js +++ /dev/null @@ -1,25 +0,0 @@ -"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[774],{64448:function(e,n,t){/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var r,l,a,u,o,i,s=t(67294),c=t(63840);function f(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t