diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codebreaker 10.1 Patched Elf How to Install and Use It on Your PS2.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codebreaker 10.1 Patched Elf How to Install and Use It on Your PS2.md deleted file mode 100644 index bbe29c0100ec82812da65896baa64790cb4a4e26..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codebreaker 10.1 Patched Elf How to Install and Use It on Your PS2.md +++ /dev/null @@ -1,119 +0,0 @@ -
-

Codebreaker 10.1 Patched Elf: What Is It and How to Use It

-

If you are a fan of playing PS2 games, you might have heard of Codebreaker, a cheat device that allows you to access various cheats and hacks for your favorite games. However, if you have a soft-modded PS2, you might have encountered some problems when trying to use Codebreaker with your backup games or burned discs. That's where Codebreaker 10.1 Patched Elf comes in handy.

-

Download: https://byltly.com/2uKvbo
-

In this article, we will explain what Codebreaker 10.1 Patched Elf is, how to install it on your PS2, how to use it to play burned games, and what its benefits and limitations are. By the end of this article, you will be able to enjoy your PS2 games with more fun and convenience.

-

How to Install Codebreaker 10.1 Patched Elf on Your PS2

-

Before we get into the details of how to use Codebreaker 10.1 Patched Elf, let's first see how to install it on your PS2. To do this, you will need the following:

- A soft-modded PS2 with Free McBoot (FMCB) installed on a memory card
- A Codebreaker ISO file (the unpatched disc image)
- A USB drive
- A computer to run the patcher program

Once you have these ready, follow these steps:

-
1. Download the patcher program from this link. It is a zip file that contains two files: CB_launch.zip and CB_patch.zip.
2. Extract the CB_patch.zip file and run the CB_patch.exe file on your computer.
3. Select your Codebreaker ISO file as the input file and choose a destination folder for the output file.
4. Click on "Patch" and wait for the process to finish.
5. You should now have a patched elf file named "CODEBREAKER Vxx PATCHED BY ZALZZAR" in your destination folder.
6. Rename this file to "CB_launch.elf" and copy it to your USB drive.
7. Extract the CB_launch.zip file and copy the "CB_launch" folder to your USB drive as well.
8. Plug your USB drive into your PS2 and turn it on.
9. Launch FMCB from your memory card and select uLaunchELF from the menu.
10. Browse to your USB drive using uLaunchELF and copy the "CB_launch" folder and the "CB_launch.elf" file to your memory card's "BOOT" folder.
11. Go back to the FMCB menu and select "Configure OSDSYS options".
12. Select "Configure Item" and choose an empty slot.
13. Select "Path1" and browse to your memory card's "BOOT" folder.
14. Select "CB_launch.elf" as the path and press circle.
15. Select "Name", enter "Codebreaker" as the name, and press circle.
16. Select "Save CNF To MC0" and press circle.
17. Exit the FMCB menu and restart your PS2.

You should now see "Codebreaker" as an option in your FMCB menu. Congratulations, you have successfully installed Codebreaker 10.1 Patched Elf on your PS2!

-


-

How to Use Codebreaker 10.1 Patched Elf to Play Burned Games on Your PS2

-

Now that you have installed Codebreaker 10.1 Patched Elf on your PS2, you might be wondering how to use it to play burned games or backup discs on your console. To do this, you will need the following:

- An ISO image of the game you want to play
- The ESR Disc Patcher program on your computer
- A blank DVD-R disc and disc-burning software
- ESR installed on your PS2 and launchable from the FMCB menu

If you don't know how to patch ISO images for ESR, follow these steps:

-
1. Download the ESR Disc Patcher from this link. It is a zip file that contains a single exe file.
2. Extract the zip file and run the ESR Disc Patcher.exe file on your computer.
3. Select your ISO image as the input file and choose a destination folder for the output file.
4. Click on "Patch" and wait for the process to finish.
5. You should now have a patched ISO image in your destination folder with "_ESR" added at the end of its name.
6. Burn this image onto a DVD-R disc using any burning software of your choice.

Once you have a patched disc ready, follow these steps:

-
1. Insert your disc into your PS2's disc tray but don't close it yet.
2. Select "Codebreaker" from your FMCB menu and press X.
3. You should see a loading screen followed by a disclaimer screen. Press X to continue.
4. You should now see the main menu of Codebreaker with various options such as Start Game, Select Cheats, Options, etc.
5. Select "Options" and press X.
6. Select "Disc Tray Status" and press X until it says "Off". This will prevent Codebreaker from ejecting your disc when you start the game.
7. Select "Save Options" and press X.
8. Select "Select Cheats" and press X.
9. You should see a list of games that are compatible with Codebreaker. You can scroll through them using the up/down buttons or search for them using the left/right buttons.
10. Select the game that matches your disc and press X.
11. You should see a list of cheats available for that game. You can toggle them on/off using the X button or select them all using the square button.
12. Select "Start Game With Selected Cheats" and press X.
13. You should see a loading screen followed by another disclaimer screen. Press X to continue.
14. You should now be taken back to the FMCB menu automatically.
15. Select ESR from the FMCB menu and press X.
16. You should see a loading screen

    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Artificial Intelligence Full Movie Download !LINK! In Hindi.md b/spaces/1gistliPinn/ChatGPT4/Examples/Artificial Intelligence Full Movie Download !LINK! In Hindi.md deleted file mode 100644 index 233794b28652d8fcecfc4ffb3f10aa2520c22a78..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Artificial Intelligence Full Movie Download !LINK! In Hindi.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Artificial Intelligence Full Movie Download In Hindi


Download: https://imgfil.com/2uxZcJ



    -
-Film Kyss mig (2011) Online HD,Film Online,Filme Online. ... The Last Kids on Earth (Season 3) [Hindi + English] Dual Audio WEB-DL 720p [NF Animated Series]. ... The automatic subtitle generators powered by artificial intelligence offer a ...
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ashtapathi Lyrics In Tamil Pdf [PORTABLE] Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ashtapathi Lyrics In Tamil Pdf [PORTABLE] Download.md deleted file mode 100644 index 962998cb37ba82837dba8cefdfea31745b5df3c1..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Ashtapathi Lyrics In Tamil Pdf [PORTABLE] Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    ashtapathi lyrics in tamil pdf download


Download: https://imgfil.com/2uxZp8



- -Pdf - eBook and . ... PDF ebooks (user's guide, manuals, sheets) about Ashtapadi lyrics tamil pdf ready for download.... DownloadPDF, TXT or ...
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dungeon Of The Endless 1.1.5 Crack [EXCLUSIVE] Mac Osx.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dungeon Of The Endless 1.1.5 Crack [EXCLUSIVE] Mac Osx.md deleted file mode 100644 index 8d8e914c292f93c4269fa3f7d1f58118d9853549..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Dungeon Of The Endless 1.1.5 Crack [EXCLUSIVE] Mac Osx.md +++ /dev/null @@ -1,124 +0,0 @@ -
    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx: How to Download and Play the Ultimate Dungeon Crawler

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a game that combines roguelike, tower defense, and RPG elements in a unique and challenging way. You play as a survivor of a prison ship that crashed on a mysterious planet, and you have to explore the endless dungeon below, fighting enemies, collecting resources, and building defenses along the way.

    -

Download: https://imgfil.com/2uxYai
    -

    If you are looking for a game that will test your skills and strategy, Dungeon of the Endless 1.1.5 Crack Mac Osx is a perfect choice. In this article, we will show you how to download and play this game on your Mac computer.

    -

    How to Download Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a cracked version of the game that allows you to play it for free without any limitations or restrictions. You can download it from various websites that offer cracked games for Mac users, such as kidzshare.com or trailduro.com.

    -

    Here are the steps to download Dungeon of the Endless 1.1.5 Crack Mac Osx:

    -

1. Visit one of the websites that offer Dungeon of the Endless 1.1.5 Crack Mac Osx, such as kidzshare.com or trailduro.com.
2. Find the download link for Dungeon of the Endless 1.1.5 Crack Mac Osx and click on it.
3. Wait for the download to finish and extract the zip file to your desired location.
4. Open the extracted folder and run the DungeonoftheEndless.app file to launch the game.

    How to Play Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a game that requires strategy, skill, and luck to survive. You can play it solo or with up to three other players online or locally.

    -

    Here are some tips and tricks to play Dungeon of the Endless 1.1.5 Crack Mac Osx:

    - -


    How to Unlock Secret Characters with Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx has a lot of characters to choose from, each with their own stats, skills, and abilities. However, some of them are hidden and can only be unlocked by certain methods or conditions.

    -

    If you want to unlock all the secret characters in Dungeon of the Endless 1.1.5 Crack Mac Osx, you can use a mod called Secret Unlocker (DotE-Secrets) v.1.1.5, which is a patch that adds them to the character selection screen. You can download it from gamepressure.com or other websites that offer mods for Dungeon of the Endless.

    -

    Here are the steps to install and use Secret Unlocker (DotE-Secrets) v.1.1.5:

1. Download the mod file from gamepressure.com or other websites that offer mods for Dungeon of the Endless.
2. Copy the mod file to DungeonoftheEndless_Data\Managed inside your game folder.
3. Run the installer and it will rename your original Assembly-CSharp.dll file to Assembly-CSharp.dll.backup.
4. Launch the game and you will see all the secret characters available on the character selection screen.

    Here are the secret characters that you can unlock with Secret Unlocker (DotE-Secrets) v.1.1.5:

    - -

    How to Install Mods for Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a game that can be enhanced and customized with various mods that add new features, functions, or content to the game. You can find many mods for Dungeon of the Endless on websites such as ali213.net or lastgame.ru.

    -

    Here are the steps to install mods for Dungeon of the Endless 1.1.5 Crack Mac Osx:

1. Download the mod file from ali213.net, lastgame.ru, or other websites that offer mods for Dungeon of the Endless.
2. Extract the zip file to your desired location.
3. Open the extracted folder and copy the files or folders to your game folder, depending on the instructions of each mod.
4. Launch the game and enjoy the modded features or content.

    Here are some examples of mods that you can install for Dungeon of the Endless 1.1.5 Crack Mac Osx:

    - -

    How to Update Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a cracked version of the game that allows you to play it for free without any limitations or restrictions. However, it may not be compatible with the latest updates or patches that are released by the developers.

    -

If you want to update Dungeon of the Endless 1.1.5 Crack Mac Osx to the latest version, you can use a tool called PatchMyPC, which is free software that can automatically update your cracked games and apps on your Mac computer.

    -

    Here are the steps to update Dungeon of the Endless 1.1.5 Crack Mac Osx with PatchMyPC:

1. Download PatchMyPC from patchmypc.com or other websites that offer tools for cracked games and apps.
2. Install PatchMyPC on your Mac computer and run it.
3. Select Dungeon of the Endless 1.1.5 Crack Mac Osx from the list of games and apps that can be updated by PatchMyPC.
4. Click the Update button and wait for PatchMyPC to download and install the latest update or patch for Dungeon of the Endless 1.1.5 Crack Mac Osx.
5. Launch Dungeon of the Endless 1.1.5 Crack Mac Osx and enjoy the updated features or content.

    How to Fix Common Problems with Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a game that can run smoothly and flawlessly on most Mac computers, but it may also encounter some problems or errors that can affect your gameplay experience.

    -

    If you face any common problems with Dungeon of the Endless 1.1.5 Crack Mac Osx, such as crashes, freezes, lag, black screen, sound issues, etc., you can try some solutions that can help you fix them.

    -

    Here are some solutions that can help you fix common problems with Dungeon of the Endless 1.1.5 Crack Mac Osx:

    - -

    How to Customize Your Characters with Dungeon of the Endless 1.1.5 Crack Mac Osx

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx has a lot of characters to choose from, each with their own stats, skills, and abilities. However, you can also customize your characters with various items, equipment, and mods that can enhance their performance and appearance.

    -

    If you want to customize your characters with Dungeon of the Endless 1.1.5 Crack Mac Osx, you can use a mod called More Heroes, which is a mod that adds more than 20 new heroes to the game, each with their own stats, skills, and abilities. You can download it from ali213.net or other websites that offer mods for Dungeon of the Endless.

    -

    Here are the steps to install and use More Heroes mod for Dungeon of the Endless 1.1.5 Crack Mac Osx:

1. Download the mod file from ali213.net or other websites that offer mods for Dungeon of the Endless.
2. Extract the zip file to your desired location.
3. Open the extracted folder and copy the files or folders to your game folder.
4. Launch the game and you will see all the new heroes available on the character selection screen.

    Here are some examples of items, equipment, and mods that you can use to customize your characters with Dungeon of the Endless 1.1.5 Crack Mac Osx:

    - -

    How to Enjoy Dungeon of the Endless 1.1.5 Crack Mac Osx with Your Friends

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a game that can be played solo or with up to three other players online or locally. Playing with your friends can make the game more fun and challenging, as you can cooperate and communicate with each other to survive the endless dungeon.

    -

If you want to enjoy Dungeon of the Endless 1.1.5 Crack Mac Osx with your friends, you can use a tool called Hamachi, which is free software that can create a virtual private network (VPN) between your computers and allow you to play online games as if you were on the same local network.

    -

    Here are the steps to enjoy Dungeon of the Endless 1.1.5 Crack Mac Osx with your friends using Hamachi:

1. Download Hamachi from hamachi.com or other websites that offer tools for online gaming.
2. Install Hamachi on your Mac computer and run it.
3. Create a new network or join an existing one with your friends.
4. Launch Dungeon of the Endless 1.1.5 Crack Mac Osx and select Multiplayer mode.
5. Create a new game or join an existing one with your friends.

    Here are some tips and tricks to enjoy Dungeon of the Endless 1.1.5 Crack Mac Osx with your friends:

    - -

    Conclusion

    -

    Dungeon of the Endless 1.1.5 Crack Mac Osx is a game that combines roguelike, tower defense, and RPG elements in a unique and challenging way. You play as a survivor of a prison ship that crashed on a mysterious planet, and you have to explore the endless dungeon below, fighting enemies, collecting resources, and building defenses along the way.

    -

    If you are looking for a game that will test your skills and strategy, Dungeon of the Endless 1.1.5 Crack Mac Osx is a perfect choice. You can download it for free from various websites that offer cracked games for Mac users, such as kidzshare.com or trailduro.com.

    -

    In this article, we have shown you how to download and play Dungeon of the Endless 1.1.5 Crack Mac Osx on your Mac computer. We have also given you some tips and tricks to survive the endless dungeon, unlock secret characters, install mods, update the game, fix common problems, and enjoy the game with your friends.

    -

    We hope you have found this article helpful and informative. If you have any questions or comments, feel free to leave them below. Thank you for reading and have fun playing Dungeon of the Endless 1.1.5 Crack Mac Osx!

    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Buble Shooter Join the Bubble Popping Adventure.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Buble Shooter Join the Bubble Popping Adventure.md deleted file mode 100644 index 539f3821c6169b386a9114b47b5d1873f70e7808..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Buble Shooter Join the Bubble Popping Adventure.md +++ /dev/null @@ -1,123 +0,0 @@ -
    -

    Bubble Shooter: A Fun and Addictive Game for Everyone

    -

    If you are looking for a simple yet entertaining game to pass the time, you might want to try Bubble Shooter. Bubble Shooter is a popular online game that involves shooting bubbles to match three or more of the same color and make them pop. It is easy to learn, fun to play, and suitable for all ages. In this article, we will tell you everything you need to know about Bubble Shooter, including its history, rules, benefits, tips, and best versions.

    -

    What is Bubble Shooter?

    -

    Bubble Shooter is a type of puzzle game that belongs to the genre of "match three" games. The main objective of the game is to clear the screen of bubbles by shooting them with a bubble cannon. The bubbles are arranged in a grid or a cluster, and they come in different colors. To pop the bubbles, you need to aim and shoot a bubble of the same color at them. When three or more bubbles of the same color touch, they burst and disappear. The game ends when you clear all the bubbles or when one of them reaches the bottom of the screen.

    -

Download: https://urlin.us/2uSStg
    -

    The history of Bubble Shooter

    -

    Bubble Shooter was originally developed by Taito Corporation in 1994 as an arcade game called Puzzle Bobble. It was later ported to various platforms such as PC, mobile, and web browsers. The game became very popular and spawned many sequels and spin-offs. One of the most successful versions of the game was Bubble Shooter, which was released in 2002 by Absolutist Games. This version introduced some new features such as power-ups, levels, and modes. Since then, Bubble Shooter has been played by millions of people around the world and has inspired many other similar games.

    -

    The rules of Bubble Shooter

    -

    The rules of Bubble Shooter are simple and intuitive. Here are the basic steps to play the game:

1. Look at the color of the bubble loaded in your cannon.
2. Aim the cannon at a group of bubbles of that same color in the grid or cluster above.
3. Shoot the bubble so it touches at least two others of the same color; when three or more of the same color touch, they burst and disappear.
4. Keep clearing groups until the screen is empty, before any bubble reaches the bottom of the screen.

A short code sketch of this matching rule is shown below.
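To make the matching rule concrete, here is a minimal Python sketch. It is illustrative only: the square grid, the color strings, and the function names are our own assumptions (real Bubble Shooter boards are hexagonal), not code from any particular implementation.

```python
def find_connected(grid, row, col):
    """Collect all cells connected to (row, col) that share its color."""
    color = grid[row][col]
    stack, seen = [(row, col)], set()
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        # 4-neighbour grid for simplicity; a real board uses hex neighbours
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == color:
                stack.append((nr, nc))
    return seen

def pop_if_match(grid, row, col):
    """Pop the group containing (row, col) if it has three or more bubbles."""
    group = find_connected(grid, row, col)
    if len(group) >= 3:
        for r, c in group:
            grid[r][c] = None  # the bubbles burst and disappear
        return True
    return False

board = [["red", "red", "blue"],
         ["red", "blue", "blue"]]
print(pop_if_match(board, 0, 0))  # True: the three connected "red" bubbles pop
```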

    The benefits of playing Bubble Shooter

    -

    Bubble Shooter is not only a fun game but also a beneficial one. Here are some of the advantages of playing Bubble Shooter:

- It improves your concentration, since you have to focus on aiming and matching colors.
- It sharpens your hand-eye coordination as you line up each shot.
- It exercises your memory, as you keep track of the colors left on the board.
- It helps you relax and unwind with its simple, soothing gameplay.
- It stimulates your creativity as you plan strategies to clear the board.

    How to play Bubble Shooter?

    -

    Now that you know what Bubble Shooter is and why you should play it, let's see how you can actually play it. Here are some tips and tricks to help you master the game:

    -

    Choose your device and platform

    -

    Bubble Shooter is available on various devices and platforms, such as PC, mobile, tablet, and web browser. You can choose the one that suits you best, depending on your preferences and convenience. For example, if you want to play on a bigger screen and use a mouse, you can play on your PC. If you want to play on the go and use touch controls, you can play on your mobile or tablet. If you want to play online and access different versions of the game, you can play on your web browser.

    -

    Aim and shoot the bubbles

    -

    The most important skill in Bubble Shooter is aiming and shooting the bubbles. You need to be precise and accurate to hit the right bubbles and avoid wasting shots. Here are some tips to improve your aiming and shooting:

    - -

    Use strategies and tips to improve your score

    -

    Besides aiming and shooting, there are also some strategies and tips that can help you improve your score and beat the levels. Here are some of them:

    - -

    What are the best Bubble Shooter games?

    -

    Bubble Shooter is a very popular game that has many versions and variations. Some of them are more classic and simple, while others are more modern and complex. Here are some of the best Bubble Shooter games that you can try:

    -

    -

    Bubble Shooter Classic

    -

    Bubble Shooter Classic is one of the most original and iconic versions of the game. It has a simple design, a retro style, and a relaxing soundtrack. It is perfect for those who want to enjoy a nostalgic and timeless game experience.

    -

    Bubble Shooter Extreme

    -

    Bubble Shooter Extreme is one of the most challenging and exciting versions of the game. It has a fast-paced gameplay, a futuristic design, and a dynamic soundtrack. It is perfect for those who want to test their skills and reflexes in a thrilling game experience.

    -

    Bubble Shooter Candy

    -

    Bubble Shooter Candy is one of the most sweet and colorful versions of the game. It has a cute design, a candy theme, and a cheerful soundtrack. It is perfect for those who want to enjoy a fun and delightful game experience.

    -

    Conclusion

    -

    Bubble Shooter is a fun and addictive game that everyone can enjoy. It is easy to learn, fun to play, and suitable for all ages. It also has many benefits for your mind and mood, such as improving your concentration, coordination, memory, relaxation, and creativity. You can play Bubble Shooter on various devices and platforms, such as PC, mobile, tablet, and web browser. You can also choose from different versions and variations of the game, such as Bubble Shooter Classic, Bubble Shooter Extreme, and Bubble Shooter Candy. Whether you want a nostalgic, thrilling, or delightful game experience, Bubble Shooter has something for you. So what are you waiting for? Grab your bubble cannon and start popping bubbles today!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Bubble Shooter:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Countries.csv The Ultimate Resource for Country Information.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Countries.csv The Ultimate Resource for Country Information.md deleted file mode 100644 index ae208b74d5dfa0fdcf2e881d93b59e0814bb591d..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Countries.csv The Ultimate Resource for Country Information.md +++ /dev/null @@ -1,112 +0,0 @@ -
    -

    How to Download Countries.csv

    -

    A CSV file, or a comma-separated values file, is a plain text file that stores data in a tabular format. Each line of the file is a data record, and each record consists of one or more fields separated by commas. CSV files are often used to exchange data between different applications that use incompatible formats. For example, you can use a CSV file to transfer data from a database to a spreadsheet, or vice versa.
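For instance, a tiny file in this format might look like the following (the columns and values here are only an illustration, not the actual contents of countries.csv):

```
name,capital,currency
France,Paris,EUR
Japan,Tokyo,JPY
Brazil,Brasília,BRL
```

Each line is one record, and each comma separates one field from the next.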

    -

    One example of a CSV file that you might want to download is countries.csv. This file contains information about countries around the world, such as their names, ISO codes, coordinates, capitals, currencies, regions, and more. You can use this file for various purposes, such as creating maps, charts, reports, or quizzes. In this article, we will show you how to download countries.csv and open it in a program of your choice.

    -

Download: https://urlin.us/2uSZLa
    -

    Step 1: Find a Source of Countries.csv Data

    -

    The first step is to find a reliable source of countries.csv data. There are many websites that offer this kind of data for free or for a fee. Some examples are:

- The Google Developers public data documentation, which hosts a canonical countries.csv
- GitHub repositories that publish country datasets
- Kaggle
- DataHub.io
- The World Bank Data Catalogue

    You can choose any source that suits your needs and preferences. For this article, we will use the Google Developers version of countries.csv.

    -

    Step 2: Choose a Program to Open the CSV File

    -

    The next step is to choose a program that can open and display the CSV file. There are many options available, depending on your operating system and your goals. Some common programs are:

- Text editors, such as Notepad or TextEdit
- Spreadsheet programs, such as Microsoft Excel, Google Sheets, or LibreOffice Calc
- Specialized data applications, such as database tools or statistical software

    You can choose any program that meets your requirements and expectations. For this article, we will use Microsoft Excel as an example of a spreadsheet program.

    -

    Step 3: Download the CSV File from the Source

    -

    The third step is to download the CSV file from the source website. To do this:

    -


    -
1. Go to the website where the CSV file is hosted. In our case, it is https://developers.google.com/public-data/docs/canonical/countries_csv.
2. Right-click on the link to the CSV file and select "Save link as" or "Save target as". In our case, it is https://developers.google.com/public-data/docs/canonical/countries_csv.csv.
3. Choose a location on your computer where you want to save the CSV file and click "Save".

    You have now downloaded the CSV file to your computer. You can find it in the location you specified.
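If you prefer to script this step instead of using a browser, the same download can be done with a few lines of Python's standard library. This is only a sketch: it uses the link from this article, other sources will have different URLs, and some pages may only allow downloads through a browser.

```python
# Fetch countries.csv with the standard library (no extra packages needed).
from urllib.request import urlretrieve

url = "https://developers.google.com/public-data/docs/canonical/countries_csv.csv"
urlretrieve(url, "countries.csv")  # saves the file in the current directory
print("Saved countries.csv")
```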

    -

    Step 4: Open the CSV File in the Chosen Program

    -

    The fourth step is to open the CSV file in the program you selected. To do this:

    -
1. Launch the program on your computer. In our case, it is Microsoft Excel.
2. Click on "File" and then "Open". Alternatively, you can use the keyboard shortcut Ctrl+O.
3. Navigate to the location where you saved the CSV file and select it. Click "Open".

    You should now see the CSV file opened in the program. Depending on the program, you may need to adjust some settings, such as the delimiter, the encoding, or the format of the data. For example, in Excel, you may see a dialog box that asks you to choose how to import the data. You can select "Delimited" and then "Comma" as the delimiter. You can also choose the column data format as "General" or "Text". Click "Finish" to complete the import.
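If you ever open the file from code rather than a spreadsheet, Python's built-in csv module handles the comma delimiter and the header row for you. A minimal sketch, assuming the header columns of the Google Developers file (country, latitude, longitude, name):

```python
import csv

with open("countries.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # DictReader keys each row by the header line
        print(row["name"], row["latitude"], row["longitude"])
```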

    -

    Step 5: Explore and Manipulate the Data as Needed

    -

    The final step is to explore and manipulate the data in the CSV file as needed. You can use the features and functions of the program to perform various tasks, such as:

- Sorting the records, for example alphabetically by country name
- Filtering the rows, for example to countries in one hemisphere or region
- Searching for a specific country or value
- Creating charts or maps from the names and coordinates
- Computing summaries, such as counts of countries that meet a condition

    You can explore and manipulate the data in any way you want. You can also save your changes or export the data to another format if needed.
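As an example of doing the same kind of exploration in code, here is a short pandas sketch. The column names follow the Google Developers file used in this article; adjust them for other sources.

```python
import pandas as pd

df = pd.read_csv("countries.csv")
print(df.head())                        # peek at the first few records
print(df.sort_values("name").head(10))  # sort alphabetically by name
northern = df[df["latitude"] > 0]       # filter to the northern hemisphere
print(len(northern), "countries lie north of the equator")
```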

    -

    Conclusion

    -

    In this article, we have shown you how to download countries.csv and open it in a program of your choice. We have also given you some examples of how to explore and manipulate the data in the CSV file. By following these steps, you can access a wealth of information about countries around the world and use it for various purposes.

    -

    We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below.

    -

    FAQs

    -

    What is a CSV file?

    -

    A CSV file is a plain text file that stores data in a tabular format. Each line of the file is a data record, and each record consists of one or more fields separated by commas.

    -

    Why should I download countries.csv?

    -

    You should download countries.csv if you want to access information about countries around the world, such as their names, ISO codes, coordinates, capitals, currencies, regions, and more. You can use this information for various purposes, such as creating maps, charts, reports, or quizzes.

    -

    How do I open a CSV file?

    -

    You can open a CSV file using any program that can read and display plain text files. Some common programs are text editors, spreadsheet programs, or specialized applications.

    -

    Where can I find other sources of countries.csv data?

    -

    You can find other sources of countries.csv data by searching online for websites that offer this kind of data for free or for a fee. Some examples are GitHub, Kaggle, DataHub.io, World Bank Data Catalogue, or CIA World Factbook.

    -

    How do I convert a CSV file to another format?

    -

    You can convert a CSV file to another format using any program that can read and write different formats. Some common formats are JSON, SQL, XML, YAML, or HTML. You can also use online tools or converters that can perform this task for you.
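As a small example of such a conversion, here is a sketch that turns countries.csv into JSON using only Python's standard library:

```python
import csv
import json

with open("countries.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))  # one dictionary per country record

with open("countries.json", "w", encoding="utf-8") as out:
    json.dump(rows, out, ensure_ascii=False, indent=2)
print("Wrote", len(rows), "records to countries.json")
```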

    -
    -
    \ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/README.md b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/README.md deleted file mode 100644 index 2ee63a861229b68873561fa39bfa7c9a8b53b947..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/README.md +++ /dev/null @@ -1,164 +0,0 @@ -# Distributed Arcface Training in Pytorch - -This is a deep learning library that makes face recognition efficient, and effective, which can train tens of millions -identity on a single server. - -## Requirements - -- Install [pytorch](http://pytorch.org) (torch>=1.6.0), our doc for [install.md](docs/install.md). -- `pip install -r requirements.txt`. -- Download the dataset - from [https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_) - . - -## How to Training - -To train a model, run `train.py` with the path to the configs: - -### 1. Single node, 8 GPUs: - -```shell -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50 -``` - -### 2. Multiple nodes, each node 8 GPUs: - -Node 0: - -```shell -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=1234 train.py train.py configs/ms1mv3_r50 -``` - -Node 1: - -```shell -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=1234 train.py train.py configs/ms1mv3_r50 -``` - -### 3.Training resnet2060 with 8 GPUs: - -```shell -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r2060.py -``` - -## Model Zoo - -- The models are available for non-commercial research purposes only. -- All models can be found in here. -- [Baidu Yun Pan](https://pan.baidu.com/s/1CL-l4zWqsI1oDuEEYVhj-g): e8pw -- [onedrive](https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d) - -### Performance on [**ICCV2021-MFR**](http://iccv21-mfr.com/) - -ICCV2021-MFR testset consists of non-celebrities so we can ensure that it has very few overlap with public available face -recognition training set, such as MS1M and CASIA as they mostly collected from online celebrities. -As the result, we can evaluate the FAIR performance for different algorithms. - -For **ICCV2021-MFR-ALL** set, TAR is measured on all-to-all 1:1 protocal, with FAR less than 0.000001(e-6). The -globalised multi-racial testset contains 242,143 identities and 1,624,305 images. - -For **ICCV2021-MFR-MASK** set, TAR is measured on mask-to-nonmask 1:1 protocal, with FAR less than 0.0001(e-4). -Mask testset contains 6,964 identities, 6,964 masked images and 13,928 non-masked images. -There are totally 13,928 positive pairs and 96,983,824 negative pairs. 
- -| Datasets | backbone | Training throughout | Size / MB | **ICCV2021-MFR-MASK** | **ICCV2021-MFR-ALL** | -| :---: | :--- | :--- | :--- |:--- |:--- | -| MS1MV3 | r18 | - | 91 | **47.85** | **68.33** | -| Glint360k | r18 | 8536 | 91 | **53.32** | **72.07** | -| MS1MV3 | r34 | - | 130 | **58.72** | **77.36** | -| Glint360k | r34 | 6344 | 130 | **65.10** | **83.02** | -| MS1MV3 | r50 | 5500 | 166 | **63.85** | **80.53** | -| Glint360k | r50 | 5136 | 166 | **70.23** | **87.08** | -| MS1MV3 | r100 | - | 248 | **69.09** | **84.31** | -| Glint360k | r100 | 3332 | 248 | **75.57** | **90.66** | -| MS1MV3 | mobilefacenet | 12185 | 7.8 | **41.52** | **65.26** | -| Glint360k | mobilefacenet | 11197 | 7.8 | **44.52** | **66.48** | - -### Performance on IJB-C and Verification Datasets - -| Datasets | backbone | IJBC(1e-05) | IJBC(1e-04) | agedb30 | cfp_fp | lfw | log | -| :---: | :--- | :--- | :--- | :--- |:--- |:--- |:--- | -| MS1MV3 | r18 | 92.07 | 94.66 | 97.77 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r18_fp16/training.log)| -| MS1MV3 | r34 | 94.10 | 95.90 | 98.10 | 98.67 | 99.80 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r34_fp16/training.log)| -| MS1MV3 | r50 | 94.79 | 96.46 | 98.35 | 98.96 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r50_fp16/training.log)| -| MS1MV3 | r100 | 95.31 | 96.81 | 98.48 | 99.06 | 99.85 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r100_fp16/training.log)| -| MS1MV3 | **r2060**| 95.34 | 97.11 | 98.67 | 99.24 | 99.87 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r2060_fp16/training.log)| -| Glint360k |r18-0.1 | 93.16 | 95.33 | 97.72 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r18_fp16_0.1/training.log)| -| Glint360k |r34-0.1 | 95.16 | 96.56 | 98.33 | 98.78 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r34_fp16_0.1/training.log)| -| Glint360k |r50-0.1 | 95.61 | 96.97 | 98.38 | 99.20 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r50_fp16_0.1/training.log)| -| Glint360k |r100-0.1 | 95.88 | 97.32 | 98.48 | 99.29 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r100_fp16_0.1/training.log)| - -[comment]: <> (More details see [model.md](docs/modelzoo.md) in docs.) - - -## [Speed Benchmark](docs/speed_benchmark.md) - -**Arcface Torch** can train large-scale face recognition training set efficiently and quickly. When the number of -classes in training sets is greater than 300K and the training is sufficient, partial fc sampling strategy will get same -accuracy with several times faster training performance and smaller GPU memory. -Partial FC is a sparse variant of the model parallel architecture for large sacle face recognition. Partial FC use a -sparse softmax, where each batch dynamicly sample a subset of class centers for training. In each iteration, only a -sparse part of the parameters will be updated, which can reduce a lot of GPU memory and calculations. With Partial FC, -we can scale trainset of 29 millions identities, the largest to date. Partial FC also supports multi-machine distributed -training and mixed precision training. 
- -![Image text](https://github.com/anxiangsir/insightface_arcface_log/blob/master/partial_fc_v2.png) - -More details see -[speed_benchmark.md](docs/speed_benchmark.md) in docs. - -### 1. Training speed of different parallel methods (samples / second), Tesla V100 32GB * 8. (Larger is better) - -`-` means training failed because of gpu memory limitations. - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 4681 | 4824 | 5004 | -|1400000 | **1672** | 3043 | 4738 | -|5500000 | **-** | **1389** | 3975 | -|8000000 | **-** | **-** | 3565 | -|16000000 | **-** | **-** | 2679 | -|29000000 | **-** | **-** | **1855** | - -### 2. GPU memory cost of different parallel methods (MB per GPU), Tesla V100 32GB * 8. (Smaller is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 7358 | 5306 | 4868 | -|1400000 | 32252 | 11178 | 6056 | -|5500000 | **-** | 32188 | 9854 | -|8000000 | **-** | **-** | 12310 | -|16000000 | **-** | **-** | 19950 | -|29000000 | **-** | **-** | 32324 | - -## Evaluation ICCV2021-MFR and IJB-C - -More details see [eval.md](docs/eval.md) in docs. - -## Test - -We tested many versions of PyTorch. Please create an issue if you are having trouble. - -- [x] torch 1.6.0 -- [x] torch 1.7.1 -- [x] torch 1.8.0 -- [x] torch 1.9.0 - -## Citation - -``` -@inproceedings{deng2019arcface, - title={Arcface: Additive angular margin loss for deep face recognition}, - author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - pages={4690--4699}, - year={2019} -} -@inproceedings{an2020partical_fc, - title={Partial FC: Training 10 Million Identities on a Single Machine}, - author={An, Xiang and Zhu, Xuhan and Xiao, Yang and Wu, Lan and Zhang, Ming and Gao, Yuan and Qin, Bin and - Zhang, Debing and Fu Ying}, - booktitle={Arxiv 2010.05222}, - year={2020} -} -``` diff --git a/spaces/4Taps/SadTalker/src/utils/audio.py b/spaces/4Taps/SadTalker/src/utils/audio.py deleted file mode 100644 index 89433eb4c681112804fbed72b157700f553739a8..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/utils/audio.py +++ /dev/null @@ -1,136 +0,0 @@ -import librosa -import librosa.filters -import numpy as np -# import tensorflow as tf -from scipy import signal -from scipy.io import wavfile -from src.utils.hparams import hparams as hp - -def load_wav(path, sr): - return librosa.core.load(path, sr=sr)[0] - -def save_wav(wav, path, sr): - wav *= 32767 / max(0.01, np.max(np.abs(wav))) - #proposed by @dsmiller - wavfile.write(path, sr, wav.astype(np.int16)) - -def save_wavenet_wav(wav, path, sr): - librosa.output.write_wav(path, wav, sr=sr) - -def preemphasis(wav, k, preemphasize=True): - if preemphasize: - return signal.lfilter([1, -k], [1], wav) - return wav - -def inv_preemphasis(wav, k, inv_preemphasize=True): - if inv_preemphasize: - return signal.lfilter([1], [1, -k], wav) - return wav - -def get_hop_size(): - hop_size = hp.hop_size - if hop_size is None: - assert hp.frame_shift_ms is not None - hop_size = int(hp.frame_shift_ms / 1000 * hp.sample_rate) - return hop_size - -def linearspectrogram(wav): - D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize)) - S = _amp_to_db(np.abs(D)) - hp.ref_level_db - - if hp.signal_normalization: - return _normalize(S) - return S - -def melspectrogram(wav): - D = _stft(preemphasis(wav, 
hp.preemphasis, hp.preemphasize)) - S = _amp_to_db(_linear_to_mel(np.abs(D))) - hp.ref_level_db - - if hp.signal_normalization: - return _normalize(S) - return S - -def _lws_processor(): - import lws - return lws.lws(hp.n_fft, get_hop_size(), fftsize=hp.win_size, mode="speech") - -def _stft(y): - if hp.use_lws: - return _lws_processor(hp).stft(y).T - else: - return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=get_hop_size(), win_length=hp.win_size) - -########################################################## -#Those are only correct when using lws!!! (This was messing with Wavenet quality for a long time!) -def num_frames(length, fsize, fshift): - """Compute number of time frames of spectrogram - """ - pad = (fsize - fshift) - if length % fshift == 0: - M = (length + pad * 2 - fsize) // fshift + 1 - else: - M = (length + pad * 2 - fsize) // fshift + 2 - return M - - -def pad_lr(x, fsize, fshift): - """Compute left and right padding - """ - M = num_frames(len(x), fsize, fshift) - pad = (fsize - fshift) - T = len(x) + 2 * pad - r = (M - 1) * fshift + fsize - T - return pad, pad + r -########################################################## -#Librosa correct padding -def librosa_pad_lr(x, fsize, fshift): - return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0] - -# Conversions -_mel_basis = None - -def _linear_to_mel(spectogram): - global _mel_basis - if _mel_basis is None: - _mel_basis = _build_mel_basis() - return np.dot(_mel_basis, spectogram) - -def _build_mel_basis(): - assert hp.fmax <= hp.sample_rate // 2 - return librosa.filters.mel(sr=hp.sample_rate, n_fft=hp.n_fft, n_mels=hp.num_mels, - fmin=hp.fmin, fmax=hp.fmax) - -def _amp_to_db(x): - min_level = np.exp(hp.min_level_db / 20 * np.log(10)) - return 20 * np.log10(np.maximum(min_level, x)) - -def _db_to_amp(x): - return np.power(10.0, (x) * 0.05) - -def _normalize(S): - if hp.allow_clipping_in_normalization: - if hp.symmetric_mels: - return np.clip((2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value, - -hp.max_abs_value, hp.max_abs_value) - else: - return np.clip(hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)), 0, hp.max_abs_value) - - assert S.max() <= 0 and S.min() - hp.min_level_db >= 0 - if hp.symmetric_mels: - return (2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value - else: - return hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)) - -def _denormalize(D): - if hp.allow_clipping_in_normalization: - if hp.symmetric_mels: - return (((np.clip(D, -hp.max_abs_value, - hp.max_abs_value) + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) - + hp.min_level_db) - else: - return ((np.clip(D, 0, hp.max_abs_value) * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db) - - if hp.symmetric_mels: - return (((D + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) + hp.min_level_db) - else: - return ((D * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db) diff --git a/spaces/7eu7d7/anime-ai-detect-fucker/attacker/__init__.py b/spaces/7eu7d7/anime-ai-detect-fucker/attacker/__init__.py deleted file mode 100644 index 7530520945ce3b7e63f5c24ef9e1093dd7dcb431..0000000000000000000000000000000000000000 --- a/spaces/7eu7d7/anime-ai-detect-fucker/attacker/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .base import * -from .PGD import * -from .FGSM import * \ No newline at end of file diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/base_model.py 
b/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/base_model.py deleted file mode 100644 index 8de1d16f0c7fa52d8067139abc6e769e96d0a6a1..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/base_model.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import numpy as np -import torch -from torch.autograd import Variable -from pdb import set_trace as st -from IPython import embed - -class BaseModel(): - def __init__(self): - pass; - - def name(self): - return 'BaseModel' - - def initialize(self, use_gpu=True, gpu_ids=[0]): - self.use_gpu = use_gpu - self.gpu_ids = gpu_ids - - def forward(self): - pass - - def get_image_paths(self): - pass - - def optimize_parameters(self): - pass - - def get_current_visuals(self): - return self.input - - def get_current_errors(self): - return {} - - def save(self, label): - pass - - # helper saving function that can be used by subclasses - def save_network(self, network, path, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(path, save_filename) - torch.save(network.state_dict(), save_path) - - # helper loading function that can be used by subclasses - def load_network(self, network, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(self.save_dir, save_filename) - print('Loading network from %s'%save_path) - network.load_state_dict(torch.load(save_path)) - - def update_learning_rate(): - pass - - def get_image_paths(self): - return self.image_paths - - def save_done(self, flag=False): - np.save(os.path.join(self.save_dir, 'done_flag'),flag) - np.savetxt(os.path.join(self.save_dir, 'done_flag'),[flag,],fmt='%i') diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/scheduler.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/scheduler.py deleted file mode 100644 index 7151ffbab25a113673b7627027b443b27f22cb0f..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/training/scheduler.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np - - -def assign_learning_rate(optimizer, new_lr): - for param_group in optimizer.param_groups: - param_group["lr"] = new_lr - - -def _warmup_lr(base_lr, warmup_length, step): - return base_lr * (step + 1) / warmup_length - - -def cosine_lr(optimizer, base_lr, warmup_length, steps): - def _lr_adjuster(step): - if step < warmup_length: - lr = _warmup_lr(base_lr, warmup_length, step) - else: - e = step - warmup_length - es = steps - warmup_length - lr = 0.5 * (1 + np.cos(np.pi * e / es)) * base_lr - assign_learning_rate(optimizer, lr) - return lr - - return _lr_adjuster diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/models.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/models.py deleted file mode 100644 index 22e8017b6d70c8399b3be6a2555485634c03e72d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/models.py +++ /dev/null @@ -1,414 +0,0 @@ -# Copyright (c) 2022 NVIDIA CORPORATION. -# Licensed under the MIT license. - -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. 
- - -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -import numpy as np -from .activations import Snake,SnakeBeta -from .alias_free_torch import * -import os -from omegaconf import OmegaConf - -LRELU_SLOPE = 0.1 - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - -class AMPBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5), activation=None): - super(AMPBlock1, self).__init__() - self.h = h - - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - self.num_layers = len(self.convs1) + len(self.convs2) # total number of conv layers - - if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError("activation incorrectly specified. 
check the config file and look for 'activation'.") - - def forward(self, x): - acts1, acts2 = self.activations[::2], self.activations[1::2] - for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): - xt = a1(x) - xt = c1(xt) - xt = a2(xt) - xt = c2(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class AMPBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3), activation=None): - super(AMPBlock2, self).__init__() - self.h = h - - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - self.num_layers = len(self.convs) # total number of conv layers - - if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.") - - def forward(self, x): - for c, a in zip (self.convs, self.activations): - xt = a(x) - xt = c(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class BigVGAN(torch.nn.Module): - # this is our main BigVGAN model. Applies anti-aliased periodic activation for resblocks. - def __init__(self, h): - super(BigVGAN, self).__init__() - self.h = h - - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - - # pre conv - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - - # define which AMPBlock to use. BigVGAN uses AMPBlock1 as default - resblock = AMPBlock1 if h.resblock == '1' else AMPBlock2 - - # transposed conv-based upsamplers. 
does not apply anti-aliasing - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append(nn.ModuleList([ - weight_norm(ConvTranspose1d(h.upsample_initial_channel // (2 ** i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2)) - ])) - - # residual blocks using anti-aliased multi-periodicity composition modules (AMP) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d, activation=h.activation)) - - # post conv - if h.activation == "snake": # periodic nonlinearity with snake function and anti-aliasing - activation_post = Snake(ch, alpha_logscale=h.snake_logscale) - self.activation_post = Activation1d(activation=activation_post) - elif h.activation == "snakebeta": # periodic nonlinearity with snakebeta function and anti-aliasing - activation_post = SnakeBeta(ch, alpha_logscale=h.snake_logscale) - self.activation_post = Activation1d(activation=activation_post) - else: - raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.") - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - - # weight initialization - for i in range(len(self.ups)): - self.ups[i].apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - # pre conv - x = self.conv_pre(x) - - for i in range(self.num_upsamples): - # upsampling - for i_up in range(len(self.ups[i])): - x = self.ups[i][i_up](x) - # AMP blocks - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - - # post conv - x = self.activation_post(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - for l_i in l: - remove_weight_norm(l_i) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, h, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.d_mult = h.discriminator_channel_mult - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, int(32*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(32*self.d_mult), int(128*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(128*self.d_mult), int(512*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(512*self.d_mult), int(1024*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(int(1024*self.d_mult), int(1024*self.d_mult), (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(int(1024*self.d_mult), 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = 
F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, h): - super(MultiPeriodDiscriminator, self).__init__() - self.mpd_reshapes = h.mpd_reshapes - print("mpd_reshapes: {}".format(self.mpd_reshapes)) - discriminators = [DiscriminatorP(h, rs, use_spectral_norm=h.use_spectral_norm) for rs in self.mpd_reshapes] - self.discriminators = nn.ModuleList(discriminators) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorR(nn.Module): - def __init__(self, cfg, resolution): - super().__init__() - - self.resolution = resolution - assert len(self.resolution) == 3, \ - "MRD layer requires list with len=3, got {}".format(self.resolution) - self.lrelu_slope = LRELU_SLOPE - - norm_f = weight_norm if cfg.use_spectral_norm == False else spectral_norm - if hasattr(cfg, "mrd_use_spectral_norm"): - print("INFO: overriding MRD use_spectral_norm as {}".format(cfg.mrd_use_spectral_norm)) - norm_f = weight_norm if cfg.mrd_use_spectral_norm == False else spectral_norm - self.d_mult = cfg.discriminator_channel_mult - if hasattr(cfg, "mrd_channel_mult"): - print("INFO: overriding mrd channel multiplier as {}".format(cfg.mrd_channel_mult)) - self.d_mult = cfg.mrd_channel_mult - - self.convs = nn.ModuleList([ - norm_f(nn.Conv2d(1, int(32*self.d_mult), (3, 9), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))), - norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 3), padding=(1, 1))), - ]) - self.conv_post = norm_f(nn.Conv2d(int(32 * self.d_mult), 1, (3, 3), padding=(1, 1))) - - def forward(self, x): - fmap = [] - - x = self.spectrogram(x) - x = x.unsqueeze(1) - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, self.lrelu_slope) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - def spectrogram(self, x): - n_fft, hop_length, win_length = self.resolution - x = F.pad(x, (int((n_fft - hop_length) / 2), int((n_fft - hop_length) / 2)), mode='reflect') - x = x.squeeze(1) - x = torch.stft(x, n_fft=n_fft, hop_length=hop_length, win_length=win_length, center=False, return_complex=True) - x = torch.view_as_real(x) # [B, F, TT, 2] - mag = torch.norm(x, p=2, dim =-1) #[B, F, TT] - - return mag - - -class MultiResolutionDiscriminator(nn.Module): - def __init__(self, cfg, debug=False): - super().__init__() - self.resolutions = cfg.resolutions - assert len(self.resolutions) == 3,\ - "MRD requires list of list with len=3, each element having a list with len=3. 
got {}".\ - format(self.resolutions) - self.discriminators = nn.ModuleList( - [DiscriminatorR(cfg, resolution) for resolution in self.resolutions] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(x=y) - y_d_g, fmap_g = d(x=y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss*2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - - -class VocoderBigVGAN(object): - def __init__(self, ckpt_vocoder,device='cuda'): - vocoder_sd = torch.load(os.path.join(ckpt_vocoder,'best_netG.pt'), map_location='cpu') - - vocoder_args = OmegaConf.load(os.path.join(ckpt_vocoder,'args.yml')) - - self.generator = BigVGAN(vocoder_args) - self.generator.load_state_dict(vocoder_sd['generator']) - self.generator.eval() - - self.device = device - self.generator.to(self.device) - - def vocode(self, spec): - with torch.no_grad(): - if isinstance(spec,np.ndarray): - spec = torch.from_numpy(spec).unsqueeze(0) - spec = spec.to(dtype=torch.float32,device=self.device) - return self.generator(spec).squeeze().cpu().numpy() - - def __call__(self, wav): - return self.vocode(wav) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/mobilevit-small_4xb32_2000e_3c_noF/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/mobilevit-small_4xb32_2000e_3c_noF/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AUBADA-ALARABI/poetry202/README.md b/spaces/AUBADA-ALARABI/poetry202/README.md deleted file mode 100644 index b19a751ca3854ec2c2b7ac5c56aece4cc38657c6..0000000000000000000000000000000000000000 --- a/spaces/AUBADA-ALARABI/poetry202/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Poetry2023 -emoji: 👁 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.16.0 -app_file: app.py -pinned: false -duplicated_from: Abdllh/poetry202 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Abhi5ingh/fashionsd/sdfile.py b/spaces/Abhi5ingh/fashionsd/sdfile.py deleted file mode 100644 index ad62243e77d18ca26e80dcf8263c80f746c76302..0000000000000000000000000000000000000000 --- a/spaces/Abhi5ingh/fashionsd/sdfile.py +++ /dev/null @@ -1,89 +0,0 @@ -import gc -import datetime -import os -import re -from typing import Literal - -import streamlit as st -import torch -from diffusers import ( - StableDiffusionPipeline, - StableDiffusionControlNetPipeline, - ControlNetModel, - EulerDiscreteScheduler, - DDIMScheduler, -) - -PIPELINES = Literal["txt2img", "sketch2img"] - -@st.cache_resource(max_entries=1) 
-def get_pipelines( name:PIPELINES, enable_cpu_offload = False, ) -> StableDiffusionPipeline: - pipe = None - - if name == "txt2img": - pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16,cache_dir="D:/huggingface/CACHE/") - pipe.unet.load_attn_procs("./") - pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images)) - elif name == "sketch2img": - controlnet = ControlNetModel.from_pretrained("Abhi5ingh/model_dresscode", torch_dtype=torch.float16,cache_dir="D:/huggingface/CACHE/") - pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet = controlnet, torch_dtype = torch.float16,cache_dir="D:/huggingface/CACHE/") - pipe.unet.load_attn_procs("./") - pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images)) - - if pipe is None: - raise Exception(f"Pipeline not Found {name}") - - if enable_cpu_offload: - print("Enabling cpu offloading for the given pipeline") - pipe.enable_model_cpu_offload() - else: - pipe = pipe.to("cuda") - return pipe - -def generate( - prompt, - pipeline_name: PIPELINES, - image = None, - num_inference_steps = 30, - negative_prompt = None, - width = 512, - height = 512, - guidance_scale = 7.5, - controlnet_conditioning_scale = None, - enable_cpu_offload= False): - negative_prompt = negative_prompt if negative_prompt else None - p = st.progress(0) - callback = lambda step,*_: p.progress(step/num_inference_steps) - pipe = get_pipelines(pipeline_name,enable_cpu_offload=enable_cpu_offload) - torch.cuda.empty_cache() - - kwargs = dict( - prompt = prompt, - negative_prompt=negative_prompt, - num_inference_steps=num_inference_steps, - callback=callback, - guidance_scale=guidance_scale, - ) - print("kwargs",kwargs) - - if pipeline_name =="sketch2img" and image: - kwargs.update(image=image,controlnet_conditioning_scale=controlnet_conditioning_scale) - elif pipeline_name == "txt2img": - kwargs.update(width = width, height = height) - else: - raise Exception( - f"Cannot generate image for pipeline {pipeline_name} and {prompt}") - images = pipe(**kwargs).images - image = images[0] - - os.makedirs("outputs", exist_ok=True) - - filename = ( - "outputs/" - + re.sub(r"\s+", "_",prompt)[:30] - + f"_{datetime.datetime.now().timestamp()}" - ) - image.save(f"{filename}.png") - with open(f"{filename}.txt", "w") as f: - f.write(f"Prompt: {prompt}\n\nNegative Prompt:{negative_prompt}") - return image diff --git a/spaces/Abubakari/Sepsis-fastapi-prediction-app/Dockerfile b/spaces/Abubakari/Sepsis-fastapi-prediction-app/Dockerfile deleted file mode 100644 index a64500d2926f582016f23c07409ce09abbe21df8..0000000000000000000000000000000000000000 --- a/spaces/Abubakari/Sepsis-fastapi-prediction-app/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install -r /code/requirements.txt - -COPY . . 
- -# Expose the port on which the application will run -EXPOSE 7860 - -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/r/[id]/message/[messageId]/prompt/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/r/[id]/message/[messageId]/prompt/$types.d.ts deleted file mode 100644 index 984e7ed4449e9d93e1823b3ee3e4229eac3e84bd..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/r/[id]/message/[messageId]/prompt/$types.d.ts +++ /dev/null @@ -1,9 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { id: string; messageId: string } -type RouteId = '/r/[id]/message/[messageId]/prompt'; - -export type EntryGenerator = () => Promise> | Array; -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/settings/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/settings/$types.d.ts deleted file mode 100644 index 11802b80d201eeb689785235bcb7a8a567da64f3..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/settings/$types.d.ts +++ /dev/null @@ -1,28 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { } -type RouteId = '/settings'; -type MaybeWithVoid = {} extends T ? T | void : T; -export type RequiredKeys = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T]; -type OutputDataShape = MaybeWithVoid> & Partial> & Record> -type EnsureDefined = T extends null | undefined ? {} : T; -type OptionalUnion, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude]?: never } & U : never; -export type Snapshot = Kit.Snapshot; -type PageServerParentData = EnsureDefined; -type PageParentData = EnsureDefined; - -export type PageServerLoad = OutputDataShape> = Kit.ServerLoad; -export type PageServerLoadEvent = Parameters[0]; -type ExcludeActionFailure = T extends Kit.ActionFailure ? never : T extends void ? never : T; -type ActionsSuccess any>> = { [Key in keyof T]: ExcludeActionFailure>>; }[keyof T]; -type ExtractActionFailure = T extends Kit.ActionFailure ? X extends void ? 
never : X : never; -type ActionsFailure any>> = { [Key in keyof T]: Exclude>>, void>; }[keyof T]; -type ActionsExport = typeof import('../../../../../src/routes/settings/+page.server.js').actions -export type SubmitFunction = Kit.SubmitFunction>, Expand>> -export type ActionData = Expand> | null; -export type PageServerData = null; -export type PageData = Expand; -export type Action | void = Record | void> = Kit.Action -export type Actions | void = Record | void> = Kit.Actions -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateToast.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateToast.js deleted file mode 100644 index 7d1954b05c263a877bab70b58ccf860d8eab2f55..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateToast.js +++ /dev/null @@ -1,8 +0,0 @@ -import CreateAnyLabel from './utils/CreateAnyLabel.js'; -import Toast from '../../toast/Toast.js'; - -var CreateToast = function (scene, data, view, styles, customBuilders) { - return CreateAnyLabel(scene, data, view, styles, customBuilders, Toast); -} - -export default CreateToast; \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/ops/fused_bias_act.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/ops/fused_bias_act.py deleted file mode 100644 index 9aeddfa257cf6148b7336644cbac7de276e31700..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/ops/fused_bias_act.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Custom TensorFlow ops for efficient bias and activation.""" - -import os -import numpy as np -import tensorflow as tf -from .. 
import custom_ops -from ...util import EasyDict - - -def _get_plugin(): - return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu') - -# ---------------------------------------------------------------------------- - - -activation_funcs = { - 'linear': EasyDict(func=lambda x, **_: x, def_alpha=None, def_gain=1.0, cuda_idx=1, ref='y', zero_2nd_grad=True), - 'relu': EasyDict(func=lambda x, **_: tf.nn.relu(x), def_alpha=None, def_gain=np.sqrt(2), cuda_idx=2, ref='y', zero_2nd_grad=True), - 'lrelu': EasyDict(func=lambda x, alpha, **_: tf.nn.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', zero_2nd_grad=True), - 'tanh': EasyDict(func=lambda x, **_: tf.nn.tanh(x), def_alpha=None, def_gain=1.0, cuda_idx=4, ref='y', zero_2nd_grad=False), - 'sigmoid': EasyDict(func=lambda x, **_: tf.nn.sigmoid(x), def_alpha=None, def_gain=1.0, cuda_idx=5, ref='y', zero_2nd_grad=False), - 'elu': EasyDict(func=lambda x, **_: tf.nn.elu(x), def_alpha=None, def_gain=1.0, cuda_idx=6, ref='y', zero_2nd_grad=False), - 'selu': EasyDict(func=lambda x, **_: tf.nn.selu(x), def_alpha=None, def_gain=1.0, cuda_idx=7, ref='y', zero_2nd_grad=False), - 'softplus': EasyDict(func=lambda x, **_: tf.nn.softplus(x), def_alpha=None, def_gain=1.0, cuda_idx=8, ref='y', zero_2nd_grad=False), - 'swish': EasyDict(func=lambda x, **_: tf.nn.sigmoid(x) * x, def_alpha=None, def_gain=np.sqrt(2), cuda_idx=9, ref='x', zero_2nd_grad=False), -} - -# ---------------------------------------------------------------------------- - - -def fused_bias_act(x, b=None, axis=1, act='linear', alpha=None, gain=None, impl='cuda'): - r"""Fused bias and activation function. - - Adds bias `b` to activation tensor `x`, evaluates activation function `act`, - and scales the result by `gain`. Each of the steps is optional. In most cases, - the fused op is considerably more efficient than performing the same calculation - using standard TensorFlow ops. It supports first and second order gradients, - but not third order gradients. - - Args: - x: Input activation tensor. Can have any shape, but if `b` is defined, the - dimension corresponding to `axis`, as well as the rank, must be known. - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The shape must be known, and it must match the dimension of `x` - corresponding to `axis`. - axis: The dimension in `x` corresponding to the elements of `b`. - The value of `axis` is ignored if `b` is not specified. - act: Name of the activation function to evaluate, or `"linear"` to disable. - Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc. - See `activation_funcs` for a full list. `None` is not allowed. - alpha: Shape parameter for the activation function, or `None` to use the default. - gain: Scaling factor for the output tensor, or `None` to use default. - See `activation_funcs` for the default scaling of each activation function. - If unsure, consider specifying `1.0`. - impl: Name of the implementation to use. Can be `"ref"` or `"cuda"` (default). - - Returns: - Tensor of the same shape and datatype as `x`. - """ - - impl_dict = { - 'ref': _fused_bias_act_ref, - 'cuda': _fused_bias_act_cuda, - } - return impl_dict[impl](x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain) - -# ---------------------------------------------------------------------------- - - -def _fused_bias_act_ref(x, b, axis, act, alpha, gain): - """Slow reference implementation of `fused_bias_act()` using standard TensorFlow ops.""" - - # Validate arguments. 
- x = tf.convert_to_tensor(x) - b = tf.convert_to_tensor( - b) if b is not None else tf.constant([], dtype=x.dtype) - act_spec = activation_funcs[act] - assert b.shape.rank == 1 and ( - b.shape[0] == 0 or b.shape[0] == x.shape[axis]) - assert b.shape[0] == 0 or 0 <= axis < x.shape.rank - if alpha is None: - alpha = act_spec.def_alpha - if gain is None: - gain = act_spec.def_gain - - # Add bias. - if b.shape[0] != 0: - x += tf.reshape(b, [-1 if i == - axis else 1 for i in range(x.shape.rank)]) - - # Evaluate activation function. - x = act_spec.func(x, alpha=alpha) - - # Scale by gain. - if gain != 1: - x *= gain - return x - -# ---------------------------------------------------------------------------- - - -def _fused_bias_act_cuda(x, b, axis, act, alpha, gain): - """Fast CUDA implementation of `fused_bias_act()` using custom ops.""" - - # Validate arguments. - x = tf.convert_to_tensor(x) - empty_tensor = tf.constant([], dtype=x.dtype) - b = tf.convert_to_tensor(b) if b is not None else empty_tensor - act_spec = activation_funcs[act] - assert b.shape.rank == 1 and ( - b.shape[0] == 0 or b.shape[0] == x.shape[axis]) - assert b.shape[0] == 0 or 0 <= axis < x.shape.rank - if alpha is None: - alpha = act_spec.def_alpha - if gain is None: - gain = act_spec.def_gain - - # Special cases. - if act == 'linear' and b is None and gain == 1.0: - return x - if act_spec.cuda_idx is None: - return _fused_bias_act_ref(x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain) - - # CUDA kernel. - cuda_kernel = _get_plugin().fused_bias_act - cuda_kwargs = dict(axis=axis, act=act_spec.cuda_idx, - alpha=alpha, gain=gain) - - # Forward pass: y = func(x, b). - def func_y(x, b): - y = cuda_kernel(x=x, b=b, ref=empty_tensor, grad=0, **cuda_kwargs) - y.set_shape(x.shape) - return y - - # Backward pass: dx, db = grad(dy, x, y) - def grad_dx(dy, x, y): - ref = {'x': x, 'y': y}[act_spec.ref] - dx = cuda_kernel(x=dy, b=empty_tensor, ref=ref, grad=1, **cuda_kwargs) - dx.set_shape(x.shape) - return dx - - def grad_db(dx): - if b.shape[0] == 0: - return empty_tensor - db = dx - if axis < x.shape.rank - 1: - db = tf.reduce_sum(db, list(range(axis + 1, x.shape.rank))) - if axis > 0: - db = tf.reduce_sum(db, list(range(axis))) - db.set_shape(b.shape) - return db - - # Second order gradients: d_dy, d_x = grad2(d_dx, d_db, x, y) - def grad2_d_dy(d_dx, d_db, x, y): - ref = {'x': x, 'y': y}[act_spec.ref] - d_dy = cuda_kernel(x=d_dx, b=d_db, ref=ref, grad=1, **cuda_kwargs) - d_dy.set_shape(x.shape) - return d_dy - - def grad2_d_x(d_dx, d_db, x, y): - ref = {'x': x, 'y': y}[act_spec.ref] - d_x = cuda_kernel(x=d_dx, b=d_db, ref=ref, grad=2, **cuda_kwargs) - d_x.set_shape(x.shape) - return d_x - - # Fast version for piecewise-linear activation funcs. - @tf.custom_gradient - def func_zero_2nd_grad(x, b): - y = func_y(x, b) - - @tf.custom_gradient - def grad(dy): - dx = grad_dx(dy, x, y) - db = grad_db(dx) - - def grad2(d_dx, d_db): - d_dy = grad2_d_dy(d_dx, d_db, x, y) - return d_dy - return (dx, db), grad2 - return y, grad - - # Slow version for general activation funcs. - @tf.custom_gradient - def func_nonzero_2nd_grad(x, b): - y = func_y(x, b) - - def grad_wrap(dy): - @tf.custom_gradient - def grad_impl(dy, x): - dx = grad_dx(dy, x, y) - db = grad_db(dx) - - def grad2(d_dx, d_db): - d_dy = grad2_d_dy(d_dx, d_db, x, y) - d_x = grad2_d_x(d_dx, d_db, x, y) - return d_dy, d_x - return (dx, db), grad2 - return grad_impl(dy, x) - return y, grad_wrap - - # Which version to use? 
- if act_spec.zero_2nd_grad: - return func_zero_2nd_grad(x, b) - return func_nonzero_2nd_grad(x, b) - -# ---------------------------------------------------------------------------- diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/projectors/w_plus_projector.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/projectors/w_plus_projector.py deleted file mode 100644 index b61fa0159b02a052bc8a52341a53ec4b62ced657..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/projectors/w_plus_projector.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Project given image to the latent space of pretrained network pickle.""" - -import copy -import wandb -import numpy as np -import torch -import torch.nn.functional as F -from tqdm import tqdm -from configs import global_config, hyperparameters -import dnnlib -from utils.log_utils import log_image_from_w - - -def project( - G, - # [C,H,W] and dynamic range [0,255], W & H must match G output resolution - target: torch.Tensor, - *, - num_steps=1000, - w_avg_samples=10000, - initial_learning_rate=0.01, - initial_noise_factor=0.05, - lr_rampdown_length=0.25, - lr_rampup_length=0.05, - noise_ramp_length=0.75, - regularize_noise_weight=1e5, - verbose=False, - device: torch.device, - use_wandb=False, - initial_w=None, - image_log_step=global_config.image_rec_result_log_snapshot, - w_name: str -): - print('inside training/projectors/w_plus_projector') - print(target.shape, G.img_channels, G.img_resolution * 2, G.img_resolution) - assert target.shape == ( - G.img_channels, G.img_resolution * 2, G.img_resolution) - - def logprint(*args): - if verbose: - print(*args) - - G = copy.deepcopy(G).eval().requires_grad_( - False).to(device).float() # type: ignore - - # Compute w stats. - logprint( - f'Computing W midpoint and stddev using {w_avg_samples} samples...') - z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim) - w_samples = G.mapping(torch.from_numpy( - z_samples).to(device), None) # [N, L, C] - w_samples = w_samples[:, :1, :].cpu( - ).numpy().astype(np.float32) # [N, 1, C] - w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C] - w_avg_tensor = torch.from_numpy(w_avg).to(global_config.device) - w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5 - - start_w = initial_w if initial_w is not None else w_avg - - # Setup noise inputs. - noise_bufs = {name: buf for ( - name, buf) in G.synthesis.named_buffers() if 'noise_const' in name} - - # Load VGG16 feature detector. - url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - with dnnlib.util.open_url(url) as f: - vgg16 = torch.jit.load(f).eval().to(device) - - # Features for target image. 
- target_images = target.unsqueeze(0).to(device).to(torch.float32) - if target_images.shape[2] > 256: - target_images = F.interpolate( - target_images, size=(256, 256), mode='area') - target_features = vgg16( - target_images, resize_images=False, return_lpips=True) - - start_w = np.repeat(start_w, G.mapping.num_ws, axis=1) - w_opt = torch.tensor(start_w, dtype=torch.float32, device=device, - requires_grad=True) # pylint: disable=not-callable - - optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999), - lr=hyperparameters.first_inv_lr) - - # Init noise. - for buf in noise_bufs.values(): - buf[:] = torch.randn_like(buf) - buf.requires_grad = True - - for step in tqdm(range(num_steps)): - - # Learning rate schedule. - t = step / num_steps - w_noise_scale = w_std * initial_noise_factor * \ - max(0.0, 1.0 - t / noise_ramp_length) ** 2 - lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length) - lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi) - lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length) - lr = initial_learning_rate * lr_ramp - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - # Synth images from opt_w. - w_noise = torch.randn_like(w_opt) * w_noise_scale - ws = (w_opt + w_noise) - - synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True) - - # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images. - synth_images = (synth_images + 1) * (255 / 2) - if synth_images.shape[2] > 256: - synth_images = F.interpolate( - synth_images, size=(256, 256), mode='area') - - # Features for synth images. - synth_features = vgg16( - synth_images, resize_images=False, return_lpips=True) - dist = (target_features - synth_features).square().sum() - - # Noise regularization. - reg_loss = 0.0 - for v in noise_bufs.values(): - noise = v[None, None, :, :] # must be [1,1,H,W] for F.avg_pool2d() - while True: - reg_loss += (noise * torch.roll(noise, - shifts=1, dims=3)).mean() ** 2 - reg_loss += (noise * torch.roll(noise, - shifts=1, dims=2)).mean() ** 2 - if noise.shape[2] <= 8: - break - noise = F.avg_pool2d(noise, kernel_size=2) - loss = dist + reg_loss * regularize_noise_weight - - if step % image_log_step == 0: - with torch.no_grad(): - if use_wandb: - global_config.training_step += 1 - wandb.log({f'first projection _{w_name}': loss.detach( - ).cpu()}, step=global_config.training_step) - log_image_from_w(w_opt, G, w_name) - - # Step - optimizer.zero_grad(set_to_none=True) - loss.backward() - optimizer.step() - logprint( - f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}') - - # Normalize noise. 
- with torch.no_grad(): - for buf in noise_bufs.values(): - buf -= buf.mean() - buf *= buf.square().mean().rsqrt() - - del G - return w_opt diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/colossalai/inference.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/colossalai/inference.py deleted file mode 100644 index 3b115c2d2b8f5bcdb3a0c053a6c71b91a965c573..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/colossalai/inference.py +++ /dev/null @@ -1,12 +0,0 @@ -import torch - -from diffusers import StableDiffusionPipeline - - -model_id = "path-to-your-trained-model" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - -prompt = "A photo of sks dog in a bucket" -image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] - -image.save("dog-bucket.png") diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py deleted file mode 100644 index 25da13d9f9221213f0efba6790c1ebb78639288c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py +++ /dev/null @@ -1,136 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import unittest - -import numpy as np -import torch - -from diffusers import StableDiffusionKDiffusionPipeline -from diffusers.utils import slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - - -enable_full_determinism() - - -@slow -@require_torch_gpu -class StableDiffusionPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_stable_diffusion_1(self): - sd_pipe = StableDiffusionKDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - sd_pipe.set_scheduler("sample_euler") - - prompt = "A painting of a squirrel eating a burger" - generator = torch.manual_seed(0) - output = sd_pipe([prompt], generator=generator, guidance_scale=9.0, num_inference_steps=20, output_type="np") - - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.0447, 0.0492, 0.0468, 0.0408, 0.0383, 0.0408, 0.0354, 0.0380, 0.0339]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_2(self): - sd_pipe = StableDiffusionKDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base") - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - sd_pipe.set_scheduler("sample_euler") - - prompt = "A painting of a squirrel eating a burger" - generator = torch.manual_seed(0) - output = sd_pipe([prompt], generator=generator, guidance_scale=9.0, num_inference_steps=20, output_type="np") - - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.1237, 0.1320, 0.1438, 0.1359, 0.1390, 0.1132, 0.1277, 0.1175, 0.1112]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-1 - - def test_stable_diffusion_karras_sigmas(self): - sd_pipe = StableDiffusionKDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base") - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - sd_pipe.set_scheduler("sample_dpmpp_2m") - - prompt = "A painting of a squirrel eating a burger" - generator = torch.manual_seed(0) - output = sd_pipe( - [prompt], - generator=generator, - guidance_scale=7.5, - num_inference_steps=15, - output_type="np", - use_karras_sigmas=True, - ) - - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array( - [0.11381689, 0.12112921, 0.1389457, 0.12549606, 0.1244964, 0.10831517, 0.11562866, 0.10867816, 0.10499048] - ) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_stable_diffusion_noise_sampler_seed(self): - sd_pipe = StableDiffusionKDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - sd_pipe.set_scheduler("sample_dpmpp_sde") - - prompt = "A painting of a squirrel eating a burger" - seed = 0 - images1 = sd_pipe( - [prompt], - generator=torch.manual_seed(seed), - noise_sampler_seed=seed, - guidance_scale=9.0, - num_inference_steps=20, - output_type="np", - ).images - images2 = sd_pipe( - [prompt], - generator=torch.manual_seed(seed), - noise_sampler_seed=seed, - guidance_scale=9.0, - num_inference_steps=20, - output_type="np", 
- ).images - - assert images1.shape == (1, 512, 512, 3) - assert images2.shape == (1, 512, 512, 3) - assert np.abs(images1.flatten() - images2.flatten()).max() < 1e-2 diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 610467c07204140bf604f8dda2aa57978c565ed3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/gcnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/Anew5128/Anew51/constants.py b/spaces/Anew5128/Anew51/constants.py deleted file mode 100644 index d7a6d0476db5afd6724c91949e9cafd9e0782acf..0000000000000000000000000000000000000000 --- a/spaces/Anew5128/Anew51/constants.py +++ /dev/null @@ -1,50 +0,0 @@ -# Constants -DEFAULT_CUDA_DEVICE = "cuda:0" -# Also try: 'Qiliang/bart-large-cnn-samsum-ElectrifAi_v10' -DEFAULT_SUMMARIZATION_MODEL = "Qiliang/bart-large-cnn-samsum-ChatGPT_v3" -# Also try: 'joeddav/distilbert-base-uncased-go-emotions-student' -DEFAULT_CLASSIFICATION_MODEL = "nateraw/bert-base-uncased-emotion" -# Also try: 'Salesforce/blip-image-captioning-base' -DEFAULT_CAPTIONING_MODEL = "Salesforce/blip-image-captioning-large" -DEFAULT_SD_MODEL = "ckpt/anything-v4.5-vae-swapped" -DEFAULT_EMBEDDING_MODEL = "sentence-transformers/all-mpnet-base-v2" -DEFAULT_REMOTE_SD_HOST = "127.0.0.1" -DEFAULT_REMOTE_SD_PORT = 7860 -DEFAULT_CHROMA_PORT = 8000 -SILERO_SAMPLES_PATH = "tts_samples" -SILERO_SAMPLE_TEXT = "The quick brown fox jumps over the lazy dog" -# ALL_MODULES = ['caption', 'summarize', 'classify', 'keywords', 'prompt', 'sd'] -DEFAULT_SUMMARIZE_PARAMS = { - "temperature": 1.0, - "repetition_penalty": 1.0, - "max_length": 500, - "min_length": 200, - "length_penalty": 1.5, - "bad_words": [ - "\n", - '"', - "*", - "[", - "]", - "{", - "}", - ":", - "(", - ")", - "<", - ">", - "Â", - "The text ends", - "The story ends", - "The text is", - "The story is", - ], -} - -PROMPT_PREFIX = "best quality, absurdres, " -NEGATIVE_PROMPT = """lowres, bad anatomy, error body, error hair, error arm, -error hands, bad hands, error fingers, bad fingers, missing fingers -error legs, bad legs, multiple legs, missing legs, error lighting, -error shadow, error reflection, text, error, extra digit, fewer digits, -cropped, worst quality, low quality, normal quality, jpeg artifacts, -signature, watermark, username, blurry""" diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/DOCS.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/DOCS.md deleted file mode 100644 index eaa4365e9a304a14ebbdb1d4d435f3a2a1f7a7d2..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/DOCS.md +++ /dev/null @@ -1,85 +0,0 @@ -# Technical description of multimodal extension - -## Working principle -Multimodality extension does most of the stuff which is required for any image input: - -- adds the UI -- saves the images as base64 JPEGs to history -- provides the hooks to the UI -- if there are images in the prompt, it: - - splits the prompt to text and image parts - - adds image start/end markers to text parts, then encodes and embeds the text parts - - calls 
the vision pipeline to embed the images - - stitches the embeddings together, and returns them to text generation -- loads the appropriate vision pipeline, selected either from model name, or by specifying --multimodal-pipeline parameter - -Now, for the pipelines, they: - -- load the required vision models -- return some consts, for example the number of tokens taken up by image -- and most importantly: return the embeddings for LLM, given a list of images - -## Prompts/history - -To save images in prompt/history, this extension is using a base64 JPEG, wrapped in a HTML tag, like so: -``` - -``` -where `{img_str}` is the actual image data. This format makes displaying them in the UI for free. Do note, that this format is required to be exactly the same, the regex used to find the images is: ``. - -## LLM input -To describe the input, let's see it on an example prompt: -``` -text1text2text3 -``` -where `textN` is N-th text, `` is N-th image, in HTML format specified above. - -**The first step is to split the prompt into image/text parts**, so we get: -``` -['text1', '', 'text2', '', 'text3'] -``` -this is done in `MultimodalEmbedder._split_prompt(...)` function, which returns a list of `PromptPart`s - dataclasses wrapping the separate parts. - -This function also appends the image start/end markers to text, which are provided by `AbstractMultimodalPipeline.image_start()` / `AbstractMultimodalPipeline.image_end()` functions. If image start is ``, and end is ``, this function will return: -``` -['text1', '', 'text2', '', 'text3'] -``` - -**The returned prompt parts are then turned into token embeddings.** - -First, they are modified to token IDs, for the text it is done using standard `modules.text_generation.encode()` function, and for the images the returned token IDs are changed to placeholders. The placeholder is a list of `N` times `placeholder token id`, where `N` is specified using `AbstractMultimodalPipeline.num_image_embeds()`, and placeholder token IDs using `AbstractMultimodalPipeline.placeholder_token_id()`. - -Now, based on the token IDs, the prompt might get truncated, especially if `max_new_tokens` are unreasonably high. Unfortunately, it can't be done simply, just by trimming the prompt to be short enough. This way will lead to sometimes splitting the prompt in the middle of an image embedding, which usually breaks the generation. Therefore, in this case, the entire image needs to be removed from input. This is done inside `MultimodalEmbedder._encode_text(...)` function. - -**After the tokenization, the tokens need to get embedded**, the text and images are once again treated separately. - -The text parts are turned to embeddings, using `AbstractMultimodalPipeline.embed_tokens(...)` function. It uses standard embedding function from the model, but to support many LLMs, the actual function is returned by the pipeline (as it might be different for different LLMs), for LLaMA it is `shared.model.model.embed_tokens(...)`. - -The image parts are turned to embeddings, using `AbstractMultimodalPipeline.embed_images(...)` function. This function is specific for a given pipeline, it takes the images as input, forwards them through vision model/projector, and returns the embeddings. - -**Now, the returned embeddings are stitched together**, using `torch.cat()`, this is creating the final input to the LLM. - -## Pipelines - -All of the pipelines should subclass `AbstractMultimodalPipeline` class. 
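-
-As a hedged illustration of that contract, a minimal custom pipeline might look like the sketch below. Only the method and helper names are taken from this document; the import path, the constructor, the marker strings, and the concrete values are assumptions made for illustration only.
-
-```python
-# Minimal pipeline sketch. The import path and all concrete values are
-# assumptions; only the method/helper names come from the docs above.
-import torch
-
-from extensions.multimodal.abstract_pipeline import AbstractMultimodalPipeline  # path assumed
-
-
-class DummyPipeline(AbstractMultimodalPipeline):
-    def __init__(self, params: dict):
-        # Honor the documented extension params via the provided helpers.
-        self.device = self._get_device("vision_device", params)
-        self.dtype = self._get_dtype("vision_bits", params)
-
-    @staticmethod
-    def name() -> str:
-        return "dummy-pipeline"  # what the user passes to --multimodal-pipeline
-
-    @staticmethod
-    def image_start() -> str:
-        return "<Img>"  # hypothetical image start marker
-
-    @staticmethod
-    def image_end() -> str:
-        return "</Img>"  # hypothetical image end marker
-
-    @staticmethod
-    def num_image_embeds() -> int:
-        return 256  # how many tokens one image takes up (assumed value)
-
-    @staticmethod
-    def placeholder_token_id() -> int:
-        return 0  # token ID used for the image placeholders (assumed value)
-
-    @staticmethod
-    def embed_tokens(input_ids: torch.Tensor) -> torch.Tensor:
-        # The docs note that for LLaMA this is shared.model.model.embed_tokens.
-        from modules import shared
-        return shared.model.model.embed_tokens(input_ids)
-
-    def embed_images(self, images) -> torch.Tensor:
-        # A real pipeline would run a vision model + projector here; this
-        # stub just returns zero embeddings of a hypothetical hidden size.
-        hidden_size = 4096
-        return torch.zeros(len(images) * self.num_image_embeds(), hidden_size,
-                           device=self.device, dtype=self.dtype)
-```
-
-Note that this only sketches the shape of the contract; a working pipeline must return real vision-model embeddings whose hidden size matches the LLM.
-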
The idea is to allow for new pipelines to be added in the same way as user extensions - git clone into `extensions/multimodal/pipelines`. - -The pipelines are the description of the vision part, containing vision model/multimodal projector. All of the pipelines should have an unique `name()`, which is then selected by user, in `--multimodal-pipeline` CLI argument. For an example, see `pipelines/llava/llava.py`. - -## Pipeline modules - -Pipelines are organized into "pipeline modules" - subdirectories in `pipelines` directory. The pipeline modules should contain a file called `pipelines.py`, that should contain the following fields: -- `available_pipelines: List[str]` - list of pipelines provided by this module, shown as the list of available pipelines to the user -- `def get_pipeline(name: str, params: dict) -> Optional[AbstractMultimodalPipeline]`: - a function to get a concrete pipeline by `name`, if `name` doesn't match any, should return `None`. `params` is the user settings for multimodal extension -- `def get_pipeline_from_model_name(model_name: str, params: dict) -> Optional[AbstractMultimodalPipeline]`: - a function to get a pipeline from `model_name`, should be eager to return `None`, unless the determination can be done clearly (for example: minigpt-4 bases on vicuna - it should never return the pipeline, but llava can, as it has its own specific LLM finetune) - -**NOTE**: A pipeline module should lazy-import the pipelines only when necessary, and it should keep its imports to minimum - -## Pipeline params - -The pipelines will get the extension `params` in the constructor. They should honor the following fields: -- `vision_device` - string, specifying `torch.device` to run the vision model (CLIP/ViT) on -- `vision_bits` - int, number of fp bits to load the vision model(s) in -- `projector_device` - string, specifying `torch.device` to run the projector models (Linear layers, QFormer, etc.) on -- `projector_bits` - int, number of fp bits to load the projector models in - -As a helper, `AbstractMultimodalPipeline` has `_get_device(self, setting_name: str, params: dict)` and `_get_dtype(self, setting_name: str, params: dict)` helper functions, which parse string/int and return `torch.device` / `torch.dtype`. diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/notebook_handler.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/notebook_handler.py deleted file mode 100644 index 9faadfed12b3afcf70d3b7611821352c1847712a..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/notebook_handler.py +++ /dev/null @@ -1,40 +0,0 @@ -""" -This module is responsible for handling and modifying the notebook text. -""" -import re - -import extensions.superboogav2.parameters as parameters - -from modules import shared -from modules.logging_colors import logger -from extensions.superboogav2.utils import create_context_text - -from .data_processor import preprocess_text - -def _remove_special_tokens(string): - pattern = r'(<\|begin-user-input\|>|<\|end-user-input\|>|<\|injection-point\|>)' - return re.sub(pattern, '', string) - - -def input_modifier_internal(string, collector): - # Sanity check. - if shared.is_chat(): - return string - - # Find the user input - pattern = re.compile(r"<\|begin-user-input\|>(.*?)<\|end-user-input\|>", re.DOTALL) - match = re.search(pattern, string) - if match: - # Preprocess the user prompt. 
- user_input = match.group(1).strip() - user_input = preprocess_text(user_input) - - logger.debug(f"Preprocessed User Input: {user_input}") - - # Get the most similar chunks - results = collector.get_sorted_by_dist(user_input, n_results=parameters.get_chunk_count(), max_token_count=int(parameters.get_max_token_count())) - - # Make the injection - string = string.replace('<|injection-point|>', create_context_text(results)) - - return _remove_special_tokens(string) \ No newline at end of file diff --git a/spaces/Aphrodite/AIChatBot-SL-Chatbot-Blenderbot/app.py b/spaces/Aphrodite/AIChatBot-SL-Chatbot-Blenderbot/app.py deleted file mode 100644 index 8b44950e7a07fc7daeac62d9a00f944fa6499ee6..0000000000000000000000000000000000000000 --- a/spaces/Aphrodite/AIChatBot-SL-Chatbot-Blenderbot/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as st -#from streamlit_chat import message as st_message -from streamlit_chat import message as st_message -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration - -st.title("JCS Advanced AI Chatting Bot") - -if "history" not in st.session_state: - st.session_state.history = [] - -def get_models(): - tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill") - model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") - return tokenizer, model - -def generate_answer(): - tokenizer, model = get_models() - user_message = st.session_state.input_text - inputs = tokenizer(st.session_state.input_text, return_tensors="pt") - result = model.generate(**inputs) - message_bot = tokenizer.decode(result[0], skip_special_tokens=True) # .replace("", "").replace("", "") - st.session_state.history.append({"message": user_message, "is_user": True}) - st.session_state.history.append({"message": message_bot, "is_user": False}) - -st.text_input("Response", key="input_text", on_change=generate_answer) - -for chat in st.session_state.history: - st_message(**chat) diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/japanese.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - 
';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/Bart92/RVC_HF/infer/modules/uvr5/mdxnet.py b/spaces/Bart92/RVC_HF/infer/modules/uvr5/mdxnet.py deleted file mode 100644 index 86a066893ad99cfed77788027a9deb8ed486a7f2..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/modules/uvr5/mdxnet.py +++ /dev/null @@ -1,246 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch -from tqdm import tqdm - -cpu = torch.device("cpu") - - -class ConvTDFNetTrim: - def __init__( - self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024 - ): - super(ConvTDFNetTrim, self).__init__() - - self.dim_f = dim_f - self.dim_t = 2**dim_t - self.n_fft = n_fft - self.hop = hop - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to( - device - ) - self.target_name = target_name - self.blender = "blender" in model_name - - self.dim_c = 4 - out_c = self.dim_c * 4 if target_name == "*" else self.dim_c - self.freq_pad = torch.zeros( - [1, out_c, self.n_bins - self.dim_f, self.dim_t] - ).to(device) - - self.n = L // 2 - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop, - window=self.window, - center=True, - return_complex=True, - ) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape( - [-1, self.dim_c, self.n_bins, self.dim_t] - ) - return x[:, :, : self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = ( - self.freq_pad.repeat([x.shape[0], 1, 1, 1]) - if freq_pad is None - else freq_pad - ) - x = torch.cat([x, freq_pad], -2) - c = 4 * 2 if self.target_name == "*" else 2 - x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape( - [-1, 2, self.n_bins, self.dim_t] - ) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft( - x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True - ) - return x.reshape([-1, c, 
self.chunk_size]) - - -def get_models(device, dim_f, dim_t, n_fft): - return ConvTDFNetTrim( - device=device, - model_name="Conv-TDF", - target_name="vocals", - L=11, - dim_f=dim_f, - dim_t=dim_t, - n_fft=n_fft, - ) - - -class Predictor: - def __init__(self, args): - import onnxruntime as ort - - logger.info(ort.get_available_providers()) - self.args = args - self.model_ = get_models( - device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft - ) - self.model = ort.InferenceSession( - os.path.join(args.onnx, self.model_.target_name + ".onnx"), - providers=[ - "CUDAExecutionProvider", - "DmlExecutionProvider", - "CPUExecutionProvider", - ], - ) - logger.info("ONNX load done") - - def demix(self, mix): - samples = mix.shape[-1] - margin = self.args.margin - chunk_size = self.args.chunks * 44100 - assert not margin == 0, "margin cannot be zero!" - if margin > chunk_size: - margin = chunk_size - - segmented_mix = {} - - if self.args.chunks == 0 or samples < chunk_size: - chunk_size = samples - - counter = -1 - for skip in range(0, samples, chunk_size): - counter += 1 - - s_margin = 0 if counter == 0 else margin - end = min(skip + chunk_size + margin, samples) - - start = skip - s_margin - - segmented_mix[skip] = mix[:, start:end].copy() - if end == samples: - break - - sources = self.demix_base(segmented_mix, margin_size=margin) - """ - mix:(2,big_sample) - segmented_mix:offset->(2,small_sample) - sources:(1,2,big_sample) - """ - return sources - - def demix_base(self, mixes, margin_size): - chunked_sources = [] - progress_bar = tqdm(total=len(mixes)) - progress_bar.set_description("Processing") - for mix in mixes: - cmix = mixes[mix] - sources = [] - n_sample = cmix.shape[1] - model = self.model_ - trim = model.n_fft // 2 - gen_size = model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - mix_p = np.concatenate( - (np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1 - ) - mix_waves = [] - i = 0 - while i < n_sample + pad: - waves = np.array(mix_p[:, i : i + model.chunk_size]) - mix_waves.append(waves) - i += gen_size - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu) - with torch.no_grad(): - _ort = self.model - spek = model.stft(mix_waves) - if self.args.denoise: - spec_pred = ( - -_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5 - + _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5 - ) - tar_waves = model.istft(torch.tensor(spec_pred)) - else: - tar_waves = model.istft( - torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0]) - ) - tar_signal = ( - tar_waves[:, :, trim:-trim] - .transpose(0, 1) - .reshape(2, -1) - .numpy()[:, :-pad] - ) - - start = 0 if mix == 0 else margin_size - end = None if mix == list(mixes.keys())[::-1][0] else -margin_size - if margin_size == 0: - end = None - sources.append(tar_signal[:, start:end]) - - progress_bar.update(1) - - chunked_sources.append(sources) - _sources = np.concatenate(chunked_sources, axis=-1) - # del self.model - progress_bar.close() - return _sources - - def prediction(self, m, vocal_root, others_root, format): - os.makedirs(vocal_root, exist_ok=True) - os.makedirs(others_root, exist_ok=True) - basename = os.path.basename(m) - mix, rate = librosa.load(m, mono=False, sr=44100) - if mix.ndim == 1: - mix = np.asfortranarray([mix, mix]) - mix = mix.T - sources = self.demix(mix.T) - opt = sources[0].T - if format in ["wav", "flac"]: - sf.write( - "%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate - ) - sf.write("%s/%s_others.%s" % (others_root, 
basename, format), opt, rate) - else: - path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename) - path_other = "%s/%s_others.wav" % (others_root, basename) - sf.write(path_vocal, mix - opt, rate) - sf.write(path_other, opt, rate) - if os.path.exists(path_vocal): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_vocal, path_vocal[:-4] + ".%s" % format) - ) - if os.path.exists(path_other): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_other, path_other[:-4] + ".%s" % format) - ) - - -class MDXNetDereverb: - def __init__(self, chunks, device): - self.onnx = "assets/uvr5_weights/onnx_dereverb_By_FoxJoy" - self.shifts = 10 # 'Predict with randomised equivariant stabilisation' - self.mixing = "min_mag" # ['default','min_mag','max_mag'] - self.chunks = chunks - self.margin = 44100 - self.dim_t = 9 - self.dim_f = 3072 - self.n_fft = 6144 - self.denoise = True - self.pred = Predictor(self) - self.device = device - - def path_audio(self, input, vocal_root, others_root, format): - self.pred.prediction(input, vocal_root, others_root, format) diff --git a/spaces/Benson/text-generation/Examples/Caramelo Crush Amigos Saga Apkpure.md b/spaces/Benson/text-generation/Examples/Caramelo Crush Amigos Saga Apkpure.md deleted file mode 100644 index 137f34361ebdc2b4849652012e2d9afdf8818ede..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Caramelo Crush Amigos Saga Apkpure.md +++ /dev/null @@ -1,51 +0,0 @@ - -

Blockman Go Editor Adventure APK: A Free and Fun Game-Creation Platform

    -

Do you love pixel games? Do you want to make your own games and share them with others? If so, you should try Blockman Go Editor Adventure APK, a free and fun game-creation platform that lets you build and play pixel games on your Android device. In this article, we explain what Blockman Go Editor Adventure APK is, how to download and install it, how to use it, and what the benefits of using it are.

    -

What Is Blockman Go Editor Adventure APK?

    -

Blockman Go Editor Adventure APK is an app with two main functions: a game maker and a game player.

    -

candy crush friends saga apkpure


    Download File ✦✦✦ https://bltlly.com/2v6IYc



    -

A game-maker app for pixel games

    -

Blockman Go Editor Adventure APK is a development tool that integrates a Scene Editor, Trigger Editor, Actor Editor, UI Editor, Script Editor, and other functions. It provides a completely free creation platform for pixel-game fans. You can use its tools and features to build your own games, such as Bed Wars, Jail Break, Sky Wars, Parkour, and more. You can also customize game settings such as the mode, map, and rules.

    -

A game-player app for Blockman Go games

    -

Blockman Go Editor Adventure APK is also a game-player app that lets you play games made by other users or by yourself. You can browse and download games from the Blockman Go community, or upload your own games to share with others. You can also join online multiplayer games with players from around the world: chat with them, make friends, or compete against them.

    -

How to Download and Install Blockman Go Editor Adventure APK

    -

Blockman Go Editor Adventure APK is not available on the Google Play Store, so you need to download it from other sources. Here are the steps to download and install it on your device:

    -

Download it from APKCombo or another source

    - -

Enable unknown sources on your device

    -

Before installing the APK file, you need to enable unknown sources on your device, which allows you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on.

    -

Install the APK file and launch the app

    -

After downloading the APK file, locate it on your device and tap it to install. Follow the on-screen instructions to complete the installation. Once installed, launch the app and enjoy creating and playing pixel games.
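
Before tapping Install, it can be worth confirming that the download is not corrupted or truncated. APK files are ordinary ZIP archives, so a quick integrity check can be run from any computer with Python. This is a minimal sketch, not part of the app's official tooling; the file name below is a placeholder for whatever you actually downloaded.

```python
import zipfile

APK_PATH = "blockman_go_editor.apk"  # placeholder: use your downloaded file's name

# An APK is a ZIP archive; a corrupt or partial download will fail these checks.
with zipfile.ZipFile(APK_PATH) as apk:
    bad_entry = apk.testzip()  # returns the first corrupt member, or None
    if bad_entry is not None:
        raise SystemExit(f"Corrupt entry in archive: {bad_entry}")
    # Every valid APK contains a manifest; its absence suggests a bogus file.
    if "AndroidManifest.xml" not in apk.namelist():
        raise SystemExit("No AndroidManifest.xml found - this is not a valid APK")

print("Archive looks intact.")
```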

    -

How to Use Blockman Go Editor Adventure APK

    -

Using Blockman Go Editor Adventure APK is easy and fun. Here are some tips on how to use it:

    -

Create your own games with its tools and features

    -

To create your own games, tap the "Create" button on the app's main screen. You will see the tools and features available for building games. For example, you can use the Scene Editor to design the game scene, the Trigger Editor to set up the game logic, the Actor Editor to create the game characters, the UI Editor to design the game interface, and the Script Editor to write the game code. You can also use the Asset Store to download assets for your games, such as models, textures, and sounds. You can preview your games at any time and test them on your device.

    -

    -

Play games made by other users or by yourself

    -

To play games made by other users or by yourself, tap the "Play" button on the app's main screen. You will see a list of games you can download and play. You can also search for games by keyword or category. To play a game, tap it and wait for it to load. You can also rate and comment on the games you play.

    -

Share your games with the Blockman Go community

    - -

What Are the Benefits of Blockman Go Editor Adventure APK?

    -

Blockman Go Editor Adventure APK is a great app for pixel-game fans. Here are some of the benefits of using it:

    -

Free and easy to use

    -

Blockman Go Editor Adventure APK is completely free to download and use. You don't have to pay anything to create or play games. The app is also easy to use, with a friendly interface and clear instructions; no prior experience or knowledge is needed.

    -

Creative and fun

    -

Blockman Go Editor Adventure APK is a creative and fun app that lets you unleash your imagination and express yourself. You can create any kind of game you want, with nearly unlimited options, and you can play games made by other users or by yourself, enjoying different genres and styles of pixel games.

    -

Social and interactive

    -

Blockman Go Editor Adventure APK is a social and interactive app that connects you with other pixel-game fans from around the world. You can chat with them, make friends, or compete with them, and join online multiplayer games to have fun together.

    -

Conclusion

    -

Blockman Go Editor Adventure APK is a free and fun game-creation platform that lets you create and play pixel games on your Android device. It is both a game-maker app for pixel games and a game-player app for Blockman Go games. It is easy to download and install, easy to use, creative and fun, and social and interactive. If you like pixel games, you should definitely give Blockman Go Editor Adventure APK a try.

    -

Frequently Asked Questions

    -

Here are some frequently asked questions about Blockman Go Editor Adventure APK:

    -

Q: Is Blockman Go Editor Adventure APK safe to use?

    - -

Q: Is Blockman Go Editor Adventure APK compatible with my device?

    -

A: Blockman Go Editor Adventure APK is compatible with most Android devices running Android 4.1 or higher. However, some devices may not support certain features or functions of the app due to hardware limitations or system settings.

    -

Q: How do I update Blockman Go Editor Adventure APK?

    -

A: To update Blockman Go Editor Adventure APK, download the latest version of the app from APKCombo or another source and install it over the existing version. Alternatively, you can check for updates within the app by tapping the "Settings" button on the main screen and then the "Check for updates" option.

    -

Q: How do I contact Blockman Go support?

    -

A: To contact Blockman Go support, you can send an email to support@blockmango.net or visit the official website at https://www.blockmango.net. You can also follow them on social media, such as Facebook, Twitter, Instagram, and YouTube.

    -

Q: How do I give feedback or suggestions for Blockman Go Editor Adventure APK?

    -

A: To give feedback or suggestions for Blockman Go Editor Adventure APK, you can use the "Feedback" option within the app, or send an email to feedback@blockmango.net. You can also rate and review the app on APKCombo or other sources, and share your opinions and ideas with other users.

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Ciudad Congelada Mod Apk Diamantes Ilimitados.md b/spaces/Benson/text-generation/Examples/Ciudad Congelada Mod Apk Diamantes Ilimitados.md deleted file mode 100644 index 60a4264330d2236a2321c9f4f3993734ba459e6a..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Ciudad Congelada Mod Apk Diamantes Ilimitados.md +++ /dev/null @@ -1,55 +0,0 @@ - -

Frozen City Mod APK Unlimited Diamonds: How to Download and Install It

    -

If you are looking for an exciting and challenging survival game, you may want to check out Frozen City. This game tests your skill and strategy as you try to build and manage a base in a frozen wasteland. If you want to enjoy the game without limits or restrictions, you may want to try Frozen City Mod APK Unlimited Diamonds, a modified version of the original game that gives you access to unlimited resources such as diamonds, coins, gems, and more. In this article, we explain what Frozen City is, what Frozen City Mod APK Unlimited Diamonds is, and how to download and install it on your Android device.

    -

frozen city mod apk unlimited diamonds


Download: https://bltlly.com/2v6Kbm



    -

What Is Frozen City?

    -

Frozen City is a game developed by Game Insight, a company that specializes in immersive, engaging mobile games. The game is set in a post-apocalyptic world where a mysterious virus has turned most of the population into zombies. The survivors have to find shelter and resources in the frozen city, where they face not only the undead but also hostile factions and natural disasters.

    -

A survival game set in a post-apocalyptic world

    -

In Frozen City, you have to survive in a harsh environment where every decision matters. You have to scavenge for food, water, fuel, and other supplies, and craft weapons, tools, and equipment. You also have to defend your base against zombie and raider attacks, and explore the city for clues and secrets. You have to balance your needs and wants, as well as your morality and humanity.

    -

A base-building game with multiple stages and tasks

    - -

A game with impressive graphics and sound

    -

Frozen City features striking graphics and sound effects that create an immersive atmosphere. The game has realistic weather effects such as snow, fog, rain, and wind, along with detailed animations, shadows, lighting, and textures that make the city feel alive. It also has convincing sound effects, such as zombie groans, gunfire, and explosions, and a captivating soundtrack that matches the mood of the game.

    -

What Is Frozen City Mod APK Unlimited Diamonds?

    -

Frozen City Mod APK Unlimited Diamonds is a modified version of the original game that gives you unlimited resources such as diamonds, coins, and gems. These resources are essential for building and upgrading your base and for unlocking new features and content. In the original game, however, they are limited and hard to come by: you either spend real money or wait long hours to get them.

    -

    -

A modified version of the original game that gives you unlimited resources

    -

Frozen City Mod APK Unlimited Diamonds is a hacked version of the original game that bypasses the in-game currency system. This means you can get unlimited diamonds, coins, gems, and other resources without spending money or waiting. You can use these resources to build and upgrade your base faster, and to access all of the game's features and content without restrictions.

    -

A way to enjoy the game without spending real money or waiting long hours

    - -

A way to unlock all of the game's features and content

    -

Frozen City Mod APK Unlimited Diamonds is a way to unlock all of the features and content that are otherwise locked or unavailable in the original game. You can unlock new locations, stages, missions, weapons, equipment, survivors, and more. You can also access premium features such as VIP status, exclusive items, and bonuses, and enjoy the game without ads or interruptions.

    -

How to Download and Install Frozen City Mod APK Unlimited Diamonds

    -

If you want to try Frozen City Mod APK Unlimited Diamonds, you need to download and install it on your Android device, following a few steps and precautions along the way. Here are the steps:

    -

Step 1: Allow unknown apps on your Android device

    -

Since Frozen City Mod APK Unlimited Diamonds is not available on the official Google Play Store, you need to allow your device to install apps from unknown sources. To do this, go to your device settings, then Security, and turn on the option that says "Unknown sources" or "Allow installation of apps from unknown sources". This lets you install apps that are not from the Google Play Store.

    -

Step 2: Install a file manager app on your device

    -

You also need a file manager app on your device to help you locate and manage the APK file you download. A file manager lets you browse and organize the files and folders on your device. You can use any file manager you prefer, such as ES File Explorer, File Manager, or Astro File Manager.

    -

Step 3: Download the APK file from a reputable source

    - -
      -
    • [APKPure]
    • -
    • [APKHome]
    • -
    • [ModDroid]
    • -
    -

Once you find a reliable source, click the download button or link and wait for the download to finish.

    -

Step 4: Locate and tap the APK file to install it

    -

After downloading the APK file, locate it on your device using the file manager app you installed earlier. The APK file should be in your Downloads folder, or wherever you saved it. Once you find it, tap it to start the installation. You may see a pop-up asking for permission to install the app; tap "Install" or "Allow" and wait for the installation to complete.

    -

Step 5: Enjoy the game with unlimited diamonds and other resources

    -

Congratulations! You have successfully downloaded and installed Frozen City Mod APK Unlimited Diamonds on your Android device. You can now enjoy the game with unlimited resources such as diamonds, coins, and gems, and unlock all of its features and content without restrictions. Have fun playing Frozen City Mod APK Unlimited Diamonds!

    -

Conclusion

    -

Frozen City is a survival game that tests your skill and strategy as you try to build and manage a base in a frozen wasteland. If you want to enjoy the game without hassle or frustration, you may want to try Frozen City Mod APK Unlimited Diamonds, a modified version of the original game that gives you unlimited resources such as diamonds, coins, and gems, and unlocks all of the game's features and content without restrictions.

    - -

Frequently Asked Questions

    -

Here are some of the most frequently asked questions about Frozen City Mod APK Unlimited Diamonds:

    -

Q: Is Frozen City Mod APK Unlimited Diamonds safe to use?

    -

A: Yes, Frozen City Mod APK Unlimited Diamonds is safe to use as long as you download it from a trustworthy source and follow the steps and precautions in this article. That said, you should always be careful when installing apps from unknown sources, as they may contain malware or viruses that can harm your device or steal your data. You should also scan the APK file with an antivirus app before installing it.
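
Beyond an antivirus scan, a simple way to confirm you received exactly the file a site claims to publish is to compare its SHA-256 checksum against the one listed on the download page, when the site provides one. Here is a minimal sketch in Python; the file name and expected hash below are placeholders, not real values for this mod.

```python
import hashlib

APK_PATH = "frozen_city_mod.apk"  # placeholder file name
EXPECTED_SHA256 = "0123abcd..."   # placeholder: copy the real value from the download page

# Hash the file in chunks so large APKs don't need to fit in memory.
digest = hashlib.sha256()
with open(APK_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

actual = digest.hexdigest()
if actual == EXPECTED_SHA256:
    print("Checksum matches - the file is what the site published.")
else:
    print(f"Checksum mismatch!\n expected: {EXPECTED_SHA256}\n got:      {actual}")
```

A matching checksum only proves the file was not altered in transit; it says nothing about whether the publisher's file itself is trustworthy.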

    -

Q: Is Frozen City Mod APK Unlimited Diamonds compatible with my device?

    -

A: Frozen City Mod APK Unlimited Diamonds is compatible with most Android devices running Android 4.4 or higher. However, some devices may not support the game or the mod due to differing specifications or settings. Check your device's compatibility before downloading and installing the mod.

    -

Q: Will Frozen City Mod APK Unlimited Diamonds affect my device's performance?

    -

A: Frozen City Mod APK Unlimited Diamonds should not significantly affect your device's performance, as it does not require much storage or memory. Some devices may still experience lag or crashes due to low RAM or a slow CPU; closing other apps and clearing the cache before playing helps avoid these issues.

    -

Q: Does Frozen City Mod APK Unlimited Diamonds work with the latest version of the game?

    -

A: Frozen City Mod APK Unlimited Diamonds is updated regularly to match the latest version of the game, though some updates take longer than others to be released. Check the mod's version before downloading and installing it to make sure it is compatible with the latest version of the game.

    -

Q: Can I play Frozen City Mod APK Unlimited Diamonds online with other players?

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Arthdal Crnicas Episodio 16.md b/spaces/Benson/text-generation/Examples/Descargar Arthdal Crnicas Episodio 16.md deleted file mode 100644 index 6181c142b4793a42a1c6ea75a95b6ee8893dc9f9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Arthdal Crnicas Episodio 16.md +++ /dev/null @@ -1,51 +0,0 @@ - -

How to Download Arthdal Chronicles Episode 16

    -

Are you a fan of Arthdal Chronicles, the epic Korean drama that has captivated millions of viewers around the world? If so, you must be eagerly awaiting episode 16, the season 1 finale. But how can you watch it online without any trouble? In this article, we show you how to download Arthdal Chronicles episode 16 from Netflix, one of the best streaming platforms carrying this amazing show. But first, let's look at what Arthdal Chronicles is about and why you should watch it.

    -

download arthdal chronicles episode 16


    Download » https://bltlly.com/2v6Mvl



    -

What Is Arthdal Chronicles?

    -

Arthdal Chronicles is a historical fantasy drama that tells the story of the ancient city of Arthdal and its inhabitants as they struggle for power and survival. The drama features a star-studded cast including Song Joong-ki, Jang Dong-gun, Kim Ji-won, and Kim Ok-bin, and is divided into three parts: The Children of Prophecy; The Sky Turning Inside Out, Rising Land; and The Prelude to All Legends.

    -

In the earlier episodes, we saw Eunseom (Song Joong-ki), a half-human, half-Neanthal born with a special destiny, escape from Arthdal and meet other tribes who help him grow into a leader. We also watched Tagon (Jang Dong-gun), a charismatic warrior who is secretly part Neanthal, rise to power in Arthdal and face down his many enemies. Meanwhile, Tanya (Kim Ji-won), a descendant of the Wahan clan who was abducted by Arthdal soldiers, becomes Arthdal's high priestess and learns her true identity. And Saya (also played by Song Joong-ki), Eunseom's mysterious twin brother who lives in a hidden chamber, schemes to overthrow Tagon and seize Arthdal.

    - -

Why You Should Watch Arthdal Chronicles Episode 16

    -

There are many reasons to watch Arthdal Chronicles episode 16 online. Here are a few of them:

    -
      -
• You will enjoy the stunning visuals and cinematography that bring the ancient world of Arthdal to life. The drama boasts high production values and realistic set design that make you feel like part of the story.
• You will be impressed by the excellent acting and chemistry of the cast, who bring their complex, varied characters to life with passion and skill, delivering captivating performances that will make you laugh, cry, and cheer.
• You will be immersed in a rich, original story that blends fantasy, history, and culture in a unique way, exploring themes such as power, love, identity, and destiny in a creative, engaging manner that keeps you hooked to the end.
• You will be satisfied by a rewarding ending that wraps up the story in a meaningful, memorable way. The finale promises to answer your remaining questions and leave you with a sense of closure.
    -

So if you are looking for a drama that entertains you, inspires you, and makes you think, Arthdal Chronicles episode 16 is the perfect choice.

    -

Where to Download Arthdal Chronicles Episode 16

| Streaming Platform | Video Quality | Subtitles | Price | Download Option |
| --- | --- | --- | --- | --- |
| Netflix | HD | Multiple languages | $8.99/month (Basic plan) | Yes |
| Viki | HD | Multiple languages | $4.99/month (Standard plan) | No |
| Viu | HD | Multiple languages | $6.49/month (Premium plan) | Yes |
| Kocowa | HD | English only | $6.99/month (Standard plan) | No |

    Como se puede ver en la tabla, Netflix es la mejor plataforma para descargar Arthdal Chronicles episodio 16 en línea. Tiene la más alta calidad de vídeo, la mayoría de las opciones de subtítulos, el precio más bajo, y la opción de descarga que le permite ver Arthdal Chronicles sin conexión. Por lo tanto, recomendamos Netflix como la mejor plataforma para descargar Arthdal Chronicles episodio 16.

    -

    -

How to Download Arthdal Chronicles Episode 16 from Netflix

    -

If you have decided to download Arthdal Chronicles episode 16 from Netflix, here are the steps to follow:

    -
      -
1. Sign up for a Netflix account if you don't already have one. You can choose from three plans: Basic ($8.99/month), Standard ($13.99/month), or Premium ($17.99/month). Basic lets you watch on one screen at a time, Standard on two screens, and Premium on four. All plans let you download content to your devices.
2. Download the Netflix app on your device if you don't already have it. You can get it for free from the App Store or the Google Play Store.
3. Open the Netflix app and sign in with your account details.
4. Search for "Arthdal Chronicles" in the search bar and select it from the results.
5. Select "Episode 16" from the episode list and tap the "Download" icon next to it. The icon looks like a downward arrow with a circle around it.
6. Wait for the download to complete. You can check its progress in the "Downloads" section of the app.
    -

Here is a screenshot showing how to download Arthdal Chronicles episode 16 from Netflix:

[Screenshot: the Netflix app showing how to download Arthdal Chronicles episode 16]

A tip for watching Arthdal Chronicles offline: you can adjust the video quality of your downloads to save storage space on your device. To do this, go to the app's "App Settings" section and tap "Download Video Quality". You can choose from four options: Standard (uses less storage), High (uses more storage), Medium (moderate storage), or Smart (automatically picks the best quality for your device and network conditions).

    -

Conclusion

    -

In conclusion, Arthdal Chronicles episode 16 is a must-watch for every fan of the drama. It is the season 1 finale that will reveal the fate of the characters and the city of Arthdal. You can download it from Netflix, the best streaming platform for the show, offering high-quality video, multiple subtitle options, and a download option for offline viewing. All you need to do is follow the simple steps outlined above and enjoy watching Arthdal Chronicles offline. Don't miss this epic finale that will leave you speechless.

    -

So what are you waiting for? Download Arthdal Chronicles episode 16 from Netflix today and witness the end of an era.

    -

Frequently Asked Questions

    -

Q: When will season 2 of Arthdal Chronicles come out?

    -

A: There is no official confirmation yet on whether Arthdal Chronicles will get a season 2. However, some sources suggest that the production team plans to start filming season 2 in 2024, after the actors finish their military service. We hope this is true and that we will see more of Arthdal Chronicles in the future.

    -

Q: How many episodes are there in Arthdal Chronicles?

    - -

Q: Who are the main actors in Arthdal Chronicles?

    -

A: The main actors in Arthdal Chronicles are Song Joong-ki, Jang Dong-gun, Kim Ji-won, and Kim Ok-bin. Song Joong-ki plays Eunseom and Saya, twin brothers with very different fates. Jang Dong-gun plays Tagon, a powerful warrior who becomes the king of Arthdal. Kim Ji-won plays Tanya, a high priestess and Eunseom's love interest. Kim Ok-bin plays Taealha, a cunning politician and Tagon's love interest.

    -

Q: What genre is Arthdal Chronicles?

    -

A: Arthdal Chronicles is a historical fantasy drama that blends elements of mythology, history, and culture. It is set in a fictional ancient land called Arth, where different tribes and species coexist, and explores themes such as power, love, identity, and destiny.

    -

Q: Where can I watch Arthdal Chronicles with English subtitles?

    -

A: You can watch Arthdal Chronicles with English subtitles on Netflix, Viki, Viu, or Kocowa. However, we recommend Netflix as the best platform for watching with English subtitles, because it has the best video quality, the most subtitle options, and a download option for offline viewing.

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/helpers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/helpers.py deleted file mode 100644 index 9588b3b780159a2a2d23c7f84a4404ec350e2b65..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/pyparsing/helpers.py +++ /dev/null @@ -1,1088 +0,0 @@ -# helpers.py -import html.entities -import re -import typing - -from . import __diag__ -from .core import * -from .util import _bslash, _flatten, _escape_regex_range_chars - - -# -# global helpers -# -def delimited_list( - expr: Union[str, ParserElement], - delim: Union[str, ParserElement] = ",", - combine: bool = False, - min: typing.Optional[int] = None, - max: typing.Optional[int] = None, - *, - allow_trailing_delim: bool = False, -) -> ParserElement: - """Helper to define a delimited list of expressions - the delimiter - defaults to ','. By default, the list elements and delimiters can - have intervening whitespace, and comments, but this can be - overridden by passing ``combine=True`` in the constructor. If - ``combine`` is set to ``True``, the matching tokens are - returned as a single token string, with the delimiters included; - otherwise, the matching tokens are returned as a list of tokens, - with the delimiters suppressed. - - If ``allow_trailing_delim`` is set to True, then the list may end with - a delimiter. - - Example:: - - delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc'] - delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] - """ - if isinstance(expr, str_type): - expr = ParserElement._literalStringClass(expr) - - dlName = "{expr} [{delim} {expr}]...{end}".format( - expr=str(expr.copy().streamline()), - delim=str(delim), - end=" [{}]".format(str(delim)) if allow_trailing_delim else "", - ) - - if not combine: - delim = Suppress(delim) - - if min is not None: - if min < 1: - raise ValueError("min must be greater than 0") - min -= 1 - if max is not None: - if min is not None and max <= min: - raise ValueError("max must be greater than, or equal to min") - max -= 1 - delimited_list_expr = expr + (delim + expr)[min, max] - - if allow_trailing_delim: - delimited_list_expr += Opt(delim) - - if combine: - return Combine(delimited_list_expr).set_name(dlName) - else: - return delimited_list_expr.set_name(dlName) - - -def counted_array( - expr: ParserElement, - int_expr: typing.Optional[ParserElement] = None, - *, - intExpr: typing.Optional[ParserElement] = None, -) -> ParserElement: - """Helper to define a counted list of expressions. - - This helper defines a pattern of the form:: - - integer expr expr expr... - - where the leading integer tells how many expr expressions follow. - The matched tokens returns the array of expr tokens as a list - the - leading count token is suppressed. - - If ``int_expr`` is specified, it should be a pyparsing expression - that produces an integer value. 
- - Example:: - - counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd'] - - # in this parser, the leading integer value is given in binary, - # '10' indicating that 2 values are in the array - binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2)) - counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd'] - - # if other fields must be parsed after the count but before the - # list items, give the fields results names and they will - # be preserved in the returned ParseResults: - count_with_metadata = integer + Word(alphas)("type") - typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items") - result = typed_array.parse_string("3 bool True True False") - print(result.dump()) - - # prints - # ['True', 'True', 'False'] - # - items: ['True', 'True', 'False'] - # - type: 'bool' - """ - intExpr = intExpr or int_expr - array_expr = Forward() - - def count_field_parse_action(s, l, t): - nonlocal array_expr - n = t[0] - array_expr <<= (expr * n) if n else Empty() - # clear list contents, but keep any named results - del t[:] - - if intExpr is None: - intExpr = Word(nums).set_parse_action(lambda t: int(t[0])) - else: - intExpr = intExpr.copy() - intExpr.set_name("arrayLen") - intExpr.add_parse_action(count_field_parse_action, call_during_try=True) - return (intExpr + array_expr).set_name("(len) " + str(expr) + "...") - - -def match_previous_literal(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_literal(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches a previous literal, will also match the leading - ``"1:1"`` in ``"1:10"``. If this is not desired, use - :class:`match_previous_expr`. Do *not* use with packrat parsing - enabled. - """ - rep = Forward() - - def copy_token_to_repeater(s, l, t): - if t: - if len(t) == 1: - rep << t[0] - else: - # flatten t tokens - tflat = _flatten(t.as_list()) - rep << And(Literal(tt) for tt in tflat) - else: - rep << Empty() - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def match_previous_expr(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_expr(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches by expressions, will *not* match the leading ``"1:1"`` - in ``"1:10"``; the expressions are evaluated first, and then - compared, so ``"1"`` is compared with ``"10"``. Do *not* use - with packrat parsing enabled. 
- """ - rep = Forward() - e2 = expr.copy() - rep <<= e2 - - def copy_token_to_repeater(s, l, t): - matchTokens = _flatten(t.as_list()) - - def must_match_these_tokens(s, l, t): - theseTokens = _flatten(t.as_list()) - if theseTokens != matchTokens: - raise ParseException( - s, l, "Expected {}, found{}".format(matchTokens, theseTokens) - ) - - rep.set_parse_action(must_match_these_tokens, callDuringTry=True) - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def one_of( - strs: Union[typing.Iterable[str], str], - caseless: bool = False, - use_regex: bool = True, - as_keyword: bool = False, - *, - useRegex: bool = True, - asKeyword: bool = False, -) -> ParserElement: - """Helper to quickly define a set of alternative :class:`Literal` s, - and makes sure to do longest-first testing when there is a conflict, - regardless of the input order, but returns - a :class:`MatchFirst` for best performance. - - Parameters: - - - ``strs`` - a string of space-delimited literals, or a collection of - string literals - - ``caseless`` - treat all literals as caseless - (default= ``False``) - - ``use_regex`` - as an optimization, will - generate a :class:`Regex` object; otherwise, will generate - a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if - creating a :class:`Regex` raises an exception) - (default= ``True``) - - ``as_keyword`` - enforce :class:`Keyword`-style matching on the - generated expressions - (default= ``False``) - - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility, - but will be removed in a future release - - Example:: - - comp_oper = one_of("< = > <= >= !=") - var = Word(alphas) - number = Word(nums) - term = var | number - comparison_expr = term + comp_oper + term - print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12")) - - prints:: - - [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] - """ - asKeyword = asKeyword or as_keyword - useRegex = useRegex and use_regex - - if ( - isinstance(caseless, str_type) - and __diag__.warn_on_multiple_string_args_to_oneof - ): - warnings.warn( - "More than one string argument passed to one_of, pass" - " choices as a list or space-delimited string", - stacklevel=2, - ) - - if caseless: - isequal = lambda a, b: a.upper() == b.upper() - masks = lambda a, b: b.upper().startswith(a.upper()) - parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral - else: - isequal = lambda a, b: a == b - masks = lambda a, b: b.startswith(a) - parseElementClass = Keyword if asKeyword else Literal - - symbols: List[str] = [] - if isinstance(strs, str_type): - symbols = strs.split() - elif isinstance(strs, Iterable): - symbols = list(strs) - else: - raise TypeError("Invalid argument to one_of, expected string or iterable") - if not symbols: - return NoMatch() - - # reorder given symbols to take care to avoid masking longer choices with shorter ones - # (but only if the given symbols are not just single characters) - if any(len(sym) > 1 for sym in symbols): - i = 0 - while i < len(symbols) - 1: - cur = symbols[i] - for j, other in enumerate(symbols[i + 1 :]): - if isequal(other, cur): - del symbols[i + j + 1] - break - elif masks(cur, other): - del symbols[i + j + 1] - symbols.insert(i, other) - break - else: - i += 1 - - if useRegex: - re_flags: int = re.IGNORECASE if caseless else 0 - - try: - if all(len(sym) == 1 for sym in symbols): - # symbols are just single characters, create range regex pattern - 
patt = "[{}]".format( - "".join(_escape_regex_range_chars(sym) for sym in symbols) - ) - else: - patt = "|".join(re.escape(sym) for sym in symbols) - - # wrap with \b word break markers if defining as keywords - if asKeyword: - patt = r"\b(?:{})\b".format(patt) - - ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols)) - - if caseless: - # add parse action to return symbols as specified, not in random - # casing as found in input string - symbol_map = {sym.lower(): sym for sym in symbols} - ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()]) - - return ret - - except re.error: - warnings.warn( - "Exception creating Regex for one_of, building MatchFirst", stacklevel=2 - ) - - # last resort, just use MatchFirst - return MatchFirst(parseElementClass(sym) for sym in symbols).set_name( - " | ".join(symbols) - ) - - -def dict_of(key: ParserElement, value: ParserElement) -> ParserElement: - """Helper to easily and clearly define a dictionary by specifying - the respective patterns for the key and value. Takes care of - defining the :class:`Dict`, :class:`ZeroOrMore`, and - :class:`Group` tokens in the proper order. The key pattern - can include delimiting markers or punctuation, as long as they are - suppressed, thereby leaving the significant key text. The value - pattern can include named results, so that the :class:`Dict` results - can include named token fields. - - Example:: - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - print(attr_expr[1, ...].parse_string(text).dump()) - - attr_label = label - attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join) - - # similar to Dict, but simpler call format - result = dict_of(attr_label, attr_value).parse_string(text) - print(result.dump()) - print(result['shape']) - print(result.shape) # object attribute access works too - print(result.as_dict()) - - prints:: - - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - SQUARE - {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} - """ - return Dict(OneOrMore(Group(key + value))) - - -def original_text_for( - expr: ParserElement, as_string: bool = True, *, asString: bool = True -) -> ParserElement: - """Helper to return the original, untokenized text for a given - expression. Useful to restore the parsed fields of an HTML start - tag into the raw tag text itself, or to revert separate tokens with - intervening whitespace back to the original matching input text. By - default, returns astring containing the original parsed text. - - If the optional ``as_string`` argument is passed as - ``False``, then the return value is - a :class:`ParseResults` containing any results names that - were originally matched, and a single token containing the original - matched text from the input string. So if the expression passed to - :class:`original_text_for` contains expressions with defined - results names, you must set ``as_string`` to ``False`` if you - want to preserve those results name values. - - The ``asString`` pre-PEP8 argument is retained for compatibility, - but will be removed in a future release. 
- - Example:: - - src = "this is test bold text normal text " - for tag in ("b", "i"): - opener, closer = make_html_tags(tag) - patt = original_text_for(opener + SkipTo(closer) + closer) - print(patt.search_string(src)[0]) - - prints:: - - [' bold text '] - ['text'] - """ - asString = asString and as_string - - locMarker = Empty().set_parse_action(lambda s, loc, t: loc) - endlocMarker = locMarker.copy() - endlocMarker.callPreparse = False - matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") - if asString: - extractText = lambda s, l, t: s[t._original_start : t._original_end] - else: - - def extractText(s, l, t): - t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]] - - matchExpr.set_parse_action(extractText) - matchExpr.ignoreExprs = expr.ignoreExprs - matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection) - return matchExpr - - -def ungroup(expr: ParserElement) -> ParserElement: - """Helper to undo pyparsing's default grouping of And expressions, - even if all but one are non-empty. - """ - return TokenConverter(expr).add_parse_action(lambda t: t[0]) - - -def locatedExpr(expr: ParserElement) -> ParserElement: - """ - (DEPRECATED - future code should use the Located class) - Helper to decorate a returned token with its starting and ending - locations in the input string. - - This helper adds the following results names: - - - ``locn_start`` - location where matched expression begins - - ``locn_end`` - location where matched expression ends - - ``value`` - the actual parsed results - - Be careful if the input text contains ```` characters, you - may want to call :class:`ParserElement.parseWithTabs` - - Example:: - - wd = Word(alphas) - for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): - print(match) - - prints:: - - [[0, 'ljsdf', 5]] - [[8, 'lksdjjf', 15]] - [[18, 'lkkjj', 23]] - """ - locator = Empty().set_parse_action(lambda ss, ll, tt: ll) - return Group( - locator("locn_start") - + expr("value") - + locator.copy().leaveWhitespace()("locn_end") - ) - - -def nested_expr( - opener: Union[str, ParserElement] = "(", - closer: Union[str, ParserElement] = ")", - content: typing.Optional[ParserElement] = None, - ignore_expr: ParserElement = quoted_string(), - *, - ignoreExpr: ParserElement = quoted_string(), -) -> ParserElement: - """Helper method for defining nested lists enclosed in opening and - closing delimiters (``"("`` and ``")"`` are the default). - - Parameters: - - ``opener`` - opening character for a nested list - (default= ``"("``); can also be a pyparsing expression - - ``closer`` - closing character for a nested list - (default= ``")"``); can also be a pyparsing expression - - ``content`` - expression for items within the nested lists - (default= ``None``) - - ``ignore_expr`` - expression for ignoring opening and closing delimiters - (default= :class:`quoted_string`) - - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility - but will be removed in a future release - - If an expression is not provided for the content argument, the - nested expression will capture all whitespace-delimited content - between delimiters as a list of separate values. - - Use the ``ignore_expr`` argument to define expressions that may - contain opening or closing characters that should not be treated as - opening or closing characters for nesting, such as quoted_string or - a comment expression. Specify multiple expressions using an - :class:`Or` or :class:`MatchFirst`. 
The default is - :class:`quoted_string`, but if no expressions are to be ignored, then - pass ``None`` for this argument. - - Example:: - - data_type = one_of("void int short long char float double") - decl_data_type = Combine(data_type + Opt(Word('*'))) - ident = Word(alphas+'_', alphanums+'_') - number = pyparsing_common.number - arg = Group(decl_data_type + ident) - LPAR, RPAR = map(Suppress, "()") - - code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment)) - - c_function = (decl_data_type("type") - + ident("name") - + LPAR + Opt(delimited_list(arg), [])("args") + RPAR - + code_body("body")) - c_function.ignore(c_style_comment) - - source_code = ''' - int is_odd(int x) { - return (x%2); - } - - int dec_to_hex(char hchar) { - if (hchar >= '0' && hchar <= '9') { - return (ord(hchar)-ord('0')); - } else { - return (10+ord(hchar)-ord('A')); - } - } - ''' - for func in c_function.search_string(source_code): - print("%(name)s (%(type)s) args: %(args)s" % func) - - - prints:: - - is_odd (int) args: [['int', 'x']] - dec_to_hex (int) args: [['char', 'hchar']] - """ - if ignoreExpr != ignore_expr: - ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr - if opener == closer: - raise ValueError("opening and closing strings cannot be the same") - if content is None: - if isinstance(opener, str_type) and isinstance(closer, str_type): - if len(opener) == 1 and len(closer) == 1: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS, - exact=1, - ) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = empty.copy() + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS - ).set_parse_action(lambda t: t[0].strip()) - else: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = Combine( - OneOrMore( - ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - raise ValueError( - "opening and closing arguments must be strings if no content expression is given" - ) - ret = Forward() - if ignoreExpr is not None: - ret <<= Group( - Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer) - ) - else: - ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer)) - ret.set_name("nested %s%s expression" % (opener, closer)) - return ret - - -def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")): - """Internal helper to construct opening and closing tag expressions, given a tag name""" - if isinstance(tagStr, str_type): - resname = tagStr - tagStr = Keyword(tagStr, caseless=not xml) - else: - resname = tagStr.name - - tagAttrName = Word(alphas, alphanums + "_-:") - if xml: - tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue))) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - else: - tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word( - printables, exclude_chars=">" - ) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict( - ZeroOrMore( - Group( - tagAttrName.set_parse_action(lambda 
t: t[0].lower()) - + Opt(Suppress("=") + tagAttrValue) - ) - ) - ) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - closeTag = Combine(Literal("", adjacent=False) - - openTag.set_name("<%s>" % resname) - # add start results name in parse action now that ungrouped names are not reported at two levels - openTag.add_parse_action( - lambda t: t.__setitem__( - "start" + "".join(resname.replace(":", " ").title().split()), t.copy() - ) - ) - closeTag = closeTag( - "end" + "".join(resname.replace(":", " ").title().split()) - ).set_name("" % resname) - openTag.tag = resname - closeTag.tag = resname - openTag.tag_body = SkipTo(closeTag()) - return openTag, closeTag - - -def make_html_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for HTML, - given a tag name. Matches tags in either upper or lower case, - attributes with namespaces and with quoted or unquoted values. - - Example:: - - text = 'More info at the pyparsing wiki page' - # make_html_tags returns pyparsing expressions for the opening and - # closing tags as a 2-tuple - a, a_end = make_html_tags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.search_string(text): - # attributes in the tag (like "href" shown here) are - # also accessible as named results - print(link.link_text, '->', link.href) - - prints:: - - pyparsing -> https://github.com/pyparsing/pyparsing/wiki - """ - return _makeTags(tag_str, False) - - -def make_xml_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for XML, - given a tag name. Matches tags only in the given upper/lower case. - - Example: similar to :class:`make_html_tags` - """ - return _makeTags(tag_str, True) - - -any_open_tag: ParserElement -any_close_tag: ParserElement -any_open_tag, any_close_tag = make_html_tags( - Word(alphas, alphanums + "_:").set_name("any tag") -) - -_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()} -common_html_entity = Regex("&(?P" + "|".join(_htmlEntityMap) + ");").set_name( - "common HTML entity" -) - - -def replace_html_entity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - - -class OpAssoc(Enum): - LEFT = 1 - RIGHT = 2 - - -InfixNotationOperatorArgType = Union[ - ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]] -] -InfixNotationOperatorSpec = Union[ - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - typing.Optional[ParseAction], - ], - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - ], -] - - -def infix_notation( - base_expr: ParserElement, - op_list: List[InfixNotationOperatorSpec], - lpar: Union[str, ParserElement] = Suppress("("), - rpar: Union[str, ParserElement] = Suppress(")"), -) -> ParserElement: - """Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. Operators may be unary - or binary, left- or right-associative. Parse actions can also be - attached to operator expressions. The generated parser will also - recognize the use of parentheses to override operator precedences - (see example below). - - Note: if you define a deep operator list, you may see performance - issues when using infix_notation. 
See - :class:`ParserElement.enable_packrat` for a mechanism to potentially - improve your parser performance. - - Parameters: - - ``base_expr`` - expression representing the most basic operand to - be used in the expression - - ``op_list`` - list of tuples, one for each operator precedence level - in the expression grammar; each tuple is of the form ``(op_expr, - num_operands, right_left_assoc, (optional)parse_action)``, where: - - - ``op_expr`` is the pyparsing expression for the operator; may also - be a string, which will be converted to a Literal; if ``num_operands`` - is 3, ``op_expr`` is a tuple of two expressions, for the two - operators separating the 3 terms - - ``num_operands`` is the number of terms for this operator (must be 1, - 2, or 3) - - ``right_left_assoc`` is the indicator whether the operator is right - or left associative, using the pyparsing-defined constants - ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``. - - ``parse_action`` is the parse action to be associated with - expressions matching this operator expression (the parse action - tuple member may be omitted); if the parse action is passed - a tuple or list of functions, this is equivalent to calling - ``set_parse_action(*fn)`` - (:class:`ParserElement.set_parse_action`) - - ``lpar`` - expression for matching left-parentheses; if passed as a - str, then will be parsed as Suppress(lpar). If lpar is passed as - an expression (such as ``Literal('(')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress('(')``) - - ``rpar`` - expression for matching right-parentheses; if passed as a - str, then will be parsed as Suppress(rpar). If rpar is passed as - an expression (such as ``Literal(')')``), then it will be kept in - the parsed results, and grouped with them. 
(default= ``Suppress(')')``) - - Example:: - - # simple example of four-function arithmetic with ints and - # variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infix_notation(integer | varname, - [ - ('-', 1, OpAssoc.RIGHT), - (one_of('* /'), 2, OpAssoc.LEFT), - (one_of('+ -'), 2, OpAssoc.LEFT), - ]) - - arith_expr.run_tests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', full_dump=False) - - prints:: - - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - # captive version of FollowedBy that does not do parse actions or capture results names - class _FB(FollowedBy): - def parseImpl(self, instring, loc, doActions=True): - self.expr.try_parse(instring, loc) - return loc, [] - - _FB.__name__ = "FollowedBy>" - - ret = Forward() - if isinstance(lpar, str): - lpar = Suppress(lpar) - if isinstance(rpar, str): - rpar = Suppress(rpar) - - # if lpar and rpar are not suppressed, wrap in group - if not (isinstance(rpar, Suppress) and isinstance(rpar, Suppress)): - lastExpr = base_expr | Group(lpar + ret + rpar) - else: - lastExpr = base_expr | (lpar + ret + rpar) - - for i, operDef in enumerate(op_list): - opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4] - if isinstance(opExpr, str_type): - opExpr = ParserElement._literalStringClass(opExpr) - if arity == 3: - if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2: - raise ValueError( - "if numterms=3, opExpr must be a tuple or list of two expressions" - ) - opExpr1, opExpr2 = opExpr - term_name = "{}{} term".format(opExpr1, opExpr2) - else: - term_name = "{} term".format(opExpr) - - if not 1 <= arity <= 3: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - - if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT): - raise ValueError("operator must indicate right or left associativity") - - thisExpr: Forward = Forward().set_name(term_name) - if rightLeftAssoc is OpAssoc.LEFT: - if arity == 1: - matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...]) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( - lastExpr + (opExpr + lastExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...]) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr - ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr)) - elif rightLeftAssoc is OpAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Opt): - opExpr = Opt(opExpr) - matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( - lastExpr + (opExpr + thisExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + thisExpr) + Group( - lastExpr + thisExpr[1, ...] 
- ) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr - ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.set_parse_action(*pa) - else: - matchExpr.set_parse_action(pa) - thisExpr <<= (matchExpr | lastExpr).setName(term_name) - lastExpr = thisExpr - ret <<= lastExpr - return ret - - -def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]): - """ - (DEPRECATED - use IndentedBlock class instead) - Helper method for defining space-delimited indentation blocks, - such as those used to define block statements in Python source code. - - Parameters: - - - ``blockStatementExpr`` - expression defining syntax of statement that - is repeated within the indented block - - ``indentStack`` - list created by caller to manage indentation stack - (multiple ``statementWithIndentedBlock`` expressions within a single - grammar should share a common ``indentStack``) - - ``indent`` - boolean indicating whether block must be indented beyond - the current level; set to ``False`` for block of left-most statements - (default= ``True``) - - A valid block must contain at least one ``blockStatement``. - - (Note that indentedBlock uses internal parse actions which make it - incompatible with packrat parsing.) - - Example:: - - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group(funcDecl + func_body) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << (funcDef | assignment | identifier) - - module_body = stmt[1, ...] 
-
-    parseTree = module_body.parseString(data)
-    parseTree.pprint()
-
-    prints::
-
-    [['def',
-      'A',
-      ['(', 'z', ')'],
-      ':',
-      [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]],
-     'B',
-     ['def',
-      'BB',
-      ['(', 'a', 'b', 'c', ')'],
-      ':',
-      [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]],
-     'C',
-     'D',
-     ['def',
-      'spam',
-      ['(', 'x', 'y', ')'],
-      ':',
-      [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]]
-    """
-    backup_stacks.append(indentStack[:])
-
-    def reset_stack():
-        indentStack[:] = backup_stacks[-1]
-
-    def checkPeerIndent(s, l, t):
-        if l >= len(s):
-            return
-        curCol = col(l, s)
-        if curCol != indentStack[-1]:
-            if curCol > indentStack[-1]:
-                raise ParseException(s, l, "illegal nesting")
-            raise ParseException(s, l, "not a peer entry")
-
-    def checkSubIndent(s, l, t):
-        curCol = col(l, s)
-        if curCol > indentStack[-1]:
-            indentStack.append(curCol)
-        else:
-            raise ParseException(s, l, "not a subentry")
-
-    def checkUnindent(s, l, t):
-        if l >= len(s):
-            return
-        curCol = col(l, s)
-        if not (indentStack and curCol in indentStack):
-            raise ParseException(s, l, "not an unindent")
-        if curCol < indentStack[-1]:
-            indentStack.pop()
-
-    NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress())
-    INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT")
-    PEER = Empty().set_parse_action(checkPeerIndent).set_name("")
-    UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT")
-    if indent:
-        smExpr = Group(
-            Opt(NL)
-            + INDENT
-            + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL))
-            + UNDENT
-        )
-    else:
-        smExpr = Group(
-            Opt(NL)
-            + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL))
-            + Opt(UNDENT)
-        )
-
-    # add a parse action to remove backup_stack from list of backups
-    smExpr.add_parse_action(
-        lambda: backup_stacks.pop(-1) and None if backup_stacks else None
-    )
-    smExpr.set_fail_action(lambda a, b, c, d: reset_stack())
-    blockStatementExpr.ignore(_bslash + LineEnd())
-    return smExpr.set_name("indented block")
-
-
-# it's easy to get these comment structures wrong - they're very common, so may as well make them available
-c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name(
-    "C style comment"
-)
-"Comment of the form ``/* ... */``"
-
-html_comment = Regex(r"<!--[\s\S]*?-->").set_name("HTML comment")
-"Comment of the form ``<!-- ... -->``"
-
-rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line")
-dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment")
-"Comment of the form ``// ... (to end of line)``"
-
-cpp_style_comment = Combine(
-    Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment
-).set_name("C++ style comment")
-"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`"
-
-java_style_comment = cpp_style_comment
-"Same as :class:`cpp_style_comment`"
-
-python_style_comment = Regex(r"#.*").set_name("Python style comment")
-"Comment of the form ``# ... (to end of line)``"
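-
-# A minimal usage sketch (an editor's illustration, not part of the original
-# module): any of the comment expressions above can be attached to a grammar
-# with ignore() so that comments are skipped wherever they appear. `ident_list`
-# here is a hypothetical grammar, not defined in this file.
-#
-#     ident_list = delimited_list(Word(alphas))
-#     ident_list.ignore(cpp_style_comment)
-#     ident_list.parse_string("a, /* skip me */ b, c  // trailing comment")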
-
-
-# build list of built-in expressions, for future reference if a global default value
-# gets updated
-_builtin_exprs: List[ParserElement] = [
-    v for v in vars().values() if isinstance(v, ParserElement)
-]
-
-
-# pre-PEP8 compatible names
-delimitedList = delimited_list
-countedArray = counted_array
-matchPreviousLiteral = match_previous_literal
-matchPreviousExpr = match_previous_expr
-oneOf = one_of
-dictOf = dict_of
-originalTextFor = original_text_for
-nestedExpr = nested_expr
-makeHTMLTags = make_html_tags
-makeXMLTags = make_xml_tags
-anyOpenTag, anyCloseTag = any_open_tag, any_close_tag
-commonHTMLEntity = common_html_entity
-replaceHTMLEntity = replace_html_entity
-opAssoc = OpAssoc
-infixNotation = infix_notation
-cStyleComment = c_style_comment
-htmlComment = html_comment
-restOfLine = rest_of_line
-dblSlashComment = dbl_slash_comment
-cppStyleComment = cpp_style_comment
-javaStyleComment = java_style_comment
-pythonStyleComment = python_style_comment
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_data.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_data.py
deleted file mode 100644
index 23d91aded26df8fdc5600ac2dda19787ec5ce916..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_data.py
+++ /dev/null
@@ -1,84 +0,0 @@
-"""distutils.command.install_data
-
-Implements the Distutils 'install_data' command, for installing
-platform-independent data files."""
-
-# contributed by Bastian Kleineidam
-
-import os
-from distutils.core import Command
-from distutils.util import change_root, convert_path
-
-
-class install_data(Command):
-
-    description = "install data files"
-
-    user_options = [
-        (
-            'install-dir=',
-            'd',
-            "base directory for installing data files "
-            "(default: installation base dir)",
-        ),
-        ('root=', None, "install everything relative to this alternate root directory"),
-        ('force', 'f', "force installation (overwrite existing files)"),
-    ]
-
-    boolean_options = ['force']
-
-    def initialize_options(self):
-        self.install_dir = None
-        self.outfiles = []
-        self.root = None
-        self.force = 0
-        self.data_files = self.distribution.data_files
-        self.warn_dir = 1
-
-    def finalize_options(self):
-        self.set_undefined_options(
-            'install',
-            ('install_data', 'install_dir'),
-            ('root', 'root'),
-            ('force', 'force'),
-        )
-
-    def run(self):
-        self.mkpath(self.install_dir)
-        for f in self.data_files:
-            if isinstance(f, str):
-                # it's a simple file, so copy it
-                f = convert_path(f)
-                if self.warn_dir:
-                    self.warn(
-                        "setup script did not provide a directory for "
-                        "'%s' -- installing right in '%s'" % (f, self.install_dir)
-                    )
-                (out, _) = self.copy_file(f, self.install_dir)
-                self.outfiles.append(out)
-            else:
-                # it's a tuple with path to install to and a list of files
-                dir = convert_path(f[0])
-                if not os.path.isabs(dir):
-                    dir = os.path.join(self.install_dir, dir)
-                elif self.root:
-                    dir = change_root(self.root, dir)
-                self.mkpath(dir)
-
-                if f[1] == []:
-                    # If there are no files listed, the user must be
-                    # trying to create an empty directory, so add the
-                    # directory to the list of output files.
-                    self.outfiles.append(dir)
-                else:
-                    # Copy files, adding them to the list of output files.
-                    for data in f[1]:
-                        data = convert_path(data)
-                        (out, _) = self.copy_file(data, dir)
-                        self.outfiles.append(out)
-
-    def get_inputs(self):
-        return self.data_files or []
-
-    def get_outputs(self):
-        return self.outfiles
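-
-# Illustrative setup() input for the two data_files forms handled by run()
-# above (an editor's sketch, not part of distutils; file names are made up):
-# a bare string is copied straight into install_dir, while a (dir, [files])
-# tuple is copied into dir, which is created if necessary.
-#
-#     setup(...,
-#           data_files=['config.ini',
-#                       ('share/myapp', ['data/a.dat', 'data/b.dat'])])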

diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/README.md
deleted file mode 100644
index 778ed3da0bae89820831bcd8a72ff7b9cad8d4dd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-To add a new Op:
-
-1. Create a new directory
-2. Implement new ops there
-3. Declare its Python interface in `vision.cpp`.
diff --git a/spaces/CVPR/v-doc_abstractive_mac/interface.py b/spaces/CVPR/v-doc_abstractive_mac/interface.py
deleted file mode 100644
index f7a943b44dd80290c12148a87a378c93eb5355f4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/v-doc_abstractive_mac/interface.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import gradio as gr
-from demo import predict
-
-f = open("./descrip.md", "r")
-description = f.read()
-article = "Paper Link"
-
-io = gr.Interface(fn=predict,
-             inputs=[gr.Image(label = 'You can select an example to quickly submit'), "text"],
-             outputs=["text"],
-             examples=[
-                 ['PDF_val_151.png','How many table objects are located at the top side of figure?'],
-                 ['PDF_val_90.png', 'Where is the caption of the figure located at?'],
-                 ['PDF_val_64.png','How many text objects are located at the bottom side of figure?'],
-                 ['PDF_val_26.png','Are there any title exist?'],
-                 ['PDF_val_60.png','Where is the caption of the table located at?'],
-                 ['PDF_val_158.png','Does title objects exist in this page?']],
-             title = 'V-Doc : Visual questions answers with Documents',
-             description = description,
-             article = article)
-
-if __name__ == '__main__':
-    io.launch()
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/install.sh b/spaces/Caoyunkang/Segment-Any-Anomaly/install.sh
deleted file mode 100644
index 0c045565d65d4d2a8f8c1ea8084d2721660b9809..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/install.sh
+++ /dev/null
@@ -1,45 +0,0 @@
-# create new conda env
-conda create -n SAA python=3.9
-source activate SAA
-
-# PyTorch
-pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
-
-# $ProjectRoot: the root you save our project, e.g., /home/anyad/VAND-solution
-ProjectRoot=/home/anyad/VAND-solution
-cd $ProjectRoot
-
-# SAM and DINO
-cd ./GroundingDINO
-pip install -e .
-cd ../SAM
-pip install -e .
-
-pip install setuptools==59.5.0
-pip install --upgrade diffusers[torch]
-pip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel
-pip install transformers
-pip install addict
-pip install yapf
-pip install timm
-pip install loguru
-pip install tqdm
-pip install scikit-image
-pip install scikit-learn
-pip install pandas
-pip install tensorboard
-pip install seaborn
-pip install open_clip_torch
-pip install SciencePlots
-pip install einops
-pip install gradio
-
-# weights
-cd ../
-mkdir weights
-cd ./weights/
-wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
-wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
-
-
diff --git a/spaces/ChandraMohanNayal/AutoGPT/tests/test_token_counter.py b/spaces/ChandraMohanNayal/AutoGPT/tests/test_token_counter.py
deleted file mode 100644
index 6d7ae016b2f823123b0b69b2eeb3eab50d94f00f..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/tests/test_token_counter.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import unittest
-
-import tests.context
-from autogpt.token_counter import count_message_tokens, count_string_tokens
-
-
-class TestTokenCounter(unittest.TestCase):
-    def test_count_message_tokens(self):
-        messages = [
-            {"role": "user", "content": "Hello"},
-            {"role": "assistant", "content": "Hi there!"},
-        ]
-        self.assertEqual(count_message_tokens(messages), 17)
-
-    def test_count_message_tokens_with_name(self):
-        messages = [
-            {"role": "user", "content": "Hello", "name": "John"},
-            {"role": "assistant", "content": "Hi there!"},
-        ]
-        self.assertEqual(count_message_tokens(messages), 17)
-
-    def test_count_message_tokens_empty_input(self):
-        self.assertEqual(count_message_tokens([]), 3)
-
-    def test_count_message_tokens_invalid_model(self):
-        messages = [
-            {"role": "user", "content": "Hello"},
-            {"role": "assistant", "content": "Hi there!"},
-        ]
-        with self.assertRaises(KeyError):
-            count_message_tokens(messages, model="invalid_model")
-
-    def test_count_message_tokens_gpt_4(self):
-        messages = [
-            {"role": "user", "content": "Hello"},
-            {"role": "assistant", "content": "Hi there!"},
-        ]
-        self.assertEqual(count_message_tokens(messages, model="gpt-4-0314"), 15)
-
-    def test_count_string_tokens(self):
-        string = "Hello, world!"
-        self.assertEqual(
-            count_string_tokens(string, model_name="gpt-3.5-turbo-0301"), 4
-        )
-
-    def test_count_string_tokens_empty_input(self):
-        self.assertEqual(count_string_tokens("", model_name="gpt-3.5-turbo-0301"), 0)
-
-    def test_count_message_tokens_invalid_model_not_implemented(self):
-        messages = [
-            {"role": "user", "content": "Hello"},
-            {"role": "assistant", "content": "Hi there!"},
-        ]
-        with self.assertRaises(NotImplementedError):
-            count_message_tokens(messages, model="invalid_model")
-
-    def test_count_string_tokens_gpt_4(self):
-        string = "Hello, world!"
-        self.assertEqual(count_string_tokens(string, model_name="gpt-4-0314"), 4)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/bite/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/bite/__init__.py
deleted file mode 100644
index b04285449cce30c420ea7e822114af76abb6c4e3..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/bite/__init__.py
+++ /dev/null
@@ -1,33 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from PIL.Image import Image as IMG
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.utils import save_gif
-
-img_dir = Path(__file__).parent / "images"
-
-
-def bite(images: List[BuildImage], texts, args):
-    img = images[0].convert("RGBA").square()
-    frames: List[IMG] = []
-    # fmt: off
-    locs = [
-        (90, 90, 105, 150), (90, 83, 96, 172), (90, 90, 106, 148),
-        (88, 88, 97, 167), (90, 85, 89, 179), (90, 90, 106, 151)
-    ]
-    # fmt: on
-    for i in range(6):
-        frame = BuildImage.open(img_dir / f"{i}.png")
-        w, h, x, y = locs[i]
-        frame.paste(img.resize((w, h)), (x, y), below=True)
-        frames.append(frame.image)
-    for i in range(6, 16):
-        frame = BuildImage.open(img_dir / f"{i}.png")
-        frames.append(frame.image)
-    return save_gif(frames, 0.07)
-
-
-add_meme("bite", bite, min_images=1, max_images=1, keywords=["啃"])
diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Ezcht.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Ezcht.py
deleted file mode 100644
index baec214f7e0e936ea06bffa357e1bd2b77cd4089..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Ezcht.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt4.ezchat.top'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
-    headers = {
-        'Content-Type': 'application/json',
-    }
-    data = {
-        'model': model,
-        'temperature': 0.7,
-        'presence_penalty': 0,
-        'messages': messages,
-    }
-    response = requests.post(url + '/api/openai/v1/chat/completions',
-                             headers=headers, json=data, stream=True)
-
-    if stream:
-        for chunk in response.iter_content(chunk_size=None):
-            chunk = chunk.decode('utf-8')
-            if chunk.strip():
-                message = json.loads(chunk)['choices'][0]['message']['content']
-                yield message
-    else:
-        message = response.json()['choices'][0]['message']['content']
-        yield message
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-    '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
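-
-# A minimal usage sketch (an editor's illustration, not part of the original
-# provider): with stream=True the function is a generator, so the response
-# text arrives incrementally.
-#
-#     for token in _create_completion('gpt-3.5-turbo',
-#                                     [{'role': 'user', 'content': 'Hi'}],
-#                                     stream=True):
-#         print(token, end='')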
\ No newline at end of file
diff --git a/spaces/CofAI/netlist/style.css b/spaces/CofAI/netlist/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/CofAI/netlist/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
-  padding: 2rem;
-  font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
-  font-size: 16px;
-  margin-top: 0;
-}
-
-p {
-  color: rgb(107, 114, 128);
-  font-size: 15px;
-  margin-bottom: 10px;
-  margin-top: 5px;
-}
-
-.card {
-  max-width: 620px;
-  margin: 0 auto;
-  padding: 16px;
-  border: 1px solid lightgray;
-  border-radius: 16px;
-}
-
-.card p:last-child {
-  margin-bottom: 0;
-}
diff --git a/spaces/CognitiveLabs/Research-Assistant/app.py b/spaces/CognitiveLabs/Research-Assistant/app.py
deleted file mode 100644
index aa048a3b9e2ae65c8139717f220f15f810451241..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/Research-Assistant/app.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import gradio as gr
-
-from config import check_openai_api_key
-from agent.research_agent import ResearchAgent
-from agent.toolkits import english_polishing
-from statics.style import *
-
-
-check_openai_api_key()
-report_history_buffer = ""
-report_history_num = 0
-report_history_tasks = []
-polish_history_buffer = ""
-
-def run_agent(task, agent, report_type):
-    global report_history_num, report_history_tasks
-    report_history_num += 1
-    report_history_tasks.append(task)
-    assistant = ResearchAgent(task, agent)
-    yield from assistant.write_report(report_type)
-
-
-with gr.Blocks(theme=gr.themes.Base(),
-               title="AI Research Assistant",
-               css=css) as demo:
-    gr.HTML(top_bar)
-    with gr.Tab(label="🔦Report"):
-        with gr.Column():
-            gr.HTML(report_html)
-            report = gr.Markdown(value="  Report will appear here...",
-                                 elem_classes="output")
-            with gr.Row():
-                agent_type = gr.Dropdown(label="# Agent Type",
-                                         value="Default Agent",
-                                         interactive=True,
-                                         allow_custom_value=False,
-                                         choices=["Default Agent",
-                                                  "Business Analyst Agent",
-                                                  "Finance Agent",
-                                                  "Travel Agent",
-                                                  "Academic Research Agent",
-                                                  "Computer Security Analyst Agent",
-                                                  "Clinical Medicine Agent",
-                                                  "Basic Medicine Agent",
-                                                  "Social Science Research Agent"])
-                report_type = gr.Dropdown(label="# Report Type",
-                                          value="Research Report",
-                                          interactive=True,
-                                          allow_custom_value=False,
-                                          choices=["Research Report",
-                                                   "Resource Report",
-                                                   "Outline Report"])
-
-        input_box = gr.Textbox(label="# What would you like to research next?", placeholder="Enter your question here")
-        submit_btn = gr.Button("Generate Report", elem_id="primary-btn")
-
-        gr.Examples(["Should I invest in the Large Language Model industry in 2023?",
-                     "Is it advisable to make investments in the electric car industry during the year 2023?",
-                     "What constitutes the optimal approach for investing in the Bitcoin industry during the year 2023?",
-                     "What are the most recent advancements in the domain of superconductors as of 2023?"],
-                    inputs=input_box)
-
-        with gr.Accordion(label="# Report History", elem_id="history", open=False):
-            report_history = gr.Markdown()
-
-        def store_report(content):
-            global report_history_num, report_history_tasks, report_history_buffer
-            report_history_buffer += f'
<details> \
-                                        <summary>Research History {report_history_num}: \
-                                        {report_history_tasks[-1]}</summary> \
-                                        {content} \
-                                        </details>'
-            return report_history_buffer
-
-        submit_btn.click(run_agent, inputs=[input_box, agent_type, report_type], outputs=report)\
-                  .then(store_report, inputs=[report], outputs=report_history)
-
-    with gr.Tab("✒️English Polishing"):
-        gr.HTML(english_polishing_html)
-        polished_result = gr.Markdown("  Polished result will appear here...", elem_classes="output")
-        sentences = gr.Textbox(label="# What would you like to polish?", placeholder="Enter your sentence here")
-
-        with gr.Row():
-            polish_btn = gr.Button("Polish", elem_id="primary-btn")
-
-        with gr.Accordion(label="# Polishing History", elem_id="history", open=False):
-            polish_history = gr.Markdown()
-
-        def store_polished_result(origin, result):
-            global polish_history_buffer
-            polish_history_buffer += f'
<details> \
-                                       <summary>{origin}</summary> \
-                                       {result} \
-                                       </details>'
-            return polish_history_buffer
-
-        polish_btn.click(english_polishing, inputs=[sentences], outputs=polished_result) \
-                  .then(store_polished_result, inputs=[sentences, polished_result], outputs=polish_history)
-
-    with gr.Tab("📑Literature Review"):
-        gr.HTML(literature_review_html)
-
-demo.queue().launch()
\ No newline at end of file
diff --git a/spaces/Cpp4App/Cpp4App/SEM/region_pp_processing.py b/spaces/Cpp4App/Cpp4App/SEM/region_pp_processing.py
deleted file mode 100644
index c330973689a061c9ec515b370370048911da0d15..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/SEM/region_pp_processing.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import csv
-import re
-import spacy
-from bs4 import BeautifulSoup
-
-def get_alifornia(text):
-    specialArea = ""
-    california = 0
-    with open(text, encoding='utf-8') as file_obj:
-        for line in file_obj:
-            specialArea += line
-    if "alifornia" in specialArea:
-        california = 1
-    return specialArea, california
-
-
-import sys
-maxInt = sys.maxsize
-decrement = True
-while decrement:
-    decrement = False
-    try:
-        csv.field_size_limit(maxInt)
-    except OverflowError:
-        maxInt = int(maxInt/10)
-        decrement = True
-
-
-def get_text(path):
-    htmlfile = open(path, 'r', encoding='utf-8')
-    htmlhandle = htmlfile.read()
-
-    soup = BeautifulSoup(htmlhandle, 'html.parser')
-
-    stri = str(soup)
-    return stri
-
-
-
-
diff --git a/spaces/Cran-May/SEA-Streamlit/app.py b/spaces/Cran-May/SEA-Streamlit/app.py
deleted file mode 100644
index e157ab4f7b1576337d9be74aca9791775006d366..0000000000000000000000000000000000000000
--- a/spaces/Cran-May/SEA-Streamlit/app.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import streamlit as st
-from gradio_client import Client
-from time import sleep
-from ctransformers import AutoModelForCausalLM
-# Constants
-TITLE = "兮辞·析辞-常明"
-DESCRIPTION = """
-Deployment of 兮辞·析辞-常明 [SLIDE-SEA-7B], supported by the SSFW NLPark project
-"""
-
-# Initialize client
-
-
-with st.sidebar:
-    # system_promptSide = st.text_input("Optional system prompt:")
-    temperatureSide = st.slider("Mood/Temperature", min_value=0.0, max_value=1.0, value=0.9, step=0.05)
-    max_new_tokensSide = st.slider("Max new tokens", min_value=0.0, max_value=4096.0, value=4096.0, step=64.0)
-    # ToppSide = st.slider("Top-p (nucleus sampling)", min_value=0.0, max_value=1.0, value=0.6, step=0.05)
-    # RepetitionpenaltySide = st.slider("Repetition penalty", min_value=0.0, max_value=2.0, value=1.2, step=0.05)
-
-# Load the model
-model = AutoModelForCausalLM.from_pretrained("Cran-May/OpenSLIDE", model_file="SLIDE.0.1.gguf", model_type="mistral", gpu_layers=0)
-ins = '''[INST] <<SYS>>
-You are a helpful, respectful and honest INTP-T AI Assistant named "Shi-Ci" in English or "兮辞" in Chinese. You are talking to a human User.
-Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
-If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
-You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
-You are trained by "SSFW NLPark" team, you are based on SEA transformers model, not related to GPT or OpenAI.
-Let's work this out in a step by step way to be sure we have the right answer.
-<</SYS>>
-{} [/INST]
-'''
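-
-# The template above is the Llama-2 chat format: one [INST] ... [/INST] turn
-# with the system text wrapped in <<SYS>> markers. A sketch of how it is
-# filled (an editor's illustration; the question is made up):
-#
-#     prompt = ins.format("What is the capital of France?")
-#     # -> '[INST] <<SYS>>\n...system text...\n<</SYS>>\nWhat is the capital of France? [/INST]\n'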
-# Define the conversation history
-conversation_history = []
-
-# Prediction function
-def predict(message, system_prompt='', temperature=0.7, max_new_tokens=4096, Topp=0.5, Repetitionpenalty=1.2):
-    global conversation_history
-    question = message
-    input_text = ins
-    # Record the system prompt and this exchange in the conversation history
-    conversation_history.append({"role": "system", "content": input_text})
-    response_text = model(ins.format(question))
-    conversation_history.append({"role": "user", "content": question})
-    conversation_history.append({"role": "assistant", "content": response_text})
-    return response_text
-
-# Streamlit UI
-st.title(TITLE)
-st.write(DESCRIPTION)
-
-
-if "messages" not in st.session_state:
-    st.session_state.messages = []
-
-# Display chat messages from history on app rerun
-for message in st.session_state.messages:
-    with st.chat_message(message["role"], avatar=("😀" if message["role"] == 'human' else '💻')):
-        st.markdown(message["content"])
-
-# React to user input
-if prompt := st.chat_input("Ask 兮辞 anything..."):
-    # Display user message in chat message container
-    st.chat_message("human", avatar="😀").markdown(prompt)
-    # Add user message to chat history
-    st.session_state.messages.append({"role": "human", "content": prompt})
-
-    response = predict(message=prompt)  # , temperature=temperatureSide, max_new_tokens=max_new_tokensSide)
-    # Display assistant response in chat message container
-    with st.chat_message("assistant", avatar='💻'):
-        st.markdown(response)
-    # Add assistant response to chat history
-    st.session_state.messages.append({"role": "assistant", "content": response})
\ No newline at end of file
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/mediapipe_face_common.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/mediapipe_face_common.py
deleted file mode 100644
index 0f7d3701dc40eee88977f17a877fa800d0ae328d..0000000000000000000000000000000000000000
--- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/mediapipe_face_common.py
+++ /dev/null
@@ -1,155 +0,0 @@
-from typing import Mapping
-
-import mediapipe as mp
-import numpy
-
-
-mp_drawing = mp.solutions.drawing_utils
-mp_drawing_styles = mp.solutions.drawing_styles
-mp_face_detection = mp.solutions.face_detection  # Only for counting faces.
-mp_face_mesh = mp.solutions.face_mesh
-mp_face_connections = mp.solutions.face_mesh_connections.FACEMESH_TESSELATION
-mp_hand_connections = mp.solutions.hands_connections.HAND_CONNECTIONS
-mp_body_connections = mp.solutions.pose_connections.POSE_CONNECTIONS
-
-DrawingSpec = mp.solutions.drawing_styles.DrawingSpec
-PoseLandmark = mp.solutions.drawing_styles.PoseLandmark
-
-min_face_size_pixels: int = 64
-f_thick = 2
-f_rad = 1
-right_iris_draw = DrawingSpec(color=(10, 200, 250), thickness=f_thick, circle_radius=f_rad)
-right_eye_draw = DrawingSpec(color=(10, 200, 180), thickness=f_thick, circle_radius=f_rad)
-right_eyebrow_draw = DrawingSpec(color=(10, 220, 180), thickness=f_thick, circle_radius=f_rad)
-left_iris_draw = DrawingSpec(color=(250, 200, 10), thickness=f_thick, circle_radius=f_rad)
-left_eye_draw = DrawingSpec(color=(180, 200, 10), thickness=f_thick, circle_radius=f_rad)
-left_eyebrow_draw = DrawingSpec(color=(180, 220, 10), thickness=f_thick, circle_radius=f_rad)
-mouth_draw = DrawingSpec(color=(10, 180, 10), thickness=f_thick, circle_radius=f_rad)
-head_draw = DrawingSpec(color=(10, 200, 10), thickness=f_thick, circle_radius=f_rad)
-
-# mp_face_mesh.FACEMESH_CONTOURS has all the items we care about.
-face_connection_spec = {} -for edge in mp_face_mesh.FACEMESH_FACE_OVAL: - face_connection_spec[edge] = head_draw -for edge in mp_face_mesh.FACEMESH_LEFT_EYE: - face_connection_spec[edge] = left_eye_draw -for edge in mp_face_mesh.FACEMESH_LEFT_EYEBROW: - face_connection_spec[edge] = left_eyebrow_draw -# for edge in mp_face_mesh.FACEMESH_LEFT_IRIS: -# face_connection_spec[edge] = left_iris_draw -for edge in mp_face_mesh.FACEMESH_RIGHT_EYE: - face_connection_spec[edge] = right_eye_draw -for edge in mp_face_mesh.FACEMESH_RIGHT_EYEBROW: - face_connection_spec[edge] = right_eyebrow_draw -# for edge in mp_face_mesh.FACEMESH_RIGHT_IRIS: -# face_connection_spec[edge] = right_iris_draw -for edge in mp_face_mesh.FACEMESH_LIPS: - face_connection_spec[edge] = mouth_draw -iris_landmark_spec = {468: right_iris_draw, 473: left_iris_draw} - - -def draw_pupils(image, landmark_list, drawing_spec, halfwidth: int = 2): - """We have a custom function to draw the pupils because the mp.draw_landmarks method requires a parameter for all - landmarks. Until our PR is merged into mediapipe, we need this separate method.""" - if len(image.shape) != 3: - raise ValueError("Input image must be H,W,C.") - image_rows, image_cols, image_channels = image.shape - if image_channels != 3: # BGR channels - raise ValueError('Input image must contain three channel bgr data.') - for idx, landmark in enumerate(landmark_list.landmark): - if ( - (landmark.HasField('visibility') and landmark.visibility < 0.9) or - (landmark.HasField('presence') and landmark.presence < 0.5) - ): - continue - if landmark.x >= 1.0 or landmark.x < 0 or landmark.y >= 1.0 or landmark.y < 0: - continue - image_x = int(image_cols*landmark.x) - image_y = int(image_rows*landmark.y) - draw_color = None - if isinstance(drawing_spec, Mapping): - if drawing_spec.get(idx) is None: - continue - else: - draw_color = drawing_spec[idx].color - elif isinstance(drawing_spec, DrawingSpec): - draw_color = drawing_spec.color - image[image_y-halfwidth:image_y+halfwidth, image_x-halfwidth:image_x+halfwidth, :] = draw_color - - -def reverse_channels(image): - """Given a numpy array in RGB form, convert to BGR. Will also convert from BGR to RGB.""" - # im[:,:,::-1] is a neat hack to convert BGR to RGB by reversing the indexing order. - # im[:,:,::[2,1,0]] would also work but makes a copy of the data. - return image[:, :, ::-1] - - -def generate_annotation( - img_rgb, - max_faces: int, - min_confidence: float -): - """ - Find up to 'max_faces' inside the provided input image. - If min_face_size_pixels is provided and nonzero it will be used to filter faces that occupy less than this many - pixels in the image. - """ - with mp_face_mesh.FaceMesh( - static_image_mode=True, - max_num_faces=max_faces, - refine_landmarks=True, - min_detection_confidence=min_confidence, - ) as facemesh: - img_height, img_width, img_channels = img_rgb.shape - assert(img_channels == 3) - - results = facemesh.process(img_rgb).multi_face_landmarks - - if results is None: - print("No faces detected in controlnet image for Mediapipe face annotator.") - return numpy.zeros_like(img_rgb) - - # Filter faces that are too small - filtered_landmarks = [] - for lm in results: - landmarks = lm.landmark - face_rect = [ - landmarks[0].x, - landmarks[0].y, - landmarks[0].x, - landmarks[0].y, - ] # Left, up, right, down. 
-            for i in range(len(landmarks)):
-                face_rect[0] = min(face_rect[0], landmarks[i].x)
-                face_rect[1] = min(face_rect[1], landmarks[i].y)
-                face_rect[2] = max(face_rect[2], landmarks[i].x)
-                face_rect[3] = max(face_rect[3], landmarks[i].y)
-            if min_face_size_pixels > 0:
-                face_width = abs(face_rect[2] - face_rect[0])
-                face_height = abs(face_rect[3] - face_rect[1])
-                face_width_pixels = face_width * img_width
-                face_height_pixels = face_height * img_height
-                face_size = min(face_width_pixels, face_height_pixels)
-                if face_size >= min_face_size_pixels:
-                    filtered_landmarks.append(lm)
-            else:
-                filtered_landmarks.append(lm)
-
-        # Annotations are drawn in BGR for some reason, but we don't need to flip a zero-filled image at the start.
-        empty = numpy.zeros_like(img_rgb)
-
-        # Draw detected faces:
-        for face_landmarks in filtered_landmarks:
-            mp_drawing.draw_landmarks(
-                empty,
-                face_landmarks,
-                connections=face_connection_spec.keys(),
-                landmark_drawing_spec=None,
-                connection_drawing_spec=face_connection_spec
-            )
-            draw_pupils(empty, face_landmarks, iris_landmark_spec, 2)
-
-        # Flip BGR back to RGB.
-        empty = reverse_channels(empty).copy()
-
-        return empty
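-
-# A minimal usage sketch (an editor's illustration, not part of the original
-# module); assumes OpenCV is installed and "face.jpg" exists. cv2.imread
-# returns BGR, so the channels are flipped to RGB first.
-#
-#     import cv2
-#     bgr = cv2.imread("face.jpg")
-#     rgb = reverse_channels(bgr)
-#     annotation = generate_annotation(rgb, max_faces=1, min_confidence=0.5)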
- - Note that recording does not have to be produced by those pens. - It can be any iterable of tuples of method name and tuple-of-arguments. - Likewise, pen can be any objects receiving those method calls. - """ - for operator, operands in recording: - getattr(pen, operator)(*operands) - - -class RecordingPen(AbstractPen): - """Pen recording operations that can be accessed or replayed. - - The recording can be accessed as pen.value; or replayed using - pen.replay(otherPen). - - :Example: - - from fontTools.ttLib import TTFont - from fontTools.pens.recordingPen import RecordingPen - - glyph_name = 'dollar' - font_path = 'MyFont.otf' - - font = TTFont(font_path) - glyphset = font.getGlyphSet() - glyph = glyphset[glyph_name] - - pen = RecordingPen() - glyph.draw(pen) - print(pen.value) - """ - - def __init__(self): - self.value = [] - - def moveTo(self, p0): - self.value.append(("moveTo", (p0,))) - - def lineTo(self, p1): - self.value.append(("lineTo", (p1,))) - - def qCurveTo(self, *points): - self.value.append(("qCurveTo", points)) - - def curveTo(self, *points): - self.value.append(("curveTo", points)) - - def closePath(self): - self.value.append(("closePath", ())) - - def endPath(self): - self.value.append(("endPath", ())) - - def addComponent(self, glyphName, transformation): - self.value.append(("addComponent", (glyphName, transformation))) - - def addVarComponent(self, glyphName, transformation, location): - self.value.append(("addVarComponent", (glyphName, transformation, location))) - - def replay(self, pen): - replayRecording(self.value, pen) - - -class DecomposingRecordingPen(DecomposingPen, RecordingPen): - """Same as RecordingPen, except that it doesn't keep components - as references, but draws them decomposed as regular contours. - - The constructor takes a single 'glyphSet' positional argument, - a dictionary of glyph objects (i.e. with a 'draw' method) keyed - by thir name:: - - >>> class SimpleGlyph(object): - ... def draw(self, pen): - ... pen.moveTo((0, 0)) - ... pen.curveTo((1, 1), (2, 2), (3, 3)) - ... pen.closePath() - >>> class CompositeGlyph(object): - ... def draw(self, pen): - ... pen.addComponent('a', (1, 0, 0, 1, -1, 1)) - >>> glyphSet = {'a': SimpleGlyph(), 'b': CompositeGlyph()} - >>> for name, glyph in sorted(glyphSet.items()): - ... pen = DecomposingRecordingPen(glyphSet) - ... glyph.draw(pen) - ... print("{}: {}".format(name, pen.value)) - a: [('moveTo', ((0, 0),)), ('curveTo', ((1, 1), (2, 2), (3, 3))), ('closePath', ())] - b: [('moveTo', ((-1, 1),)), ('curveTo', ((0, 2), (1, 3), (2, 4))), ('closePath', ())] - """ - - # raises KeyError if base glyph is not found in glyphSet - skipMissingComponents = False - - -class RecordingPointPen(AbstractPointPen): - """PointPen recording operations that can be accessed or replayed. - - The recording can be accessed as pen.value; or replayed using - pointPen.replay(otherPointPen). 
- - :Example: - - from defcon import Font - from fontTools.pens.recordingPen import RecordingPointPen - - glyph_name = 'a' - font_path = 'MyFont.ufo' - - font = Font(font_path) - glyph = font[glyph_name] - - pen = RecordingPointPen() - glyph.drawPoints(pen) - print(pen.value) - - new_glyph = font.newGlyph('b') - pen.replay(new_glyph.getPointPen()) - """ - - def __init__(self): - self.value = [] - - def beginPath(self, identifier=None, **kwargs): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("beginPath", (), kwargs)) - - def endPath(self): - self.value.append(("endPath", (), {})) - - def addPoint( - self, pt, segmentType=None, smooth=False, name=None, identifier=None, **kwargs - ): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("addPoint", (pt, segmentType, smooth, name), kwargs)) - - def addComponent(self, baseGlyphName, transformation, identifier=None, **kwargs): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("addComponent", (baseGlyphName, transformation), kwargs)) - - def addVarComponent( - self, baseGlyphName, transformation, location, identifier=None, **kwargs - ): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append( - ("addVarComponent", (baseGlyphName, transformation, location), kwargs) - ) - - def replay(self, pointPen): - for operator, args, kwargs in self.value: - getattr(pointPen, operator)(*args, **kwargs) - - -if __name__ == "__main__": - pen = RecordingPen() - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25)) - pen.closePath() - from pprint import pprint - - pprint(pen.value) diff --git a/spaces/Datasculptor/DescriptionGPT/detic/data/custom_dataset_dataloader.py b/spaces/Datasculptor/DescriptionGPT/detic/data/custom_dataset_dataloader.py deleted file mode 100644 index 8f8d6817704026796d2c2f457fe2624800693267..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/data/custom_dataset_dataloader.py +++ /dev/null @@ -1,331 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Part of the code is from https://github.com/xingyizhou/UniDet/blob/master/projects/UniDet/unidet/data/multi_dataset_dataloader.py (Apache-2.0 License) -import copy -import logging -import numpy as np -import operator -import torch -import torch.utils.data -import json -from detectron2.utils.comm import get_world_size -from detectron2.utils.logger import _log_api_usage, log_first_n - -from detectron2.config import configurable -from detectron2.data import samplers -from torch.utils.data.sampler import BatchSampler, Sampler -from detectron2.data.common import DatasetFromList, MapDataset -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.data.build import get_detection_dataset_dicts, build_batch_data_loader -from detectron2.data.samplers import TrainingSampler, RepeatFactorTrainingSampler -from detectron2.data.build import worker_init_reset_seed, print_instances_class_histogram -from detectron2.data.build import filter_images_with_only_crowd_annotations -from detectron2.data.build import filter_images_with_few_keypoints -from detectron2.data.build import check_metadata_consistency -from detectron2.data.catalog import MetadataCatalog, DatasetCatalog -from detectron2.utils import comm -import itertools -import math -from collections import defaultdict -from typing import Optional - - -def _custom_train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler=None): - sampler_name = cfg.DATALOADER.SAMPLER_TRAIN - if 'MultiDataset' in sampler_name: - dataset_dicts = get_detection_dataset_dicts_with_source( - cfg.DATASETS.TRAIN, - filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS, - min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE - if cfg.MODEL.KEYPOINT_ON else 0, - proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None, - ) - else: - dataset_dicts = get_detection_dataset_dicts( - cfg.DATASETS.TRAIN, - filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS, - min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE - if cfg.MODEL.KEYPOINT_ON else 0, - proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None, - ) - - if mapper is None: - mapper = DatasetMapper(cfg, True) - - if sampler is not None: - pass - elif sampler_name == "TrainingSampler": - sampler = TrainingSampler(len(dataset)) - elif sampler_name == "MultiDatasetSampler": - sampler = MultiDatasetSampler( - dataset_dicts, - dataset_ratio = cfg.DATALOADER.DATASET_RATIO, - use_rfs = cfg.DATALOADER.USE_RFS, - dataset_ann = cfg.DATALOADER.DATASET_ANN, - repeat_threshold = cfg.DATALOADER.REPEAT_THRESHOLD, - ) - elif sampler_name == "RepeatFactorTrainingSampler": - repeat_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency( - dataset_dicts, cfg.DATALOADER.REPEAT_THRESHOLD - ) - sampler = RepeatFactorTrainingSampler(repeat_factors) - else: - raise ValueError("Unknown training sampler: {}".format(sampler_name)) - - return { - "dataset": dataset_dicts, - "sampler": sampler, - "mapper": mapper, - "total_batch_size": cfg.SOLVER.IMS_PER_BATCH, - "aspect_ratio_grouping": cfg.DATALOADER.ASPECT_RATIO_GROUPING, - "num_workers": cfg.DATALOADER.NUM_WORKERS, - 'multi_dataset_grouping': cfg.DATALOADER.MULTI_DATASET_GROUPING, - 'use_diff_bs_size': cfg.DATALOADER.USE_DIFF_BS_SIZE, - 'dataset_bs': cfg.DATALOADER.DATASET_BS, - 'num_datasets': len(cfg.DATASETS.TRAIN) - } - - -@configurable(from_config=_custom_train_loader_from_config) -def build_custom_train_loader( - dataset, *, mapper, sampler, - total_batch_size=16, 
-    aspect_ratio_grouping=True,
-    num_workers=0,
-    num_datasets=1,
-    multi_dataset_grouping=False,
-    use_diff_bs_size=False,
-    dataset_bs=[]
-    ):
-    """
-    Modified from detectron2.data.build.build_detection_train_loader, but supports
-    different samplers
-    """
-    if isinstance(dataset, list):
-        dataset = DatasetFromList(dataset, copy=False)
-    if mapper is not None:
-        dataset = MapDataset(dataset, mapper)
-    if sampler is None:
-        sampler = TrainingSampler(len(dataset))
-    assert isinstance(sampler, torch.utils.data.sampler.Sampler)
-    if multi_dataset_grouping:
-        return build_multi_dataset_batch_data_loader(
-            use_diff_bs_size,
-            dataset_bs,
-            dataset,
-            sampler,
-            total_batch_size,
-            num_datasets=num_datasets,
-            num_workers=num_workers,
-        )
-    else:
-        return build_batch_data_loader(
-            dataset,
-            sampler,
-            total_batch_size,
-            aspect_ratio_grouping=aspect_ratio_grouping,
-            num_workers=num_workers,
-        )
-
-
-def build_multi_dataset_batch_data_loader(
-    use_diff_bs_size, dataset_bs,
-    dataset, sampler, total_batch_size, num_datasets, num_workers=0
-):
-    """
-    """
-    world_size = get_world_size()
-    assert (
-        total_batch_size > 0 and total_batch_size % world_size == 0
-    ), "Total batch size ({}) must be divisible by the number of gpus ({}).".format(
-        total_batch_size, world_size
-    )
-
-    batch_size = total_batch_size // world_size
-    data_loader = torch.utils.data.DataLoader(
-        dataset,
-        sampler=sampler,
-        num_workers=num_workers,
-        batch_sampler=None,
-        collate_fn=operator.itemgetter(0),  # don't batch, but yield individual elements
-        worker_init_fn=worker_init_reset_seed,
-    )  # yield individual mapped dict
-    if use_diff_bs_size:
-        return DIFFMDAspectRatioGroupedDataset(
-            data_loader, dataset_bs, num_datasets)
-    else:
-        return MDAspectRatioGroupedDataset(
-            data_loader, batch_size, num_datasets)
-
-
-def get_detection_dataset_dicts_with_source(
-    dataset_names, filter_empty=True, min_keypoints=0, proposal_files=None
-):
-    assert len(dataset_names)
-    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
-    for dataset_name, dicts in zip(dataset_names, dataset_dicts):
-        assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
-
-    for source_id, (dataset_name, dicts) in \
-        enumerate(zip(dataset_names, dataset_dicts)):
-        assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
-        for d in dicts:
-            d['dataset_source'] = source_id
-
-        if "annotations" in dicts[0]:
-            try:
-                class_names = MetadataCatalog.get(dataset_name).thing_classes
-                check_metadata_consistency("thing_classes", dataset_name)
-                print_instances_class_histogram(dicts, class_names)
-            except AttributeError:  # class names are not available for this dataset
-                pass
-
-    assert proposal_files is None
-
-    dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts))
-
-    has_instances = "annotations" in dataset_dicts[0]
-    if filter_empty and has_instances:
-        dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts)
-    if min_keypoints > 0 and has_instances:
-        dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints)
-
-    return dataset_dicts
-
-
-class MultiDatasetSampler(Sampler):
-    def __init__(
-        self,
-        dataset_dicts,
-        dataset_ratio,
-        use_rfs,
-        dataset_ann,
-        repeat_threshold=0.001,
-        seed: Optional[int] = None,
-    ):
-        """
-        """
-        sizes = [0 for _ in range(len(dataset_ratio))]
-        for d in dataset_dicts:
-            sizes[d['dataset_source']] += 1
-        print('dataset sizes', sizes)
-        self.sizes = sizes
-        assert len(dataset_ratio) == len(sizes), \
-            'length of dataset ratio {}
should be equal to number of datasets {}'.format(
-                len(dataset_ratio), len(sizes)
-            )
-        if seed is None:
-            seed = comm.shared_random_seed()
-        self._seed = int(seed)
-        self._rank = comm.get_rank()
-        self._world_size = comm.get_world_size()
-
-        self.dataset_ids = torch.tensor(
-            [d['dataset_source'] for d in dataset_dicts], dtype=torch.long)
-
-        dataset_weight = [torch.ones(s) * max(sizes) / s * r / sum(dataset_ratio) \
-            for i, (r, s) in enumerate(zip(dataset_ratio, sizes))]
-        dataset_weight = torch.cat(dataset_weight)
-
-        rfs_factors = []
-        st = 0
-        for i, s in enumerate(sizes):
-            if use_rfs[i]:
-                if dataset_ann[i] == 'box':
-                    rfs_func = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency
-                else:
-                    rfs_func = repeat_factors_from_tag_frequency
-                rfs_factor = rfs_func(
-                    dataset_dicts[st: st + s],
-                    repeat_thresh=repeat_threshold)
-                rfs_factor = rfs_factor * (s / rfs_factor.sum())
-            else:
-                rfs_factor = torch.ones(s)
-            rfs_factors.append(rfs_factor)
-            st = st + s
-        rfs_factors = torch.cat(rfs_factors)
-
-        self.weights = dataset_weight * rfs_factors
-        self.sample_epoch_size = len(self.weights)
-
-    def __iter__(self):
-        start = self._rank
-        yield from itertools.islice(
-            self._infinite_indices(), start, None, self._world_size)
-
-    def _infinite_indices(self):
-        g = torch.Generator()
-        g.manual_seed(self._seed)
-        while True:
-            ids = torch.multinomial(
-                self.weights, self.sample_epoch_size, generator=g,
-                replacement=True)
-            nums = [(self.dataset_ids[ids] == i).sum().int().item() \
-                for i in range(len(self.sizes))]
-            yield from ids
-
-
-class MDAspectRatioGroupedDataset(torch.utils.data.IterableDataset):
-    def __init__(self, dataset, batch_size, num_datasets):
-        """
-        """
-        self.dataset = dataset
-        self.batch_size = batch_size
-        self._buckets = [[] for _ in range(2 * num_datasets)]
-
-    def __iter__(self):
-        for d in self.dataset:
-            w, h = d["width"], d["height"]
-            aspect_ratio_bucket_id = 0 if w > h else 1
-            bucket_id = d['dataset_source'] * 2 + aspect_ratio_bucket_id
-            bucket = self._buckets[bucket_id]
-            bucket.append(d)
-            if len(bucket) == self.batch_size:
-                yield bucket[:]
-                del bucket[:]
-
-
-class DIFFMDAspectRatioGroupedDataset(torch.utils.data.IterableDataset):
-    def __init__(self, dataset, batch_sizes, num_datasets):
-        """
-        """
-        self.dataset = dataset
-        self.batch_sizes = batch_sizes
-        self._buckets = [[] for _ in range(2 * num_datasets)]
-
-    def __iter__(self):
-        for d in self.dataset:
-            w, h = d["width"], d["height"]
-            aspect_ratio_bucket_id = 0 if w > h else 1
-            bucket_id = d['dataset_source'] * 2 + aspect_ratio_bucket_id
-            bucket = self._buckets[bucket_id]
-            bucket.append(d)
-            if len(bucket) == self.batch_sizes[d['dataset_source']]:
-                yield bucket[:]
-                del bucket[:]
-
-
-def repeat_factors_from_tag_frequency(dataset_dicts, repeat_thresh):
-    """
-    """
-    category_freq = defaultdict(int)
-    for dataset_dict in dataset_dicts:
-        cat_ids = dataset_dict['pos_category_ids']
-        for cat_id in cat_ids:
-            category_freq[cat_id] += 1
-    num_images = len(dataset_dicts)
-    for k, v in category_freq.items():
-        category_freq[k] = v / num_images
-
-    category_rep = {
-        cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq))
-        for cat_id, cat_freq in category_freq.items()
-    }
-
-    rep_factors = []
-    for dataset_dict in dataset_dicts:
-        cat_ids = dataset_dict['pos_category_ids']
-        rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}, default=1.0)
-        rep_factors.append(rep_factor)
-
-    return torch.tensor(rep_factors, dtype=torch.float32)
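-
-# Worked example for the repeat factors above (illustrative numbers, an
-# editor's note): with repeat_thresh=0.001, a tag appearing in 1% of images
-# gets max(1.0, sqrt(0.001 / 0.01)) = 1.0, while a tag appearing in 0.01% of
-# images gets max(1.0, sqrt(0.001 / 0.0001)) ≈ 3.16, so images carrying rare
-# tags are resampled roughly three times as often.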
\ No newline at end of file
diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/textarea.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/textarea.tsx
deleted file mode 100644
index af10d34eeae448c2614c67141f83a8748754332c..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/components/ui/textarea.tsx
+++ /dev/null
@@ -1,24 +0,0 @@
-import * as React from "react"
-
-import { cn } from "@/lib/utils"
-
-export interface TextareaProps
-  extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {}
-
-const Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>(
-  ({ className, ...props }, ref) => {
-    return (
-      <textarea
-        className={cn(
-          "flex min-h-[80px] w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
-          className
-        )}
-        ref={ref}
-        {...props}
-      />
-    )
-  }
-)
-Textarea.displayName = "Textarea"
-
-export { Textarea }
diff --git a/spaces/vinthony/SadTalker/src/face3d/models/networks.py b/spaces/vinthony/SadTalker/src/face3d/models/networks.py
deleted file mode 100644
index ead9cdcb8720b845c233de79dc8a8d1668492108..0000000000000000000000000000000000000000
--- a/spaces/vinthony/SadTalker/src/face3d/models/networks.py
+++ /dev/null
@@ -1,521 +0,0 @@
-"""This script defines deep neural networks for Deep3DFaceRecon_pytorch
-"""
-
-import os
-import numpy as np
-import torch.nn.functional as F
-from torch.nn import init
-import functools
-from torch.optim import lr_scheduler
-import torch
-from torch import Tensor
-import torch.nn as nn
-try:
-    from torch.hub import load_state_dict_from_url
-except ImportError:
-    from torch.utils.model_zoo import load_url as load_state_dict_from_url
-from typing import Type, Any, Callable, Union, List, Optional
-from .arcface_torch.backbones import get_model
-from kornia.geometry import warp_affine
-
-def resize_n_crop(image, M, dsize=112):
-    # image: (b, c, h, w)
-    # M   : (b, 2, 3)
-    return warp_affine(image, M, dsize=(dsize, dsize), align_corners=True)
-
-def filter_state_dict(state_dict, remove_name='fc'):
-    new_state_dict = {}
-    for key in state_dict:
-        if remove_name in key:
-            continue
-        new_state_dict[key] = state_dict[key]
-    return new_state_dict
-
-def get_scheduler(optimizer, opt):
-    """Return a learning rate scheduler
-
-    Parameters:
-        optimizer          -- the optimizer of the network
-        opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
-                              opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
-
-    For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
-    See https://pytorch.org/docs/stable/optim.html for more details.
- """ - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def define_net_recon(net_recon, use_last_fc=False, init_path=None): - return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path) - -def define_net_recog(net_recog, pretrained_path=None): - net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path) - net.eval() - return net - -class ReconNetWrapper(nn.Module): - fc_dim=257 - def __init__(self, net_recon, use_last_fc=False, init_path=None): - super(ReconNetWrapper, self).__init__() - self.use_last_fc = use_last_fc - if net_recon not in func_dict: - return NotImplementedError('network [%s] is not implemented', net_recon) - func, last_dim = func_dict[net_recon] - backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim) - if init_path and os.path.isfile(init_path): - state_dict = filter_state_dict(torch.load(init_path, map_location='cpu')) - backbone.load_state_dict(state_dict) - print("loading init net_recon %s from %s" %(net_recon, init_path)) - self.backbone = backbone - if not use_last_fc: - self.final_layers = nn.ModuleList([ - conv1x1(last_dim, 80, bias=True), # id layer - conv1x1(last_dim, 64, bias=True), # exp layer - conv1x1(last_dim, 80, bias=True), # tex layer - conv1x1(last_dim, 3, bias=True), # angle layer - conv1x1(last_dim, 27, bias=True), # gamma layer - conv1x1(last_dim, 2, bias=True), # tx, ty - conv1x1(last_dim, 1, bias=True) # tz - ]) - for m in self.final_layers: - nn.init.constant_(m.weight, 0.) - nn.init.constant_(m.bias, 0.) 
- - def forward(self, x): - x = self.backbone(x) - if not self.use_last_fc: - output = [] - for layer in self.final_layers: - output.append(layer(x)) - x = torch.flatten(torch.cat(output, dim=1), 1) - return x - - -class RecogNetWrapper(nn.Module): - def __init__(self, net_recog, pretrained_path=None, input_size=112): - super(RecogNetWrapper, self).__init__() - net = get_model(name=net_recog, fp16=False) - if pretrained_path: - state_dict = torch.load(pretrained_path, map_location='cpu') - net.load_state_dict(state_dict) - print("loading pretrained net_recog %s from %s" %(net_recog, pretrained_path)) - for param in net.parameters(): - param.requires_grad = False - self.net = net - self.preprocess = lambda x: 2 * x - 1 - self.input_size=input_size - - def forward(self, image, M): - image = self.preprocess(resize_n_crop(image, M, self.input_size)) - id_feature = F.normalize(self.net(image), dim=-1, p=2) - return id_feature - - -# adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py -__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', - 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', - 'wide_resnet50_2', 'wide_resnet101_2'] - - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth', - 'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth', - 'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth', - 'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth', - 'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth', -} - - -def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d: - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d: - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias) - - -class BasicBlock(nn.Module): - expansion: int = 1 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = 
self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. - # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. - - expansion: int = 4 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__( - self, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - num_classes: int = 1000, - zero_init_residual: bool = False, - use_last_fc: bool = False, - groups: int = 1, - width_per_group: int = 64, - replace_stride_with_dilation: Optional[List[bool]] = None, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.use_last_fc = use_last_fc - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - dilate=replace_stride_with_dilation[2]) - self.avgpool = 
nn.AdaptiveAvgPool2d((1, 1)) - - if self.use_last_fc: - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. - # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type] - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type] - - def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int, - stride: int = 1, dilate: bool = False) -> nn.Sequential: - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def _forward_impl(self, x: Tensor) -> Tensor: - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - if self.use_last_fc: - x = torch.flatten(x, 1) - x = self.fc(x) - return x - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - -def _resnet( - arch: str, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - pretrained: bool, - progress: bool, - **kwargs: Any -) -> ResNet: - model = ResNet(block, layers, **kwargs) - if pretrained: - state_dict = load_state_dict_from_url(model_urls[arch], - progress=progress) - model.load_state_dict(state_dict) - return model - - -def resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-18 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress, - **kwargs) - - -def resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-34 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-50 model from - `"Deep Residual Learning for Image Recognition" `_. 
- - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-101 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress, - **kwargs) - - -def resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-152 model from - `"Deep Residual Learning for Image Recognition" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress, - **kwargs) - - -def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNeXt-50 32x4d model from - `"Aggregated Residual Transformation for Deep Neural Networks" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 4 - return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNeXt-101 32x8d model from - `"Aggregated Residual Transformation for Deep Neural Networks" `_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 8 - return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""Wide ResNet-50-2 model from - `"Wide Residual Networks" `_. - - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""Wide ResNet-101-2 model from - `"Wide Residual Networks" `_. - - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. 
- - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -func_dict = { - 'resnet18': (resnet18, 512), - 'resnet50': (resnet50, 2048) -} diff --git a/spaces/vinthony/SadTalker/src/test_audio2coeff.py b/spaces/vinthony/SadTalker/src/test_audio2coeff.py deleted file mode 100644 index bbf19f494e2127b4ae9d6074b172fddb694d6e34..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/test_audio2coeff.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import torch -import numpy as np -from scipy.io import savemat, loadmat -from yacs.config import CfgNode as CN -from scipy.signal import savgol_filter - -import safetensors -import safetensors.torch - -from src.audio2pose_models.audio2pose import Audio2Pose -from src.audio2exp_models.networks import SimpleWrapperV2 -from src.audio2exp_models.audio2exp import Audio2Exp -from src.utils.safetensor_helper import load_x_from_safetensor - -def load_cpk(checkpoint_path, model=None, optimizer=None, device="cpu"): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if model is not None: - model.load_state_dict(checkpoint['model']) - if optimizer is not None: - optimizer.load_state_dict(checkpoint['optimizer']) - - return checkpoint['epoch'] - -class Audio2Coeff(): - - def __init__(self, sadtalker_path, device): - #load config - fcfg_pose = open(sadtalker_path['audio2pose_yaml_path']) - cfg_pose = CN.load_cfg(fcfg_pose) - cfg_pose.freeze() - fcfg_exp = open(sadtalker_path['audio2exp_yaml_path']) - cfg_exp = CN.load_cfg(fcfg_exp) - cfg_exp.freeze() - - # load audio2pose_model - self.audio2pose_model = Audio2Pose(cfg_pose, None, device=device) - self.audio2pose_model = self.audio2pose_model.to(device) - self.audio2pose_model.eval() - for param in self.audio2pose_model.parameters(): - param.requires_grad = False - - try: - if sadtalker_path['use_safetensor']: - checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint']) - self.audio2pose_model.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2pose')) - else: - load_cpk(sadtalker_path['audio2pose_checkpoint'], model=self.audio2pose_model, device=device) - except: - raise Exception("Failed in loading audio2pose_checkpoint") - - # load audio2exp_model - netG = SimpleWrapperV2() - netG = netG.to(device) - for param in netG.parameters(): - netG.requires_grad = False - netG.eval() - try: - if sadtalker_path['use_safetensor']: - checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint']) - netG.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2exp')) - else: - load_cpk(sadtalker_path['audio2exp_checkpoint'], model=netG, device=device) - except: - raise Exception("Failed in loading audio2exp_checkpoint") - self.audio2exp_model = Audio2Exp(netG, cfg_exp, device=device, prepare_training_loss=False) - self.audio2exp_model = self.audio2exp_model.to(device) - for param in self.audio2exp_model.parameters(): - param.requires_grad = False - self.audio2exp_model.eval() - - self.device = device - - def generate(self, batch, coeff_save_dir, pose_style, ref_pose_coeff_path=None): - - with torch.no_grad(): - #test - results_dict_exp= self.audio2exp_model.test(batch) - exp_pred = results_dict_exp['exp_coeff_pred'] #bs T 64 - - #for class_id in range(1): - #class_id = 0#(i+10)%45 - #class_id = 
random.randint(0,46) #46 styles can be selected - batch['class'] = torch.LongTensor([pose_style]).to(self.device) - results_dict_pose = self.audio2pose_model.test(batch) - pose_pred = results_dict_pose['pose_pred'] #bs T 6 - - pose_len = pose_pred.shape[1] - if pose_len<13: - pose_len = int((pose_len-1)/2)*2+1 - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), pose_len, 2, axis=1)).to(self.device) - else: - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), 13, 2, axis=1)).to(self.device) - - coeffs_pred = torch.cat((exp_pred, pose_pred), dim=-1) #bs T 70 - - coeffs_pred_numpy = coeffs_pred[0].clone().detach().cpu().numpy() - - if ref_pose_coeff_path is not None: - coeffs_pred_numpy = self.using_refpose(coeffs_pred_numpy, ref_pose_coeff_path) - - savemat(os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])), - {'coeff_3dmm': coeffs_pred_numpy}) - - return os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])) - - def using_refpose(self, coeffs_pred_numpy, ref_pose_coeff_path): - num_frames = coeffs_pred_numpy.shape[0] - refpose_coeff_dict = loadmat(ref_pose_coeff_path) - refpose_coeff = refpose_coeff_dict['coeff_3dmm'][:,64:70] - refpose_num_frames = refpose_coeff.shape[0] - if refpose_num_frames 0: - self.classifier = nn.Sequential( - nn.Dropout(), - nn.Linear(256 * 6 * 6, 4096), - nn.ReLU(inplace=True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(inplace=True), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - # use default initializer - pass - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - - x = self.features(x) - if self.num_classes > 0: - x = x.view(x.size(0), 256 * 6 * 6) - x = self.classifier(x) - - return x diff --git a/spaces/vumichien/canvas_controlnet/ldm/models/diffusion/ddpm.py b/spaces/vumichien/canvas_controlnet/ldm/models/diffusion/ddpm.py deleted file mode 100644 index f71a44af48c8cba8e97849b7e6813b3e6f9fe83c..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1797 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager, nullcontext -from functools import partial -import itertools -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only -from omegaconf import ListConfig - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL -from 
ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - make_it_fit=False, - ucg_training=None, - reset_ema=False, - reset_num_ema_updates=False, - ): - super().__init__() - assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - self.make_it_fit = make_it_fit - if reset_ema: assert exists(ckpt_path) - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - if reset_ema: - assert self.use_ema - print(f"Resetting ema to pure model weights. 
This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - else: - self.register_buffer('logvar', logvar) - - self.ucg_training = ucg_training or dict() - if self.ucg_training: - self.ucg_prng = np.random.RandomState() - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. 
* 1 - torch.Tensor(alphas_cumprod)) - elif self.parameterization == "v": - lvlb_weights = torch.ones_like(self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))) - else: - raise NotImplementedError("mu not supported") - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - @torch.no_grad() - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - if self.make_it_fit: - n_params = len([name for name, _ in - itertools.chain(self.named_parameters(), - self.named_buffers())]) - for name, param in tqdm( - itertools.chain(self.named_parameters(), - self.named_buffers()), - desc="Fitting old weights to new weights", - total=n_params - ): - if not name in sd: - continue - old_shape = sd[name].shape - new_shape = param.shape - assert len(old_shape) == len(new_shape) - if len(new_shape) > 2: - # we only modify first two axes - assert new_shape[2:] == old_shape[2:] - # assumes first axis corresponds to output dim - if not new_shape == old_shape: - new_param = param.clone() - old_param = sd[name] - if len(new_shape) == 1: - for i in range(new_param.shape[0]): - new_param[i] = old_param[i % old_shape[0]] - elif len(new_shape) >= 2: - for i in range(new_param.shape[0]): - for j in range(new_param.shape[1]): - new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]] - - n_used_old = torch.ones(old_shape[1]) - for j in range(new_param.shape[1]): - n_used_old[j % old_shape[1]] += 1 - n_used_new = torch.zeros(new_shape[1]) - for j in range(new_param.shape[1]): - n_used_new[j] = n_used_old[j % old_shape[1]] - - n_used_new = n_used_new[None, :] - while len(n_used_new.shape) < len(new_shape): - n_used_new = n_used_new.unsqueeze(-1) - new_param /= n_used_new - - sd[name] = new_param - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys:\n {missing}") - if len(unexpected) > 0: - print(f"\nUnexpected Keys:\n {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
- """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def predict_start_from_z_and_v(self, x_t, t, v): - # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v - ) - - def predict_eps_from_z_and_v(self, x_t, t, v): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) 
- - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_v(self, x, noise, t): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x - ) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t 
= torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - for k in self.ucg_training: - p = self.ucg_training[k]["p"] - val = self.ucg_training[k]["val"] - if val is None: - val = "" - for i in range(len(batch[k])): - if self.ucg_prng.choice(2, p=[1 - p, p]): - batch[k][i] = val - - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - """main class""" - - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - 
concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - force_null_conditioning=False, - *args, **kwargs): - self.force_null_conditioning = force_null_conditioning - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning: - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - reset_ema = kwargs.pop("reset_ema", False) - reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - if reset_ema: - assert self.use_ema - print( - f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = 
torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None, return_x=False): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None and not self.force_null_conditioning: 
- if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox', "txt"]: - xc = batch[cond_key] - elif cond_key in ['class_label', 'cls']: - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_x: - out.extend([x]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def apply_model(self, x_noisy, t, cond, return_ids=False): - if isinstance(cond, dict): - # hybrid case, cond is expected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. 
- """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None, **kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs): - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, - shape, cond, verbose=False, **kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True, **kwargs) - - return samples, intermediates - - @torch.no_grad() - def get_unconditional_conditioning(self, batch_size, null_label=None): - if null_label is not None: - xc = null_label - if isinstance(xc, ListConfig): - xc = list(xc) - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - if hasattr(xc, "to"): - xc = xc.to(self.device) - c = self.get_learned_conditioning(xc) - else: - if 
self.cond_stage_key in ["class_label", "cls"]: - xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device) - return self.get_learned_conditioning(xc) - else: - raise NotImplementedError("todo") - if isinstance(c, list): # in case the encoder gives us a list - for i in range(len(c)): - c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device) - else: - c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device) - return c - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', "cls"]: - try: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - except KeyError: - # probably no "human_label" in batch - pass - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - quantize_denoised=True) - # samples, 
z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if unconditional_guidance_scale > 1.0: - uc = self.get_unconditional_conditioning(N, unconditional_guidance_label) - if self.model.conditioning_key == "crossattn-adm": - uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] - with ema_scope("Plotting Inpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - mask = 1. - mask - with ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
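-        # note: x is now min-max rescaled to [-1, 1] for display (assumes x.max() > x.min())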
- return x - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False) - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - if not self.sequential_cross_attn: - cc = torch.cat(c_crossattn, 1) - else: - cc = c_crossattn - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'hybrid-adm': - assert c_adm is not None - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc, y=c_adm) - elif self.conditioning_key == 'crossattn-adm': - assert c_adm is not None - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc, y=c_adm) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class LatentUpscaleDiffusion(LatentDiffusion): - def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs): - super().__init__(*args, **kwargs) - # assumes that neither the cond_stage nor the low_scale_model contain trainable params - assert not self.cond_stage_trainable - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - self.noise_level_key = noise_level_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False): - if not log_mode: - z, c = super().get_input(batch, k, force_c_encode=True, bs=bs) - else: - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - x_low = batch[self.low_scale_key][:bs] - x_low = rearrange(x_low, 'b h w c -> b c h w') - x_low = x_low.to(memory_format=torch.contiguous_format).float() - zx, noise_level = self.low_scale_model(x_low) - if self.noise_level_key is not None: - # get noise level from batch instead, e.g. 
when extracting a custom noise level for bsr - raise NotImplementedError('TODO') - - all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level} - if log_mode: - # TODO: maybe disable if too expensive - x_low_rec = self.low_scale_model.decode(zx) - return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level - return z, all_conds - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True, - unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N, - log_mode=True) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - log["x_lr"] = x_low - log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label) - # TODO explore better "unconditional" choices for the other keys - # maybe guide away from empty text label and highest noise level and maximally degraded zx? - uc = dict() - for k in c: - if k == "c_crossattn": - assert isinstance(c[k], list) and len(c[k]) == 1 - uc[k] = [uc_tmp] - elif k == "c_adm": # todo: only run with text-based guidance? 
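-                    # note: for "c_adm" the conditional noise level is reused unchanged as the unconditional input
-                    # (the commented-out line below would instead guide away from max_noise_level)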
- assert isinstance(c[k], torch.Tensor) - #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level - uc[k] = c[k] - elif isinstance(c[k], list): - uc[k] = [c[k][i] for i in range(len(c[k]))] - else: - uc[k] = c[k] - - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - return log - - -class LatentFinetuneDiffusion(LatentDiffusion): - """ - Basis for different finetunas, such as inpainting or depth2image - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys: tuple, - finetune_keys=("model.diffusion_model.input_blocks.0.0.weight", - "model_ema.diffusion_modelinput_blocks00weight" - ), - keep_finetune_dims=4, - # if model was trained without concat mode before and we would like to keep these channels - c_concat_log_start=None, # to log reconstruction of c_concat codes - c_concat_log_end=None, - *args, **kwargs - ): - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", list()) - super().__init__(*args, **kwargs) - self.finetune_keys = finetune_keys - self.concat_keys = concat_keys - self.keep_dims = keep_finetune_dims - self.c_concat_log_start = c_concat_log_start - self.c_concat_log_end = c_concat_log_end - if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint' - if exists(ckpt_path): - self.init_from_ckpt(ckpt_path, ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - - # make it explicit, finetune by including extra input channels - if exists(self.finetune_keys) and k in self.finetune_keys: - new_entry = None - for name, param in self.named_parameters(): - if name in self.finetune_keys: - print( - f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only") - new_entry = torch.zeros_like(param) # zero init - assert exists(new_entry), 'did not find matching parameter to modify' - new_entry[:, :self.keep_dims, ...] 
= sd[k] - sd[k] = new_entry - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True) - c_cat, c = c["c_concat"][0], c["c_crossattn"][0] - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if not (self.c_concat_log_start is None and self.c_concat_log_end is None): - log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end]) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label) - uc_cat = c_cat - uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": 
[c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc_full, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - return log - - -class LatentInpaintDiffusion(LatentFinetuneDiffusion): - """ - can either run as pure inpainting model (only concat mode) or with mixed conditionings, - e.g. mask as concat and text via cross-attn. - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys=("mask", "masked_image"), - masked_image_key="masked_image", - *args, **kwargs - ): - super().__init__(concat_keys, *args, **kwargs) - self.masked_image_key = masked_image_key - assert self.masked_image_key in concat_keys - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - c_cat = list() - for ck in self.concat_keys: - cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - bchw = z.shape - if ck != self.masked_image_key: - cc = torch.nn.functional.interpolate(cc, size=bchw[-2:]) - else: - cc = self.get_first_stage_encoding(self.encode_first_stage(cc)) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs) - log["masked_image"] = rearrange(args[0]["masked_image"], - 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - return log - - -class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion): - """ - condition on monocular depth estimation - """ - - def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.depth_model = instantiate_from_config(depth_stage_config) - self.depth_stage_key = concat_keys[0] - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - c_cat = list() - for ck in self.concat_keys: - cc = batch[ck] - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - cc = self.depth_model(cc) - cc = torch.nn.functional.interpolate( - cc, - size=z.shape[2:], - mode="bicubic", - align_corners=False, - ) - - depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3], - keepdim=True) - cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1. 
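-            # note: per-sample min-max normalization of the predicted depth into [-1, 1];
-            # the +0.001 guards against division by zero on flat depth maps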
- c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - depth = self.depth_model(args[0][self.depth_stage_key]) - depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \ - torch.amax(depth, dim=[1, 2, 3], keepdim=True) - log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1. - return log - - -class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion): - """ - condition on low-res image (and optionally on some spatial noise augmentation) - """ - def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None, - low_scale_config=None, low_scale_key=None, *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.reshuffle_patch_size = reshuffle_patch_size - self.low_scale_model = None - if low_scale_config is not None: - print("Initializing a low-scale model") - assert exists(low_scale_key) - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - # optionally make spatial noise_level here - c_cat = list() - noise_level = None - for ck in self.concat_keys: - cc = batch[ck] - cc = rearrange(cc, 'b h w c -> b c h w') - if exists(self.reshuffle_patch_size): - assert isinstance(self.reshuffle_patch_size, int) - cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w', - p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size) - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - if exists(self.low_scale_model) and ck == self.low_scale_key: - cc, noise_level = self.low_scale_model(cc) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - if exists(noise_level): - all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level} - else: - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w') - return log diff --git a/spaces/weide/ChuanhuChatGPT2/custom.css b/spaces/weide/ChuanhuChatGPT2/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/weide/ChuanhuChatGPT2/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - 
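-    /* small, subdued monospace status text */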
-    font-family: monospace;
-    color: var(--body-text-color-subdued);
-}
-
-#chuanhu_chatbot, #status_display {
-    transition: all 0.6s;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
-    padding-inline-start: 2em !important;
-}
-
-/* light theme */
-#chuanhu_chatbot {
-    background-color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
-    background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
-    background-color: #95EC69 !important;
-}
-/* chat bubbles */
-[class *= "message"] {
-    border-radius: var(--radius-xl) !important;
-    border: none;
-    padding: var(--spacing-xl) !important;
-    font-size: var(--text-md) !important;
-    line-height: var(--line-md) !important;
-    min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-    min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
-    max-width: 85%;
-    border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
-    max-width: 85%;
-    width: auto !important;
-    border-bottom-right-radius: 0 !important;
-}
-/* tables */
-table {
-    margin: 1em 0;
-    border-collapse: collapse;
-    empty-cells: show;
-}
-td,th {
-    border: 1.2px solid var(--border-color-primary) !important;
-    padding: 0.2em;
-}
-thead {
-    background-color: rgba(175,184,193,0.2);
-}
-thead th {
-    padding: .5em .2em;
-}
-/* inline code */
-code {
-    display: inline;
-    white-space: break-spaces;
-    border-radius: 6px;
-    margin: 0 2px 0 2px;
-    padding: .2em .4em .1em .4em;
-    background-color: rgba(175,184,193,0.2);
-}
-/* code blocks */
-pre code {
-    display: block;
-    overflow: auto;
-    white-space: pre;
-    background-color: hsla(0, 0%, 0%, 80%)!important;
-    border-radius: 10px;
-    padding: 1.4em 1.2em 0em 1.4em;
-    margin: 1.2em 2em 1.2em 0.5em;
-    color: #FFF;
-    box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* syntax highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color:
#66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/whgwd2023/bingo/README.md b/spaces/whgwd2023/bingo/README.md deleted file mode 100644 index d65eafbc8431818f738e8e086455fa6159f101bb..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/README.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main interactions of the New Bing web UI, usable inside mainland China, compatible with most Microsoft Bing AI features, and ready for self-hosted deployment.
-
-![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars)
-![Github issues](https://img.shields.io/github/issues/weaigc/bingo)
-[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license)
-
-
-## Demo Site
-
-https://bing.github1s.tk
-
-[![img](./docs/images/demo.png)](https://bing.github1s.tk)
-
-## Features
-
-- Fully rewritten in Next.js, closely reproducing the New Bing web UI; the experience is essentially the same as Bing AI.
-- Supports Docker builds for quick, convenient deployment and access.
-- Cookies can be configured globally and shared by all users.
-- Supports continuous voice conversations.
-
-## RoadMap
-
- - [x] Support wss forwarding
- - [x] Support one-click deployment
- - [x] Improve the mobile layout
- - [x] Support image generation
- - [x] Support voice input (with voice commands; currently desktop Edge and Chrome only)
- - [x] Support voice output (must be enabled manually)
- - [x] Support image input
- - [x] Support custom domains
- - [ ] Support conversation history
- - [ ] Add a dark mode
- - [ ] Support built-in prompts
- - [ ] Support offline access
- - [ ] Internationalized translations
-
-## One-Click Deployment
-You can also deploy your own New Bing AI to 🤗 HuggingFace with one click.
-
-### Deploy to Huggingface
-1. Click this icon
-[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the default configuration can be left unchanged.
-
-2. After deployment finishes, open "Settings" > "Site domain", copy the HF domain, and share it with others.
-
-> Huggingface does not support binding your own domain, but there are workarounds:
-> 1. Via Cloudflare Workers: [deploy Cloudflare Workers](#use-a-custom-domain-via-cloudflare-workers)
-> 2. Via Github Pages and an iframe: [how to bind a domain](https://github.com/weaigc/bingo/issues/4)
-
-### Use a Custom Domain via Cloudflare Workers
-
-> Core code: [worker.js](./cloudflare/worker.js)
-
-- [Register a Cloudflare account](https://dash.cloudflare.com/sign-up)
-
-- Add a new site. You need your own domain, with its `Name Server` records delegated to Cloudflare (search online for details).
-
-- Open "Workers" from the left-hand menu and click "Create a Worker".
-
-- Create the Worker service, copy the full contents of [worker.js](./cloudflare/worker.js) into it, adjust it according to the comments, then save and deploy.
-
-- Configure your custom access domain under Triggers.
-
-### Deploying to Other Platforms
-
-
-Other platforms are currently being blocked by New Bing and run into many problems, so they are no longer recommended; the instructions below are kept for anyone who still needs them.
-
-#### Deploy to Netlify
-[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paying Vercel user, you can use the link below to deploy to Vercel with one click. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-
-
-## Requirements
-
-- Node.js >= 18
-- Bing AI [identity information](#how-to-get-bing_header)
-
-## Installation and Usage
-
-> Because Microsoft is currently blocking aggressively, [deploying to Huggingface](#deploy-to-huggingface) is the recommended route.
-
-* Start with Node
-
-```bash
-git clone https://github.com/weaigc/bingo.git
-npm i # pnpm i is recommended
-npm run build
-npm run start
-```
-
-* Start with Docker
-```bash
-docker pull weaigc/bingo
-docker run --rm -it -p 7860:7860 weaigc/bingo
-# or
-docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo
-```
-
-## How to Get BING_HEADER
-> Setting BING_HEADER means sharing your own account with everyone who uses this service. If you don't need login-free image generation, it is better to leave this variable unset.
-
-Open https://www.bing.com and log in, then visit https://www.bing.com/turing/captcha/challenge and pass the human verification. Then:
-
-![BING HEADER](./docs/images/curl.png)
-
-> The copied content should look like the example below. Once you have confirmed the format, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", and paste the result from the clipboard. (You can also verify it on that page first.)
-
-The following is a format reference. Note that the format saved from the web page starts with `curl`, while the `BING_HEADER` configured on the server is in `base64` form; the two are not interchangeable.
-
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZE
NYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
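
For reference, the `BING_HEADER` value appears to be nothing more than the base64 encoding of the saved `curl` command, so you can also produce it locally instead of using the converter page. A minimal sketch in Python (the file name and the whitespace normalization are assumptions, not part of the project; verify the result against the converter page if the header is rejected):

```python
import base64

# Read the curl command copied from the browser (saved to a local file first).
# NOTE: "bing_header_curl.txt" is a hypothetical file name.
with open("bing_header_curl.txt", "r", encoding="utf-8") as f:
    # Collapse line continuations and runs of whitespace into single spaces;
    # the converter page may normalize differently, so treat this as a sketch.
    curl_command = " ".join(f.read().split())

# BING_HEADER is the base64 encoding of the curl command.
bing_header = base64.b64encode(curl_command.encode("utf-8")).decode("ascii")
print(bing_header)  # set this as the BING_HEADER environment variable
```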
-
-## Acknowledgements
- - Thanks to [EdgeGPT](https://github.com/acheong08/EdgeGPT) for the proxy API approach.
- - Thanks to [Vercel AI](https://github.com/vercel-labs/ai-chatbot) for the base scaffolding, and to [ChatHub](https://github.com/chathub-dev/chathub) and [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) for portions of the code.
-
-## Questions and Discussion
-
-## License
-
-MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE).
-
diff --git a/spaces/whgwd2023/bingo/src/app/page.tsx b/spaces/whgwd2023/bingo/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/whgwd2023/bingo/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
-  () => import('../components/chat'),
-  { ssr: false }
-)
-
-export default function IndexPage() {
-  return (
-    <>
-      <DynamicComponentWithNoSSR />
-    </>
    - - - ) -} diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/metrics/rank_cylib/test_cython.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/metrics/rank_cylib/test_cython.py deleted file mode 100644 index 5d1175d70bbc22e8c98fa8c5f89e2f5e88dc9c0f..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/metrics/rank_cylib/test_cython.py +++ /dev/null @@ -1,83 +0,0 @@ -from __future__ import print_function -import sys -import numpy as np -import timeit -import os.path as osp - -from torchreid import metrics - -sys.path.insert(0, osp.dirname(osp.abspath(__file__)) + '/../../..') -""" -Test the speed of cython-based evaluation code. The speed improvements -can be much bigger when using the real reid data, which contains a larger -amount of query and gallery images. - -Note: you might encounter the following error: - 'AssertionError: Error: all query identities do not appear in gallery'. -This is normal because the inputs are random numbers. Just try again. -""" - -print('*** Compare running time ***') - -setup = ''' -import sys -import os.path as osp -import numpy as np -sys.path.insert(0, osp.dirname(osp.abspath(__file__)) + '/../../..') -from torchreid import metrics -num_q = 30 -num_g = 300 -max_rank = 5 -distmat = np.random.rand(num_q, num_g) * 20 -q_pids = np.random.randint(0, num_q, size=num_q) -g_pids = np.random.randint(0, num_g, size=num_g) -q_camids = np.random.randint(0, 5, size=num_q) -g_camids = np.random.randint(0, 5, size=num_g) -''' - -print('=> Using market1501\'s metric') -pytime = timeit.timeit( - 'metrics.evaluate_rank(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_cython=False)', - setup=setup, - number=20 -) -cytime = timeit.timeit( - 'metrics.evaluate_rank(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_cython=True)', - setup=setup, - number=20 -) -print('Python time: {} s'.format(pytime)) -print('Cython time: {} s'.format(cytime)) -print('Cython is {} times faster than python\n'.format(pytime / cytime)) - -print('=> Using cuhk03\'s metric') -pytime = timeit.timeit( - 'metrics.evaluate_rank(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_metric_cuhk03=True, use_cython=False)', - setup=setup, - number=20 -) -cytime = timeit.timeit( - 'metrics.evaluate_rank(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_metric_cuhk03=True, use_cython=True)', - setup=setup, - number=20 -) -print('Python time: {} s'.format(pytime)) -print('Cython time: {} s'.format(cytime)) -print('Cython is {} times faster than python\n'.format(pytime / cytime)) -""" -print("=> Check precision") - -num_q = 30 -num_g = 300 -max_rank = 5 -distmat = np.random.rand(num_q, num_g) * 20 -q_pids = np.random.randint(0, num_q, size=num_q) -g_pids = np.random.randint(0, num_g, size=num_g) -q_camids = np.random.randint(0, 5, size=num_q) -g_camids = np.random.randint(0, 5, size=num_g) - -cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_cython=False) -print("Python:\nmAP = {} \ncmc = {}\n".format(mAP, cmc)) -cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_cython=True) -print("Cython:\nmAP = {} \ncmc = {}\n".format(mAP, cmc)) -""" diff --git a/spaces/xfys/yolov5_tracking/yolov5/utils/segment/metrics.py b/spaces/xfys/yolov5_tracking/yolov5/utils/segment/metrics.py deleted file mode 100644 index 6020fa062ba562c770a3a6dd17daf7fa30e1dfc2..0000000000000000000000000000000000000000 --- 
a/spaces/xfys/yolov5_tracking/yolov5/utils/segment/metrics.py +++ /dev/null @@ -1,210 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -Model validation metrics -""" - -import numpy as np - -from ..metrics import ap_per_class - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9, 0.0, 0.0, 0.1, 0.9] - return (x[:, :8] * w).sum(1) - - -def ap_per_class_box_and_mask( - tp_m, - tp_b, - conf, - pred_cls, - target_cls, - plot=False, - save_dir='.', - names=(), -): - """ - Args: - tp_b: tp of boxes. - tp_m: tp of masks. - other arguments see `func: ap_per_class`. - """ - results_boxes = ap_per_class(tp_b, - conf, - pred_cls, - target_cls, - plot=plot, - save_dir=save_dir, - names=names, - prefix='Box')[2:] - results_masks = ap_per_class(tp_m, - conf, - pred_cls, - target_cls, - plot=plot, - save_dir=save_dir, - names=names, - prefix='Mask')[2:] - - results = { - 'boxes': { - 'p': results_boxes[0], - 'r': results_boxes[1], - 'ap': results_boxes[3], - 'f1': results_boxes[2], - 'ap_class': results_boxes[4]}, - 'masks': { - 'p': results_masks[0], - 'r': results_masks[1], - 'ap': results_masks[3], - 'f1': results_masks[2], - 'ap_class': results_masks[4]}} - return results - - -class Metric: - - def __init__(self) -> None: - self.p = [] # (nc, ) - self.r = [] # (nc, ) - self.f1 = [] # (nc, ) - self.all_ap = [] # (nc, 10) - self.ap_class_index = [] # (nc, ) - - @property - def ap50(self): - """AP@0.5 of all classes. - Return: - (nc, ) or []. - """ - return self.all_ap[:, 0] if len(self.all_ap) else [] - - @property - def ap(self): - """AP@0.5:0.95 - Return: - (nc, ) or []. - """ - return self.all_ap.mean(1) if len(self.all_ap) else [] - - @property - def mp(self): - """mean precision of all classes. - Return: - float. - """ - return self.p.mean() if len(self.p) else 0.0 - - @property - def mr(self): - """mean recall of all classes. - Return: - float. - """ - return self.r.mean() if len(self.r) else 0.0 - - @property - def map50(self): - """Mean AP@0.5 of all classes. - Return: - float. - """ - return self.all_ap[:, 0].mean() if len(self.all_ap) else 0.0 - - @property - def map(self): - """Mean AP@0.5:0.95 of all classes. - Return: - float. 
- """ - return self.all_ap.mean() if len(self.all_ap) else 0.0 - - def mean_results(self): - """Mean of results, return mp, mr, map50, map""" - return (self.mp, self.mr, self.map50, self.map) - - def class_result(self, i): - """class-aware result, return p[i], r[i], ap50[i], ap[i]""" - return (self.p[i], self.r[i], self.ap50[i], self.ap[i]) - - def get_maps(self, nc): - maps = np.zeros(nc) + self.map - for i, c in enumerate(self.ap_class_index): - maps[c] = self.ap[i] - return maps - - def update(self, results): - """ - Args: - results: tuple(p, r, ap, f1, ap_class) - """ - p, r, all_ap, f1, ap_class_index = results - self.p = p - self.r = r - self.all_ap = all_ap - self.f1 = f1 - self.ap_class_index = ap_class_index - - -class Metrics: - """Metric for boxes and masks.""" - - def __init__(self) -> None: - self.metric_box = Metric() - self.metric_mask = Metric() - - def update(self, results): - """ - Args: - results: Dict{'boxes': Dict{}, 'masks': Dict{}} - """ - self.metric_box.update(list(results['boxes'].values())) - self.metric_mask.update(list(results['masks'].values())) - - def mean_results(self): - return self.metric_box.mean_results() + self.metric_mask.mean_results() - - def class_result(self, i): - return self.metric_box.class_result(i) + self.metric_mask.class_result(i) - - def get_maps(self, nc): - return self.metric_box.get_maps(nc) + self.metric_mask.get_maps(nc) - - @property - def ap_class_index(self): - # boxes and masks have the same ap_class_index - return self.metric_box.ap_class_index - - -KEYS = [ - 'train/box_loss', - 'train/seg_loss', # train loss - 'train/obj_loss', - 'train/cls_loss', - 'metrics/precision(B)', - 'metrics/recall(B)', - 'metrics/mAP_0.5(B)', - 'metrics/mAP_0.5:0.95(B)', # metrics - 'metrics/precision(M)', - 'metrics/recall(M)', - 'metrics/mAP_0.5(M)', - 'metrics/mAP_0.5:0.95(M)', # metrics - 'val/box_loss', - 'val/seg_loss', # val loss - 'val/obj_loss', - 'val/cls_loss', - 'x/lr0', - 'x/lr1', - 'x/lr2',] - -BEST_KEYS = [ - 'best/epoch', - 'best/precision(B)', - 'best/recall(B)', - 'best/mAP_0.5(B)', - 'best/mAP_0.5:0.95(B)', - 'best/precision(M)', - 'best/recall(M)', - 'best/mAP_0.5(M)', - 'best/mAP_0.5:0.95(M)',] diff --git a/spaces/xiang-wuu/yolov5/utils/benchmarks.py b/spaces/xiang-wuu/yolov5/utils/benchmarks.py deleted file mode 100644 index d412653c866fa0a0c4797cf4edd4b41ba0bb458e..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/utils/benchmarks.py +++ /dev/null @@ -1,157 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run YOLOv5 benchmarks on all supported export formats - -Format | `export.py --include` | Model ---- | --- | --- -PyTorch | - | yolov5s.pt -TorchScript | `torchscript` | yolov5s.torchscript -ONNX | `onnx` | yolov5s.onnx -OpenVINO | `openvino` | yolov5s_openvino_model/ -TensorRT | `engine` | yolov5s.engine -CoreML | `coreml` | yolov5s.mlmodel -TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/ -TensorFlow GraphDef | `pb` | yolov5s.pb -TensorFlow Lite | `tflite` | yolov5s.tflite -TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite -TensorFlow.js | `tfjs` | yolov5s_web_model/ - -Requirements: - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU - $ pip install -U nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com # TensorRT - -Usage: - $ python utils/benchmarks.py --weights yolov5s.pt --img 640 -""" - 
-import argparse -import platform -import sys -import time -from pathlib import Path - -import pandas as pd - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -# ROOT = ROOT.relative_to(Path.cwd()) # relative - -import export -import val -from utils import notebook_init -from utils.general import LOGGER, check_yaml, file_size, print_args -from utils.torch_utils import select_device - - -def run( - weights=ROOT / 'yolov5s.pt', # weights path - imgsz=640, # inference size (pixels) - batch_size=1, # batch size - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu - half=False, # use FP16 half-precision inference - test=False, # test exports only - pt_only=False, # test PyTorch only - hard_fail=False, # throw error on benchmark failure -): - y, t = [], time.time() - device = select_device(device) - for i, (name, f, suffix, cpu, gpu) in export.export_formats().iterrows(): # index, (name, file, suffix, CPU, GPU) - try: - assert i not in (9, 10), 'inference not supported' # Edge TPU and TF.js are unsupported - assert i != 5 or platform.system() == 'Darwin', 'inference only supported on macOS>=10.13' # CoreML - if 'cpu' in device.type: - assert cpu, 'inference not supported on CPU' - if 'cuda' in device.type: - assert gpu, 'inference not supported on GPU' - - # Export - if f == '-': - w = weights # PyTorch format - else: - w = export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1] # all others - assert suffix in str(w), 'export failed' - - # Validate - result = val.run(data, w, batch_size, imgsz, plots=False, device=device, task='benchmark', half=half) - metrics = result[0] # metrics (mp, mr, map50, map, *losses(box, obj, cls)) - speeds = result[2] # times (preprocess, inference, postprocess) - y.append([name, round(file_size(w), 1), round(metrics[3], 4), round(speeds[1], 2)]) # MB, mAP, t_inference - except Exception as e: - if hard_fail: - assert type(e) is AssertionError, f'Benchmark --hard-fail for {name}: {e}' - LOGGER.warning(f'WARNING: Benchmark failure for {name}: {e}') - y.append([name, None, None, None]) # mAP, t_inference - if pt_only and i == 0: - break # break after PyTorch - - # Print results - LOGGER.info('\n') - parse_opt() - notebook_init() # print system info - c = ['Format', 'Size (MB)', 'mAP@0.5:0.95', 'Inference time (ms)'] if map else ['Format', 'Export', '', ''] - py = pd.DataFrame(y, columns=c) - LOGGER.info(f'\nBenchmarks complete ({time.time() - t:.2f}s)') - LOGGER.info(str(py if map else py.iloc[:, :2])) - return py - - -def test( - weights=ROOT / 'yolov5s.pt', # weights path - imgsz=640, # inference size (pixels) - batch_size=1, # batch size - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu
-        half=False,  # use FP16 half-precision inference
-        test=False,  # test exports only
-        pt_only=False,  # test PyTorch only
-        hard_fail=False,  # throw error on benchmark failure
-):
-    y, t = [], time.time()
-    device = select_device(device)
-    for i, (name, f, suffix, cpu, gpu) in export.export_formats().iterrows():  # index, (name, file, suffix, CPU, GPU)
-        try:
-            w = weights if f == '-' else \
-                export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1]  # weights
-            assert suffix in str(w), 'export failed'
-            y.append([name, True])
-        except Exception:
-            y.append([name, False])  # export failed
-
-    # Print results
-    LOGGER.info('\n')
-    parse_opt()
-    notebook_init()  # print system info
-    py = pd.DataFrame(y, columns=['Format', 'Export'])
-    LOGGER.info(f'\nExports complete ({time.time() - t:.2f}s)')
-    LOGGER.info(str(py))
-    return py
-
-
-def parse_opt():
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path')
-    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
-    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
-    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
-    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
-    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
-    parser.add_argument('--test', action='store_true', help='test exports only')
-    parser.add_argument('--pt-only', action='store_true', help='test PyTorch only')
-    parser.add_argument('--hard-fail', action='store_true', help='throw error on benchmark failure')
-    opt = parser.parse_args()
-    opt.data = check_yaml(opt.data)  # check YAML
-    print_args(vars(opt))
-    return opt
-
-
-def main(opt):
-    test(**vars(opt)) if opt.test else run(**vars(opt))
-
-
-if __name__ == "__main__":
-    opt = parse_opt()
-    main(opt)
diff --git a/spaces/xl2533/FinDoc/build_index/parser/html_parser.py b/spaces/xl2533/FinDoc/build_index/parser/html_parser.py
deleted file mode 100644
index 5681c6fc9d0c5db00e17b5a88cbcf13f859a43e1..0000000000000000000000000000000000000000
--- a/spaces/xl2533/FinDoc/build_index/parser/html_parser.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# -*-coding:utf-8 -*-
-
-import re
-from unstructured.partition.html import partition_html
-from unstructured.staging.base import convert_to_isd
-from unstructured.cleaners.core import clean
-from build_index.parser.base import BaseParser
-
-
-class HTMLParser(BaseParser):
-    def parse_file(self, file):
-        with open(file, "r", encoding="utf-8") as fp:
-            elements = partition_html(file=fp)
-            isd = convert_to_isd(elements)
-
-        for isd_el in isd:
-            isd_el['text'] = isd_el['text'].encode("ascii", "ignore").decode()
-            isd_el['text'] = self.remove_dup_space(isd_el['text'])
-            isd_el['text'] = self.remove_empty_line(isd_el['text'])
-            isd_el['text'] = clean(isd_el['text'], extra_whitespace=True, dashes=True, bullets=True, trailing_punctuation=True)
-
-        # Creating a list of all the indexes of isd_el['type'] = 'Title'
-        title_indexes = [i for i, isd_el in enumerate(isd) if isd_el['type'] == 'Title']
-
-        # Creating 'Chunks' - a list of lists of strings,
-        # each list starting with isd_el['type'] = 'Title' and all the data till the next 'Title'.
-        # Each Chunk can be thought of as an individual set of data, which can be sent to the model,
-        # where each Title is grouped together with the data under it.
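-        # e.g. for ISD elements
-        #   [{'type': 'Title', 'text': 'Revenue'},
-        #    {'type': 'NarrativeText', 'text': 'Sales grew 10%.'},
-        #    {'type': 'Title', 'text': 'Costs'},
-        #    {'type': 'NarrativeText', 'text': 'Costs were flat.'}]
-        # title_indexes == [0, 2], and the loop below builds
-        #   Chunks == [[], ['Revenue', 'Sales grew 10%.'], ['Costs', 'Costs were flat.']]
-        # (the leading empty list from the Chunks = [[]] initializer is dropped
-        # by the minimum-length filter that follows)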
Chunks = [[]]
-        final_chunks = []
-
-        for i, isd_el in enumerate(isd):
-            if i in title_indexes:
-                Chunks.append([])
-            Chunks[-1].append(isd_el['text'])
-
-        # Keep only the chunks whose combined text length is >= 25 characters  # TODO: make this threshold a user-defined variable
-        for chunk in Chunks:
-            # total length of all the strings in the chunk
-            chunk_length = sum(len(str(item)) for item in chunk)
-            if chunk_length >= 25:
-                # append each approved chunk to final_chunks as a single string
-                final_chunks.append(" ".join(str(item) for item in chunk))
-        return final_chunks
diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md b/spaces/xp3857/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md
deleted file mode 100644
index 779983436c9727dd0d6301a1c857f2360245b51d..0000000000000000000000000000000000000000
--- a/spaces/xp3857/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md
+++ /dev/null
@@ -1,118 +0,0 @@
-# Synchronized-BatchNorm-PyTorch
-
-**IMPORTANT: Please read the "Implementation details and highlights" section before use.**
-
-Synchronized Batch Normalization implementation in PyTorch.
-
-This module differs from the built-in PyTorch BatchNorm in that the mean and
-standard-deviation are reduced across all devices during training.
-
-For example, when one uses `nn.DataParallel` to wrap the network during
-training, PyTorch's implementation normalizes the tensor on each device using
-the statistics only on that device, which accelerates the computation and
-is easy to implement, but the statistics might be inaccurate.
-Instead, in this synchronized version, the statistics will be computed
-over all training samples distributed on multiple devices.
-
-Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
-as the built-in PyTorch implementation.
-
-This module is currently only a prototype version for research use. As mentioned below,
-it has its limitations and may even suffer from some design problems. If you have any
-questions or suggestions, please feel free to
-[open an issue](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues) or
-[submit a pull request](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues).
-
-## Why Synchronized BatchNorm?
-
-Although the typical implementation of BatchNorm working on multiple devices (GPUs)
-is fast (with no communication overhead), it inevitably reduces the per-device batch size,
-which potentially degrades the performance. This is not a significant issue in some
-standard vision tasks such as ImageNet classification (as the batch size per device
-is usually large enough to obtain good statistics). However, it will hurt the performance
-in tasks where the per-device batch size is usually very small (e.g., 1 per GPU).
-
-For example, the importance of synchronized batch normalization in object detection has recently been demonstrated with
-an extensive analysis in the paper [MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240).
-
-## Usage
-
-To use the Synchronized Batch Normalization, we add a data parallel replication callback. This introduces a slight
-difference from the typical usage of `nn.DataParallel`.
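Before the usage patterns, the per-device statistics gap described above can be seen without any GPUs: split a batch in half, as `nn.DataParallel` would across two devices, and compare the per-half channel means with the global mean that a synchronized implementation would use. This is a minimal illustrative sketch, not part of the library:

```python
import torch

x = torch.randn(8, 3, 4, 4)   # one full training batch (N, C, H, W)
a, b = x.chunk(2)             # the halves two GPUs would each normalize with
print(a.mean(dim=(0, 2, 3)))  # per-device channel means differ from each other...
print(b.mean(dim=(0, 2, 3)))
print(x.mean(dim=(0, 2, 3)))  # ...and from the global mean that SyncBN would use
```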
-
-Use it with a provided, customized data parallel wrapper:
-
-```python
-from sync_batchnorm import SynchronizedBatchNorm1d, DataParallelWithCallback
-
-sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
-sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
-```
-
-Or, if you are using a customized data parallel module, you can use this library via monkey patching.
-
-```python
-from torch.nn import DataParallel  # or your customized DataParallel module
-from sync_batchnorm import SynchronizedBatchNorm1d, patch_replication_callback
-
-sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
-sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
-patch_replication_callback(sync_bn)  # monkey-patching
-```
-
-You can use `convert_model` to convert your model to use Synchronized BatchNorm easily.
-
-```python
-import torch.nn as nn
-from torchvision import models
-from sync_batchnorm import convert_model
-# m is a standard pytorch model
-m = models.resnet18(True)
-m = nn.DataParallel(m)
-# after convert, m is using SyncBN
-m = convert_model(m)
-```
-
-See also `tests/test_sync_batchnorm.py` for numeric result comparison.
-
-## Implementation details and highlights
-
-If you are interested in how batch statistics are reduced and broadcast among multiple devices, please take a look
-at the code with detailed comments. Here we only emphasize some highlights of the implementation:
-
-- This implementation is in pure Python. No extra C++ extension libs.
-- Easy to use as demonstrated above.
-- It uses unbiased variance to update the moving average, and uses `sqrt(max(var, eps))` instead of `sqrt(var + eps)`.
-- The implementation requires that each module on different devices should invoke the `batchnorm` exactly the SAME
-number of times in each forward pass. For example, you cannot call `batchnorm` only on GPU0 and skip it on GPU1. The `#i
-(i = 1, 2, 3, ...)` calls of the `batchnorm` on each device will be viewed as a whole and the statistics will be reduced.
-This is tricky but is a good way to handle PyTorch's dynamic computation graph. Although it sounds complicated, this
-will usually not be an issue for most of the models.
-
-## Known issues
-
-#### Runtime error on backward pass.
-
-Due to a [PyTorch Bug](https://github.com/pytorch/pytorch/issues/3883), using old PyTorch libraries will trigger a `RuntimeError` with messages like:
-
-```
-Assertion `pos >= 0 && pos < buffer.size()` failed.
-```
-
-This has already been solved in the newest PyTorch repo, which, unfortunately, has not been pushed to the official and anaconda binary releases. Thus, you are required to build the PyTorch package from source according to the
-instructions [here](https://github.com/pytorch/pytorch#from-source).
-
-#### Numeric error.
-
-Because this library does not fuse the normalization and statistics operations in C++ (nor CUDA), it is less
-numerically stable compared to the original PyTorch implementation. A detailed analysis can be found in
-`tests/test_sync_batchnorm.py`.
-
-## Authors and License:
-
-Copyright (c) 2018-, [Jiayuan Mao](https://vccy.xyz).
-
-**Contributors**: [Tete Xiao](https://tetexiao.com), [DTennant](https://github.com/DTennant).
- -Distributed under **MIT License** (See LICENSE) - diff --git a/spaces/ybelkada/image-to-music/app.py b/spaces/ybelkada/image-to-music/app.py deleted file mode 100644 index 8bbb2f4f199076a3468ae946c5ff4c06e7ace1bb..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/image-to-music/app.py +++ /dev/null @@ -1,163 +0,0 @@ -import gradio as gr - -import torch - -from spectro import wav_bytes_from_spectrogram_image -from diffusers import StableDiffusionPipeline - -from transformers import BlipForConditionalGeneration, BlipProcessor - -from share_btn import community_icon_html, loading_icon_html, share_js - -model_id = "riffusion/riffusion-model-v1" -blip_model_id = "Salesforce/blip-image-captioning-base" -pipe = StableDiffusionPipeline.from_pretrained(model_id) -pipe = pipe.to("cuda") - -blip_model = BlipForConditionalGeneration.from_pretrained(blip_model_id, torch_dtype=torch.float16).to("cuda") -processor = BlipProcessor.from_pretrained(blip_model_id) - -def predict(image): - inputs = processor(image, return_tensors="pt").to("cuda", torch.float16) - output_blip = blip_model.generate(**inputs) - prompt = processor.decode(output_blip[0], skip_special_tokens=True) - - spec = pipe(prompt).images[0] - print(spec) - wav = wav_bytes_from_spectrogram_image(spec) - with open("output.wav", "wb") as f: - f.write(wav[0].getbuffer()) - return spec, 'output.wav', gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -title = """ -
-    Riffusion real-time image-to-music generation
-
-    Describe a musical prompt, generate music by getting a spectrogram image & sound.
-"""
-
-article = """
-    About the model: Riffusion is a latent text-to-image diffusion model capable of generating spectrogram images given any text input. These spectrograms can be converted into audio clips.
-    —
-    The Riffusion model was created by fine-tuning the Stable-Diffusion-v1-5 checkpoint.
-    —
-    The model is intended for research purposes only. Possible research areas and tasks include generation of artworks, audio, and use in creative processes, applications in educational or creative tools, and research on generative models.
-
-    Do you need faster results? You can skip the queue by duplicating this space: Duplicate Space
    -""" - -css = ''' - #col-container, #col-container-2 {max-width: 510px; margin-left: auto; margin-right: auto;} - a {text-decoration-line: underline; font-weight: 600;} - div#record_btn > .mt-6 { - margin-top: 0!important; - } - div#record_btn > .mt-6 button { - width: 100%; - height: 40px; - } - .footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - -''' - - - -with gr.Blocks(css=css) as demo: - - with gr.Column(elem_id="col-container"): - - gr.HTML(title) - - # prompt_input = gr.Textbox(placeholder="a cat diva singing in a New York jazz club", label="Musical prompt", elem_id="prompt-in") - image_input = gr.Image() - send_btn = gr.Button(value="Get a new spectrogram ! ", elem_id="submit-btn") - - with gr.Column(elem_id="col-container-2"): - - spectrogram_output = gr.Image(label="spectrogram image result", elem_id="img-out") - sound_output = gr.Audio(type='filepath', label="spectrogram sound", elem_id="music-out") - - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - gr.HTML(article) - - send_btn.click(predict, inputs=[image_input], outputs=[spectrogram_output, sound_output, share_button, community_icon, loading_icon]) - share_button.click(None, [], [], _js=share_js) - -demo.queue(max_size=250).launch(debug=True) diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/utils.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/utils.py deleted file mode 100644 index ebe0e1dc2f5d200156d5dd1acc305a8b7b7b98da..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/utils.py +++ /dev/null @@ -1,278 +0,0 @@ -import math -def cosine_lr_schedule(optimizer, epoch, max_epoch, init_lr, min_lr): - """Decay the learning rate""" - lr = (init_lr - min_lr) * 0.5 * (1. 
+ math.cos(math.pi * epoch / max_epoch)) + min_lr - for param_group in optimizer.param_groups: - param_group['lr'] = lr - -def warmup_lr_schedule(optimizer, step, max_step, init_lr, max_lr): - """Warmup the learning rate""" - lr = min(max_lr, init_lr + (max_lr - init_lr) * step / max_step) - for param_group in optimizer.param_groups: - param_group['lr'] = lr - -def step_lr_schedule(optimizer, epoch, init_lr, min_lr, decay_rate): - """Decay the learning rate""" - lr = max(min_lr, init_lr * (decay_rate**epoch)) - for param_group in optimizer.param_groups: - param_group['lr'] = lr - -import numpy as np -import io -import os -import time -from collections import defaultdict, deque -import datetime - -import torch -import torch.distributed as dist - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! - """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda') - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - return self.total / self.count - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value) - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError("'{}' object has no attribute '{}'".format( - type(self).__name__, attr)) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append( - "{}: {}".format(name, str(meter)) - ) - return self.delimiter.join(loss_str) - - def global_avg(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append( - "{}: {:.4f}".format(name, meter.global_avg) - ) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None): - i = 0 - if not header: - header = '' - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt='{avg:.4f}') - data_time = SmoothedValue(fmt='{avg:.4f}') - space_fmt = ':' + str(len(str(len(iterable)))) + 'd' - log_msg = [ - header, - '[{0' + space_fmt + 
'}/{1}]', - 'eta: {eta}', - '{meters}', - 'time: {time}', - 'data: {data}' - ] - if torch.cuda.is_available(): - log_msg.append('max mem: {memory:.0f}') - log_msg = self.delimiter.join(log_msg) - MB = 1024.0 * 1024.0 - for obj in iterable: - data_time.update(time.time() - end) - yield obj - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print(log_msg.format( - i, len(iterable), eta=eta_string, - meters=str(self), - time=str(iter_time), data=str(data_time), - memory=torch.cuda.max_memory_allocated() / MB)) - else: - print(log_msg.format( - i, len(iterable), eta=eta_string, - meters=str(self), - time=str(iter_time), data=str(data_time))) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('{} Total time: {} ({:.4f} s / it)'.format( - header, total_time_str, total_time / len(iterable))) - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def compute_acc(logits, label, reduction='mean'): - ret = (torch.argmax(logits, dim=1) == label).float() - if reduction == 'none': - return ret.detach() - elif reduction == 'mean': - return ret.mean().item() - -def compute_n_params(model, return_str=True): - tot = 0 - for p in model.parameters(): - w = 1 - for x in p.shape: - w *= x - tot += w - if return_str: - if tot >= 1e6: - return '{:.1f}M'.format(tot / 1e6) - else: - return '{:.1f}K'.format(tot / 1e3) - else: - return tot - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop('force', False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - - -def init_distributed_mode(args): - if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ: - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ['WORLD_SIZE']) - args.gpu = int(os.environ['LOCAL_RANK']) - elif 'SLURM_PROCID' in os.environ: - args.rank = int(os.environ['SLURM_PROCID']) - args.gpu = args.rank % torch.cuda.device_count() - else: - print('Not using distributed mode') - args.distributed = False - return - - args.distributed = True - - torch.cuda.set_device(args.gpu) - args.dist_backend = 'nccl' - print('| distributed init (rank {}, word {}): {}'.format( - args.rank, args.world_size, args.dist_url), flush=True) - torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url, - world_size=args.world_size, rank=args.rank) - torch.distributed.barrier() - setup_for_distributed(args.rank == 0) - - \ No newline at end of file diff --git a/spaces/ygtrfed/pp-web-ui/README.md b/spaces/ygtrfed/pp-web-ui/README.md deleted file 
mode 100644 index ee92d4a59b5d4536ad309711858e6bc409a6083d..0000000000000000000000000000000000000000 --- a/spaces/ygtrfed/pp-web-ui/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Finetuned Diffusion -emoji: 🪄🖼️ -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: true -license: mit -duplicated_from: SUPERSHANKY/Finetuned_Diffusion_Max ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/modules/layers/simswap/fs_networks_fix.py b/spaces/ygtxr1997/ReliableSwap_Demo/modules/layers/simswap/fs_networks_fix.py deleted file mode 100644 index 72357206babd942e6fbd34a2846cbd00d6aee32b..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/modules/layers/simswap/fs_networks_fix.py +++ /dev/null @@ -1,223 +0,0 @@ -""" -Copyright (C) 2019 NVIDIA Corporation. All rights reserved. -Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). -""" - -import torch -import torch.nn as nn -import torch.nn.functional as F -import kornia - - -class InstanceNorm(nn.Module): - def __init__(self, epsilon=1e-8): - """ - @notice: avoid in-place ops. - https://discuss.pytorch.org/t/encounter-the-runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation/836/3 - """ - super(InstanceNorm, self).__init__() - self.epsilon = epsilon - - def forward(self, x): - x = x - torch.mean(x, (2, 3), True) - tmp = torch.mul(x, x) # or x ** 2 - tmp = torch.rsqrt(torch.mean(tmp, (2, 3), True) + self.epsilon) - return x * tmp - -class ApplyStyle(nn.Module): - """ - @ref: https://github.com/lernapparat/lernapparat/blob/master/style_gan/pytorch_style_gan.ipynb - """ - def __init__(self, latent_size, channels): - super(ApplyStyle, self).__init__() - self.linear = nn.Linear(latent_size, channels * 2) - - def forward(self, x, latent): - style = self.linear(latent) # style => [batch_size, n_channels*2] - shape = [-1, 2, x.size(1), 1, 1] - style = style.view(shape) # [batch_size, 2, n_channels, ...] - #x = x * (style[:, 0] + 1.) + style[:, 1] - x = x * (style[:, 0] * 1 + 1.) 
+ style[:, 1] * 1 - return x - -class ResnetBlock_Adain(nn.Module): - def __init__(self, dim, latent_size, padding_type, activation=nn.ReLU(True)): - super(ResnetBlock_Adain, self).__init__() - - p = 0 - conv1 = [] - if padding_type == 'reflect': - conv1 += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv1 += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - conv1 += [nn.Conv2d(dim, dim, kernel_size=3, padding = p), InstanceNorm()] - self.conv1 = nn.Sequential(*conv1) - self.style1 = ApplyStyle(latent_size, dim) - self.act1 = activation - - p = 0 - conv2 = [] - if padding_type == 'reflect': - conv2 += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv2 += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % padding_type) - conv2 += [nn.Conv2d(dim, dim, kernel_size=3, padding=p), InstanceNorm()] - self.conv2 = nn.Sequential(*conv2) - self.style2 = ApplyStyle(latent_size, dim) - - - def forward(self, x, dlatents_in_slice): - y = self.conv1(x) - y = self.style1(y, dlatents_in_slice) - y = self.act1(y) - y = self.conv2(y) - y = self.style2(y, dlatents_in_slice) - out = x + y - return out - - - -class Generator_Adain_Upsample(nn.Module): - def __init__(self, input_nc, output_nc, latent_size, n_blocks=6, deep=False, - norm_layer=nn.BatchNorm2d, - padding_type='reflect', - mouth_net_param: dict = None, - ): - assert (n_blocks >= 0) - super(Generator_Adain_Upsample, self).__init__() - - self.latent_size = latent_size - - self.mouth_net_param = mouth_net_param - if mouth_net_param.get('use'): - self.latent_size += mouth_net_param.get('feature_dim') - - activation = nn.ReLU(True) - - self.deep = deep - - self.first_layer = nn.Sequential(nn.ReflectionPad2d(3), nn.Conv2d(input_nc, 64, kernel_size=7, padding=0), - norm_layer(64), activation) - ### downsample - self.down1 = nn.Sequential(nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), - norm_layer(128), activation) - self.down2 = nn.Sequential(nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), - norm_layer(256), activation) - self.down3 = nn.Sequential(nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1), - norm_layer(512), activation) - - if self.deep: - self.down4 = nn.Sequential(nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=1), - norm_layer(512), activation) - - ### resnet blocks - BN = [] - for i in range(n_blocks): - BN += [ - ResnetBlock_Adain(512, latent_size=self.latent_size, - padding_type=padding_type, activation=activation)] - self.BottleNeck = nn.Sequential(*BN) - - if self.deep: - self.up4 = nn.Sequential( - nn.Upsample(scale_factor=2, mode='bilinear',align_corners=False), - nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(512), activation - ) - self.up3 = nn.Sequential( - nn.Upsample(scale_factor=2, mode='bilinear',align_corners=False), - nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(256), activation - ) - self.up2 = nn.Sequential( - nn.Upsample(scale_factor=2, mode='bilinear',align_corners=False), - nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(128), activation - ) - self.up1 = nn.Sequential( - nn.Upsample(scale_factor=2, mode='bilinear',align_corners=False), - nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(64), activation - ) - self.last_layer = nn.Sequential(nn.ReflectionPad2d(3), 
nn.Conv2d(64, output_nc, kernel_size=7, padding=0)) - - self.register_buffer( - name="trans_matrix", - tensor=torch.tensor( - [ - [ - [1.07695457, -0.03625215, -1.56352194], - [0.03625215, 1.07695457, -5.32134629], - ] - ], - requires_grad=False, - ).float(), - ) - - def forward(self, source, target, net_arc, mouth_net=None): - x = target # 3*224*224 - if net_arc is None: - id_vector = source - else: - with torch.no_grad(): - ''' 1. get id ''' - # M = self.trans_matrix.repeat(source.size()[0], 1, 1) - # source = kornia.geometry.transform.warp_affine(source, M, (256, 256)) - resize_input = F.interpolate(source, size=112, mode="bilinear", align_corners=True) - id_vector = F.normalize(net_arc(resize_input), dim=-1, p=2) - - ''' 2. get mouth feature ''' - if mouth_net is not None: - w1, h1, w2, h2 = self.mouth_net_param.get('crop_param') - mouth_input = resize_input[:, :, h1:h2, w1:w2] - mouth_feat = mouth_net(mouth_input) - id_vector = torch.cat([id_vector, mouth_feat], dim=-1) # (B,dim_id+dim_mouth) - - skip1 = self.first_layer(x) - skip2 = self.down1(skip1) - skip3 = self.down2(skip2) - if self.deep: - skip4 = self.down3(skip3) - x = self.down4(skip4) - else: - x = self.down3(skip3) - bot = [] - bot.append(x) - features = [] - for i in range(len(self.BottleNeck)): - x = self.BottleNeck[i](x, id_vector) - bot.append(x) - - if self.deep: - x = self.up4(x) - features.append(x) - x = self.up3(x) - features.append(x) - x = self.up2(x) - features.append(x) - x = self.up1(x) - features.append(x) - x = self.last_layer(x) - # x = (x + 1) / 2 - - # return x, bot, features, dlatents - return x - - -if __name__ == "__main__": - import thop - - img = torch.randn(1, 3, 256, 256) - latent = torch.randn(1, 512) - net = Generator_Adain_Upsample(input_nc=3, output_nc=3, latent_size=512, n_blocks=9, - mouth_net_param={"use": False}) - flops, params = thop.profile(net, inputs=(latent, img, None, None), verbose=False) - print('#Params=%.2fM, GFLOPS=%.2f' % (params / 1e6, flops / 1e9)) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/model_zoo/__init__.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/model_zoo/__init__.py deleted file mode 100644 index 6204208198d813728cf6419e8eef4a733f20c18f..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/model_zoo/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Model Zoo API for Detectron2: a collection of functions to create common model architectures -listed in `MODEL_ZOO.md `_, -and optionally load their pre-trained weights. 
-""" - -from .model_zoo import get, get_config_file, get_checkpoint_url, get_config - -__all__ = ["get_checkpoint_url", "get", "get_config_file", "get_config"] diff --git a/spaces/yo2266911/uma_voice/Libtorch C++ Infer/VITS-LibTorch.cpp b/spaces/yo2266911/uma_voice/Libtorch C++ Infer/VITS-LibTorch.cpp deleted file mode 100644 index afdd98e45af2fbeb2ba63961f45167dd3ecd4685..0000000000000000000000000000000000000000 --- a/spaces/yo2266911/uma_voice/Libtorch C++ Infer/VITS-LibTorch.cpp +++ /dev/null @@ -1,121 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include -typedef int64_t int64; -namespace Shirakana { - - struct WavHead { - char RIFF[4]; - long int size0; - char WAVE[4]; - char FMT[4]; - long int size1; - short int fmttag; - short int channel; - long int samplespersec; - long int bytepersec; - short int blockalign; - short int bitpersamples; - char DATA[4]; - long int size2; - }; - - int conArr2Wav(int64 size, int16_t* input, const char* filename) { - WavHead head = { {'R','I','F','F'},0,{'W','A','V','E'},{'f','m','t',' '},16, - 1,1,22050,22050 * 2,2,16,{'d','a','t','a'}, - 0 }; - head.size0 = size * 2 + 36; - head.size2 = size * 2; - std::ofstream ocout; - char* outputData = (char*)input; - ocout.open(filename, std::ios::out | std::ios::binary); - ocout.write((char*)&head, 44); - ocout.write(outputData, (int32_t)(size * 2)); - ocout.close(); - return 0; - } - - inline std::wstring to_wide_string(const std::string& input) - { - std::wstring_convert> converter; - return converter.from_bytes(input); - } - - inline std::string to_byte_string(const std::wstring& input) - { - std::wstring_convert> converter; - return converter.to_bytes(input); - } -} - -#define val const auto -int main() -{ - torch::jit::Module Vits; - std::string buffer; - std::vector text; - std::vector data; - while(true) - { - while (true) - { - std::cin >> buffer; - if (buffer == "end") - return 0; - if(buffer == "model") - { - std::cin >> buffer; - Vits = torch::jit::load(buffer); - continue; - } - if (buffer == "endinfer") - { - Shirakana::conArr2Wav(data.size(), data.data(), "temp\\tmp.wav"); - data.clear(); - std::cout << "endofinfe"; - continue; - } - if (buffer == "line") - { - std::cin >> buffer; - while (buffer.find("endline")==std::string::npos) - { - text.push_back(std::atoi(buffer.c_str())); - std::cin >> buffer; - } - val InputTensor = torch::from_blob(text.data(), { 1,static_cast(text.size()) }, torch::kInt64); - std::array TextLength{ static_cast(text.size()) }; - val InputTensor_length = torch::from_blob(TextLength.data(), { 1 }, torch::kInt64); - std::vector inputs; - inputs.push_back(InputTensor); - inputs.push_back(InputTensor_length); - if (buffer.length() > 7) - { - std::array speakerIndex{ (int64)atoi(buffer.substr(7).c_str()) }; - inputs.push_back(torch::from_blob(speakerIndex.data(), { 1 }, torch::kLong)); - } - val output = Vits.forward(inputs).toTuple()->elements()[0].toTensor().multiply(32276.0F); - val outputSize = output.sizes().at(2); - val floatOutput = output.data_ptr(); - int16_t* outputTmp = (int16_t*)malloc(sizeof(float) * outputSize); - if (outputTmp == nullptr) { - throw std::exception("内存不足"); - } - for (int i = 0; i < outputSize; i++) { - *(outputTmp + i) = (int16_t) * (floatOutput + i); - } - data.insert(data.end(), outputTmp, outputTmp+outputSize); - free(outputTmp); - text.clear(); - std::cout << "endofline"; - } - } - } - //model S:\VSGIT\ShirakanaTTSUI\build\x64\Release\Mods\AtriVITS\AtriVITS_LJS.pt -} \ No newline at end of file diff 
--git a/spaces/youplala/StoreCopilot/src/pages/chartgpt.py b/spaces/youplala/StoreCopilot/src/pages/chartgpt.py deleted file mode 100644 index 8765d26d204fb7f6e18e5951f3c0966e5750cb53..0000000000000000000000000000000000000000 --- a/spaces/youplala/StoreCopilot/src/pages/chartgpt.py +++ /dev/null @@ -1,101 +0,0 @@ -import dash -import dash_mantine_components as dmc -from dash import dcc, html -from dash_iconify import DashIconify - -dash.register_page( - __name__, - class_icon="fa-solid fa-chart-column", - order=1, - path="/", - name="Store Copilot", - title="ChartGPT", -) - - -layout = dmc.Stack( - [ - dcc.Store(id="history", data=[], storage_type="local"), - dmc.TextInput( - styles={ - "input": { - "fontSize": 15, - "boxShadow": "rgba(99, 99, 99, 0.2) 0px 2px 8px 0px", - "border": "none", - }, - }, - id="prompt", - placeholder="What do you want to know about your business?", - icon=DashIconify(icon="ic:round-search"), - radius="lg", - size="lg", - w="80%", - m="auto", - rightSection=dmc.ActionIcon( - DashIconify(icon="ic:round-send", width=20), - id="button", - radius="md", - size="lg", - mr=17, - ), - ), - dmc.Grid( - children=[ - dmc.Col( - dmc.Card( - dmc.Stack( - [ - dmc.Title( - "Generated chart", order=3, mb=4, align="center", - ), - html.Div(id="container"), - ] - ) - ), - span=8, - ), - dmc.Col( - dmc.Card( - [ - dmc.Title("History", order=3, mb=4, align="center"), - dmc.Accordion( - id="history-list", - children=[], - radius="md", - chevron=DashIconify( - icon="ic:round-history", - width=15, - ), - chevronPosition="left", - variant="contained", - styles={ - "item": { - "backgroundColor": dmc.theme.DEFAULT_COLORS[ - "gray" - ][0], - "transition": "transform 150ms ease", - "&[data-active]": { - "transform": "scale(1.03)", - "backgroundColor": "", - "boxShadow": 5, - "borderRadius": 5, - }, - }, - "chevron": { - "&[data-rotate]": { - "transform": "rotate(-180deg)", - }, - }, - }, - ), - ], - id="history-container", - ), - span=4, - ), - ], - gutter="xl", - ), - ], - spacing="xl", -) diff --git a/spaces/ysharma/Bloom-Creates-Meme/app.py b/spaces/ysharma/Bloom-Creates-Meme/app.py deleted file mode 100644 index 79b7654b0928dcf71b44780944088ab52444e3ac..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Bloom-Creates-Meme/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -import requests -import os -import PIL -from PIL import Image -from PIL import ImageDraw -from PIL import ImageFont - -##Bloom -API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom" -HF_TOKEN = os.environ["HF_TOKEN"] -headers = {"Authorization": f"Bearer {HF_TOKEN}"} - - -def write_on_image(final_solution): - print("************ Inside write_on_image ***********") - image_path0 = "./distracted0.jpg" - image0 = Image.open(image_path0) - I1 = ImageDraw.Draw(image0) - myfont = ImageFont.truetype('./font1.ttf', 30) - - prompt_list = final_solution.split('\n') - girlfriend = prompt_list[8].split(':')[1].strip() - girlfriend_list = girlfriend.split() - if len(girlfriend_list) >= 2: - girlfriend = '\n'.join(girlfriend_list) - print(f"girlfriend is : {girlfriend }") - new_girl = prompt_list[9].split(':')[1].strip() - new_girl_list = new_girl.split() - if len(new_girl_list) > 2: - new_girl = '\n'.join(new_girl_list) - print(f"new_girl is : {new_girl}") - prompt_list.pop(0) - prompt_list.pop(0) - prompt_list = prompt_list[:8] - prompt_list.append('Distracted from:') - print(f"prompt list is : {prompt_list}") - new_prompt = '\n'.join(prompt_list) - print(f"final_solution is : {new_prompt}") 
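-    # The three I1.text calls below draw onto fixed pixel positions of the
-    # distracted0.jpg template, so the template must keep its original resolution:
-    #   (613, 89)  -> the girlfriend label
-    #   (371, 223) -> "ME" (the boyfriend)
-    #   (142, 336) -> the new-girl label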
- - I1.text((613, 89), girlfriend,font=myfont, fill =(255, 255, 255)) - I1.text((371, 223), "ME", font=myfont, fill =(255, 255, 255)) - I1.text((142, 336), new_girl,font=myfont, fill =(255, 255, 255)) - - return image0, new_prompt - -def meme_generate(img, prompt, temp, top_p): #prompt, generated_txt): #, input_prompt_sql ): #, input_prompt_dalle2): - - print(f"*****Inside meme_generate - Prompt is :{prompt}") - if len(prompt) == 0: - prompt = """Distracted from: homework\nby: side project\nDistracted from: goals\nby: new goals\nDistracted from: working hard\nby: hardly working\nDistracted from: twitter\nby: open in browser\nDistracted from:""" - - json_ = {"inputs": prompt, - "parameters": - { - "top_p": top_p, #0.90 default - "max_new_tokens": 64, - "temperature": temp, #1.1 default - "return_full_text": True, - "do_sample": True, - }, - "options": - {"use_cache": True, - "wait_for_model": True, - },} - response = requests.post(API_URL, headers=headers, json=json_) - print(f"Response is : {response}") - output = response.json() - print(f"output is : {output}") - output_tmp = output[0]['generated_text'] - print(f"output_tmp is: {output_tmp}") - solution = output_tmp.split("\nQ:")[0] - print(f"Final response after splits is: {solution}") - - meme_image, new_prompt = write_on_image(solution) - return meme_image, new_prompt - - -demo = gr.Blocks() - -with demo: - gr.Markdown("

Distracted Boyfriend Meme 😄 - Using Bloom 🌸
")
-    gr.Markdown(
-        """Bloom is a model made by research teams from [HuggingFace](https://huggingface.co/bigscience/bloom) and from all over the world (more than 1000 researchers coming together and working as [BigScienceW Bloom](https://twitter.com/BigscienceW)). Large language models can produce coherent sentences, but can they produce **Humor** too? Yes, they can, given the correct prompt (And Yes, Prompt Engineering 🤖 should definitely become a thing by now).\n\n**How to Use this App**: Just Fire Away the Generate Meme button below, as many times as you want!! If you see repeated or similar memes getting generated in consecutive runs, toggle the temperature and top_p values.\n\n**How this App works**: Figuring out the right set of Prompting + Writing on an Image + a Bit of engineering. Currently, Bloom's Public API has size-limits on Token-Generation, so you can get only a few tokens generated at a time.\n\n
Bloom generating very few tokens + When Few words are Enough
\n\n
🤝Memes
\n\nIt is a fun little App which you can play with for a while. This Space is created by [Yuvraj Sharma](https://twitter.com/yvrjsharma)"""
-    )
-# markdown color font styles
-    with gr.Row():
-
-        in_image = gr.Image(value="./distracted0.jpg", visible=False)
-        in_image_display = gr.Image(value="./distracted00.jpg", visible=True)
-        input_prompt = gr.Textbox(label="Write some prompt...", lines=5, visible=False)
-
-        output_image = gr.Image()
-
-    with gr.Row():
-        in_slider_temp = gr.Slider(minimum=0.0, maximum=1.4, value=1.1, step=0.1, label='Temperature')
-        in_slider_top_p = gr.Slider(minimum=0.50, maximum=0.99, value=0.90, step=0.01, label='Top_p')
-
-
-    b1 = gr.Button("Generate Memes")
-
-    b1.click(meme_generate, inputs=[in_image, input_prompt, in_slider_temp, in_slider_top_p], outputs=[output_image, input_prompt])
-
-demo.launch(enable_queue=True, debug=True)
\ No newline at end of file
diff --git a/spaces/ysharma/ChatinterfaceTests/app.py b/spaces/ysharma/ChatinterfaceTests/app.py
deleted file mode 100644
index d55fc5c2d0a6db5db89e4bcea598968115eedba5..0000000000000000000000000000000000000000
--- a/spaces/ysharma/ChatinterfaceTests/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import gradio as gr
-import os
-import openai
-from gradio import ChatInterface
-import time
-
-# Get the value of the openai_api_key from environment variable
-openai.api_key = os.getenv("OPENAI_API_KEY")
-
-# Import things that are needed generically from langchain
-from langchain import LLMMathChain, SerpAPIWrapper
-from langchain.agents import AgentType, initialize_agent, load_tools
-from langchain.chat_models import ChatOpenAI
-from langchain.tools import BaseTool, StructuredTool, Tool, tool
-from langchain.tools import MoveFileTool, format_tool_to_openai_function
-from langchain.schema import (
-    AIMessage,
-    HumanMessage,
-    SystemMessage
-)
-from langchain.utilities import WikipediaAPIWrapper
-from langchain.tools import AIPluginTool
-
-# Question- how can one set up a system message for their Chatbot while using ChatInterface
-# Example system message : system = SystemMessage(content = "You are a helpful AI assistant")
-
-# driver
-def predict_langchain(user_input, chatbot):
-
-    print(f"Chatbot : {chatbot}")
-    chat = ChatOpenAI(temperature=1.0, streaming=True, model='gpt-3.5-turbo-0613')
-    messages = []
-
-    for conv in chatbot:
-        human = HumanMessage(content=conv[0])
-        ai = AIMessage(content=conv[1])
-        messages.append(human)
-        messages.append(ai)
-
-    messages.append(HumanMessage(content=user_input))
-
-    # getting gpt3.5's response
-    gpt_response = chat(messages)
-    return gpt_response.content
-
-def predict(inputs, chatbot):
-
-    print(f"Chatbot : {chatbot}")
-    messages = []
-    for conv in chatbot:
-        user = conv[0]
-        messages.append({"role": "user", "content": user})
-        if conv[1] is None:
-            break
-        assistant = conv[1]
-        messages.append({"role": "assistant", "content": assistant})
-
-    # a ChatCompletion request
-    response = openai.ChatCompletion.create(
-        model='gpt-3.5-turbo',
-        messages=messages,  # example : [{'role': 'user', 'content': "What is life?
Answer in three words."}], - temperature=1.0, - stream=True # for streaming the output to chatbot - ) - - partial_message = "" - for chunk in response: - if len(chunk['choices'][0]['delta']) != 0: - print(chunk['choices'][0]['delta']['content']) - partial_message = partial_message + chunk['choices'][0]['delta']['content'] - yield partial_message - -#ChatInterface(predict, delete_last_btn="❌Delete").queue().launch(debug=True) - -gr.ChatInterface(predict, delete_last_btn="del").queue().launch(share=False, debug=True) #examples=["How are you?", "What's up?"], \ No newline at end of file diff --git a/spaces/yu3ufff/quiz-bowl-qa/qbigbird.py b/spaces/yu3ufff/quiz-bowl-qa/qbigbird.py deleted file mode 100644 index 37db4e26c73ede3c9e09ed07c2c3508e65b1ddb6..0000000000000000000000000000000000000000 --- a/spaces/yu3ufff/quiz-bowl-qa/qbigbird.py +++ /dev/null @@ -1,90 +0,0 @@ -from collections import Counter -import ssl - -import nltk -from nltk.tokenize import sent_tokenize -import torch -from transformers import pipeline -import wikipedia as wiki - -from utils import ( - clean_last_sent, - add_proper_tail, - get_filtered_words, - get_nnp_query, - get_nn_query, - get_wiki_text, - get_text_chunks, - filter_answers, -) - -# necessary downloads (+ workaround for some download problems) -try: - _create_unverified_https_context = ssl._create_unverified_context -except AttributeError: - pass -else: - ssl._create_default_https_context = _create_unverified_https_context - -nltk.download('punkt') -nltk.download('stopwords') -nltk.download('averaged_perceptron_tagger') - - -class QBigBird: - - def __init__( - self, - model='valhalla/electra-base-discriminator-finetuned_squadv1', - max_context_length=512, - top_n=5, - buzz_threshold=0.5 - ): - device = 0 if torch.cuda.is_available() else -1 - self.qa = pipeline('question-answering', model=model, device=device) - self.max_context_length = max_context_length - self.top_n = top_n - self.buzz_threshold = buzz_threshold - - def guess_and_buzz(self, question): - # get last sentence of question, clean and improve it - text = sent_tokenize(question)[-1] - text = clean_last_sent(text) - text = add_proper_tail(text) - - # get the words in the question excluding stop words - filtered_words = get_filtered_words(question) - - # get a Wikipedia query using the proper nouns in the question - query = get_nnp_query(question) - query_words = query.split() - # if not enough proper nouns, return wrong guess with False - if len(query_words) < 2: - return 'not enough pns', False - - wikitext = get_wiki_text(query) - answer_set = set() - text_chunks = get_text_chunks(wikitext, self.max_context_length) - for chunk in text_chunks: - if any(word in chunk for word in query_words): - result = self.qa({'question': text, 'context': chunk}) - answer = result['answer'] - score = result['score'] - answer_set.add((answer, score)) - - answer_set = filter_answers(answer_set, question) - if len(answer_set) == 0: - return ' ', False - - answers_scores = list(answer_set) - top_answers_scores = sorted(answers_scores, key=lambda tup: tup[1], reverse=True)[:self.top_n] - - answer_freq = Counter(answer for answer, score in top_answers_scores) - freq_top_answers_scores = sorted(top_answers_scores, key=lambda tup: (answer_freq[tup[0]], tup[1]), reverse=True) - freq_top_answer = freq_top_answers_scores[0][0] - # get the exact Wikipedia title - freq_top_answer = wiki.search(freq_top_answer)[0] - - buzz = freq_top_answers_scores[0][1] >= self.buzz_threshold - - return freq_top_answer, buzz diff --git 
a/spaces/yueranseo/mygpt/modules/base_model.py b/spaces/yueranseo/mygpt/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/yueranseo/mygpt/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def 
stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = 
query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
<li><a href=\"{result['href']}\" target=\"_blank\">{domain_name}</a></li>\n"
-                )
-            reference_results = add_source_numbers(reference_results)
-            display_append = "<ol>\n\n" + "".join(display_append) + "</ol>
    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, 
f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = 
self.all_token_counts
-        token_sum = 0
-        for i in range(len(token_lst)):
-            token_sum += sum(token_lst[: i + 1])
-        return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens"
-
-    def save_chat_history(self, filename, chatbot, user_name):
-        if filename == "":
-            return
-        if not filename.endswith(".json"):
-            filename += ".json"
-        return save_file(filename, self.system_prompt, self.history, chatbot, user_name)
-
-    def export_markdown(self, filename, chatbot, user_name):
-        if filename == "":
-            return
-        if not filename.endswith(".md"):
-            filename += ".md"
-        return save_file(filename, self.system_prompt, self.history, chatbot, user_name)
-
-    def load_chat_history(self, filename, chatbot, user_name):
-        logging.debug(f"{user_name} 加载对话历史中……")
-        if not isinstance(filename, str):
-            filename = filename.name
-        try:
-            with open(os.path.join(HISTORY_DIR, user_name, filename), "r", encoding="utf-8") as f:
-                json_s = json.load(f)
-            try:
-                if isinstance(json_s["history"][0], str):
-                    logging.info("历史记录格式为旧版,正在转换……")
-                    new_history = []
-                    for index, item in enumerate(json_s["history"]):
-                        if index % 2 == 0:
-                            new_history.append(construct_user(item))
-                        else:
-                            new_history.append(construct_assistant(item))
-                    json_s["history"] = new_history
-                    logging.info(new_history)
-            except:
-                # no old-format history to convert
-                pass
-            logging.debug(f"{user_name} 加载对话历史完毕")
-            self.history = json_s["history"]
-            return filename, json_s["system"], json_s["chatbot"]
-        except FileNotFoundError:
-            logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作")
-            return filename, self.system_prompt, chatbot
-
-    def like(self):
-        """like the last response, implement if needed"""
-        return gr.update()
-
-    def dislike(self):
-        """dislike the last response, implement if needed"""
-        return gr.update()
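That closes out `base_model.py`. For orientation, a hedged sketch of how a caller might drive this class: `predict()` is a generator that repeatedly yields `(chatbot, status_text)` pairs, so a UI iterates it rather than calling it once. `EchoModel` is the hypothetical subclass from the earlier sketch, and the snippet assumes the module's presets/i18n machinery is importable.

```python
# Illustrative only; not part of the repository.
model = EchoModel(model_name="echo-test")
chatbot = []  # list of (user_message, assistant_message) pairs for the UI

for chatbot, status_text in model.predict("Hello there", chatbot, stream=False):
    pass  # a real UI would re-render chatbot and the status line on each yield

print(chatbot[-1])  # ("Hello there", "You said: Hello there")
```

With `stream=True` the same loop receives one yield per chunk from `stream_next_chatbot`, which is why the UI code treats both paths identically.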
- -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/yuhanbo/chat-gpt/app/utils.ts b/spaces/yuhanbo/chat-gpt/app/utils.ts deleted file mode 100644 index 81b3a9905fa381699071ab9c7694c7a9aa46aa67..0000000000000000000000000000000000000000 --- a/spaces/yuhanbo/chat-gpt/app/utils.ts +++ /dev/null @@ -1,98 +0,0 @@ -import { EmojiStyle } from "emoji-picker-react"; -import { showToast } from "./components/ui-lib"; -import Locale from "./locales"; - -export function trimTopic(topic: string) { - const s = topic.split(""); - let lastChar = s.at(-1); // 获取 s 的最后一个字符 - let pattern = /[,。!?、]/; // 定义匹配中文标点符号的正则表达式 - while (lastChar && pattern.test(lastChar!)) { - s.pop(); - lastChar = s.at(-1); - } - - return s.join(""); -} - -export function copyToClipboard(text: string) { - if (navigator.clipboard) { - navigator.clipboard.writeText(text).catch((err) => { - console.error("Failed to copy: ", err); - }); - } else { - const textArea = document.createElement("textarea"); - textArea.value = text; - document.body.appendChild(textArea); - textArea.focus(); - textArea.select(); - try { - document.execCommand("copy"); - console.log("Text copied to clipboard"); - } catch (err) { - console.error("Failed to copy: ", err); - } - document.body.removeChild(textArea); - } -} - -export function downloadAs(text: string, filename: string) { - const element = document.createElement("a"); - element.setAttribute( - "href", - "data:text/plain;charset=utf-8," + encodeURIComponent(text), - ); - element.setAttribute("download", filename); - - element.style.display = "none"; - document.body.appendChild(element); - - element.click(); - - document.body.removeChild(element); -} - -export function isIOS() { - const userAgent = navigator.userAgent.toLowerCase(); - return /iphone|ipad|ipod/.test(userAgent); -} - -export function selectOrCopy(el: HTMLElement, content: string) { - const currentSelection = window.getSelection(); - - if (currentSelection?.type === "Range") { - return false; - } - - copyToClipboard(content); - - return true; -} - -export function queryMeta(key: string, 
diff --git a/spaces/yuhanbo/chat-gpt/app/utils.ts b/spaces/yuhanbo/chat-gpt/app/utils.ts
deleted file mode 100644
index 81b3a9905fa381699071ab9c7694c7a9aa46aa67..0000000000000000000000000000000000000000
--- a/spaces/yuhanbo/chat-gpt/app/utils.ts
+++ /dev/null
@@ -1,98 +0,0 @@
-import { EmojiStyle } from "emoji-picker-react";
-import { showToast } from "./components/ui-lib";
-import Locale from "./locales";
-
-export function trimTopic(topic: string) {
-  const s = topic.split("");
-  let lastChar = s.at(-1); // the last character of s
-  let pattern = /[,。!?、]/; // regex matching Chinese punctuation marks
-  while (lastChar && pattern.test(lastChar!)) {
-    s.pop();
-    lastChar = s.at(-1);
-  }
-
-  return s.join("");
-}
-
-export function copyToClipboard(text: string) {
-  if (navigator.clipboard) {
-    navigator.clipboard.writeText(text).catch((err) => {
-      console.error("Failed to copy: ", err);
-    });
-  } else {
-    const textArea = document.createElement("textarea");
-    textArea.value = text;
-    document.body.appendChild(textArea);
-    textArea.focus();
-    textArea.select();
-    try {
-      document.execCommand("copy");
-      console.log("Text copied to clipboard");
-    } catch (err) {
-      console.error("Failed to copy: ", err);
-    }
-    document.body.removeChild(textArea);
-  }
-}
-
-export function downloadAs(text: string, filename: string) {
-  const element = document.createElement("a");
-  element.setAttribute(
-    "href",
-    "data:text/plain;charset=utf-8," + encodeURIComponent(text),
-  );
-  element.setAttribute("download", filename);
-
-  element.style.display = "none";
-  document.body.appendChild(element);
-
-  element.click();
-
-  document.body.removeChild(element);
-}
-
-export function isIOS() {
-  const userAgent = navigator.userAgent.toLowerCase();
-  return /iphone|ipad|ipod/.test(userAgent);
-}
-
-export function selectOrCopy(el: HTMLElement, content: string) {
-  const currentSelection = window.getSelection();
-
-  if (currentSelection?.type === "Range") {
-    return false;
-  }
-
-  copyToClipboard(content);
-
-  return true;
-}
-
-export function queryMeta(key: string, defaultValue?: string): string {
-  let ret: string;
-  if (typeof document !== "undefined") {
-    const meta = document.head.querySelector(
-      `meta[name='${key}']`,
-    ) as HTMLMetaElement;
-    ret = meta?.content ?? "";
-  } else {
-    ret = defaultValue ?? "";
-  }
-
-  return ret;
-}
-
-let currentId: string;
-export function getCurrentCommitId() {
-  if (currentId) {
-    return currentId;
-  }
-
-  currentId = queryMeta("version");
-
-  return currentId;
-}
-
-export function getEmojiUrl(unified: string, style: EmojiStyle) {
-  return `https://cdn.staticfile.org/emoji-datasource-apple/14.0.0/img/${style}/64/${unified}.png`;
-}
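The `trimTopic` helper above strips trailing CJK punctuation from model-generated chat titles. A rough Python equivalent of the same logic, purely for illustration (the function name is mine, and the character class mirrors the TS regex):

```python
import re

def trim_topic(topic: str) -> str:
    # Drop any run of trailing Chinese punctuation (same set as the TS regex).
    return re.sub(r"[,。!?、]+$", "", topic)

print(trim_topic("今天天气不错。"))  # -> 今天天气不错
```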
diff --git a/spaces/zhaoys/wfms-kuiwenc/src/components/chat-suggestions.tsx b/spaces/zhaoys/wfms-kuiwenc/src/components/chat-suggestions.tsx
deleted file mode 100644
index ec08a9515a18e09fbec2b60cb3173e1c4172b072..0000000000000000000000000000000000000000
--- a/spaces/zhaoys/wfms-kuiwenc/src/components/chat-suggestions.tsx
+++ /dev/null
@@ -1,51 +0,0 @@
-import React, { useEffect, useMemo } from 'react'
-import { atom, useAtom } from 'jotai'
-import HelpIcon from '@/assets/images/help.svg'
-import DismissFillIcon from '@/assets/images/dismiss-fill.svg'
-import { SuggestedResponse } from '@/lib/bots/bing/types'
-import { BingReturnType } from '@/lib/hooks/use-bing'
-import { SVG } from './ui/svg'
-
-type Suggestions = SuggestedResponse[]
-const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text }))
-const suggestionsAtom = atom<Suggestions>([])
-
-type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick<BingReturnType, 'setInput'> & { suggestions?: Suggestions }
-
-export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) {
-  const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom)
-  const toggleSuggestions = () => {
-    if (currentSuggestions === helpSuggestions) {
-      setSuggestions(suggestions)
-    } else {
-      setSuggestions(helpSuggestions)
-    }
-  }
-
-  useMemo(() => {
-    setSuggestions(suggestions)
-  }, [suggestions, setSuggestions])
-
-  useEffect(() => {
-    setTimeout(() => {
-      window.scrollBy(0, 800)
-    }, 200)
-  }, [])
-
-  return currentSuggestions?.length ? (
-    <div className="chat-suggestions">
-      <SVG alt="help" src={HelpIcon} width={24} onClick={toggleSuggestions} />
-      {
-        currentSuggestions.map(suggestion => (
-          <button key={suggestion.text} onClick={() => setInput(suggestion.text)}>{suggestion.text}</button>
-        ))
-      }
-      <SVG alt="dismiss" src={DismissFillIcon} width={24} onClick={toggleSuggestions} />
-    </div>
-  ) : null
-}
diff --git a/spaces/zxc314/vits-uma-genshin-honkai/text/__init__.py b/spaces/zxc314/vits-uma-genshin-honkai/text/__init__.py
deleted file mode 100644
index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000
--- a/spaces/zxc314/vits-uma-genshin-honkai/text/__init__.py
+++ /dev/null
@@ -1,57 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols
-
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
-def text_to_sequence(text, symbols, cleaner_names):
-    '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-    Args:
-        text: string to convert to a sequence
-        symbols: list of symbols to build the lookup table from (shadows the module-level table)
-        cleaner_names: names of the cleaner functions to run the text through
-    Returns:
-        List of integers corresponding to the symbols in the text
-    '''
-    _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-    sequence = []
-
-    clean_text = _clean_text(text, cleaner_names)
-    for symbol in clean_text:
-        if symbol not in _symbol_to_id.keys():
-            continue
-        symbol_id = _symbol_to_id[symbol]
-        sequence += [symbol_id]
-    return sequence, clean_text
-
-
-def cleaned_text_to_sequence(cleaned_text):
-    '''Converts a string of cleaned text to a sequence of IDs corresponding to the symbols in the text.
-    Args:
-        cleaned_text: string to convert to a sequence
-    Returns:
-        List of integers corresponding to the symbols in the text
-    '''
-    sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
-    return sequence
-
-
-def sequence_to_text(sequence):
-    '''Converts a sequence of IDs back to a string'''
-    result = ''
-    for symbol_id in sequence:
-        s = _id_to_symbol[symbol_id]
-        result += s
-    return result
-
-
-def _clean_text(text, cleaner_names):
-    for name in cleaner_names:
-        cleaner = getattr(cleaners, name)
-        if not cleaner:
-            raise Exception('Unknown cleaner: %s' % name)
-        text = cleaner(text)
-    return text
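One asymmetry in `text/__init__.py` worth noting: `text_to_sequence` builds its lookup table from the `symbols` argument it receives, while `sequence_to_text` uses the module-level table built from `text.symbols.symbols`, so a round trip is only faithful when the two match. A hedged usage sketch; the `basic_cleaners` name follows the keithito cleaners convention and is an assumption about what `text.cleaners` provides:

```python
from text import text_to_sequence, sequence_to_text
from text.symbols import symbols  # module-level symbol inventory

# Encode with the module's own symbols so decoding maps back correctly;
# characters missing from the inventory are silently skipped.
sequence, cleaned = text_to_sequence("hello", symbols, ["basic_cleaners"])
print(sequence)                    # IDs of the symbols found in the inventory
print(sequence_to_text(sequence))  # recovers the kept characters of "hello"
```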