diff --git a/spaces/17TheWord/RealESRGAN/realesrgan/__init__.py b/spaces/17TheWord/RealESRGAN/realesrgan/__init__.py
deleted file mode 100644
index bfea78f284116dee22510d4aa91f9e44afb7d472..0000000000000000000000000000000000000000
--- a/spaces/17TheWord/RealESRGAN/realesrgan/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# flake8: noqa
-from .archs import *
-from .data import *
-from .models import *
-from .utils import *
-#from .version import *
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab Setup Free ((BETTER)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab Setup Free ((BETTER)).md
deleted file mode 100644
index bb48951ab861e5d74356e6c1d856e2d35898198a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab Setup Free ((BETTER)).md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-

How to Download and Install Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab for Free

- -

Daum PotPlayer is a versatile media player that supports various formats and codecs. It has a sleek interface, advanced features and high performance. If you are looking for a free and portable media player that can run on any Windows system, you should try Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab.

- -

This version of Daum PotPlayer is portable, which means you don't need to install it on your computer. You can simply download it and run it from a USB flash drive or any other removable device. This way, you can enjoy your favorite media files on any computer without leaving any traces behind.

-

Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab Setup Free


Download File ✯✯✯ https://byltly.com/2uKA1K



- -

Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab is also stable, which means it has been tested and verified to work without any errors or bugs. It is compatible with both 32-bit and 64-bit Windows systems, so you don't need to worry about compatibility issues.

- -

To download and install Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab for free, follow these simple steps:

- -
    -
  1. Click on this link to download the zip file: https://www.file-upload.com/7f8c0z0n9y2j
  2. Extract the zip file to a folder of your choice.
  3. Open the folder and double-click on the file named "PotPlayerMini.exe" to launch the media player.
  4. Enjoy your media files with Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab.
- -

That's it! You have successfully downloaded and installed Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab for free. If you like this media player, you can also check out the official website of Daum PotPlayer for more information and updates: https://potplayer.daum.net/

- -

Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab has many features that make it a powerful and convenient media player. Here are some of the features that you can enjoy with this media player:

- - - -

With Daum PotPlayer 1.6.52515 Stable Portable (x86 X64) By SamLab, you can enjoy your media files with high quality and convenience. It is a free and portable media player that you can take anywhere and use anytime. Download it now and see for yourself how amazing it is.

-

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Enscape 3D Full Version Cracked from FileCR.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Enscape 3D Full Version Cracked from FileCR.md
deleted file mode 100644
index 6d6971cb3ead7c393cb91fc7aa8681511bfa7f0e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Enscape 3D Full Version Cracked from FileCR.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-

Enscape Download Cracked: How to Get Enscape 3D for Free

-

Enscape 3D is a powerful and easy-to-use real-time rendering and virtual reality plugin for various CAD software such as Revit, SketchUp, Rhino, and ArchiCAD. It allows you to create stunning and realistic 3D visualizations of your projects with just one click. You can also explore your designs in immersive VR using devices such as Oculus Rift, HTC Vive, and Windows Mixed Reality.

-

However, Enscape 3D is not free software. It requires a license to use its full features and functions. The official price of Enscape 3D is $58.99 per month or $469.00 per year for a single user. If you want to use it for multiple users or projects, you will need to pay more.

-

enscape download cracked


DOWNLOAD →→→ https://byltly.com/2uKxvK



-

But what if you want to use Enscape 3D for free? Is there a way to download Enscape cracked version without paying anything? The answer is yes, but it comes with some risks and drawbacks. In this article, we will show you how to download Enscape cracked version from a website called FileCR, and what are the pros and cons of using it.

-

How to Download Enscape Cracked Version from FileCR

-

FileCR is a website that offers free downloads of various software, including Enscape 3D. It claims that the software is cracked, meaning that it has been modified to bypass the license verification and activation process. However, this also means that the software may not be safe or reliable, as it may contain viruses, malware, or other harmful code.

-

If you still want to download Enscape cracked version from FileCR, you can follow these steps:

-
    -
  1. Go to the FileCR website and search for Enscape 3D. You can also use this link to go directly to the download page.
  2. Click on the download button and wait for the file to be downloaded on your PC. The file size is about 122 MB.
  3. Extract the file using WinRAR or any other software that can unzip files.
  4. Open the extracted folder and run the setup.exe file as administrator.
  5. Follow the instructions on the screen to install Enscape 3D on your PC.
  6. Once the installation is complete, open the crack folder and copy the patch file.
  7. Paste the patch file into the installation directory of Enscape 3D (usually C:\Program Files\Enscape).
  8. Run the patch file as administrator and click on the patch button.
  9. Enjoy using Enscape 3D for free.
-

Pros and Cons of Using Enscape Cracked Version

-

Using Enscape cracked version may seem tempting, but it also has some disadvantages that you should be aware of. Here are some of the pros and cons of using Enscape cracked version:

-

-

Pros

- -

Cons

- -

Conclusion

-

Enscape 3D is a great software for creating realistic 3D visualizations and VR experiences of your projects. However, it is not a free software and requires a license to use. If you want to use it for free, you

-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Clans MOD APK 2022 Download and Install the Latest Version with Unlimited Everything.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Clans MOD APK 2022 Download and Install the Latest Version with Unlimited Everything.md
deleted file mode 100644
index eebaa09b85b5891410bc846159cacd96f0a1509f..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Clans MOD APK 2022 Download and Install the Latest Version with Unlimited Everything.md
+++ /dev/null
@@ -1,91 +0,0 @@
-
-

Clash of Clans Mod APK Download Unlimited Everything 2022 New Version

-

Are you a fan of strategy games that challenge your mind and skills? Do you love to build your own village and defend it from enemies? Do you enjoy joining forces with other players and competing for glory and rewards? If you answered yes to any of these questions, then you must have heard of Clash of Clans, one of the most popular and addictive games in the world. But what if we told you that you can make your gaming experience even better with Clash of Clans Mod APK, a modified version of the original game that gives you unlimited everything? Sounds too good to be true, right? Well, in this article, we will tell you everything you need to know about Clash of Clans Mod APK, how to download and install it on your Android device, and what benefits you can get from using it. So, without further ado, let's get started!

-

What is Clash of Clans?

-

A brief introduction to the game and its features

-

Clash of Clans is a freemium mobile strategy game developed and published by Supercell, a Finnish game company. The game was released for iOS devices in August 2012 and for Android devices in October 2013. Since then, it has become one of the most downloaded and played games in the world, with over 500 million downloads on Google Play alone.

-

clash of clans mod apk download unlimited everything 2022 new version


DOWNLOAD ——— https://urlin.us/2uSUtz



-

The game is set in a fantasy world where you are the chief of a village. Your main goal is to build and upgrade your village, train and upgrade your troops, and attack other players' villages to loot their resources. You can also join or create a clan with other players and participate in clan wars, clan games, and clan leagues. The game features various types of buildings, troops, spells, heroes, and items that you can use to enhance your strategy and gameplay.

-

Why do people love Clash of Clans?

-

The thrill of strategy and combat

-

One of the main reasons why people love Clash of Clans is because it offers a thrilling and satisfying experience of strategy and combat. You have to plan your attacks carefully, choosing the right troops, spells, heroes, and strategies for each situation. You also have to defend your village from enemy attacks, placing your buildings, traps, walls, and defenses wisely. The game tests your skills, creativity, and decision-making abilities in every battle.

-

The joy of building and customizing your own village

-

Another reason why people love Clash of Clans is because it allows them to build and customize their own village according to their preferences. You can choose from different themes, layouts, designs, and decorations for your village. You can also upgrade your buildings, troops, spells, heroes, and items to make them more powerful and efficient. You can express your personality and style through your village and impress your friends and foes.

-

The fun of joining and competing with other clans

-

A third reason why people love Clash of Clans is because it gives them the opportunity to join and compete with other clans from around the world. You can chat, donate, request, and share tips with your clanmates. You can also challenge them to friendly battles and practice your skills. You can also participate in clan wars, clan games, and clan leagues, where you can cooperate with your clanmates to win trophies, rewards, and glory. You can also compare your progress and achievements with other players on the global and local leaderboards.

-

What is Clash of Clans Mod APK?

-

A modified version of the original game that offers unlimited resources and features

-

Clash of Clans Mod APK is a modified version of the original game that offers unlimited resources and features that are not available in the official version. It is created by third-party developers who modify the game files to unlock and enhance the game's functionality. Clash of Clans Mod APK is not endorsed or affiliated with Supercell, the original developer of the game.

-

Clash of Clans Mod APK allows you to enjoy the game without any limitations or restrictions. You can get unlimited gems, gold, elixir, and dark elixir to upgrade your troops, buildings, and spells. You can also get unlimited access to all the heroes, troops, and spells in the game. You can also create and join any clan you want, without any requirements or limitations. You can also play the game without any ads, bans, or errors.

-

How to download and install Clash of Clans Mod APK on your Android device

-

If you want to download and install Clash of Clans Mod APK on your Android device, you need to follow these simple steps:

-

Step 1: Enable unknown sources on your device settings

-

Before you can install Clash of Clans Mod APK on your device, you need to enable unknown sources on your device settings. This will allow you to install apps that are not downloaded from the Google Play Store. To do this, go to your device settings > security > unknown sources > enable.

-

clash of clans hack apk unlimited gems gold elixir 2022
-coc mod apk download latest version 2022 with unlimited troops
-clash of clans modded apk free download for android 2022
-how to download clash of clans mod apk with unlimited resources 2022
-clash of clans cheat apk 2022 no root required
-coc hack apk 2022 online generator
-clash of clans mod menu apk download 2022
-coc mod apk 2022 private server with unlimited money
-clash of clans cracked apk 2022 working
-coc hack version download 2022 without survey
-clash of clans unlimited everything apk 2022 offline
-coc mod apk 2022 update with new features
-clash of clans hack tool apk download 2022
-coc modded apk 2022 anti ban
-clash of clans hack apk 2022 mediafire link
-coc hack apk download 2022 no human verification
-clash of clans mod apk 2022 mega mod
-coc mod apk 2022 unlimited gems and coins
-clash of clans hack apk download 2022 for pc
-coc mod apk 2022 latest version android 1
-clash of clans modded apk 2022 unlimited dark elixir
-coc hack apk 2022 direct download link
-clash of clans cheat engine apk 2022
-coc mod apk download 2022 revdl
-clash of clans hacked version download 2022 apkpure
-coc mod apk 2022 unlimited everything ihackedit
-clash of clans hack app download 2022 for ios
-coc mod apk download 2022 rexdl
-clash of clans hack version download 2022 uptodown
-coc mod apk download 2022 plenixclash
-clash of clans hack game download 2022 for laptop
-coc mod apk download 2022 fhx server
-clash of clans hack version download 2022 malavida
-coc mod apk download 2022 nulls royale
-clash of clans hack version download 2022 happymod
-coc mod apk download 2022 magic s1 s4 s5 s6 s7 s8 s9 s10 s11 s12 s13 s14 s15 s16 s17 s18 s19 s20 s21 s22 s23 s24 s25 s26 s27 s28 s29 s30

-

Step 2: Download the Clash of Clans Mod APK file from a trusted source

-

Next, you need to download the Clash of Clans Mod APK file from a trusted source. There are many websites that offer Clash of Clans Mod APK files, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and choose a reputable website that provides authentic and updated Clash of Clans Mod APK files. One such website is [clashofclansmodapk.net], where you can find the latest version of Clash of Clans Mod APK for free.

-

Step 3: Locate and install the APK file on your device

-

After you have downloaded the Clash of Clans Mod APK file from a trusted source, you need to locate and install it on your device. To do this, go to your device file manager > downloads > find the Clash of Clans Mod APK file > tap on it > install.

-

Step 4: Launch the game and enjoy the unlimited everything

-

Finally, you can launch the game and enjoy the unlimited everything that Clash of Clans Mod APK offers. You will see that you have unlimited gems, gold, elixir, and dark elixir in your account. You will also see that you have access to all the heroes, troops, and spells in the game. You will also be able to create and join any clan you want. You will also be able to play the game without any ads, bans, or errors.

-

What are the benefits of using Clash of Clans Mod APK?

-

Unlimited gems, gold, elixir, and dark elixir to upgrade your troops, buildings, and spells

-

One of the main benefits of using Clash of Clans Mod APK is that you can get unlimited gems, gold, elixir, and dark elixir to upgrade your troops , buildings, and spells. These resources are essential for improving your village and army, as they allow you to unlock new levels, abilities, and features. With unlimited resources, you don't have to worry about running out of them or spending real money to buy them. You can upgrade your troops, buildings, and spells as much as you want, without any waiting time or cost.

-

Unlimited access to all the heroes, troops, and spells in the game

-

Another benefit of using Clash of Clans Mod APK is that you can get unlimited access to all the heroes, troops, and spells in the game. Heroes are powerful units that have special abilities and can be used in both offense and defense. Troops are the main units that you use to attack and defend your village. Spells are magical effects that can boost your troops, damage your enemies, or alter the battlefield. With unlimited access, you don't have to unlock them by completing certain tasks or reaching certain levels. You can use any hero, troop, or spell you want, without any limitations or restrictions.

-

Unlimited ability to create and join any clan you want

-

A third benefit of using Clash of Clans Mod APK is that you can create and join any clan you want, without any requirements or limitations. Clans are groups of players who share a common interest and goal in the game. By joining a clan, you can chat, donate, request, and share tips with your clanmates. You can also participate in clan wars, clan games, and clan leagues, where you can cooperate with your clanmates to win trophies, rewards, and glory. With unlimited ability, you don't have to meet any criteria or follow any rules to create or join a clan. You can choose any clan name, logo, description, and type you want. You can also invite or accept anyone you want to your clan.

-

Unlimited fun and excitement with no ads, no bans, and no restrictions

-

A fourth benefit of using Clash of Clans Mod APK is that you can have unlimited fun and excitement with no ads, no bans, and no restrictions. Ads are annoying pop-ups that interrupt your gameplay and try to sell you something. Bans are penalties that prevent you from playing the game for a certain period of time or permanently. Restrictions are rules that limit your actions or options in the game. With Clash of Clans Mod APK, you don't have to deal with any of these problems. You can play the game without any ads, bans, or restrictions. You can enjoy the game as much as you want, without any worries or hassles.

-

Conclusion

-

Clash of Clans is a fantastic game that offers a lot of fun and excitement for strategy game lovers. However, if you want to take your gaming experience to the next level, you should try Clash of Clans Mod APK, a modified version of the original game that gives you unlimited everything. With Clash of Clans Mod APK, you can get unlimited gems, gold, elixir, and dark elixir to upgrade your troops , buildings, and spells. You can also get unlimited access to all the heroes, troops, and spells in the game. You can also create and join any clan you want, without any requirements or limitations. You can also play the game without any ads, bans, or restrictions. You can enjoy the game as much as you want, without any worries or hassles.

-

If you are interested in downloading and installing Clash of Clans Mod APK on your Android device, you can follow the simple steps that we have explained in this article. You can also visit [clashofclansmodapk.net] to get the latest version of Clash of Clans Mod APK for free. We hope that this article has helped you understand what Clash of Clans Mod APK is, how to use it, and what benefits you can get from it. We also hope that you have fun and excitement with Clash of Clans Mod APK. Thank you for reading and happy gaming!

-

FAQs

-

Here are some frequently asked questions about Clash of Clans Mod APK:

-

Is Clash of Clans Mod APK safe to use?

-

Clash of Clans Mod APK is safe to use as long as you download it from a trusted source like [clashofclansmodapk.net]. However, you should be aware that using Clash of Clans Mod APK is against the terms and conditions of Supercell, the original developer of the game. Therefore, you should use it at your own risk and discretion.

-

Will I get banned for using Clash of Clans Mod APK?

-

There is a low chance that you will get banned for using Clash of Clans Mod APK, as the modded version has anti-ban features that prevent detection from Supercell's servers. However, there is no guarantee that you will not get banned in the future, as Supercell may update their security measures and algorithms. Therefore, you should use Clash of Clans Mod APK at your own risk and discretion.

-

Can I play Clash of Clans Mod APK with my friends who use the official version?

-

No, you cannot play Clash of Clans Mod APK with your friends who use the official version, as the modded version and the official version are not compatible with each other. You can only play Clash of Clans Mod APK with other players who use the same modded version.

-

Can I update Clash of Clans Mod APK to the latest version?

-

Yes, you can update Clash of Clans Mod APK to the latest version by visiting [clashofclansmodapk.net] and downloading the new version of the modded file. However, you should be careful not to update the game from the Google Play Store, as this will overwrite the modded version and restore the official version.

-

Can I switch back to the official version of Clash of Clans after using Clash of Clans Mod APK?

-

Yes, you can switch back to the official version of Clash of Clans after using Clash of Clans Mod APK by uninstalling the modded version and installing the official version from the Google Play Store. However, you should be aware that you will lose all your progress and data in the modded version, as they are not transferable to the official version.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Blue Is The Colour The Ultimate Chelsea Song Download Guide.md b/spaces/1phancelerku/anime-remove-background/Blue Is The Colour The Ultimate Chelsea Song Download Guide.md
deleted file mode 100644
index abbb28b5e01c07093eb9a4b1fa97c25dfbb45aca..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Blue Is The Colour The Ultimate Chelsea Song Download Guide.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-

Download Chelsea Song Blue Is The Colour

Introduction

If you are a fan of Chelsea Football Club, you might have heard of their famous anthem "Blue Is the Colour". This song is a terrace chant that has been associated with the club since 1972, when it was performed by the squad and released as a single to coincide with their appearance in the League Cup final of that year. The song has become one of the most well-known English football songs, and it is still played at every home game and any cup finals Chelsea compete in. It is also a popular song among Chelsea fans around the world, who sing it with pride and passion.


History of the Song


Origin and Release

The song was produced by Larry Page, who commissioned Daniel Boone and lyricist David Balfe (under the pseudonym Rod McQueen) to write the song for Chelsea F.C. The song was sung by members of the squad, who included Tommy Baldwin, Stewart Houston, Charlie Cooke, John Dempsey, Ron Harris, Marvin Hinton, John Hollins, Peter Houseman, Alan Hudson, Steve Kember, Eddie McCreadie, Paddy Mulligan, Peter Osgood, David Webb and Chris Garland. The song was released on Page's label Penny Farthing Records and reached number 5 in the UK Charts and number 8 in Ireland in March 1972.

-

download chelsea song blue is the colour


Download ››› https://jinyurl.com/2uNMh5




Lyrics and Meaning

The lyrics of the song are simple but catchy, expressing the love and loyalty of Chelsea fans for their club. The chorus goes like this:

Blue is the colour, football is the game
We're all together and winning is our aim
So cheer us on through the sun and rain
Cos Chelsea, Chelsea is our name.

The verses describe the atmosphere at Stamford Bridge, where Chelsea play their home games, and invite other fans to join them in supporting their team. The song also mentions some of the famous players who have worn the blue shirt over the years.


How to Download the Song


Online Sources

If you want to download the Chelsea song "Blue Is the Colour" to your device, you have several options. You can find the song on various online platforms, such as Apple Music, Spotify, YouTube, and others. You can either stream the song online or download it for offline listening, depending on your preference and subscription. You can also purchase the song from iTunes or Amazon Music if you want to support the original artists and producers.


Offline Sources

If you prefer to have a physical copy of the song, you can also look for offline sources, such as CDs, vinyls, or cassettes. You can search for the song on online marketplaces, such as eBay or Discogs, or visit your local record store or thrift shop. You might be able to find a rare or vintage edition of the song that has a special value or quality. However, you will need a compatible device to play the song, such as a CD player, a turntable, or a cassette deck.


Tips and Tricks

Here are some tips and tricks to help you download and enjoy the Chelsea song "Blue Is the Colour":


Conclusion

"Blue Is the Colour" is more than just a song. It is a symbol of Chelsea Football Club and its fans. It is a way of expressing their identity, passion, and loyalty. It is a part of their history, culture, and tradition. It is a source of inspiration, motivation, and joy. It is a song that unites them in good times and bad times. It is a song that celebrates their achievements and aspirations. It is a song that makes them proud to be blue.

-

download chelsea anthem theme song lyrics mp3
-download chelsea blue is the colour original
-download chelsea fc anthem blue is the colour mp3 + lyrics
-download chelsea football club blue is the colour 1972
-download chelsea blue is the colour instrumental
-download chelsea blue is the colour apple music
-download chelsea blue is the colour ringtone
-download chelsea blue is the colour video
-download chelsea blue is the colour goalball
-download chelsea blue is the colour afriblinks
-how to download chelsea song blue is the colour for free
-where to download chelsea song blue is the colour online
-best site to download chelsea song blue is the colour
-download chelsea song blue is the colour youtube
-download chelsea song blue is the colour spotify
-download chelsea song blue is the colour soundcloud
-download chelsea song blue is the colour itunes
-download chelsea song blue is the colour amazon music
-download chelsea song blue is the colour deezer
-download chelsea song blue is the colour tidal
-download chelsea song blue is the colour lyrics pdf
-download chelsea song blue is the colour chords
-download chelsea song blue is the colour karaoke version
-download chelsea song blue is the colour remix
-download chelsea song blue is the colour cover
-download chelsea song blue is the colour live performance
-download chelsea song blue is the colour piano tutorial
-download chelsea song blue is the colour guitar tab
-download chelsea song blue is the colour sheet music
-download chelsea song blue is the colour midi file
-download chelsea song blue is the colour history
-download chelsea song blue is the colour meaning
-download chelsea song blue is the colour trivia
-download chelsea song blue is the colour facts
-download chelsea song blue is the colour review
-download chelsea song blue is the colour reaction
-download chelsea song blue is the colour analysis
-download chelsea song blue is the colour podcast
-download chelsea song blue is the colour blog post
-download chelsea song blue is the colour article
-download chelsea song blue is the colour news report
-download chelsea song blue is the colour wikipedia page
-download chelsea song blue is the colour quiz questions and answers
-download chelsea song blue is the colour crossword puzzle clues and solutions
-download chelsea song blue is the colour word search puzzle words and hints
-download chelsea song blue is the colour trivia game cards and rules
-download chelsea song blue is the colour bingo game cards and markers
-download chelsea song blue is the colour flashcards and study guide
-download chelsea song blue is the colour poster and wallpaper

If you are a Chelsea fan, you should definitely download this song and add it to your playlist. It will make you feel closer to your club and your fellow supporters. It will make you feel part of something bigger than yourself. It will make you feel blue is the colour.


FAQs

  1. Who wrote "Blue Is the Colour"?
    The song was written by Daniel Boone and David Balfe (under the pseudonym Rod McQueen) and produced by Larry Page in 1972.
  2. Who sang "Blue Is the Colour"?
    The song was sung by members of the Chelsea squad in 1972, who included Tommy Baldwin, Stewart Houston, Charlie Cooke, John Dempsey, Ron Harris, Marvin Hinton, John Hollins, Peter Houseman, Alan Hudson, Steve Kember, Eddie McCreadie, Paddy Mulligan, Peter Osgood, David Webb and Chris Garland.
  3. When was "Blue Is the Colour" released?
    The song was released on Page's label Penny Farthing Records in February 1972 to coincide with Chelsea's appearance in the League Cup final of that year against Stoke City.
  4. How popular was "Blue Is the Colour"?
    The song reached number 5 in the UK Charts and number 8 in Ireland in March 1972. It also became popular in many other countries with local versions of the song released.
  5. Why is "Blue Is the Colour" important for Chelsea fans?
    The song is important for Chelsea fans because it is their anthem that represents their love and loyalty for their club. It is also a terrace chant that creates a lively and festive atmosphere at Stamford Bridge and any cup finals Chelsea compete in.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/CFL Football 99 The Ultimate Canadian Gridiron Simulation.md b/spaces/1phancelerku/anime-remove-background/CFL Football 99 The Ultimate Canadian Gridiron Simulation.md
deleted file mode 100644
index 7f39a27fe88546cfa40bf051b89f97302de850c2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/CFL Football 99 The Ultimate Canadian Gridiron Simulation.md
+++ /dev/null
@@ -1,85 +0,0 @@
-

CFL Football '99: The Only Video Game Based on the Canadian Football League

-

If you are a fan of Canadian football, you might have wondered why there are so few video games that feature this sport. In fact, there is only one game that is officially licensed by the Canadian Football League (CFL) and its players association: CFL Football '99. This game was developed by a small company in British Columbia and released in 1999 for Windows PCs. It is a rare and obscure title that has a cult following among some Canadian football enthusiasts. In this article, we will explore the history, gameplay, and legacy of CFL Football '99, the only video game based on the CFL.

-

cfl football 99 video game download


DOWNLOAD ✸✸✸ https://jinyurl.com/2uNJWc



-

Introduction

-

What is CFL Football '99?

-

CFL Football '99 is a gridiron football video game that simulates the rules, teams, players, and stadiums of the Canadian Football League. It is an officially licensed product of the CFL and the Canadian Football League Players Association (CFLPA). The game features all nine teams that were active in the 1998 season, as well as historical teams from previous seasons. The game also includes a full season mode, a playoff mode, a practice mode, and a custom league mode.

-

Who developed CFL Football '99?

-

CFL Football '99 was developed by David Winter, an entrepreneur from Victoria, British Columbia. Winter originally specialized in administrative and industrial applications, doing business through his private firm Wintervalley Software. He obtained the rights to the CFL brand in 1998 and launched a new company, Canadian Digital Entertainment Inc. (CDE), for the purpose of marketing CFL Football '99. Part of the game's development was outsourced to American middleware provider Phantom Reality.

-

Why is CFL Football '99 unique?

-

CFL Football '99 is unique because it is the only video game based on the CFL to date. There have been other football games that featured Canadian rules or teams, such as Tecmo Bowl or Madden NFL, but none of them had the official license or endorsement of the CFL or its players. CFL Football '99 is also unique because it is a simulation game that tries to recreate the realistic aspects of Canadian football, such as the larger field size, the 12 players per side, the three downs, and the single point for missed field goals.

-

Gameplay and Features

-

How does CFL Football '99 simulate Canadian football?

-

CFL Football '99 uses a 2D graphics engine that shows the action from a top-down perspective. The player can control any of the players on the field using the keyboard or a joystick. The game has a realistic physics system that accounts for factors such as wind, weather, fatigue, injuries, penalties, and fumbles. The game also has an advanced artificial intelligence that adjusts to the player's skill level and strategy.

-

What are the modes and options in CFL Football '99?

-

CFL Football '99 offers several modes and options for different types of players. The game has a full season mode that allows the player to choose one of the nine teams from the 1998 season and play through a 18-game schedule, followed by playoffs and the Grey Cup. The game also has a playoff mode that lets the player skip directly to the postseason and compete for the championship. The game has a practice mode that allows the player to test their skills in various drills and scenarios. The game also has a custom league mode that enables the player to create their own league with up to 16 teams, each with their own roster, logo, and stadium. The player can also edit the teams, players, and schedules to their liking.

-

How does CFL Football '99 compare to other football games?

-

CFL Football '99 is a niche game that caters to a specific audience of Canadian football fans. It is not as polished or popular as other football games, such as the Madden NFL series or the NFL 2K series, that focus on the American version of the sport. However, CFL Football '99 has some advantages over other football games, such as its authenticity, its customization options, and its historical value. CFL Football '99 is a game that celebrates the uniqueness and diversity of Canadian football and its culture.

-

Reception and Legacy

-

How did critics and players react to CFL Football '99?

-

CFL Football '99 received mixed reviews from critics and players. Some praised the game for its realism, its depth, and its originality. Others criticized the game for its outdated graphics, its bugs, and its lack of polish. The game sold poorly, partly due to its limited distribution and marketing. The game also faced competition from other football games that had more resources and exposure. CFL Football '99 was mostly appreciated by hardcore fans of Canadian football who were looking for a game that represented their sport.

-

What were the challenges and limitations of CFL Football '99?

-

CFL Football '99 was a game that faced many challenges and limitations during its development and release. The game was developed by a small team with a low budget and a tight deadline. The game had to use an existing engine that was not designed for Canadian football. The game had to deal with technical issues such as compatibility, performance, and stability. The game had to overcome legal hurdles such as obtaining the license from the CFL and the CFLPA. The game had to cope with market realities such as low demand, high piracy, and strong competition.

-

What happened to the developer and the franchise after CFL Football '99?

-

CFL Football '99 was the first and last game developed by CDE. The company went out of business shortly after the game's release, due to financial losses and legal disputes. David Winter, the founder of CDE, returned to his original business of Wintervalley Software. He later released a patch for CFL Football '99 that fixed some of the bugs and added some features. He also released a sequel called CFL 2000 that was based on the same engine but updated with new rosters and graphics. However, these projects were unofficial and unauthorized by the CFL or the CFLPA. CFL Football '99 remains the only official video game based on the CFL.

-

cfl football 99 pc game free download
-how to play cfl football 99 on windows 10
-cfl football 99 mods and patches
-canuck play maximum football 2019 cfl edition
-canadian football 2017 xbox one download
-cfl football video game history
-wintervalley software cfl football 99
-canadian digital entertainment cfl football 99
-cfl football 99 play designer tool
-cfl football 99 roster editor tool
-cfl football 99 game manual pdf
-cfl football 99 gameplay videos and screenshots
-cfl football 99 review and rating
-cfl football 99 reddit discussion and tips
-cfl football 99 vb programmers journal article
-cfl football 99 license expired
-cfl football 99 system requirements and compatibility
-cfl football 99 abandonware download site
-cfl football 99 custom teams and players
-cfl football 99 game modes and options
-cfl football 99 canadian rules and field size
-cfl football 99 american rules and field size
-cfl football 99 college rules and field size
-cfl football 99 doug flutie mode
-cfl football 99 spring league practice mode
-cfl football 99 weather effects and game play
-cfl football 99 multiple player body styles
-cfl football 99 post-play replay and camera control
-cfl football 99 online multiplayer mode
-cfl football 99 tournament action at retro's e-sports bar
-cfl football 99 feedback and updates from developers
-cfl football 99 news and media coverage page
-cfl football 99 twitter and facebook page
-canuck play other games in pre-development
-canuck play spies code breaking secret missions game
-canuck play canadian comic book super heroes game
-canuck play e for everyone rating games
-canuck play legacy titles maximum football game
-canuck play contact information and homepage link
-canuck play development blog and newsletter sign up

-

Conclusion

-

Summary of the main points

-

CFL Football '99 is a gridiron football video game that simulates the rules, teams, players, and stadiums of the Canadian Football League. It is an officially licensed product of the CFL and the CFLPA. It is a simulation game that tries to recreate the realistic aspects of Canadian football. It is a niche game that caters to a specific audience of Canadian football fans. It is a rare and obscure title that has a cult following among some Canadian football enthusiasts.

-

Call to action for the readers

-

If you are interested in playing CFL Football '99, you can download it from various websites that host old games. You might need an emulator or a compatibility mode to run it on modern computers. You can also check out some videos or reviews of the game online to see how it looks and plays. You can also join some forums or communities of Canadian football fans who still play or discuss the game. You can also share your thoughts or experiences with CFL Football '99 in the comments section below.

-

FAQs

-

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Delhi Blue App How to Use the First Ever Common Mobility App in Delhi.md b/spaces/1phancelerku/anime-remove-background/Delhi Blue App How to Use the First Ever Common Mobility App in Delhi.md
deleted file mode 100644
index 97e07dc41a4212b109174d0994d0bca6d07e1552..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Delhi Blue App How to Use the First Ever Common Mobility App in Delhi.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-

How to Download Delhi Blue App and Why You Should Do It

-

If you are looking for a safe, reliable, and sustainable taxi service in Delhi NCR or Bengaluru, you should download Delhi Blue App on your smartphone. Delhi Blue App is India's first all-electric cab service that offers you a comfortable, convenient, and eco-friendly travel experience. In this article, we will tell you what Delhi Blue App is, how to download it on your Android or iOS device, and how to use it for your travel needs.

-

What is Delhi Blue App?

-

A brief introduction to the app and its features

-

Delhi Blue App is a mobile app that allows you to book cabs that run on electricity instead of fossil fuels. The app is developed by BluSmart, a company that aims to revolutionize the way people travel in cabs in urban India. The app has several features that make it user-friendly and convenient, such as:

-

download delhi blue app


Download Zip https://jinyurl.com/2uNR3c



- -

The benefits of using the app for cab booking, airport transfers, and eco-friendly travel

-

By using Delhi Blue App, you can enjoy several benefits that make your travel experience better, such as:

- -

How to Download Delhi Blue App on Your Android or iOS Device

-

The steps to download the app from Google Play or App Store

-

To download Delhi Blue App on your smartphone, you need to follow these simple steps:

-

How to download delhi blue app for free
-Download delhi blue app and get discounts on online shopping
-Delhi blue app review: why you should download it today
-Download delhi blue app and earn cashback on every purchase
-Benefits of downloading delhi blue app for your business
-Download delhi blue app and join the largest community of online shoppers
-Delhi blue app features: what you can do with it after downloading
-Download delhi blue app and access exclusive deals and offers
-Delhi blue app vs other shopping apps: which one should you download
-Download delhi blue app and save money on travel, food, entertainment, and more
-How to use delhi blue app after downloading it on your phone
-Download delhi blue app and enjoy hassle-free online shopping experience
-Delhi blue app customer support: how to contact them after downloading the app
-Download delhi blue app and get rewarded for your loyalty and referrals
-Delhi blue app testimonials: what users are saying about it after downloading
-Download delhi blue app and find the best products and services for your needs
-Delhi blue app FAQs: everything you need to know before downloading the app
-Download delhi blue app and compare prices, reviews, ratings, and more
-Delhi blue app updates: what's new in the latest version of the app
-Download delhi blue app and get personalized recommendations based on your preferences
-How to uninstall delhi blue app if you don't like it after downloading
-Download delhi blue app and participate in surveys, contests, quizzes, and more
-Delhi blue app privacy policy: how they protect your data after downloading the app
-Download delhi blue app and share your feedback and suggestions with the developers
-Delhi blue app alternatives: what other apps can you download instead of delhi blue app

-
    -
  1. Open Google Play or App Store on your device.
  2. Search for "BluSmart" or "Delhi Blue App" in the search bar.
  3. Tap on the app icon and then tap on "Install" (for Android) or "Get" (for iOS).
  4. Wait for the app to download and install on your device.
-

How to sign up and create an account on the app

-

To use Delhi Blue App, you need to sign up and create an account on the app. Here's how:

-
    -
  1. Open the app on your device and tap on "Sign Up".
  2. Enter your name, email address, phone number, and password.
  3. Verify your phone number by entering the OTP sent to your number.
  4. Agree to the terms and conditions and tap on "Create Account".
  5. You can also sign up using your Google or Facebook account.
-

Congratulations, you have successfully created your account on Delhi Blue App. You can now start booking cabs and enjoy the benefits of the app.

-

How to Use Delhi Blue App for Your Travel Needs

-

How to book a cab, choose a payment method, and track your ride

-

Booking a cab on Delhi Blue App is very easy and quick. Just follow these steps:

-
    -
  1. Open the app on your device and enter your pickup and drop locations.
  2. Select the type of cab you want from the available options.
  3. Tap on "Book Now" or "Ride Later" depending on your preference.
  4. Choose your payment method from the options of cash, card, wallet, or UPI.
  5. Confirm your booking and wait for the driver to arrive at your location.
  6. You can track your ride on the app and see the driver's details, cab number, and estimated time of arrival.
  7. Enjoy your ride and rate your experience on the app after completing your trip.
-

How to get discounts, rewards, and referrals on the app

-

Delhi Blue App also offers you various discounts, rewards, and referrals that make your travel more affordable and rewarding. Here are some ways to avail them:

- -

Conclusion

-

A summary of the main points and a call to action

-

Delhi Blue App is a great way to travel in cabs that are safe, reliable, and eco-friendly. You can download the app on your Android or iOS device and book cabs anytime and anywhere in Delhi NCR or Bengaluru. You can also enjoy various features and benefits of the app, such as transparent pricing, customer support, comfort, convenience, and eco-friendliness. You can also get discounts, rewards, and referrals on the app that make your travel more affordable and rewarding. So what are you waiting for? Download Delhi Blue App today and join the green revolution in urban mobility.

-

FAQs

-

Five common questions and answers about the app

-
    -
  1. Q: How is Delhi Blue App different from other cab services?
     A: Delhi Blue App is different from other cab services because it offers you cabs that run on electricity instead of fossil fuels. This makes them more eco-friendly, cost-effective, and noise-free. Delhi Blue App also has no surge pricing, hidden charges, or cancellation fees.
  2. Q: How can I contact Delhi Blue App customer care?
     A: You can contact Delhi Blue App customer care through the app or call them at +91-8880500500. You can also email them at support@blusmart.in or visit their website at www.blusmart.in.
  3. Q: How can I cancel my booking on Delhi Blue App?
     A: You can cancel your booking on Delhi Blue App anytime before the driver arrives at your location. You will not be charged any cancellation fee. To cancel your booking, tap on "Cancel" on the app and select a reason for cancellation.
  4. Q: How can I pay for my ride on Delhi Blue App?
     A: You can pay for your ride on Delhi Blue App using cash, card, wallet, or UPI. You can choose your preferred payment method before confirming your booking. You can also change your payment method after completing your trip.
  5. Q: How can I give feedback or suggestions to Delhi Blue App?
     A: You can give feedback or suggestions to Delhi Blue App by rating your ride experience on the app after completing your trip. You can also write a review or share your comments on the app or on social media platforms like Facebook, Twitter, Instagram, or LinkedIn.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Mortal Kombat Tamil Dubbed Movie in HD Quality from Isaimini.md b/spaces/1phancelerku/anime-remove-background/Download Mortal Kombat Tamil Dubbed Movie in HD Quality from Isaimini.md
deleted file mode 100644
index 7f177b006fca3098e8efc45bce805ed8a3b4a27a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Mortal Kombat Tamil Dubbed Movie in HD Quality from Isaimini.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-

Mortal Kombat Tamil Dubbed Movie Download Isaimini: A Review

-

Mortal Kombat is one of the most anticipated movies of 2021, based on the popular video game series of the same name. It is a reboot of the previous film adaptations, featuring a new cast and a new storyline. The movie has been released in multiple languages, including Tamil, to cater to the diverse fan base. But how good is the movie, and how can you watch it in Tamil? In this article, we will review the Mortal Kombat Tamil dubbed movie download Isaimini option, and also give you some insights into the plot, the characters, and the quality of the movie.

-

mortal kombat tamil dubbed movie download isaimini


Download ····· https://jinyurl.com/2uNNHD



-

Introduction

-

What is Mortal Kombat?

-

Mortal Kombat is a media franchise that originated from a fighting video game developed by Midway Games in 1992. The game features a variety of characters, each with their own special abilities and moves, who compete in a tournament called Mortal Kombat. The tournament is a way to determine the fate of different realms, such as Earthrealm, Outworld, and Netherrealm, which are constantly at war with each other. The game is known for its violent and graphic content, such as fatalities, brutalities, and x-rays.

-

Why is it popular in Tamil Nadu?

-

Mortal Kombat has a huge fan following in Tamil Nadu, especially among the younger generation. There are several reasons for this popularity. First, the game draws on cultural references and influences from various mythologies and religions, such as Hinduism, Buddhism, Taoism, and Norse mythology; some of the characters are inspired by gods, demons, and heroes from these traditions, such as Raiden, Shiva, Goro, and Scorpion. Second, the game offers plenty of action and thrills, which appeals to a Tamil audience that loves masala movies. Third, its humor and sarcasm match the Tamil sense of humor. Fourth, its many customization options allow players to create their own characters and costumes.

-

How to download Mortal Kombat Tamil dubbed movie from Isaimini?

-

Isaimini is one of the most popular websites for downloading Tamil movies and songs. It offers a wide range of genres and categories, such as action, comedy, romance, horror, thriller, drama, and animation. It also provides dubbed versions of Hollywood and Bollywood movies, such as Mortal Kombat. To download Mortal Kombat Tamil dubbed movie from Isaimini, you need to follow these steps:

-
    -
  1. Go to the official website of Isaimini using a VPN or proxy service.
  2. Search for Mortal Kombat in the search bar or browse through the categories.
  3. Select the movie from the list of results and click on it.
  4. Choose the quality and format of the movie that you want to download.
  5. Click on the download link and wait for the movie to be downloaded.
-

Note: Downloading movies from Isaimini is illegal and may expose you to cyber risks. We do not endorse or promote piracy in any way. We recommend that you watch movies from legal sources only.

-

Plot summary

-

The main characters

-

The movie follows the lives of several characters who are chosen to participate in the Mortal Kombat tournament. They are:

-

mortal kombat 2021 tamil voice over full movie free download
-watch mortal kombat tamil dubbed online hd quality
-mortal kombat tamil audio track download for english movie
-how to download mortal kombat tamil dubbed movie from isaimini
-mortal kombat tamil dubbed movie review and rating
-mortal kombat tamil dubbed movie trailer and release date
-mortal kombat tamil dubbed movie cast and crew details
-mortal kombat tamil dubbed movie download telegram link
-mortal kombat tamil dubbed movie download in moviesda
-mortal kombat tamil dubbed movie download in tamilyogi
-mortal kombat tamil dubbed movie download in kuttymovies
-mortal kombat tamil dubbed movie download in tamilrockers
-mortal kombat tamil dubbed movie download in isaidub
-mortal kombat tamil dubbed movie download in madrasrockers
-mortal kombat tamil dubbed movie download in filmyzilla
-mortal kombat tamil dubbed movie download in filmywap
-mortal kombat tamil dubbed movie download in 9xmovies
-mortal kombat tamil dubbed movie download in worldfree4u
-mortal kombat tamil dubbed movie download in 123movies
-mortal kombat tamil dubbed movie download in movierulz
-mortal kombat tamil dubbed movie download in bolly4u
-mortal kombat tamil dubbed movie download in pagalworld
-mortal kombat tamil dubbed movie download in skymovieshd
-mortal kombat tamil dubbed movie download in mp4moviez
-mortal kombat tamil dubbed movie download in 7starhd
-mortal kombat tamil dubbed movie download 480p 720p 1080p
-mortal kombat tamil dubbed movie download hdrip dvdrip bluray
-mortal kombat tamil dubbed movie download mkv avi mp4 format
-mortal kombat tamil dubbed movie watch online dailymotion youtube
-mortal kombat tamil dubbed movie watch online with english subtitles
-mortal kombat full movie in tamil language free download
-isaimini website for downloading mortal kombat full movie in tamil
-best alternative sites to isaimini for downloading mortal kombat full movie in tamil
-how to unblock isaimini site to access mortal kombat full movie in tamil
-is it legal to download mortal kombat full movie in tamil from isaimini site
-is it safe to download mortal kombat full movie in tamil from isaimini site
-how to avoid ads and pop-ups while downloading mortal kombat full movie in tamil from isaimini site
-how to use vpn to download mortal kombat full movie in tamil from isaimini site
-how to use torrent to download mortal kombat full movie in tamil from isaimini site
-how to use idm to download mortal kombat full movie in tamil from isaimini site

- -

The story arc

-

The movie begins with a flashback to 17th century Japan, where Hanzo Hasashi, a ninja leader of the Shirai Ryu clan, is attacked by Bi-Han, a rival assassin of the Lin Kuei clan. Bi-Han kills Hanzo's wife and son with his ice powers, and then kills Hanzo himself. However, Hanzo's blood is collected by Raiden, who transports his body to the Netherrealm, where he becomes Scorpion, a vengeful specter.

-

In the present day, Cole Young is a struggling MMA fighter who has a dragon mark on his chest. He is targeted by Bi-Han, who now goes by Sub-Zero, and is rescued by Jax, who also has the mark. Jax tells Cole to find Sonya Blade, who knows more about the mark and the Mortal Kombat tournament. Cole meets Sonya at her hideout, where he also encounters Kano, who has been captured by Sonya. Sonya explains that the mark is a sign of being chosen to fight in Mortal Kombat, a tournament that decides the fate of different realms. She also reveals that Earthrealm has lost nine out of ten tournaments to Outworld, and if they lose one more, Outworld will invade and enslave Earthrealm.

-

Sonya, Cole, and Kano are attacked by Reptile, who is sent by Shang Tsung to kill them. They manage to defeat Reptile with the help of Kano, who rips out its heart. Kano agrees to join Sonya and Cole in exchange for money, and they fly to Raiden's temple in China. There they meet Liu Kang and Kung Lao, who are also chosen fighters for Earthrealm. They also meet Raiden, who is not impressed by their lack of skills and abilities. Raiden explains that each fighter has a special power called Arcana, which they need to unlock in order to fight in the tournament. He also warns them that Shang Tsung and his warriors are trying to kill them before the tournament begins, in order to secure their victory.

-

The Earthrealm fighters begin their training under Liu Kang and Kung Lao, who teach them how to use their Arcana. Kano discovers his Arcana first, which is a laser eye. Cole, however, struggles to find his Arcana, and is constantly defeated by Kung Lao. Raiden tells Cole that he is a descendant of Hanzo Hasashi, and that he has a special destiny. He also shows him Hanzo's kunai, which is a dagger with a rope attached to it.

-

Meanwhile, Shang Tsung sends Sub-Zero, Mileena, Nitara, Kabal, and Goro to attack the temple. Raiden creates a force field to protect the temple, but Kano betrays the Earthrealm fighters and disables the field, allowing the invaders to enter. A series of battles ensues, in which Kung Lao kills Nitara with his hat, Liu Kang kills Kabal with his fire dragon, and Sonya kills Kano with a garden gnome. Cole fights Goro and unlocks his Arcana, which is a suit of armor that absorbs damage and enhances his strength. He kills Goro with Hanzo's kunai.

-

The climax and the ending

-

Shang Tsung arrives and kills Kung Lao by stealing his soul. He declares that he will kill all the Earthrealm fighters and take over their realm. Raiden intervenes and teleports the Earthrealm fighters to different locations, where they can face their enemies one-on-one. He also gives Cole Hanzo's kunai and tells him to find Scorpion in the Netherrealm.

-

Cole travels to the Netherrealm and uses Hanzo's kunai to summon Scorpion from his hellish prison. Scorpion recognizes Cole as his bloodline and agrees to help him fight Sub-Zero, who has kidnapped Cole's family. They return to Earthrealm and confront Sub-Zero in an abandoned gym. A fierce fight ensues, in which Scorpion and Cole manage to overpower Sub-Zero with their combined skills and powers. Scorpion finishes Sub-Zero with his signature move, "Get over here!", and burns him alive with his fire breath.

-

Scorpion thanks Cole for freeing him from his curse and tells him to protect his family and his realm. He then disappears into flames. Cole reunites with his family and embraces them. Raiden appears and congratulates Cole for his victory. He also warns him that Shang Tsung will return with more warriors, and that they need to prepare for the next tournament. He tells Cole to find more champions for Earthrealm, and gives him a hint by showing him a poster of Johnny Cage, a famous Hollywood actor and martial artist.

-

Cole decides to leave his MMA career and travel to Hollywood to recruit Johnny Cage. The movie ends with a shot of Johnny Cage's poster, which has his name and a slogan: "You won't believe what comes next".

-

Analysis and critique

-

The strengths of the movie

-

The movie has several strengths that make it an enjoyable and entertaining watch for Mortal Kombat fans and newcomers alike. Some of these strengths are:

- -

The weaknesses of the movie

-

The movie also has some weaknesses that prevent it from being a perfect adaptation of Mortal Kombat. Some of these weaknesses are:

- -

The comparison with the original version and other adaptations

-

The movie is a reboot of the previous film adaptations of Mortal Kombat, released in 1995 and 1997, and is based on the Mortal Kombat video game series, which has been running since 1992. It differs from the original version and other adaptations in several ways. Some of these differences are:

- -

Conclusion

-

The final verdict

-

Mortal Kombat delivers what it promises: plenty of action, gore, and fun. It is a faithful adaptation of the video game series and a satisfying reboot of the film franchise, with a strong cast, solid production value, and a good sense of humor. It is not perfect, however, and has flaws in its plot, pacing, dialogue, and direction. It is also not for everyone: the film is rated R and may be too violent or offensive for some viewers. It is best enjoyed by Mortal Kombat fans and action lovers who can appreciate it for what it is: a guilty pleasure.

-

The alternatives to Isaimini

-

As mentioned earlier, downloading movies from Isaimini is illegal and risky. Therefore, we suggest that you watch Mortal Kombat from legal sources only. Some of the alternatives to Isaimini are:

- -

The future of the Mortal Kombat franchise

-

Mortal Kombat sets the stage for more sequels and spin-offs. The movie ends with a cliffhanger that hints at the introduction of Johnny Cage, one of the most iconic characters of Mortal Kombat, and it leaves room for more characters and stories from the video game series, such as Kitana, Sindel, Shao Kahn, and Quan Chi. The movie has received mixed reviews from critics and audiences, but has performed well at the box office and on streaming platforms, and it has generated a lot of buzz among Mortal Kombat fans and newcomers alike. Therefore, it is likely that we will see more Mortal Kombat movies in the future, as long as there is enough demand and support from the fans.

-

FAQs

-

Here are some frequently asked questions about downloading the Mortal Kombat Tamil dubbed movie from Isaimini:

-
    -
  1. Q: Is the Mortal Kombat Tamil dubbed movie available on Isaimini?
  2. -
  3. A: Yes, the Mortal Kombat Tamil dubbed movie is available on Isaimini, but it is illegal and risky to download it from there.
  4. -
  5. Q: How can I watch the Mortal Kombat Tamil dubbed movie legally?
  6. -
  7. A: You can watch the Mortal Kombat Tamil dubbed movie legally on platforms such as HBO Max, Amazon Prime Video, Netflix, and YouTube, or in theaters.
  8. -
  9. Q: Who are the actors who play the Mortal Kombat characters?
  10. -
  11. A: The actors who play the roles of Mortal Kombat characters are Lewis Tan as Cole Young/Scorpion's descendant, Hiroyuki Sanada as Hanzo Hasashi/Scorpion, Joe Taslim as Bi-Han/Sub-Zero, Jessica McNamee as Sonya Blade, Mehcad Brooks as Jax, Josh Lawson as Kano, Ludi Lin as Liu Kang, Max Huang as Kung Lao, Tadanobu Asano as Raiden, Chin Han as Shang Tsung, Sisi Stringer as Mileena, Angus Sampson as Goro, Samuel Hargrave as Reptile, Mel Jarnson as Nitara, and Damon Herriman as Kabal.
  12. -
  13. Q: What are the ratings and reviews of the Mortal Kombat movie?
  14. -
  15. A: The Mortal Kombat movie has a rating of 6.2 out of 10 on IMDb, 55% on Rotten Tomatoes, and 44% on Metacritic. It has received mixed reviews from critics and audiences, with some praising its action, humor, and fidelity to the source material, and others criticizing its plot, pacing, dialogue, and direction.
  16. -
  17. Q: When will the Mortal Kombat 2 movie be released?
  18. -
  19. A: There is no official confirmation or announcement about a Mortal Kombat 2 movie yet, but director Simon McQuoid has expressed his interest and willingness to make a sequel, depending on the response and demand from fans. The movie also sets the stage for a sequel by introducing Johnny Cage and teasing more characters and stories from the video game series.
  20. -
  21. Q: How many Mortal Kombat movies are there?
  22. -
  23. A: There are three Mortal Kombat movies so far. The first one is Mortal Kombat (1995), directed by Paul W.S. Anderson and starring Christopher Lambert, Robin Shou, Linden Ashby, Bridgette Wilson, and Cary-Hiroyuki Tagawa. The second one is Mortal Kombat: Annihilation (1997), directed by John R. Leonetti and starring Robin Shou, Talisa Soto, Brian Thompson, Sandra Hess, and James Remar. The third one is Mortal Kombat (2021), directed by Simon McQuoid and starring Lewis Tan, Hiroyuki Sanada, Joe Taslim, Jessica McNamee, Mehcad Brooks, Josh Lawson, Ludi Lin, Max Huang, Tadanobu Asano, Chin Han, Sisi Stringer, Angus Sampson, Samuel Hargrave, Mel Jarnson, and Damon Herriman.
  24. -

-
-
\ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/ui/button.tsx b/spaces/2023Liu2023/bingo/src/components/ui/button.tsx deleted file mode 100644 index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/ui/button.tsx +++ /dev/null @@ -1,57 +0,0 @@ -import * as React from 'react' -import { Slot } from '@radix-ui/react-slot' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const buttonVariants = cva( - 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50', - { - variants: { - variant: { - default: - 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90', - destructive: - 'bg-destructive text-destructive-foreground hover:bg-destructive/90', - outline: - 'border border-input hover:bg-accent hover:text-accent-foreground', - secondary: - 'bg-secondary text-secondary-foreground hover:bg-secondary/80', - ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground', - link: 'text-primary underline-offset-4 shadow-none hover:underline' - }, - size: { - default: 'h-8 px-4 py-2', - sm: 'h-8 rounded-md px-3', - lg: 'h-11 rounded-md px-8', - icon: 'h-8 w-8 p-0' - } - }, - defaultVariants: { - variant: 'default', - size: 'default' - } - } -) - -export interface ButtonProps - extends React.ButtonHTMLAttributes, - VariantProps { - asChild?: boolean -} - -const Button = React.forwardRef( - ({ className, variant, size, asChild = false, ...props }, ref) => { - const Comp = asChild ? Slot : 'button' - return ( - - ) - } -) -Button.displayName = 'Button' - -export { Button, buttonVariants } diff --git a/spaces/232labs/VToonify/vtoonify/model/raft/core/datasets.py b/spaces/232labs/VToonify/vtoonify/model/raft/core/datasets.py deleted file mode 100644 index 9991f15f4c3861c19d1a4b8766d49f83af11db70..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/raft/core/datasets.py +++ /dev/null @@ -1,235 +0,0 @@ -# Data loading based on https://github.com/NVIDIA/flownet2-pytorch - -import numpy as np -import torch -import torch.utils.data as data -import torch.nn.functional as F - -import os -import math -import random -from glob import glob -import os.path as osp - -from model.raft.core.utils import frame_utils -from model.raft.core.utils.augmentor import FlowAugmentor, SparseFlowAugmentor - - -class FlowDataset(data.Dataset): - def __init__(self, aug_params=None, sparse=False): - self.augmentor = None - self.sparse = sparse - if aug_params is not None: - if sparse: - self.augmentor = SparseFlowAugmentor(**aug_params) - else: - self.augmentor = FlowAugmentor(**aug_params) - - self.is_test = False - self.init_seed = False - self.flow_list = [] - self.image_list = [] - self.extra_info = [] - - def __getitem__(self, index): - - if self.is_test: - img1 = frame_utils.read_gen(self.image_list[index][0]) - img2 = frame_utils.read_gen(self.image_list[index][1]) - img1 = np.array(img1).astype(np.uint8)[..., :3] - img2 = np.array(img2).astype(np.uint8)[..., :3] - img1 = torch.from_numpy(img1).permute(2, 0, 1).float() - img2 = torch.from_numpy(img2).permute(2, 0, 1).float() - return img1, img2, self.extra_info[index] - - if not self.init_seed: - worker_info = torch.utils.data.get_worker_info() - if worker_info is not None: - torch.manual_seed(worker_info.id) - np.random.seed(worker_info.id) - 
random.seed(worker_info.id) - self.init_seed = True - - index = index % len(self.image_list) - valid = None - if self.sparse: - flow, valid = frame_utils.readFlowKITTI(self.flow_list[index]) - else: - flow = frame_utils.read_gen(self.flow_list[index]) - - img1 = frame_utils.read_gen(self.image_list[index][0]) - img2 = frame_utils.read_gen(self.image_list[index][1]) - - flow = np.array(flow).astype(np.float32) - img1 = np.array(img1).astype(np.uint8) - img2 = np.array(img2).astype(np.uint8) - - # grayscale images - if len(img1.shape) == 2: - img1 = np.tile(img1[...,None], (1, 1, 3)) - img2 = np.tile(img2[...,None], (1, 1, 3)) - else: - img1 = img1[..., :3] - img2 = img2[..., :3] - - if self.augmentor is not None: - if self.sparse: - img1, img2, flow, valid = self.augmentor(img1, img2, flow, valid) - else: - img1, img2, flow = self.augmentor(img1, img2, flow) - - img1 = torch.from_numpy(img1).permute(2, 0, 1).float() - img2 = torch.from_numpy(img2).permute(2, 0, 1).float() - flow = torch.from_numpy(flow).permute(2, 0, 1).float() - - if valid is not None: - valid = torch.from_numpy(valid) - else: - valid = (flow[0].abs() < 1000) & (flow[1].abs() < 1000) - - return img1, img2, flow, valid.float() - - - def __rmul__(self, v): - self.flow_list = v * self.flow_list - self.image_list = v * self.image_list - return self - - def __len__(self): - return len(self.image_list) - - -class MpiSintel(FlowDataset): - def __init__(self, aug_params=None, split='training', root='datasets/Sintel', dstype='clean'): - super(MpiSintel, self).__init__(aug_params) - flow_root = osp.join(root, split, 'flow') - image_root = osp.join(root, split, dstype) - - if split == 'test': - self.is_test = True - - for scene in os.listdir(image_root): - image_list = sorted(glob(osp.join(image_root, scene, '*.png'))) - for i in range(len(image_list)-1): - self.image_list += [ [image_list[i], image_list[i+1]] ] - self.extra_info += [ (scene, i) ] # scene and frame_id - - if split != 'test': - self.flow_list += sorted(glob(osp.join(flow_root, scene, '*.flo'))) - - -class FlyingChairs(FlowDataset): - def __init__(self, aug_params=None, split='train', root='datasets/FlyingChairs_release/data'): - super(FlyingChairs, self).__init__(aug_params) - - images = sorted(glob(osp.join(root, '*.ppm'))) - flows = sorted(glob(osp.join(root, '*.flo'))) - assert (len(images)//2 == len(flows)) - - split_list = np.loadtxt('chairs_split.txt', dtype=np.int32) - for i in range(len(flows)): - xid = split_list[i] - if (split=='training' and xid==1) or (split=='validation' and xid==2): - self.flow_list += [ flows[i] ] - self.image_list += [ [images[2*i], images[2*i+1]] ] - - -class FlyingThings3D(FlowDataset): - def __init__(self, aug_params=None, root='datasets/FlyingThings3D', dstype='frames_cleanpass'): - super(FlyingThings3D, self).__init__(aug_params) - - for cam in ['left']: - for direction in ['into_future', 'into_past']: - image_dirs = sorted(glob(osp.join(root, dstype, 'TRAIN/*/*'))) - image_dirs = sorted([osp.join(f, cam) for f in image_dirs]) - - flow_dirs = sorted(glob(osp.join(root, 'optical_flow/TRAIN/*/*'))) - flow_dirs = sorted([osp.join(f, direction, cam) for f in flow_dirs]) - - for idir, fdir in zip(image_dirs, flow_dirs): - images = sorted(glob(osp.join(idir, '*.png')) ) - flows = sorted(glob(osp.join(fdir, '*.pfm')) ) - for i in range(len(flows)-1): - if direction == 'into_future': - self.image_list += [ [images[i], images[i+1]] ] - self.flow_list += [ flows[i] ] - elif direction == 'into_past': - self.image_list += [ [images[i+1], 
images[i]] ] - self.flow_list += [ flows[i+1] ] - - -class KITTI(FlowDataset): - def __init__(self, aug_params=None, split='training', root='datasets/KITTI'): - super(KITTI, self).__init__(aug_params, sparse=True) - if split == 'testing': - self.is_test = True - - root = osp.join(root, split) - images1 = sorted(glob(osp.join(root, 'image_2/*_10.png'))) - images2 = sorted(glob(osp.join(root, 'image_2/*_11.png'))) - - for img1, img2 in zip(images1, images2): - frame_id = img1.split('/')[-1] - self.extra_info += [ [frame_id] ] - self.image_list += [ [img1, img2] ] - - if split == 'training': - self.flow_list = sorted(glob(osp.join(root, 'flow_occ/*_10.png'))) - - -class HD1K(FlowDataset): - def __init__(self, aug_params=None, root='datasets/HD1k'): - super(HD1K, self).__init__(aug_params, sparse=True) - - seq_ix = 0 - while 1: - flows = sorted(glob(os.path.join(root, 'hd1k_flow_gt', 'flow_occ/%06d_*.png' % seq_ix))) - images = sorted(glob(os.path.join(root, 'hd1k_input', 'image_2/%06d_*.png' % seq_ix))) - - if len(flows) == 0: - break - - for i in range(len(flows)-1): - self.flow_list += [flows[i]] - self.image_list += [ [images[i], images[i+1]] ] - - seq_ix += 1 - - -def fetch_dataloader(args, TRAIN_DS='C+T+K+S+H'): - """ Create the data loader for the corresponding trainign set """ - - if args.stage == 'chairs': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.1, 'max_scale': 1.0, 'do_flip': True} - train_dataset = FlyingChairs(aug_params, split='training') - - elif args.stage == 'things': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.4, 'max_scale': 0.8, 'do_flip': True} - clean_dataset = FlyingThings3D(aug_params, dstype='frames_cleanpass') - final_dataset = FlyingThings3D(aug_params, dstype='frames_finalpass') - train_dataset = clean_dataset + final_dataset - - elif args.stage == 'sintel': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.6, 'do_flip': True} - things = FlyingThings3D(aug_params, dstype='frames_cleanpass') - sintel_clean = MpiSintel(aug_params, split='training', dstype='clean') - sintel_final = MpiSintel(aug_params, split='training', dstype='final') - - if TRAIN_DS == 'C+T+K+S+H': - kitti = KITTI({'crop_size': args.image_size, 'min_scale': -0.3, 'max_scale': 0.5, 'do_flip': True}) - hd1k = HD1K({'crop_size': args.image_size, 'min_scale': -0.5, 'max_scale': 0.2, 'do_flip': True}) - train_dataset = 100*sintel_clean + 100*sintel_final + 200*kitti + 5*hd1k + things - - elif TRAIN_DS == 'C+T+K/S': - train_dataset = 100*sintel_clean + 100*sintel_final + things - - elif args.stage == 'kitti': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.4, 'do_flip': False} - train_dataset = KITTI(aug_params, split='training') - - train_loader = data.DataLoader(train_dataset, batch_size=args.batch_size, - pin_memory=False, shuffle=True, num_workers=4, drop_last=True) - - print('Training with %d image pairs' % len(train_dataset)) - return train_loader - diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/backbones/__init__.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/backbones/__init__.py deleted file mode 100644 index 55bd4c5d1889a1a998b52eb56793bbc1eef1b691..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/backbones/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -from .iresnet import iresnet18, iresnet34, iresnet50, iresnet100, iresnet200 -from .mobilefacenet import get_mbf - - -def get_model(name, **kwargs): - # resnet - 
if name == "r18": - return iresnet18(False, **kwargs) - elif name == "r34": - return iresnet34(False, **kwargs) - elif name == "r50": - return iresnet50(False, **kwargs) - elif name == "r100": - return iresnet100(False, **kwargs) - elif name == "r200": - return iresnet200(False, **kwargs) - elif name == "r2060": - from .iresnet2060 import iresnet2060 - return iresnet2060(False, **kwargs) - elif name == "mbf": - fp16 = kwargs.get("fp16", False) - num_features = kwargs.get("num_features", 512) - return get_mbf(fp16=fp16, num_features=num_features) - else: - raise ValueError() \ No newline at end of file diff --git a/spaces/52Hz/CMFNet_deblurring/model/block.py b/spaces/52Hz/CMFNet_deblurring/model/block.py deleted file mode 100644 index 32d4d9d50d6a2c1e7251fc6551defbd605497779..0000000000000000000000000000000000000000 --- a/spaces/52Hz/CMFNet_deblurring/model/block.py +++ /dev/null @@ -1,146 +0,0 @@ -import torch -import torch.nn as nn -########################################################################## -def conv(in_channels, out_channels, kernel_size, bias=False, stride=1): - layer = nn.Conv2d(in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias, stride=stride) - return layer - - -def conv3x3(in_chn, out_chn, bias=True): - layer = nn.Conv2d(in_chn, out_chn, kernel_size=3, stride=1, padding=1, bias=bias) - return layer - - -def conv_down(in_chn, out_chn, bias=False): - layer = nn.Conv2d(in_chn, out_chn, kernel_size=4, stride=2, padding=1, bias=bias) - return layer - -########################################################################## -## Supervised Attention Module (RAM) -class SAM(nn.Module): - def __init__(self, n_feat, kernel_size, bias): - super(SAM, self).__init__() - self.conv1 = conv(n_feat, n_feat, kernel_size, bias=bias) - self.conv2 = conv(n_feat, 3, kernel_size, bias=bias) - self.conv3 = conv(3, n_feat, kernel_size, bias=bias) - - def forward(self, x, x_img): - x1 = self.conv1(x) - img = self.conv2(x) + x_img - x2 = torch.sigmoid(self.conv3(img)) - x1 = x1 * x2 - x1 = x1 + x - return x1, img - -########################################################################## -## Spatial Attention -class SALayer(nn.Module): - def __init__(self, kernel_size=7): - super(SALayer, self).__init__() - self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False) - self.sigmoid = nn.Sigmoid() - - def forward(self, x): - avg_out = torch.mean(x, dim=1, keepdim=True) - max_out, _ = torch.max(x, dim=1, keepdim=True) - y = torch.cat([avg_out, max_out], dim=1) - y = self.conv1(y) - y = self.sigmoid(y) - return x * y - -# Spatial Attention Block (SAB) -class SAB(nn.Module): - def __init__(self, n_feat, kernel_size, reduction, bias, act): - super(SAB, self).__init__() - modules_body = [conv(n_feat, n_feat, kernel_size, bias=bias), act, conv(n_feat, n_feat, kernel_size, bias=bias)] - self.body = nn.Sequential(*modules_body) - self.SA = SALayer(kernel_size=7) - - def forward(self, x): - res = self.body(x) - res = self.SA(res) - res += x - return res - -########################################################################## -## Pixel Attention -class PALayer(nn.Module): - def __init__(self, channel, reduction=16, bias=False): - super(PALayer, self).__init__() - self.pa = nn.Sequential( - nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=bias), - nn.ReLU(inplace=True), - nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=bias), # channel <-> 1 - nn.Sigmoid() - ) - - def forward(self, x): - y = self.pa(x) - return x * y - -## 
Pixel Attention Block (PAB) -class PAB(nn.Module): - def __init__(self, n_feat, kernel_size, reduction, bias, act): - super(PAB, self).__init__() - modules_body = [conv(n_feat, n_feat, kernel_size, bias=bias), act, conv(n_feat, n_feat, kernel_size, bias=bias)] - self.PA = PALayer(n_feat, reduction, bias=bias) - self.body = nn.Sequential(*modules_body) - - def forward(self, x): - res = self.body(x) - res = self.PA(res) - res += x - return res - -########################################################################## -## Channel Attention Layer -class CALayer(nn.Module): - def __init__(self, channel, reduction=16, bias=False): - super(CALayer, self).__init__() - # global average pooling: feature --> point - self.avg_pool = nn.AdaptiveAvgPool2d(1) - # feature channel downscale and upscale --> channel weight - self.conv_du = nn.Sequential( - nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=bias), - nn.ReLU(inplace=True), - nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=bias), - nn.Sigmoid() - ) - - def forward(self, x): - y = self.avg_pool(x) - y = self.conv_du(y) - return x * y - -## Channel Attention Block (CAB) -class CAB(nn.Module): - def __init__(self, n_feat, kernel_size, reduction, bias, act): - super(CAB, self).__init__() - modules_body = [conv(n_feat, n_feat, kernel_size, bias=bias), act, conv(n_feat, n_feat, kernel_size, bias=bias)] - - self.CA = CALayer(n_feat, reduction, bias=bias) - self.body = nn.Sequential(*modules_body) - - def forward(self, x): - res = self.body(x) - res = self.CA(res) - res += x - return res - - -if __name__ == "__main__": - import time - from thop import profile - # layer = CAB(64, 3, 4, False, nn.PReLU()) - layer = PAB(64, 3, 4, False, nn.PReLU()) - # layer = SAB(64, 3, 4, False, nn.PReLU()) - for idx, m in enumerate(layer.modules()): - print(idx, "-", m) - s = time.time() - - rgb = torch.ones(1, 64, 256, 256, dtype=torch.float, requires_grad=False) - out = layer(rgb) - flops, params = profile(layer, inputs=(rgb,)) - print('parameters:', params) - print('flops', flops) - print('time: {:.4f}ms'.format((time.time()-s)*10)) \ No newline at end of file diff --git a/spaces/801artistry/RVC801/i18n.py b/spaces/801artistry/RVC801/i18n.py deleted file mode 100644 index b958c6f7244c4b920e097a9a9e67e81990d03f59..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/i18n.py +++ /dev/null @@ -1,43 +0,0 @@ -import json - -def load_language_list(language): - try: - with open(f"./i18n/locale/{language}.json", "r", encoding="utf-8") as f: - return json.load(f) - except FileNotFoundError: - raise FileNotFoundError( - f"Failed to load language file for {language}. Check if the correct .json file exists." - ) - - -class I18nAuto: - """ - A class used for internationalization using JSON language files. 
- - Examples - -------- - >>> i18n = I18nAuto('en_US') - >>> i18n.print() - Using Language: en_US - """ - def __init__(self, language=None): - from locale import getdefaultlocale - language = language or getdefaultlocale()[0] - if not self._language_exists(language): - language = "en_US" - - self.language_map = load_language_list(language) - self.language = language - - @staticmethod - def _language_exists(language): - from os.path import exists - return exists(f"./i18n/locale/{language}.json") - - def __call__(self, key): - """Returns the translation of the given key if it exists, else returns the key itself.""" - return self.language_map.get(key, key) - - def print(self): - """Prints the language currently in use.""" - print(f"Using Language: {self.language}") \ No newline at end of file diff --git a/spaces/A-Celsius/ADR_Predictor/app.py b/spaces/A-Celsius/ADR_Predictor/app.py deleted file mode 100644 index 9722321ceb658c219888fec17b0b5b1f31f93a1f..0000000000000000000000000000000000000000 --- a/spaces/A-Celsius/ADR_Predictor/app.py +++ /dev/null @@ -1,103 +0,0 @@ -import pickle, joblib -import gradio as gr -from datetime import datetime, timedelta, timezone - -model = joblib.load('model.pkl') - -def preprocess_city(selected_city): - # Map the selected city to its one-hot encoded representation - city_mapping = { - 'Hyderabad' : [1, 0, 0, 0, 0, 0, 0], - 'Indore': [1, 0, 0, 0, 0, 0, 0], - 'Jaipur': [0, 1, 0, 0, 0, 0, 0], - 'Mahabaleshwar': [0, 0, 1, 0, 0, 0, 0], - 'Mussoorie': [0, 0, 0, 1, 0, 0, 0], - 'Raipur': [0, 0, 0, 0, 1, 0, 0], - 'Udaipur': [0, 0, 0, 0, 0, 1, 0], - 'Varanasi': [0, 0, 0, 0, 0, 0, 1] - } - return city_mapping[selected_city] - -def preprocess_date(date_string): - # Parse the date string into a datetime object - date_obj = datetime.strptime(date_string, '%Y-%m-%d') - year = date_obj.year - month = date_obj.month - day = date_obj.day - return year, month, day - -def calculate_lead_time(checkin_date): - # Convert input date to datetime object - input_date = datetime.strptime(checkin_date, '%Y-%m-%d') - - # Get current date and time in GMT+5:30 timezone - current_date = datetime.now(timezone(timedelta(hours=5, minutes=30))) - - # Make current_date an aware datetime with the same timezone - current_date = current_date.replace(tzinfo=input_date.tzinfo) - - # Calculate lead time as difference in days - lead_time = (input_date - current_date).days - - return lead_time - -def is_weekend(checkin_date): - # Convert input date to datetime object - input_date = datetime.strptime(checkin_date, '%Y-%m-%d') - - # Calculate the day of the week (0=Monday, 6=Sunday) - day_of_week = input_date.weekday() - - # Check if the day is Friday (4) or Saturday (5) - return 1 if day_of_week == 4 or day_of_week == 5 else 0 - -def predict(selected_city, checkin_date, star_rating, text_rating, season, additional_views, room_category): - # Preprocess user input - # Here, selected_city is the name of the city selected from the dropdown - # checkin_date is the date selected using the text input - # star_rating is the selected star rating from the dropdown - # text_rating is the numeric rating from the text box - # season is the selected option from the radio button (On Season or Off Season) - season_binary = 1 if season == 'On Season' else 0 - # additional_views is the selected option from the radio button (Yes or No) - additional_views_binary = 1 if additional_views == 'Yes' else 0 - - room_categories = ["Dorm", "Standard", "Deluxe", "Executive", "Suite"] - room_category_number = 
room_categories.index(room_category) - - # Preprocess the date - year, month, day = preprocess_date(checkin_date) - - # Preprocess the selected city - city_encoded = preprocess_city(selected_city) - - # Calculate lead time - lead_time = calculate_lead_time(checkin_date) - - # Calculate if the input date is a weekend (1) or weekday (0) - is_weekend_value = is_weekend(checkin_date) - - # Combine all the input features - input_data = [star_rating, text_rating, season_binary, day, month, year, is_weekend_value, lead_time,room_category_number, additional_views_binary]+city_encoded - - # Make predictions using the model - prediction = model.predict([input_data]) - return "{:.2f}".format(prediction[0]) - -# Define input components -city_dropdown = gr.components.Dropdown(choices=['Hyderabad', 'Indore', 'Jaipur', 'Mahabaleshwar', 'Mussoorie', 'Raipur', 'Udaipur', 'Varanasi'], label='Select a City') -date_input = gr.components.Textbox(label='Check-in Date (YYYY-MM-DD)') -star_rating_dropdown = gr.components.Dropdown(choices=[1, 2, 3, 4, 5], label='Select Star Rating') -text_rating_input = gr.components.Number(label='Enter Numeric Rating (1-5)') -season_radio = gr.components.Radio(['On Season', 'Off Season'], label='Season') -room_category_dropdown = gr.components.Dropdown(choices=["Dorm", "Standard", "Deluxe", "Executive", "Suite"], label='Select Room Category') -additional_views_radio = gr.components.Radio(['Yes', 'No'], label='Additional Views') - -# Define output component -output = gr.components.Textbox(label='Predicted Output') -# Create the interface -interface = gr.Interface(fn=predict, inputs=[city_dropdown, date_input, star_rating_dropdown, text_rating_input, season_radio, additional_views_radio, room_category_dropdown], outputs=output, title='Model Prediction Interface') - -# Launch the interface -interface.launch() - diff --git a/spaces/A00001/bingothoo/cloudflare/worker.js b/spaces/A00001/bingothoo/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123821KB.py b/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123821KB.py deleted file mode 100644 index 9835dc0f0dd66a7ef3517101180ec2c54eb6011d..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_123821KB.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, 
activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/losses.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/losses.py deleted file mode 100644 index 1998161032731fc2c3edae701700679c00fd00d0..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/losses.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -import torch.nn as nn - -class ReConsLoss(nn.Module): - def __init__(self, recons_loss, nb_joints): - super(ReConsLoss, self).__init__() - - if recons_loss == 'l1': - self.Loss = torch.nn.L1Loss() - elif recons_loss == 'l2' : - self.Loss = torch.nn.MSELoss() - elif recons_loss == 'l1_smooth' : - self.Loss = torch.nn.SmoothL1Loss() - - # 4 global motion associated to root - # 12 local motion (3 local xyz, 3 vel xyz, 6 rot6d) - # 3 global vel xyz - # 4 foot contact - self.nb_joints = nb_joints - self.motion_dim = (nb_joints - 1) * 12 + 4 + 3 + 4 - - def forward(self, motion_pred, motion_gt) : - loss = self.Loss(motion_pred[..., : self.motion_dim], motion_gt[..., :self.motion_dim]) - return loss - - def forward_vel(self, motion_pred, motion_gt) : - loss = 
self.Loss(motion_pred[..., 4 : (self.nb_joints - 1) * 3 + 4], motion_gt[..., 4 : (self.nb_joints - 1) * 3 + 4]) - return loss - - \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/wavenet.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/wavenet.py deleted file mode 100644 index 7809c9b9d3331ba4fd2ffd4caae14e721e4b0732..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/wavenet.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -from torch import nn - - -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WN(torch.nn.Module): - def __init__(self, hidden_size, kernel_size, dilation_rate, n_layers, c_cond=0, - p_dropout=0, share_cond_layers=False, is_BTC=False): - super(WN, self).__init__() - assert (kernel_size % 2 == 1) - assert (hidden_size % 2 == 0) - self.is_BTC = is_BTC - self.hidden_size = hidden_size - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = c_cond - self.p_dropout = p_dropout - self.share_cond_layers = share_cond_layers - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if c_cond != 0 and not share_cond_layers: - cond_layer = torch.nn.Conv1d(c_cond, 2 * hidden_size * n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_size, 2 * hidden_size, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_size - else: - res_skip_channels = hidden_size - - res_skip_layer = torch.nn.Conv1d(hidden_size, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, nonpadding=None, cond=None): - if self.is_BTC: - x = x.transpose(1, 2) - cond = cond.transpose(1, 2) if cond is not None else None - nonpadding = nonpadding.transpose(1, 2) if nonpadding is not None else None - if nonpadding is None: - nonpadding = 1 - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_size]) - - if cond is not None and not self.share_cond_layers: - cond = self.cond_layer(cond) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - x_in = self.drop(x_in) - if cond is not None: - cond_offset = i * 2 * self.hidden_size - cond_l = cond[:, cond_offset:cond_offset + 2 * self.hidden_size, :] - else: - cond_l = torch.zeros_like(x_in) - - acts = fused_add_tanh_sigmoid_multiply(x_in, cond_l, n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - x = (x + res_skip_acts[:, :self.hidden_size, :]) * nonpadding - output = output + res_skip_acts[:, self.hidden_size:, :] - else: - output = output + res_skip_acts - output = output * nonpadding - if self.is_BTC: - output = output.transpose(1, 2) - return output - - def remove_weight_norm(self): - def remove_weight_norm(m): - try: - 
nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(remove_weight_norm) diff --git a/spaces/AILab-CVC/SEED-LLaMA/scripts/start_frontend_14b.sh b/spaces/AILab-CVC/SEED-LLaMA/scripts/start_frontend_14b.sh deleted file mode 100644 index 6b865e19756e2c72fb081b9122596a669b98df67..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/scripts/start_frontend_14b.sh +++ /dev/null @@ -1 +0,0 @@ -python3 gradio_demo/seed_llama_gradio.py --server_port 80 --request_address http://127.0.0.1:7890/generate --model_type seed-llama-14b \ No newline at end of file diff --git a/spaces/AIWaves/SOP_Generation-single/Component/PromptComponent.py b/spaces/AIWaves/SOP_Generation-single/Component/PromptComponent.py deleted file mode 100644 index 0f61d4012384f39f9071e8fc5c9b269ce5047b3f..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Component/PromptComponent.py +++ /dev/null @@ -1,126 +0,0 @@ -from abc import abstractmethod - - -class PromptComponent: - def __init__(self): - pass - - @abstractmethod - def get_prompt(self, agent): - pass - -class TaskComponent(PromptComponent): - def __init__(self, task): - super().__init__() - self.task = task - - def get_prompt(self, agent): - return f"""The task you need to execute is: {self.task}.\n""" - - -class OutputComponent(PromptComponent): - def __init__(self, output): - super().__init__() - self.output = output - - def get_prompt(self, agent): - return f"""Please contact the above to extract <{self.output}> and , \ - do not perform additional output, please output in strict accordance with the above format!\n""" - - -class SystemComponent(PromptComponent): - def __init__(self,system_prompt): - super().__init__() - self.system_prompt = system_prompt - - def get_prompt(self, agent): - return self.system_prompt - -class LastComponent(PromptComponent): - def __init__(self, last_prompt): - super().__init__() - self.last_prompt = last_prompt - - def get_prompt(self, agent): - return self.last_prompt - - -class StyleComponent(PromptComponent): - """ - 角色、风格组件 - """ - - def __init__(self, role): - super().__init__() - self.role = role - - def get_prompt(self, agent): - name = agent.name - style = agent.style - return f"""Now your role is:\n{self.role}, your name is:\n{name}. \ - You need to follow the output style:\n{style}.\n""" - - -class RuleComponent(PromptComponent): - def __init__(self, rule): - super().__init__() - self.rule = rule - - def get_prompt(self, agent): - return f"""The rule you need to follow is:\n{self.rule}.\n""" - - -class DemonstrationComponent(PromptComponent): - """ - input a list,the example of answer. - """ - - def __init__(self, demonstrations): - super().__init__() - self.demonstrations = demonstrations - - - def get_prompt(self, agent): - prompt = f"Here are demonstrations you can refer to:\n{self.demonstrations}" - return prompt - - -class CoTComponent(PromptComponent): - """ - input a list,the example of answer. 
- """ - - def __init__(self, demonstrations): - super().__init__() - self.demonstrations = demonstrations - - def add_demonstration(self, demonstration): - self.demonstrations.append(demonstration) - - def get_prompt(self, agent): - prompt = "You need to think in detail before outputting, the thinking case is as follows:\n" - for demonstration in self.demonstrations: - prompt += "\n" + demonstration - return prompt - - -class CustomizeComponent(PromptComponent): - """ - Custom template - template(str) : example: "i am {}" - keywords(list) : example : ["name"] - example : agent.environment.shared_memory["name"] = "Lilong" - the component will get the keyword attribute from the environment, and then add it to the template. - Return : "i am Lilong" - """ - def __init__(self, template, keywords) -> None: - super().__init__() - self.template = template - self.keywords = keywords - - def get_prompt(self, agent): - template_keyword = {} - for keyword in self.keywords: - current_keyword = agent.environment.shared_memory[keyword] if keyword in agent.environment.shared_memory else "" - template_keyword[keyword] = current_keyword - return self.template.format(**template_keyword) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/Komo.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/Komo.py deleted file mode 100644 index 84d8d634bc65cdbe265f28aae925456b694e329b..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/Komo.py +++ /dev/null @@ -1,44 +0,0 @@ -from __future__ import annotations - -import json - -from ...requests import StreamSession -from ...typing import AsyncGenerator -from ..base_provider import AsyncGeneratorProvider, format_prompt - -class Komo(AsyncGeneratorProvider): - url = "https://komo.ai/api/ask" - supports_gpt_35_turbo = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> AsyncGenerator: - async with StreamSession(impersonate="chrome107") as session: - prompt = format_prompt(messages) - data = { - "query": prompt, - "FLAG_URLEXTRACT": "false", - "token": "", - "FLAG_MODELA": "1", - } - headers = { - 'authority': 'komo.ai', - 'accept': 'text/event-stream', - 'cache-control': 'no-cache', - 'referer': 'https://komo.ai/', - } - - async with session.get(cls.url, params=data, headers=headers) as response: - response.raise_for_status() - next = False - async for line in response.iter_lines(): - if line == b"event: line": - next = True - elif next and line.startswith(b"data: "): - yield json.loads(line[6:]) - next = False - diff --git a/spaces/AgentVerse/agentVerse/scripts/evaluate_math.py b/spaces/AgentVerse/agentVerse/scripts/evaluate_math.py deleted file mode 100644 index 189c05a5db7ae3dce325511912dd8294ce5f2a2f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/scripts/evaluate_math.py +++ /dev/null @@ -1,93 +0,0 @@ -import re -import json -import subprocess -from importlib import reload -from argparse import ArgumentParser - -parser = ArgumentParser() -parser.add_argument("--path", type=str, required=True) -parser.add_argument("--max_line", type=int, default=1000000000000) -parser.add_argument("--ci_smoke_test", action="store_true") -args = parser.parse_args() - - -def check_corr(result: str, correct_solution: str, tol: float = 1e-3): - result = result.replace(",", "") - if result.strip() == correct_solution.strip(): - return 1 - try: - result = float(result.strip()) - correct_solution 
= float(correct_solution.strip()) - return abs(result - correct_solution) < tol - except: - return 0 - - -# final_accs = [] -# for i in range(2): -# acc = 0 -# total = 0 -# with open(args.path) as f: -# for line in f: -# line = json.loads(line) -# label = str(line["label"]) -# if i == 0: -# code = line["response"] -# else: -# code = line["logs"][0]["content"] -# total += 1 -# code = code.strip().replace("```", "") -# code = code.lstrip("python3") -# code = code.lstrip("python") -# with open("tmp.py", "w") as f: -# f.write(code) - -# try: -# import tmp - -# reload(tmp) -# result = str(tmp.solution()) -# is_corr = check_corr(result, label) - -# is_corr = int(is_corr) -# # Step 2 -# if is_corr: -# acc += 1 -# except: -# print(code) -# final_accs.append(acc / total) -# print(final_accs) - -final_accs = [] -err_cnts = [] -for i in range(2): - acc = 0 - total = 0 - err_cnt = 0 - with open(args.path) as f: - for idx, line in enumerate(f): - if idx == args.max_line: - break - line = json.loads(line) - label = str(line["label"]) - if i == 0: - response = line["response"] - else: - if line["logs"][0]["module"] == "Role Assigner": - response = line["logs"][1]["content"] - else: - response = line["logs"][0]["content"] - total += 1 - result = re.findall(r"\\boxed\{(.+?)\}", response) - if len(result) == 0: - err_cnt += 1 - print(response) - continue - result = result[0] - acc += check_corr(result, label) - final_accs.append(acc / total) - err_cnts.append(err_cnt) -print(final_accs) -print(err_cnts) -if args.ci_smoke_test is True: - assert final_accs[0] == 1.0 diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/clock/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/clock/Factory.js deleted file mode 100644 index fb5e0791b317d9b71a69e3ab82daeff8174b4f94..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/clock/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Clock from './Clock.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('clock', function (config) { - var gameObject = new Clock(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.Spinner.Clock', Clock); - -export default Clock; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetExpandedChildWidth.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetExpandedChildWidth.js deleted file mode 100644 index 37be007674b9ab605b93bdf845dde9a8f4ca0b7f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetExpandedChildWidth.js +++ /dev/null @@ -1,6 +0,0 @@ -// Override -var GetExpandedChildWidth = function (child, parentWidth) { - return parentWidth; -} - -export default GetExpandedChildWidth; \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/kandinsky_v22.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/kandinsky_v22.md deleted file mode 100644 index 3f88997ff4f53948c8fee1b5337e1c309b1e954c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/kandinsky_v22.md +++ /dev/null @@ -1,357 +0,0 @@ - 
- -# Kandinsky 2.2 - -The Kandinsky 2.2 release includes robust new text-to-image models that support text-to-image generation, image-to-image generation, image interpolation, and text-guided image inpainting. The general workflow to perform these tasks using Kandinsky 2.2 is the same as in Kandinsky 2.1. First, you will need to use a prior pipeline to generate image embeddings based on your text prompt, and then use one of the image decoding pipelines to generate the output image. The only difference is that in Kandinsky 2.2, all of the decoding pipelines no longer accept the `prompt` input, and the image generation process is conditioned with only `image_embeds` and `negative_image_embeds`. - -Same as with Kandinsky 2.1, the easiest way to perform text-to-image generation is to use the combined Kandinsky pipeline. This process is exactly the same as Kandinsky 2.1. All you need to do is to replace the Kandinsky 2.1 checkpoint with 2.2. - -```python -from diffusers import AutoPipelineForText2Image -import torch - -pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16) -pipe.enable_model_cpu_offload() - -prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" -negative_prompt = "low quality, bad quality" - -image = pipe(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale =1.0, height=768, width=768).images[0] -``` - -Now, let's look at an example where we take separate steps to run the prior pipeline and text-to-image pipeline. This way, we can understand what's happening under the hood and how Kandinsky 2.2 differs from Kandinsky 2.1. - -First, let's create the prior pipeline and text-to-image pipeline with Kandinsky 2.2 checkpoints. - -```python -from diffusers import DiffusionPipeline -import torch - -pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16) -pipe_prior.to("cuda") - -t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16) -t2i_pipe.to("cuda") -``` - -You can then use `pipe_prior` to generate image embeddings. - -```python -prompt = "portrait of a women, blue eyes, cinematic" -negative_prompt = "low quality, bad quality" - -image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple() -``` - -Now you can pass these embeddings to the text-to-image pipeline. When using Kandinsky 2.2 you don't need to pass the `prompt` (but you do with the previous version, Kandinsky 2.1). - -``` -image = t2i_pipe(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[ - 0 -] -image.save("portrait.png") -``` -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/%20blue%20eyes.png) - -We used the text-to-image pipeline as an example, but the same process applies to all decoding pipelines in Kandinsky 2.2. For more information, please refer to our API section for each pipeline. - -### Text-to-Image Generation with ControlNet Conditioning - -In the following, we give a simple example of how to use [`KandinskyV22ControlnetPipeline`] to add control to the text-to-image generation with a depth image. - -First, let's take an image and extract its depth map. 
- -```python -from diffusers.utils import load_image - -img = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" -).resize((768, 768)) -``` -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png) - -We can use the `depth-estimation` pipeline from transformers to process the image and retrieve its depth map. - -```python -import torch -import numpy as np - -from transformers import pipeline -from diffusers.utils import load_image - - -def make_hint(image, depth_estimator): - image = depth_estimator(image)["depth"] - image = np.array(image) - image = image[:, :, None] - image = np.concatenate([image, image, image], axis=2) - detected_map = torch.from_numpy(image).float() / 255.0 - hint = detected_map.permute(2, 0, 1) - return hint - - -depth_estimator = pipeline("depth-estimation") -hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") -``` -Now, we load the prior pipeline and the text-to-image controlnet pipeline - -```python -from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline - -pipe_prior = KandinskyV22PriorPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 -) -pipe_prior = pipe_prior.to("cuda") - -pipe = KandinskyV22ControlnetPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 -) -pipe = pipe.to("cuda") -``` - -We pass the prompt and negative prompt through the prior to generate image embeddings - -```python -prompt = "A robot, 4k photo" - -negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" - -generator = torch.Generator(device="cuda").manual_seed(43) -image_emb, zero_image_emb = pipe_prior( - prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator -).to_tuple() -``` - -Now we can pass the image embeddings and the depth image we extracted to the controlnet pipeline. With Kandinsky 2.2, only prior pipelines accept `prompt` input. You do not need to pass the prompt to the controlnet pipeline. - -```python -images = pipe( - image_embeds=image_emb, - negative_image_embeds=zero_image_emb, - hint=hint, - num_inference_steps=50, - generator=generator, - height=768, - width=768, -).images - -images[0].save("robot_cat.png") -``` - -The output image looks as follow: -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat_text2img.png) - -### Image-to-Image Generation with ControlNet Conditioning - -Kandinsky 2.2 also includes a [`KandinskyV22ControlnetImg2ImgPipeline`] that will allow you to add control to the image generation process with both the image and its depth map. This pipeline works really well with [`KandinskyV22PriorEmb2EmbPipeline`], which generates image embeddings based on both a text prompt and an image. - -For our robot cat example, we will pass the prompt and cat image together to the prior pipeline to generate an image embedding. 
We will then use that image embedding and the depth map of the cat to further control the image generation process. - -We can use the same cat image and its depth map from the last example. - -```python -import torch -import numpy as np - -from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline -from diffusers.utils import load_image -from transformers import pipeline - -img = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinskyv22/cat.png" -).resize((768, 768)) - - -def make_hint(image, depth_estimator): - image = depth_estimator(image)["depth"] - image = np.array(image) - image = image[:, :, None] - image = np.concatenate([image, image, image], axis=2) - detected_map = torch.from_numpy(image).float() / 255.0 - hint = detected_map.permute(2, 0, 1) - return hint - - -depth_estimator = pipeline("depth-estimation") -hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") - -pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 -) -pipe_prior = pipe_prior.to("cuda") - -pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 -) -pipe = pipe.to("cuda") - -prompt = "A robot, 4k photo" -negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" - -generator = torch.Generator(device="cuda").manual_seed(43) - -# run prior pipeline - -img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, generator=generator) -negative_emb = pipe_prior(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) - -# run controlnet img2img pipeline -images = pipe( - image=img, - strength=0.5, - image_embeds=img_emb.image_embeds, - negative_image_embeds=negative_emb.image_embeds, - hint=hint, - num_inference_steps=50, - generator=generator, - height=768, - width=768, -).images - -images[0].save("robot_cat.png") -``` - -Here is the output. Compared with the output from our text-to-image controlnet example, it kept a lot more cat facial details from the original image and worked into the robot style we asked for. - -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat.png) - -## Optimization - -Running Kandinsky in inference requires running both a first prior pipeline: [`KandinskyPriorPipeline`] -and a second image decoding pipeline which is one of [`KandinskyPipeline`], [`KandinskyImg2ImgPipeline`], or [`KandinskyInpaintPipeline`]. - -The bulk of the computation time will always be the second image decoding pipeline, so when looking -into optimizing the model, one should look into the second image decoding pipeline. - -When running with PyTorch < 2.0, we strongly recommend making use of [`xformers`](https://github.com/facebookresearch/xformers) -to speed-up the optimization. 
This can be done by simply running: - -```py -from diffusers import DiffusionPipeline -import torch - -t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) -t2i_pipe.enable_xformers_memory_efficient_attention() -``` - -When running on PyTorch >= 2.0, PyTorch's SDPA attention will automatically be used. For more information on -PyTorch's SDPA, feel free to have a look at [this blog post](https://pytorch.org/blog/accelerated-diffusers-pt-20/). - -To have explicit control, you can also manually set the pipeline to use PyTorch's 2.0 efficient attention: - -```py -from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 - -t2i_pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) -``` - -The slowest and most memory-intensive attention processor is the default `AttnAddedKVProcessor`. -We do **not** recommend using it except for testing purposes or cases where highly deterministic behaviour is desired. -You can set it with: - -```py -from diffusers.models.attention_processor import AttnAddedKVProcessor - -t2i_pipe.unet.set_attn_processor(AttnAddedKVProcessor()) -``` - -With PyTorch >= 2.0, you can also use Kandinsky with `torch.compile`, which, depending -on your hardware, can significantly speed up your inference time once the model is compiled. -To use Kandinsky with `torch.compile`, you can do: - -```py -t2i_pipe.unet.to(memory_format=torch.channels_last) -t2i_pipe.unet = torch.compile(t2i_pipe.unet, mode="reduce-overhead", fullgraph=True) -``` - -After compilation, you should see very fast inference times. For more information, -feel free to have a look at [our PyTorch 2.0 benchmark](https://huggingface.co/docs/diffusers/main/en/optimization/torch2.0). - - - -To generate images directly from a single pipeline, you can use [`KandinskyV22CombinedPipeline`], [`KandinskyV22Img2ImgCombinedPipeline`], or [`KandinskyV22InpaintCombinedPipeline`]. 
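As a quick illustration, here is a minimal text-to-image sketch with the combined pipeline. Treat it as a sketch rather than a definitive recipe: the single-checkpoint repository id (`kandinsky-community/kandinsky-2-2-decoder`) and the generation settings are assumptions and may differ depending on your `diffusers` version.

```py
import torch

from diffusers import KandinskyV22CombinedPipeline

# Assumption: this repo id resolves to a checkpoint that the combined pipeline can
# load in one call (prior + decoder components); verify against your diffusers version.
pipe = KandinskyV22CombinedPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The combined pipeline accepts the text prompt directly and runs the prior and
# decoder internally, so no separate prior call is needed.
image = pipe(
    prompt="A robot, 4k photo",
    negative_prompt="lowres, text, error, cropped, worst quality, low quality",
    num_inference_steps=50,
    height=768,
    width=768,
).images[0]

image.save("robot_combined.png")
```
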
-These combined pipelines wrap the [`KandinskyV22PriorPipeline`] and [`KandinskyV22Pipeline`], [`KandinskyV22Img2ImgPipeline`], [`KandinskyV22InpaintPipeline`] respectively into a single -pipeline for a simpler user experience - - - -## Available Pipelines: - -| Pipeline | Tasks | -|---|---| -| [pipeline_kandinsky2_2.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py) | *Text-to-Image Generation* | -| [pipeline_kandinsky2_2_combined.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py) | *End-to-end Text-to-Image, image-to-image, Inpainting Generation* | -| [pipeline_kandinsky2_2_inpaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpaint.py) | *Image-Guided Image Generation* | -| [pipeline_kandinsky2_2_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py) | *Image-Guided Image Generation* | -| [pipeline_kandinsky2_2_controlnet.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py) | *Image-Guided Image Generation* | -| [pipeline_kandinsky2_2_controlnet_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py) | *Image-Guided Image Generation* | - - -### KandinskyV22Pipeline - -[[autodoc]] KandinskyV22Pipeline - - all - - __call__ - -### KandinskyV22ControlnetPipeline - -[[autodoc]] KandinskyV22ControlnetPipeline - - all - - __call__ - -### KandinskyV22ControlnetImg2ImgPipeline - -[[autodoc]] KandinskyV22ControlnetImg2ImgPipeline - - all - - __call__ - -### KandinskyV22Img2ImgPipeline - -[[autodoc]] KandinskyV22Img2ImgPipeline - - all - - __call__ - -### KandinskyV22InpaintPipeline - -[[autodoc]] KandinskyV22InpaintPipeline - - all - - __call__ - -### KandinskyV22PriorPipeline - -[[autodoc]] KandinskyV22PriorPipeline - - all - - __call__ - - interpolate - -### KandinskyV22PriorEmb2EmbPipeline - -[[autodoc]] KandinskyV22PriorEmb2EmbPipeline - - all - - __call__ - - interpolate - -### KandinskyV22CombinedPipeline - -[[autodoc]] KandinskyV22CombinedPipeline - - all - - __call__ - -### KandinskyV22Img2ImgCombinedPipeline - -[[autodoc]] KandinskyV22Img2ImgCombinedPipeline - - all - - __call__ - -### KandinskyV22InpaintCombinedPipeline - -[[autodoc]] KandinskyV22InpaintCombinedPipeline - - all - - __call__ diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/wildcard_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/wildcard_stable_diffusion.py deleted file mode 100644 index aec79fb8e12e38c8b20af7bc47a7d634b45a7680..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/wildcard_stable_diffusion.py +++ /dev/null @@ -1,418 +0,0 @@ -import inspect -import os -import random -import re -from dataclasses import dataclass -from typing import Callable, Dict, List, Optional, Union - -import torch -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import DiffusionPipeline -from diffusers.configuration_utils import FrozenDict -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import 
StableDiffusionPipelineOutput -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from diffusers.utils import deprecate, logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -global_re_wildcard = re.compile(r"__([^_]*)__") - - -def get_filename(path: str): - # this doesn't work on Windows - return os.path.basename(path).split(".txt")[0] - - -def read_wildcard_values(path: str): - with open(path, encoding="utf8") as f: - return f.read().splitlines() - - -def grab_wildcard_values(wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = []): - for wildcard_file in wildcard_files: - filename = get_filename(wildcard_file) - read_values = read_wildcard_values(wildcard_file) - if filename not in wildcard_option_dict: - wildcard_option_dict[filename] = [] - wildcard_option_dict[filename].extend(read_values) - return wildcard_option_dict - - -def replace_prompt_with_wildcards( - prompt: str, wildcard_option_dict: Dict[str, List[str]] = {}, wildcard_files: List[str] = [] -): - new_prompt = prompt - - # get wildcard options - wildcard_option_dict = grab_wildcard_values(wildcard_option_dict, wildcard_files) - - for m in global_re_wildcard.finditer(new_prompt): - wildcard_value = m.group() - replace_value = random.choice(wildcard_option_dict[wildcard_value.strip("__")]) - new_prompt = new_prompt.replace(wildcard_value, replace_value, 1) - - return new_prompt - - -@dataclass -class WildcardStableDiffusionOutput(StableDiffusionPipelineOutput): - prompts: List[str] - - -class WildcardStableDiffusionPipeline(DiffusionPipeline): - r""" - Example Usage: - pipe = WildcardStableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - - torch_dtype=torch.float16, - ) - prompt = "__animal__ sitting on a __object__ wearing a __clothing__" - out = pipe( - prompt, - wildcard_option_dict={ - "clothing":["hat", "shirt", "scarf", "beret"] - }, - wildcard_files=["object.txt", "animal.txt"], - num_prompt_samples=1 - ) - - - Pipeline for text-to-image generation with wild cards using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. 
- Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: int = 512, - width: int = 512, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - wildcard_option_dict: Dict[str, List[str]] = {}, - wildcard_files: List[str] = [], - num_prompt_samples: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. 
- guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - wildcard_option_dict (Dict[str, List[str]]): - dict with key as `wildcard` and values as a list of possible replacements. For example if a prompt, "A __animal__ sitting on a chair". A wildcard_option_dict can provide possible values for "animal" like this: {"animal":["dog", "cat", "fox"]} - wildcard_files: (List[str]) - List of filenames of txt files for wildcard replacements. For example if a prompt, "A __animal__ sitting on a chair". A file can be provided ["animal.txt"] - num_prompt_samples: int - Number of times to sample wildcards for each prompt provided - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. 
- """ - - if isinstance(prompt, str): - prompt = [ - replace_prompt_with_wildcards(prompt, wildcard_option_dict, wildcard_files) - for i in range(num_prompt_samples) - ] - batch_size = len(prompt) - elif isinstance(prompt, list): - prompt_list = [] - for p in prompt: - for i in range(num_prompt_samples): - prompt_list.append(replace_prompt_with_wildcards(p, wildcard_option_dict, wildcard_files)) - prompt = prompt_list - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - - if text_input_ids.shape[-1] > self.tokenizer.model_max_length: - removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] - text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - - # Unlike in other pipelines, latents need to be generated in the target device - # for 1-to-1 results reproducibility with the CompVis implementation. - # However this currently doesn't work in `mps`. - latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8) - latents_dtype = text_embeddings.dtype - if latents is None: - if self.device.type == "mps": - # randn does not exist on mps - latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to( - self.device - ) - else: - latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype) - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - latents = latents.to(self.device) - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - # Some schedulers like PNDM have timesteps as arrays - # It's more optimized to move all timesteps to correct device beforehand - timesteps_tensor = self.scheduler.timesteps.to(self.device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to( - self.device - ) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype) - ) - else: - has_nsfw_concept = None - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return WildcardStableDiffusionOutput(images=image, nsfw_content_detected=has_nsfw_concept, prompts=prompt) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py deleted file mode 100644 index 89afa0060ef84b69aeb7b8361726ed51e557cbb3..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py +++ /dev/null @@ -1,429 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import Callable, List, Optional, Union - -import torch -from transformers import ( - XLMRobertaTokenizer, -) - -from ...models import UNet2DConditionModel, VQModel -from ...schedulers import DDIMScheduler, DDPMScheduler -from ...utils import ( - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from .text_encoder import MultilingualCLIP - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline - >>> import torch - - >>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") - >>> pipe_prior.to("cuda") - - >>> prompt = "red cat, 4k photo" - >>> out = pipe_prior(prompt) - >>> image_emb = out.image_embeds - >>> negative_image_emb = out.negative_image_embeds - - >>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") - >>> pipe.to("cuda") - - >>> image = pipe( - ... prompt, - ... image_embeds=image_emb, - ... negative_image_embeds=negative_image_emb, - ... height=768, - ... width=768, - ... num_inference_steps=100, - ... ).images - - >>> image[0].save("cat.png") - ``` -""" - - -def get_new_h_w(h, w, scale_factor=8): - new_h = h // scale_factor**2 - if h % scale_factor**2 != 0: - new_h += 1 - new_w = w // scale_factor**2 - if w % scale_factor**2 != 0: - new_w += 1 - return new_h * scale_factor, new_w * scale_factor - - -class KandinskyPipeline(DiffusionPipeline): - """ - Pipeline for text-to-image generation using Kandinsky - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - text_encoder ([`MultilingualCLIP`]): - Frozen text-encoder. - tokenizer ([`XLMRobertaTokenizer`]): - Tokenizer of class - scheduler (Union[`DDIMScheduler`,`DDPMScheduler`]): - A scheduler to be used in combination with `unet` to generate image latents. - unet ([`UNet2DConditionModel`]): - Conditional U-Net architecture to denoise the image embedding. - movq ([`VQModel`]): - MoVQ Decoder to generate the image from the latents. 
- """ - - def __init__( - self, - text_encoder: MultilingualCLIP, - tokenizer: XLMRobertaTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, DDPMScheduler], - movq: VQModel, - ): - super().__init__() - - self.register_modules( - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - movq=movq, - ) - self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1) - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents - def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - latents = latents * scheduler.init_noise_sigma - return latents - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - ): - batch_size = len(prompt) if isinstance(prompt, list) else 1 - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - truncation=True, - max_length=77, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_input_ids = text_input_ids.to(device) - text_mask = text_inputs.attention_mask.to(device) - - prompt_embeds, text_encoder_hidden_states = self.text_encoder( - input_ids=text_input_ids, attention_mask=text_mask - ) - - prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) - text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) - text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0) - - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=77, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - uncond_text_input_ids = uncond_input.input_ids.to(device) - uncond_text_mask = uncond_input.attention_mask.to(device) - - negative_prompt_embeds, uncond_text_encoder_hidden_states = self.text_encoder( - input_ids=uncond_text_input_ids, attention_mask=uncond_text_mask - ) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len) - - seq_len = uncond_text_encoder_hidden_states.shape[1] - uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1) - uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view( - batch_size * num_images_per_prompt, seq_len, -1 - ) - uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0) - - # done duplicates - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states]) - - text_mask = torch.cat([uncond_text_mask, text_mask]) - - return prompt_embeds, text_encoder_hidden_states, text_mask - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.movq]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # We'll offload the last model manually. 
- self.final_offload_hook = hook - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]], - image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - negative_prompt: Optional[Union[str, List[str]]] = None, - height: int = 512, - width: int = 512, - num_inference_steps: int = 100, - guidance_scale: float = 4.0, - num_images_per_prompt: int = 1, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - return_dict: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for text prompt, that will be used to condition the image generation. - negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for negative text prompt, will be used to condition the image generation. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` - (`np.array`) or `"pt"` (`torch.Tensor`). - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. 
If not specified, the callback is called at - every step. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Examples: - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple` - """ - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - device = self._execution_device - - batch_size = batch_size * num_images_per_prompt - do_classifier_free_guidance = guidance_scale > 1.0 - - prompt_embeds, text_encoder_hidden_states, _ = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - if isinstance(image_embeds, list): - image_embeds = torch.cat(image_embeds, dim=0) - if isinstance(negative_image_embeds, list): - negative_image_embeds = torch.cat(negative_image_embeds, dim=0) - - if do_classifier_free_guidance: - image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - - image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to( - dtype=prompt_embeds.dtype, device=device - ) - - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps_tensor = self.scheduler.timesteps - - num_channels_latents = self.unet.config.in_channels - - height, width = get_new_h_w(height, width, self.movq_scale_factor) - - # create initial latent - latents = self.prepare_latents( - (batch_size, num_channels_latents, height, width), - text_encoder_hidden_states.dtype, - device, - generator, - latents, - self.scheduler, - ) - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - added_cond_kwargs = {"text_embeds": prompt_embeds, "image_embeds": image_embeds} - noise_pred = self.unet( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=text_encoder_hidden_states, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - if do_classifier_free_guidance: - noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1) - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - _, variance_pred_text = variance_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1) - - if not ( - hasattr(self.scheduler.config, "variance_type") - and self.scheduler.config.variance_type in ["learned", "learned_range"] - ): - noise_pred, _ = noise_pred.split(latents.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, - t, - latents, - generator=generator, - ).prev_sample - - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # post-processing - image = self.movq.decode(latents, force_not_quantize=True)["sample"] - - if output_type not in ["pt", "np", "pil"]: - raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}") - - if output_type in ["np", "pil"]: - image = image * 0.5 + 0.5 - image = image.clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if 
not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py deleted file mode 100644 index 9ff2936cbd72433c32e1d71b541229fd83c4b2f2..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py +++ /dev/null @@ -1,290 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import random -import unittest - -import numpy as np -import torch -from PIL import Image - -from diffusers import ( - DDIMScheduler, - KandinskyV22ControlnetImg2ImgPipeline, - KandinskyV22PriorEmb2EmbPipeline, - UNet2DConditionModel, - VQModel, -) -from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - -from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference - - -enable_full_determinism() - - -class KandinskyV22ControlnetImg2ImgPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = KandinskyV22ControlnetImg2ImgPipeline - params = ["image_embeds", "negative_image_embeds", "image", "hint"] - batch_params = ["image_embeds", "negative_image_embeds", "image", "hint"] - required_optional_params = [ - "generator", - "height", - "width", - "strength", - "guidance_scale", - "num_inference_steps", - "return_dict", - "guidance_scale", - "num_images_per_prompt", - "output_type", - "return_dict", - ] - test_xformers_attention = False - - @property - def text_embedder_hidden_size(self): - return 32 - - @property - def time_input_dim(self): - return 32 - - @property - def block_out_channels_0(self): - return self.time_input_dim - - @property - def time_embed_dim(self): - return self.time_input_dim * 4 - - @property - def cross_attention_dim(self): - return 100 - - @property - def dummy_unet(self): - torch.manual_seed(0) - - model_kwargs = { - "in_channels": 8, - # Out channels is double in channels because predicts mean and variance - "out_channels": 8, - "addition_embed_type": "image_hint", - "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"), - "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"), - "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", - "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), - "layers_per_block": 1, - "encoder_hid_dim": self.text_embedder_hidden_size, - "encoder_hid_dim_type": "image_proj", - "cross_attention_dim": self.cross_attention_dim, - "attention_head_dim": 4, - "resnet_time_scale_shift": "scale_shift", - "class_embed_type": None, - } - - model = UNet2DConditionModel(**model_kwargs) - return model - - @property - def 
dummy_movq_kwargs(self): - return { - "block_out_channels": [32, 32, 64, 64], - "down_block_types": [ - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "AttnDownEncoderBlock2D", - ], - "in_channels": 3, - "latent_channels": 4, - "layers_per_block": 1, - "norm_num_groups": 8, - "norm_type": "spatial", - "num_vq_embeddings": 12, - "out_channels": 3, - "up_block_types": ["AttnUpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], - "vq_embed_dim": 4, - } - - @property - def dummy_movq(self): - torch.manual_seed(0) - model = VQModel(**self.dummy_movq_kwargs) - return model - - def get_dummy_components(self): - unet = self.dummy_unet - movq = self.dummy_movq - - ddim_config = { - "num_train_timesteps": 1000, - "beta_schedule": "linear", - "beta_start": 0.00085, - "beta_end": 0.012, - "clip_sample": False, - "set_alpha_to_one": False, - "steps_offset": 0, - "prediction_type": "epsilon", - "thresholding": False, - } - - scheduler = DDIMScheduler(**ddim_config) - - components = { - "unet": unet, - "scheduler": scheduler, - "movq": movq, - } - - return components - - def get_dummy_inputs(self, device, seed=0): - image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed)).to(device) - negative_image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed + 1)).to( - device - ) - # create init_image - image = floats_tensor((1, 3, 64, 64), rng=random.Random(seed)).to(device) - image = image.cpu().permute(0, 2, 3, 1)[0] - init_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((256, 256)) - # create hint - hint = floats_tensor((1, 3, 64, 64), rng=random.Random(seed)).to(device) - - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "image": init_image, - "image_embeds": image_embeds, - "negative_image_embeds": negative_image_embeds, - "hint": hint, - "generator": generator, - "height": 64, - "width": 64, - "num_inference_steps": 10, - "guidance_scale": 7.0, - "strength": 0.2, - "output_type": "np", - } - return inputs - - def test_kandinsky_controlnet_img2img(self): - device = "cpu" - - components = self.get_dummy_components() - - pipe = self.pipeline_class(**components) - pipe = pipe.to(device) - - pipe.set_progress_bar_config(disable=None) - - output = pipe(**self.get_dummy_inputs(device)) - image = output.images - - image_from_tuple = pipe( - **self.get_dummy_inputs(device), - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 64, 64, 3) - - expected_slice = np.array( - [0.54985034, 0.55509365, 0.52561504, 0.5570494, 0.5593818, 0.5263979, 0.50285643, 0.5069846, 0.51196736] - ) - assert ( - np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - ), f" expected_slice {expected_slice}, but got {image_slice.flatten()}" - assert ( - np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 - ), f" expected_slice {expected_slice}, but got {image_from_tuple_slice.flatten()}" - - -@slow -@require_torch_gpu -class KandinskyV22ControlnetImg2ImgPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_kandinsky_controlnet_img2img(self): - expected_image = load_numpy( - 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/kandinskyv22/kandinskyv22_controlnet_img2img_robotcat_fp16.npy" - ) - - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" - ) - init_image = init_image.resize((512, 512)) - - hint = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/kandinskyv22/hint_image_cat.png" - ) - hint = torch.from_numpy(np.array(hint)).float() / 255.0 - hint = hint.permute(2, 0, 1).unsqueeze(0) - - prompt = "A robot, 4k photo" - - pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 - ) - pipe_prior.to(torch_device) - - pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( - "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 - ) - pipeline = pipeline.to(torch_device) - - pipeline.set_progress_bar_config(disable=None) - - generator = torch.Generator(device="cpu").manual_seed(0) - - image_emb, zero_image_emb = pipe_prior( - prompt, - image=init_image, - strength=0.85, - generator=generator, - negative_prompt="", - ).to_tuple() - - output = pipeline( - image=init_image, - image_embeds=image_emb, - negative_image_embeds=zero_image_emb, - hint=hint, - generator=generator, - num_inference_steps=100, - height=512, - width=512, - strength=0.5, - output_type="np", - ) - - image = output.images[0] - - assert image.shape == (512, 512, 3) - - assert_mean_pixel_difference(image, expected_image) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r101_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r101_fpn_1x_coco.py deleted file mode 100644 index 1e6f46340d551abaa22ff2176bec22824188d6cb..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/retinanet_r101_fpn_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './retinanet_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 0b5a990604a77238375cb6d2b8298a382a457dd6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './pspnet_r50-d8_480x480_40k_pascal_context.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/README.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/README.md deleted file mode 100644 index 506810343f54658e9e42b3dd45ed593a8cb70b25..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/multimodal/README.md +++ /dev/null @@ -1,83 +0,0 @@ -# Multimodal - -## Description - -Adds support for multimodality (text+images) to text-generation-webui. 
- -https://user-images.githubusercontent.com/3718215/233817203-69b57e77-0c55-4fd6-b742-3204bb13b8fc.mp4 - -## Usage - -To run this extension, download an LLM that supports multimodality, and then start server.py with the appropriate `--multimodal-pipeline` argument. Examples: - -``` -python server.py --model wojtab_llava-7b-v0-4bit-128g --multimodal-pipeline llava-7b -python3 server.py --model wojtab_llava-13b-v0-4bit-128g --multimodal-pipeline llava-13b -python server.py --model anon8231489123_vicuna-13b-GPTQ-4bit-128g --multimodal-pipeline minigpt4-13b -python server.py --model llama-7b-4bit --multimodal-pipeline minigpt4-7b -``` - -There is built-in support for LLaVA-v0-13B and LLaVA-v0-7b. To install `minigpt4`: - -- clone https://github.com/Wojtab/minigpt-4-pipeline into `extensions/multimodal/pipelines` -- install its requirements.txt - -The same procedure should be used to install other pipelines, which can then be used with `--multimodal-pipeline [pipeline name]`. For additional multimodal pipelines, refer to the compatibility section below. - -Do note that each image takes up a considerable number of tokens, so adjust `max_new_tokens` to be at most 1700 (the recommended value is between 200 and 500) so that the images don't get truncated. - -To send an image, just upload it to the extension field below the chat, and send a prompt as always. The image will be added to the end of your message. If you wish to modify the placement, include the string `<image>` in your prompt. - -Additionally, there is an *Embed all images, not only the last one* checkbox. It modifies how the image embeddings are passed to the model: by default (unchecked), all but the most recent image have their embeddings left empty, so they are not fed to the network. It seems as if some multimodal networks consider the features in all images at the same time as if they were a single image; due to this behavior, the extension skips previous images by default. However, this can lead to sub-par generation on other pipelines. If you want to include all images, just tick this checkbox. 
- -## Compatibility -As of now, the following multimodal pipelines are supported: -|Pipeline|`--multimodal-pipeline`|Default LLM|LLM info(for the linked model)|Pipeline repository| -|-|-|-|-|-| -|[LLaVA 13B](https://github.com/haotian-liu/LLaVA)|`llava-13b`|[LLaVA 13B](https://huggingface.co/wojtab/llava-13b-v0-4bit-128g)|GPTQ 4-bit quant, old CUDA|built-in| -|[LLaVA 7B](https://github.com/haotian-liu/LLaVA)|`llava-7b`|[LLaVA 7B](https://huggingface.co/wojtab/llava-7b-v0-4bit-128g)|GPTQ 4-bit quant, old CUDA|built-in| -|[MiniGPT-4 7B](https://github.com/Vision-CAIR/MiniGPT-4)|`minigpt4-7b`|[Vicuna v0 7B](https://huggingface.co/TheBloke/vicuna-7B-GPTQ-4bit-128g)|GPTQ 4-bit quant, new format|[Wojtab/minigpt-4-pipeline](https://github.com/Wojtab/minigpt-4-pipeline)| -|[MiniGPT-4 13B](https://github.com/Vision-CAIR/MiniGPT-4)|`minigpt4-13b`|[Vicuna v0 13B](https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g)|GPTQ 4-bit quant, old CUDA|[Wojtab/minigpt-4-pipeline](https://github.com/Wojtab/minigpt-4-pipeline)| -|[InstructBLIP 7B](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip)|`instructblip-7b`|[Vicuna v1.1 7B](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g)|GPTQ 4-bit quant|[kjerk/instructblip-pipeline](https://github.com/kjerk/instructblip-pipeline)| -|[InstructBLIP 13B](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip)|`instructblip-13b`|[Vicuna v1.1 13B](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g)|GPTQ 4-bit quant|[kjerk/instructblip-pipeline](https://github.com/kjerk/instructblip-pipeline)| - -Some pipelines could support different LLMs but do note that while it might work, it isn't a supported configuration. - -DO NOT report bugs if you are using a different LLM. - -DO NOT report bugs with pipelines in this repository (unless they are built-in) - -## Extension config -This extension uses the following parameters (from `settings.json`): -|Parameter|Description| -|---------|-----------| -|`multimodal-vision_bits`|Number of bits to load vision models (CLIP/ViT) feature extractor in (most pipelines should support either 32 or 16, default=32)| -|`multimodal-vision_device`|Torch device to run the feature extractor on, for example, `cpu` or `cuda:0`, by default `cuda:0` if available| -|`multimodal-projector_bits`|Number of bits to load feature projector model(s) in (most pipelines should support either 32 or 16, default=32)| -|`multimodal-projector_device`|Torch device to run the feature projector model(s) on, for example `cpu` or `cuda:0`, by default `cuda:0` if available| -|`multimodal-add_all_images_to_prompt`|Default value of "Embed all images, not only the last one" checkbox| - -## Usage through API - -You can run the multimodal inference through API, by inputting the images to prompt. Images are embedded like so: `f''`, where `img_str` is base-64 jpeg data. Note that you will need to launch `server.py` with the arguments `--api --extensions multimodal`. - -Python example: - -```Python -import base64 -import requests - -CONTEXT = "You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.### Human: Hi!### Assistant: Hi there! 
How can I help you today?\n" - -with open('extreme_ironing.jpg', 'rb') as f: - img_str = base64.b64encode(f.read()).decode('utf-8') - prompt = CONTEXT + f'### Human: What is unusual about this image: \n### Assistant: ' - print(requests.post('http://127.0.0.1:5000/api/v1/generate', json={'prompt': prompt, 'stopping_strings': ['\n###']}).json()) -``` -script output: -```Python -{'results': [{'text': "The unusual aspect of this image is that a man is standing on top of a yellow minivan while doing his laundry. He has set up a makeshift clothes line using the car's rooftop as an outdoor drying area. This scene is uncommon because people typically do their laundry indoors, in a dedicated space like a laundromat or a room in their home, rather than on top of a moving vehicle. Additionally, hanging clothes on the car could be potentially hazardous or illegal in some jurisdictions due to the risk of damaging the vehicle or causing accidents on the road.\n##"}]} -``` - -## For pipeline developers/technical description -see [DOCS.md](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/multimodal/DOCS.md) diff --git a/spaces/AnnasBlackHat/Image-Downloader/gofile.py b/spaces/AnnasBlackHat/Image-Downloader/gofile.py deleted file mode 100644 index 52d8b3d953cb5be028dfde0a2c6b4eb422ccd08a..0000000000000000000000000000000000000000 --- a/spaces/AnnasBlackHat/Image-Downloader/gofile.py +++ /dev/null @@ -1,25 +0,0 @@ -import requests - -class Gofile: - def __init__(self, token = None, folder_id= None): - self.token = token - self.folder_id = folder_id - - def find_server(self): - resp = requests.get('https://api.gofile.io/getServer') - result = resp.json() - return result['data']['server'] - - def upload(self, files): - server = self.find_server() - url = f'https://{server}.gofile.io/uploadFile' - data_payload = {'token': self.token, 'folderId': self.folder_id} - download_link = [] - for file in files: - with open(file, 'rb') as f: - resp = requests.post(url, files = {'file': f}, data= data_payload) - print('upload status: ', resp.status_code) - download_page = resp.json()['data']['downloadPage'] - download_link.append(download_page) - print('download page: ',download_page) - return download_link \ No newline at end of file diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/openpose/util.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/openpose/util.py deleted file mode 100644 index 6f91ae0e65abaf0cbd62d803f56498991141e61b..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/openpose/util.py +++ /dev/null @@ -1,164 +0,0 @@ -import math -import numpy as np -import matplotlib -import cv2 - - -def padRightDownCorner(img, stride, padValue): - h = img.shape[0] - w = img.shape[1] - - pad = 4 * [None] - pad[0] = 0 # up - pad[1] = 0 # left - pad[2] = 0 if (h % stride == 0) else stride - (h % stride) # down - pad[3] = 0 if (w % stride == 0) else stride - (w % stride) # right - - img_padded = img - pad_up = np.tile(img_padded[0:1, :, :]*0 + padValue, (pad[0], 1, 1)) - img_padded = np.concatenate((pad_up, img_padded), axis=0) - pad_left = np.tile(img_padded[:, 0:1, :]*0 + padValue, (1, pad[1], 1)) - img_padded = np.concatenate((pad_left, img_padded), axis=1) - pad_down = np.tile(img_padded[-2:-1, :, :]*0 + padValue, (pad[2], 1, 1)) - img_padded = np.concatenate((img_padded, pad_down), axis=0) - pad_right = np.tile(img_padded[:, -2:-1, :]*0 + padValue, (1, pad[3], 1)) - img_padded = np.concatenate((img_padded, pad_right), axis=1) - - 
return img_padded, pad - -# transfer caffe model to pytorch which will match the layer name -def transfer(model, model_weights): - transfered_model_weights = {} - for weights_name in model.state_dict().keys(): - transfered_model_weights[weights_name] = model_weights['.'.join(weights_name.split('.')[1:])] - return transfered_model_weights - -# draw the body keypoint and lims -def draw_bodypose(canvas, candidate, subset): - stickwidth = 4 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - - colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \ - [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \ - [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]] - for i in range(18): - for n in range(len(subset)): - index = int(subset[n][i]) - if index == -1: - continue - x, y = candidate[index][0:2] - cv2.circle(canvas, (int(x), int(y)), 4, colors[i], thickness=-1) - for i in range(17): - for n in range(len(subset)): - index = subset[n][np.array(limbSeq[i]) - 1] - if -1 in index: - continue - cur_canvas = canvas.copy() - Y = candidate[index.astype(int), 0] - X = candidate[index.astype(int), 1] - mX = np.mean(X) - mY = np.mean(Y) - length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5 - angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1])) - polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1) - cv2.fillConvexPoly(cur_canvas, polygon, colors[i]) - canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0) - # plt.imsave("preview.jpg", canvas[:, :, [2, 1, 0]]) - # plt.imshow(canvas[:, :, [2, 1, 0]]) - return canvas - - -# image drawed by opencv is not good. 
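# draw_handpose renders the 21 hand keypoints and the 20 bones connecting them:
# bones are colored by stepping the hue around the HSV wheel, keypoints are drawn
# as small filled circles, and show_number optionally overlays each point's index.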
-def draw_handpose(canvas, all_hand_peaks, show_number=False): - edges = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 5], [5, 6], [6, 7], [7, 8], [0, 9], [9, 10], \ - [10, 11], [11, 12], [0, 13], [13, 14], [14, 15], [15, 16], [0, 17], [17, 18], [18, 19], [19, 20]] - - for peaks in all_hand_peaks: - for ie, e in enumerate(edges): - if np.sum(np.all(peaks[e], axis=1)==0)==0: - x1, y1 = peaks[e[0]] - x2, y2 = peaks[e[1]] - cv2.line(canvas, (x1, y1), (x2, y2), matplotlib.colors.hsv_to_rgb([ie/float(len(edges)), 1.0, 1.0])*255, thickness=2) - - for i, keyponit in enumerate(peaks): - x, y = keyponit - cv2.circle(canvas, (x, y), 4, (0, 0, 255), thickness=-1) - if show_number: - cv2.putText(canvas, str(i), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 0), lineType=cv2.LINE_AA) - return canvas - -# detect hand according to body pose keypoints -# please refer to https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/src/openpose/hand/handDetector.cpp -def handDetect(candidate, subset, oriImg): - # right hand: wrist 4, elbow 3, shoulder 2 - # left hand: wrist 7, elbow 6, shoulder 5 - ratioWristElbow = 0.33 - detect_result = [] - image_height, image_width = oriImg.shape[0:2] - for person in subset.astype(int): - # if any of three not detected - has_left = np.sum(person[[5, 6, 7]] == -1) == 0 - has_right = np.sum(person[[2, 3, 4]] == -1) == 0 - if not (has_left or has_right): - continue - hands = [] - #left hand - if has_left: - left_shoulder_index, left_elbow_index, left_wrist_index = person[[5, 6, 7]] - x1, y1 = candidate[left_shoulder_index][:2] - x2, y2 = candidate[left_elbow_index][:2] - x3, y3 = candidate[left_wrist_index][:2] - hands.append([x1, y1, x2, y2, x3, y3, True]) - # right hand - if has_right: - right_shoulder_index, right_elbow_index, right_wrist_index = person[[2, 3, 4]] - x1, y1 = candidate[right_shoulder_index][:2] - x2, y2 = candidate[right_elbow_index][:2] - x3, y3 = candidate[right_wrist_index][:2] - hands.append([x1, y1, x2, y2, x3, y3, False]) - - for x1, y1, x2, y2, x3, y3, is_left in hands: - # pos_hand = pos_wrist + ratio * (pos_wrist - pos_elbox) = (1 + ratio) * pos_wrist - ratio * pos_elbox - # handRectangle.x = posePtr[wrist*3] + ratioWristElbow * (posePtr[wrist*3] - posePtr[elbow*3]); - # handRectangle.y = posePtr[wrist*3+1] + ratioWristElbow * (posePtr[wrist*3+1] - posePtr[elbow*3+1]); - # const auto distanceWristElbow = getDistance(poseKeypoints, person, wrist, elbow); - # const auto distanceElbowShoulder = getDistance(poseKeypoints, person, elbow, shoulder); - # handRectangle.width = 1.5f * fastMax(distanceWristElbow, 0.9f * distanceElbowShoulder); - x = x3 + ratioWristElbow * (x3 - x2) - y = y3 + ratioWristElbow * (y3 - y2) - distanceWristElbow = math.sqrt((x3 - x2) ** 2 + (y3 - y2) ** 2) - distanceElbowShoulder = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2) - width = 1.5 * max(distanceWristElbow, 0.9 * distanceElbowShoulder) - # x-y refers to the center --> offset to topLeft point - # handRectangle.x -= handRectangle.width / 2.f; - # handRectangle.y -= handRectangle.height / 2.f; - x -= width / 2 - y -= width / 2 # width = height - # overflow the image - if x < 0: x = 0 - if y < 0: y = 0 - width1 = width - width2 = width - if x + width > image_width: width1 = image_width - x - if y + width > image_height: width2 = image_height - y - width = min(width1, width2) - # the max hand box value is 20 pixels - if width >= 20: - detect_result.append([int(x), int(y), int(width), is_left]) - - ''' - return value: [[x, y, w, True if left hand else False]]. 
- width=height since the network require squared input. - x, y is the coordinate of top left - ''' - return detect_result - -# get max index of 2d array -def npmax(array): - arrayindex = array.argmax(1) - arrayvalue = array.max(1) - i = arrayvalue.argmax() - j = arrayindex[i] - return i, j diff --git a/spaces/Apex-X/Tm/roop/processors/frame/core.py b/spaces/Apex-X/Tm/roop/processors/frame/core.py deleted file mode 100644 index c225f9de483a2914a98392ce9de5bd03f2013a2d..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/Tm/roop/processors/frame/core.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import importlib -import psutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from queue import Queue -from types import ModuleType -from typing import Any, List, Callable -from tqdm import tqdm - -import roop - -FRAME_PROCESSORS_MODULES: List[ModuleType] = [] -FRAME_PROCESSORS_INTERFACE = [ - 'pre_check', - 'pre_start', - 'process_frame', - 'process_frames', - 'process_image', - 'process_video', - 'post_process' -] - - -def load_frame_processor_module(frame_processor: str) -> Any: - try: - frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}') - for method_name in FRAME_PROCESSORS_INTERFACE: - if not hasattr(frame_processor_module, method_name): - raise NotImplementedError - except (ImportError, NotImplementedError): - quit(f'Frame processor {frame_processor} crashed.') - return frame_processor_module - - -def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]: - global FRAME_PROCESSORS_MODULES - - if not FRAME_PROCESSORS_MODULES: - for frame_processor in frame_processors: - frame_processor_module = load_frame_processor_module(frame_processor) - FRAME_PROCESSORS_MODULES.append(frame_processor_module) - return FRAME_PROCESSORS_MODULES - - -def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None: - with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor: - futures = [] - queue = create_queue(temp_frame_paths) - queue_per_future = len(temp_frame_paths) // roop.globals.execution_threads - while not queue.empty(): - future = executor.submit(process_frames, source_path, pick_queue(queue, queue_per_future), update) - futures.append(future) - for future in as_completed(futures): - future.result() - - -def create_queue(temp_frame_paths: List[str]) -> Queue[str]: - queue: Queue[str] = Queue() - for frame_path in temp_frame_paths: - queue.put(frame_path) - return queue - - -def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]: - queues = [] - for _ in range(queue_per_future): - if not queue.empty(): - queues.append(queue.get()) - return queues - - -def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None: - progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]' - total = len(frame_paths) - with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress: - multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress)) - - -def update_progress(progress: Any = None) -> None: - process = psutil.Process(os.getpid()) - memory_usage = process.memory_info().rss / 1024 / 1024 / 1024 - progress.set_postfix({ - 'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB', - 
'execution_providers': roop.globals.execution_providers, - 'execution_threads': roop.globals.execution_threads - }) - progress.refresh() - progress.update(1) diff --git a/spaces/Apex-X/nono/roop/face_reference.py b/spaces/Apex-X/nono/roop/face_reference.py deleted file mode 100644 index 3c3e1f1c6e13c73ceafd40c0912c066a3a86a528..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/nono/roop/face_reference.py +++ /dev/null @@ -1,21 +0,0 @@ -from typing import Optional - -from roop.typing import Face - -FACE_REFERENCE = None - - -def get_face_reference() -> Optional[Face]: - return FACE_REFERENCE - - -def set_face_reference(face: Face) -> None: - global FACE_REFERENCE - - FACE_REFERENCE = face - - -def clear_face_reference() -> None: - global FACE_REFERENCE - - FACE_REFERENCE = None diff --git a/spaces/Arnaudding001/OpenAI_whisperLive/utils.py b/spaces/Arnaudding001/OpenAI_whisperLive/utils.py deleted file mode 100644 index b85a7f3ff5c2e3e94823f4e1bf181e54edb1ddf9..0000000000000000000000000000000000000000 --- a/spaces/Arnaudding001/OpenAI_whisperLive/utils.py +++ /dev/null @@ -1,115 +0,0 @@ -import textwrap -import unicodedata -import re - -import zlib -from typing import Iterator, TextIO - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = process_text(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. 
- Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = process_text(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def process_text(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. - """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') \ No newline at end of file diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/modules/lstm.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. 
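    Input and output use the convolutional (batch, channels, time) layout; the
    forward pass permutes to (time, batch, channels) for the LSTM and back.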
- """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/Artples/Chat-with-Llama-2-70b/README.md b/spaces/Artples/Chat-with-Llama-2-70b/README.md deleted file mode 100644 index f84ebc22af15b5b66b94d47d05ec03186ec9a0f2..0000000000000000000000000000000000000000 --- a/spaces/Artples/Chat-with-Llama-2-70b/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lauche-AI LEU-Chatbot -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py deleted file mode 100644 index d96354d97c2195320d0acc1717a5876eafbea2af..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/utf8prober.py +++ /dev/null @@ -1,82 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Union - -from .charsetprober import CharSetProber -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .mbcssm import UTF8_SM_MODEL - - -class UTF8Prober(CharSetProber): - ONE_CHAR_PROB = 0.5 - - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(UTF8_SM_MODEL) - self._num_mb_chars = 0 - self.reset() - - def reset(self) -> None: - super().reset() - self.coding_sm.reset() - self._num_mb_chars = 0 - - @property - def charset_name(self) -> str: - return "utf-8" - - @property - def language(self) -> str: - return "" - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - for c in byte_str: - coding_state = self.coding_sm.next_state(c) - if coding_state == MachineState.ERROR: - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - if self.coding_sm.get_current_charlen() >= 2: - self._num_mb_chars += 1 - - if self.state == ProbingState.DETECTING: - if self.get_confidence() > self.SHORTCUT_THRESHOLD: - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self) -> float: - unlike = 0.99 - if self._num_mb_chars < 6: - unlike *= self.ONE_CHAR_PROB**self._num_mb_chars - return 1.0 - unlike - return unlike diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/diagnose.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/diagnose.py deleted file mode 100644 index ad36183898eddb11e33ccb7623c0291ccc0f091d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/diagnose.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -import platform - -from pip._vendor.rich import inspect -from pip._vendor.rich.console import Console, get_windows_console_features -from pip._vendor.rich.panel import Panel -from pip._vendor.rich.pretty import Pretty - - -def report() -> None: # pragma: no cover - """Print a report to the terminal with debugging information""" - console = Console() - inspect(console) - features = get_windows_console_features() - inspect(features) - - env_names = ( - "TERM", - "COLORTERM", - "CLICOLOR", - "NO_COLOR", - "TERM_PROGRAM", - "COLUMNS", - "LINES", - "JUPYTER_COLUMNS", - "JUPYTER_LINES", - "JPY_PARENT_PID", - "VSCODE_VERBOSE_LOGGING", - ) - env = {name: os.getenv(name) for name in env_names} - console.print(Panel.fit((Pretty(env)), title="[b]Environment Variables")) - - console.print(f'platform="{platform.system()}"') - - -if __name__ == "__main__": # pragma: no cover - report() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/palette.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/palette.py deleted file mode 100644 index fa0c4dd40381addf5b42fae4228b6d8fef03abd9..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/palette.py +++ /dev/null @@ -1,100 +0,0 @@ -from math import sqrt -from 
functools import lru_cache -from typing import Sequence, Tuple, TYPE_CHECKING - -from .color_triplet import ColorTriplet - -if TYPE_CHECKING: - from pip._vendor.rich.table import Table - - -class Palette: - """A palette of available colors.""" - - def __init__(self, colors: Sequence[Tuple[int, int, int]]): - self._colors = colors - - def __getitem__(self, number: int) -> ColorTriplet: - return ColorTriplet(*self._colors[number]) - - def __rich__(self) -> "Table": - from pip._vendor.rich.color import Color - from pip._vendor.rich.style import Style - from pip._vendor.rich.text import Text - from pip._vendor.rich.table import Table - - table = Table( - "index", - "RGB", - "Color", - title="Palette", - caption=f"{len(self._colors)} colors", - highlight=True, - caption_justify="right", - ) - for index, color in enumerate(self._colors): - table.add_row( - str(index), - repr(color), - Text(" " * 16, style=Style(bgcolor=Color.from_rgb(*color))), - ) - return table - - # This is somewhat inefficient and needs caching - @lru_cache(maxsize=1024) - def match(self, color: Tuple[int, int, int]) -> int: - """Find a color from a palette that most closely matches a given color. - - Args: - color (Tuple[int, int, int]): RGB components in range 0 > 255. - - Returns: - int: Index of closes matching color. - """ - red1, green1, blue1 = color - _sqrt = sqrt - get_color = self._colors.__getitem__ - - def get_color_distance(index: int) -> float: - """Get the distance to a color.""" - red2, green2, blue2 = get_color(index) - red_mean = (red1 + red2) // 2 - red = red1 - red2 - green = green1 - green2 - blue = blue1 - blue2 - return _sqrt( - (((512 + red_mean) * red * red) >> 8) - + 4 * green * green - + (((767 - red_mean) * blue * blue) >> 8) - ) - - min_index = min(range(len(self._colors)), key=get_color_distance) - return min_index - - -if __name__ == "__main__": # pragma: no cover - import colorsys - from typing import Iterable - from pip._vendor.rich.color import Color - from pip._vendor.rich.console import Console, ConsoleOptions - from pip._vendor.rich.segment import Segment - from pip._vendor.rich.style import Style - - class ColorBox: - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> Iterable[Segment]: - height = console.size.height - 3 - for y in range(0, height): - for x in range(options.max_width): - h = x / options.max_width - l = y / (height + 1) - r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0) - r2, g2, b2 = colorsys.hls_to_rgb(h, l + (1 / height / 2), 1.0) - bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255) - color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255) - yield Segment("▄", Style(color=color, bgcolor=bgcolor)) - yield Segment.line() - - console = Console() - console.print(ColorBox()) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_macos_compat.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_macos_compat.py deleted file mode 100644 index 17769e9154bd9cc3f3c00dc10718e4377828cb5e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/_macos_compat.py +++ /dev/null @@ -1,12 +0,0 @@ -import sys -import importlib - - -def bypass_compiler_fixup(cmd, args): - return cmd - - -if sys.platform == 'darwin': - compiler_fixup = importlib.import_module('_osx_support').compiler_fixup -else: - compiler_fixup = bypass_compiler_fixup diff --git 
a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_py.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_py.py deleted file mode 100644 index ec0627429ccbb88f3a17325726441ebcb28fb597..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build_py.py +++ /dev/null @@ -1,368 +0,0 @@ -from functools import partial -from glob import glob -from distutils.util import convert_path -import distutils.command.build_py as orig -import os -import fnmatch -import textwrap -import io -import distutils.errors -import itertools -import stat -import warnings -from pathlib import Path -from typing import Dict, Iterable, Iterator, List, Optional, Tuple - -from setuptools._deprecation_warning import SetuptoolsDeprecationWarning -from setuptools.extern.more_itertools import unique_everseen - - -def make_writable(target): - os.chmod(target, os.stat(target).st_mode | stat.S_IWRITE) - - -class build_py(orig.build_py): - """Enhanced 'build_py' command that includes data files with packages - - The data files are specified via a 'package_data' argument to 'setup()'. - See 'setuptools.dist.Distribution' for more details. - - Also, this version of the 'build_py' command allows you to specify both - 'py_modules' and 'packages' in the same setup operation. - """ - editable_mode: bool = False - existing_egg_info_dir: Optional[str] = None #: Private API, internal use only. - - def finalize_options(self): - orig.build_py.finalize_options(self) - self.package_data = self.distribution.package_data - self.exclude_package_data = self.distribution.exclude_package_data or {} - if 'data_files' in self.__dict__: - del self.__dict__['data_files'] - self.__updated_files = [] - - def copy_file(self, infile, outfile, preserve_mode=1, preserve_times=1, - link=None, level=1): - # Overwrite base class to allow using links - if link: - infile = str(Path(infile).resolve()) - outfile = str(Path(outfile).resolve()) - return super().copy_file(infile, outfile, preserve_mode, preserve_times, - link, level) - - def run(self): - """Build modules, packages, and copy data files to build directory""" - if not (self.py_modules or self.packages) or self.editable_mode: - return - - if self.py_modules: - self.build_modules() - - if self.packages: - self.build_packages() - self.build_package_data() - - # Only compile actual .py files, using our base class' idea of what our - # output files are. - self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0)) - - def __getattr__(self, attr): - "lazily compute data files" - if attr == 'data_files': - self.data_files = self._get_data_files() - return self.data_files - return orig.build_py.__getattr__(self, attr) - - def build_module(self, module, module_file, package): - outfile, copied = orig.build_py.build_module(self, module, module_file, package) - if copied: - self.__updated_files.append(outfile) - return outfile, copied - - def _get_data_files(self): - """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" - self.analyze_manifest() - return list(map(self._get_pkg_data_files, self.packages or ())) - - def get_data_files_without_manifest(self): - """ - Generate list of ``(package,src_dir,build_dir,filenames)`` tuples, - but without triggering any attempt to analyze or build the manifest. 
- """ - # Prevent eventual errors from unset `manifest_files` - # (that would otherwise be set by `analyze_manifest`) - self.__dict__.setdefault('manifest_files', {}) - return list(map(self._get_pkg_data_files, self.packages or ())) - - def _get_pkg_data_files(self, package): - # Locate package source directory - src_dir = self.get_package_dir(package) - - # Compute package build directory - build_dir = os.path.join(*([self.build_lib] + package.split('.'))) - - # Strip directory from globbed filenames - filenames = [ - os.path.relpath(file, src_dir) - for file in self.find_data_files(package, src_dir) - ] - return package, src_dir, build_dir, filenames - - def find_data_files(self, package, src_dir): - """Return filenames for package's data files in 'src_dir'""" - patterns = self._get_platform_patterns( - self.package_data, - package, - src_dir, - ) - globs_expanded = map(partial(glob, recursive=True), patterns) - # flatten the expanded globs into an iterable of matches - globs_matches = itertools.chain.from_iterable(globs_expanded) - glob_files = filter(os.path.isfile, globs_matches) - files = itertools.chain( - self.manifest_files.get(package, []), - glob_files, - ) - return self.exclude_data_files(package, src_dir, files) - - def get_outputs(self, include_bytecode=1) -> List[str]: - """See :class:`setuptools.commands.build.SubCommand`""" - if self.editable_mode: - return list(self.get_output_mapping().keys()) - return super().get_outputs(include_bytecode) - - def get_output_mapping(self) -> Dict[str, str]: - """See :class:`setuptools.commands.build.SubCommand`""" - mapping = itertools.chain( - self._get_package_data_output_mapping(), - self._get_module_mapping(), - ) - return dict(sorted(mapping, key=lambda x: x[0])) - - def _get_module_mapping(self) -> Iterator[Tuple[str, str]]: - """Iterate over all modules producing (dest, src) pairs.""" - for (package, module, module_file) in self.find_all_modules(): - package = package.split('.') - filename = self.get_module_outfile(self.build_lib, package, module) - yield (filename, module_file) - - def _get_package_data_output_mapping(self) -> Iterator[Tuple[str, str]]: - """Iterate over package data producing (dest, src) pairs.""" - for package, src_dir, build_dir, filenames in self.data_files: - for filename in filenames: - target = os.path.join(build_dir, filename) - srcfile = os.path.join(src_dir, filename) - yield (target, srcfile) - - def build_package_data(self): - """Copy data files into build directory""" - for target, srcfile in self._get_package_data_output_mapping(): - self.mkpath(os.path.dirname(target)) - _outf, _copied = self.copy_file(srcfile, target) - make_writable(target) - - def analyze_manifest(self): - self.manifest_files = mf = {} - if not self.distribution.include_package_data: - return - src_dirs = {} - for package in self.packages or (): - # Locate package source directory - src_dirs[assert_relative(self.get_package_dir(package))] = package - - if ( - getattr(self, 'existing_egg_info_dir', None) - and Path(self.existing_egg_info_dir, "SOURCES.txt").exists() - ): - egg_info_dir = self.existing_egg_info_dir - manifest = Path(egg_info_dir, "SOURCES.txt") - files = manifest.read_text(encoding="utf-8").splitlines() - else: - self.run_command('egg_info') - ei_cmd = self.get_finalized_command('egg_info') - egg_info_dir = ei_cmd.egg_info - files = ei_cmd.filelist.files - - check = _IncludePackageDataAbuse() - for path in self._filter_build_files(files, egg_info_dir): - d, f = os.path.split(assert_relative(path)) - prev = None - 
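                # walk the file's directory upwards, re-joining the stripped
                # components onto f, until we reach a known package source dir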
oldf = f - while d and d != prev and d not in src_dirs: - prev = d - d, df = os.path.split(d) - f = os.path.join(df, f) - if d in src_dirs: - if f == oldf: - if check.is_module(f): - continue # it's a module, not data - else: - importable = check.importable_subpackage(src_dirs[d], f) - if importable: - check.warn(importable) - mf.setdefault(src_dirs[d], []).append(path) - - def _filter_build_files(self, files: Iterable[str], egg_info: str) -> Iterator[str]: - """ - ``build_meta`` may try to create egg_info outside of the project directory, - and this can be problematic for certain plugins (reported in issue #3500). - - Extensions might also include between their sources files created on the - ``build_lib`` and ``build_temp`` directories. - - This function should filter this case of invalid files out. - """ - build = self.get_finalized_command("build") - build_dirs = (egg_info, self.build_lib, build.build_temp, build.build_base) - norm_dirs = [os.path.normpath(p) for p in build_dirs if p] - - for file in files: - norm_path = os.path.normpath(file) - if not os.path.isabs(file) or all(d not in norm_path for d in norm_dirs): - yield file - - def get_data_files(self): - pass # Lazily compute data files in _get_data_files() function. - - def check_package(self, package, package_dir): - """Check namespace packages' __init__ for declare_namespace""" - try: - return self.packages_checked[package] - except KeyError: - pass - - init_py = orig.build_py.check_package(self, package, package_dir) - self.packages_checked[package] = init_py - - if not init_py or not self.distribution.namespace_packages: - return init_py - - for pkg in self.distribution.namespace_packages: - if pkg == package or pkg.startswith(package + '.'): - break - else: - return init_py - - with io.open(init_py, 'rb') as f: - contents = f.read() - if b'declare_namespace' not in contents: - raise distutils.errors.DistutilsError( - "Namespace package problem: %s is a namespace package, but " - "its\n__init__.py does not call declare_namespace()! Please " - 'fix it.\n(See the setuptools manual under ' - '"Namespace Packages" for details.)\n"' % (package,) - ) - return init_py - - def initialize_options(self): - self.packages_checked = {} - orig.build_py.initialize_options(self) - self.editable_mode = False - self.existing_egg_info_dir = None - - def get_package_dir(self, package): - res = orig.build_py.get_package_dir(self, package) - if self.distribution.src_root is not None: - return os.path.join(self.distribution.src_root, res) - return res - - def exclude_data_files(self, package, src_dir, files): - """Filter filenames for package's data files in 'src_dir'""" - files = list(files) - patterns = self._get_platform_patterns( - self.exclude_package_data, - package, - src_dir, - ) - match_groups = (fnmatch.filter(files, pattern) for pattern in patterns) - # flatten the groups of matches into an iterable of matches - matches = itertools.chain.from_iterable(match_groups) - bad = set(matches) - keepers = (fn for fn in files if fn not in bad) - # ditch dupes - return list(unique_everseen(keepers)) - - @staticmethod - def _get_platform_patterns(spec, package, src_dir): - """ - yield platform-specific path patterns (suitable for glob - or fn_match) from a glob-based spec (such as - self.package_data or self.exclude_package_data) - matching package in src_dir. 
- """ - raw_patterns = itertools.chain( - spec.get('', []), - spec.get(package, []), - ) - return ( - # Each pattern has to be converted to a platform-specific path - os.path.join(src_dir, convert_path(pattern)) - for pattern in raw_patterns - ) - - -def assert_relative(path): - if not os.path.isabs(path): - return path - from distutils.errors import DistutilsSetupError - - msg = ( - textwrap.dedent( - """ - Error: setup script specifies an absolute path: - - %s - - setup() arguments must *always* be /-separated paths relative to the - setup.py directory, *never* absolute paths. - """ - ).lstrip() - % path - ) - raise DistutilsSetupError(msg) - - -class _IncludePackageDataAbuse: - """Inform users that package or module is included as 'data file'""" - - MESSAGE = """\ - Installing {importable!r} as data is deprecated, please list it in `packages`. - !!\n\n - ############################ - # Package would be ignored # - ############################ - Python recognizes {importable!r} as an importable package, - but it is not listed in the `packages` configuration of setuptools. - - {importable!r} has been automatically added to the distribution only - because it may contain data files, but this behavior is likely to change - in future versions of setuptools (and therefore is considered deprecated). - - Please make sure that {importable!r} is included as a package by using - the `packages` configuration field or the proper discovery methods - (for example by using `find_namespace_packages(...)`/`find_namespace:` - instead of `find_packages(...)`/`find:`). - - You can read more about "package discovery" and "data files" on setuptools - documentation page. - \n\n!! - """ - - def __init__(self): - self._already_warned = set() - - def is_module(self, file): - return file.endswith(".py") and file[:-len(".py")].isidentifier() - - def importable_subpackage(self, parent, file): - pkg = Path(file).parent - parts = list(itertools.takewhile(str.isidentifier, pkg.parts)) - if parts: - return ".".join([parent, *parts]) - return None - - def warn(self, importable): - if importable not in self._already_warned: - msg = textwrap.dedent(self.MESSAGE).format(importable=importable) - warnings.warn(msg, SetuptoolsDeprecationWarning, stacklevel=2) - self._already_warned.add(importable) diff --git a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/utils.py b/spaces/BIASLab/sars-cov-2-classification-fcgr/src/utils.py deleted file mode 100644 index a1f1344e04f97f968a41f02f786203fb145813c8..0000000000000000000000000000000000000000 --- a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/utils.py +++ /dev/null @@ -1,41 +0,0 @@ -import re -from PIL import Image -import numpy as np - - -def clean_seq(seq): - "Remove all characters different from A,C,G,T or N" - seq = seq.upper() - for letter in "BDEFHIJKLMOPQRSUVWXYZ": - seq = seq.replace(letter,"N") - return seq - -def array2img(array): - "FCGR array to grayscale image" - max_color = 255 - m, M = array.min(), array.max() - # rescale to [0,1] - img_rescaled = (array - m) / (M-m) - - # invert colors black->white - img_array = np.ceil(max_color - img_rescaled*max_color) - img_array = np.array(img_array, dtype=np.int8) - - # convert to Image - img_pil = Image.fromarray(img_array,'L') - return img_pil - -def count_seqs(fasta): - "Count number of '>' in a fasta file to use with a progress bar" - pattern = ">" - count = 0 - for line in fasta: - if re.search(pattern, line): - count +=1 - return count - -def generate_fcgr(kmer, fasta, fcgr): - "Generate Image FCGR" - array = 
fcgr(clean_seq(str(fasta.seq))) - img = array2img(array) - return img \ No newline at end of file diff --git a/spaces/Banbri/zcvzcv/src/types.ts b/spaces/Banbri/zcvzcv/src/types.ts deleted file mode 100644 index a01f6476cd020ee8bdfc3e3cd7f879fcdf6dc7d8..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/types.ts +++ /dev/null @@ -1,130 +0,0 @@ -export type ProjectionMode = 'cartesian' | 'spherical' - -export type CacheMode = "use" | "renew" | "ignore" - -export interface RenderRequest { - prompt: string - - // whether to use video segmentation - // disabled (default) - // firstframe: we only analyze the first frame - // allframes: we analyze all the frames - segmentation: 'disabled' | 'firstframe' | 'allframes' - - // segmentation will only be executed if we have a non-empty list of actionnables - // actionnables are names of things like "chest", "key", "tree", "chair" etc - actionnables: string[] - - // note: this is the number of frames for Zeroscope, - // which is currently configured to only output 3 seconds, so: - // nbFrames=8 -> 1 sec - // nbFrames=16 -> 2 sec - // nbFrames=24 -> 3 sec - nbFrames: number // min: 1, max: 24 - - nbSteps: number // min: 1, max: 50 - - seed: number - - width: number // fixed at 1024 for now - height: number // fixed at 512 for now - - // upscaling factor - // 0: no upscaling - // 1: no upscaling - // 2: 2x larger - // 3: 3x larger - // 4x: 4x larger, up to 4096x4096 (warning: a PNG of this size can be 50 Mb!) - upscalingFactor: number - - projection: ProjectionMode - - cache: CacheMode - - wait: boolean // wait until the job is completed - - analyze: boolean // analyze the image to generate a caption (optional) -} - -export interface ImageSegment { - id: number - box: number[] - color: number[] - label: string - score: number -} - -export type RenderedSceneStatus = - | "pending" - | "completed" - | "error" - -export interface RenderedScene { - renderId: string - status: RenderedSceneStatus - assetUrl: string - alt: string - error: string - maskUrl: string - segments: ImageSegment[] -} - -export interface ImageAnalysisRequest { - image: string // in base64 - prompt: string -} - -export interface ImageAnalysisResponse { - result: string - error?: string -} - -export type LLMResponse = Array<{panel: number; instructions: string; caption: string }> - -export type LLMEngine = - | "INFERENCE_API" - | "INFERENCE_ENDPOINT" - | "OPENAI" - | "REPLICATE" - -export type RenderingEngine = - | "VIDEOCHAIN" - | "OPENAI" - | "REPLICATE" - | "INFERENCE_API" - | "INFERENCE_ENDPOINT" - -export type PostVisibility = - | "featured" // featured by admins - | "trending" // top trending / received more than 10 upvotes - | "normal" // default visibility - -export type Post = { - postId: string - appId: string - prompt: string - previewUrl: string - assetUrl: string - createdAt: string - visibility: PostVisibility - upvotes: number - downvotes: number -} - -export type CreatePostResponse = { - success?: boolean - error?: string - post: Post -} - -export type GetAppPostsResponse = { - success?: boolean - error?: string - posts: Post[] -} - -export type GetAppPostResponse = { - success?: boolean - error?: string - post: Post -} \ No newline at end of file diff --git "a/spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/03_\360\237\223\235_Upload_Video_File_and_Transcript.py" "b/spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/03_\360\237\223\235_Upload_Video_File_and_Transcript.py" deleted file mode 100644 index 
99c3218b381db769d051b30878d0e30c789b3047..0000000000000000000000000000000000000000 --- "a/spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/03_\360\237\223\235_Upload_Video_File_and_Transcript.py" +++ /dev/null @@ -1,130 +0,0 @@ -import streamlit as st -from streamlit_lottie import st_lottie -from utils import write_vtt, write_srt -import ffmpeg -import requests -from typing import Iterator -from io import StringIO -import numpy as np -import pathlib -import os - - -st.set_page_config(page_title="Auto Subtitled Video Generator", page_icon=":movie_camera:", layout="wide") - -# Define a function that we can use to load lottie files from a link. -@st.cache(allow_output_mutation=True) -def load_lottieurl(url: str): - r = requests.get(url) - if r.status_code != 200: - return None - return r.json() - - -APP_DIR = pathlib.Path(__file__).parent.absolute() - -LOCAL_DIR = APP_DIR / "local_transcript" -LOCAL_DIR.mkdir(exist_ok=True) -save_dir = LOCAL_DIR / "output" -save_dir.mkdir(exist_ok=True) - - -col1, col2 = st.columns([1, 3]) -with col1: - lottie = load_lottieurl("https://assets6.lottiefiles.com/packages/lf20_cjnxwrkt.json") - st_lottie(lottie) - -with col2: - st.write(""" - ## Auto Subtitled Video Generator - ##### ➠ Upload a video file and a transcript as .srt or .vtt file and get a video with subtitles. - ##### ➠ Processing time will increase as the video length increases. """) - - -def getSubs(segments: Iterator[dict], format: str, maxLineWidth: int) -> str: - segmentStream = StringIO() - - if format == 'vtt': - write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth) - elif format == 'srt': - write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth) - else: - raise Exception("Unknown format " + format) - - segmentStream.seek(0) - return segmentStream.read() - - -def split_video_audio(uploaded_file): - with open(f"{save_dir}/input.mp4", "wb") as f: - f.write(uploaded_file.read()) - audio = ffmpeg.input(f"{save_dir}/input.mp4") - audio = ffmpeg.output(audio, f"{save_dir}/output.wav", acodec="pcm_s16le", ac=1, ar="16k") - ffmpeg.run(audio, overwrite_output=True) - - -def main(): - uploaded_video = st.file_uploader("Upload Video File", type=["mp4", "avi", "mov", "mkv"]) - # get the name of the input_file - if uploaded_video is not None: - filename = uploaded_video.name[:-4] - else: - filename = None - transcript_file = st.file_uploader("Upload Transcript File", type=["srt", "vtt"]) - if transcript_file is not None: - transcript_name = transcript_file.name - else: - transcript_name = None - if uploaded_video is not None and transcript_file is not None: - if transcript_name[-3:] == "vtt": - with open("uploaded_transcript.vtt", "wb") as f: - f.writelines(transcript_file) - f.close() - with open(os.path.join(os.getcwd(), "uploaded_transcript.vtt"), "rb") as f: - vtt_file = f.read() - if st.button("Generate Video with Subtitles"): - with st.spinner("Generating Subtitled Video"): - split_video_audio(uploaded_video) - video_file = ffmpeg.input(f"{save_dir}/input.mp4") - audio_file = ffmpeg.input(f"{save_dir}/output.wav") - ffmpeg.concat(video_file.filter("subtitles", "uploaded_transcript.vtt"), audio_file, v=1, a=1).output("final.mp4").global_args('-report').run(quiet=True, overwrite_output=True) - video_with_subs = open("final.mp4", "rb") - col3, col4 = st.columns(2) - with col3: - st.video(uploaded_video) - with col4: - st.video(video_with_subs) - st.download_button(label="Download Video with Subtitles", - data=video_with_subs, - 
file_name=f"{filename}_with_subs.mp4") - - elif transcript_name[-3:] == "srt": - with open("uploaded_transcript.srt", "wb") as f: - f.writelines(transcript_file) - f.close() - with open(os.path.join(os.getcwd(), "uploaded_transcript.srt"), "rb") as f: - srt_file = f.read() - if st.button("Generate Video with Subtitles"): - with st.spinner("Generating Subtitled Video"): - split_video_audio(uploaded_video) - video_file = ffmpeg.input(f"{save_dir}/input.mp4") - audio_file = ffmpeg.input(f"{save_dir}/output.wav") - ffmpeg.concat(video_file.filter("subtitles", "uploaded_transcript.srt"), audio_file, v=1, a=1).output("final.mp4").run(quiet=True, overwrite_output=True) - video_with_subs = open("final.mp4", "rb") - col3, col4 = st.columns(2) - with col3: - st.video(uploaded_video) - with col4: - st.video(video_with_subs) - st.download_button(label="Download Video with Subtitles", - data=video_with_subs, - file_name=f"{filename}_with_subs.mp4") - else: - st.error("Please upload a .srt or .vtt file") - else: - st.info("Please upload a video file and a transcript file ") - - -if __name__ == "__main__": - main() - diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/httpsession.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/httpsession.py deleted file mode 100644 index b3fe6e6c0c01d314152d909d0c5d14fbdd36db8e..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/httpsession.py +++ /dev/null @@ -1,510 +0,0 @@ -import logging -import os -import os.path -import socket -import sys -import warnings -from base64 import b64encode - -from urllib3 import PoolManager, Timeout, proxy_from_url -from urllib3.exceptions import ( - ConnectTimeoutError as URLLib3ConnectTimeoutError, -) -from urllib3.exceptions import ( - LocationParseError, - NewConnectionError, - ProtocolError, - ProxyError, -) -from urllib3.exceptions import ReadTimeoutError as URLLib3ReadTimeoutError -from urllib3.exceptions import SSLError as URLLib3SSLError -from urllib3.util.retry import Retry -from urllib3.util.ssl_ import ( - OP_NO_COMPRESSION, - PROTOCOL_TLS, - OP_NO_SSLv2, - OP_NO_SSLv3, - is_ipaddress, - ssl, -) -from urllib3.util.url import parse_url - -try: - from urllib3.util.ssl_ import OP_NO_TICKET, PROTOCOL_TLS_CLIENT -except ImportError: - # Fallback directly to ssl for version of urllib3 before 1.26. - # They are available in the standard library starting in Python 3.6. - from ssl import OP_NO_TICKET, PROTOCOL_TLS_CLIENT - -try: - # pyopenssl will be removed in urllib3 2.0, we'll fall back to ssl_ at that point. - # This can be removed once our urllib3 floor is raised to >= 2.0. - with warnings.catch_warnings(): - warnings.simplefilter("ignore", category=DeprecationWarning) - # Always import the original SSLContext, even if it has been patched - from urllib3.contrib.pyopenssl import ( - orig_util_SSLContext as SSLContext, - ) -except ImportError: - from urllib3.util.ssl_ import SSLContext - -try: - from urllib3.util.ssl_ import DEFAULT_CIPHERS -except ImportError: - # Defer to system configuration starting with - # urllib3 2.0. This will choose the ciphers provided by - # Openssl 1.1.1+ or secure system defaults. 
- DEFAULT_CIPHERS = None - -import botocore.awsrequest -from botocore.compat import ( - IPV6_ADDRZ_RE, - ensure_bytes, - filter_ssl_warnings, - unquote, - urlparse, -) -from botocore.exceptions import ( - ConnectionClosedError, - ConnectTimeoutError, - EndpointConnectionError, - HTTPClientError, - InvalidProxiesConfigError, - ProxyConnectionError, - ReadTimeoutError, - SSLError, -) - -filter_ssl_warnings() -logger = logging.getLogger(__name__) -DEFAULT_TIMEOUT = 60 -MAX_POOL_CONNECTIONS = 10 -DEFAULT_CA_BUNDLE = os.path.join(os.path.dirname(__file__), 'cacert.pem') - -try: - from certifi import where -except ImportError: - - def where(): - return DEFAULT_CA_BUNDLE - - -def get_cert_path(verify): - if verify is not True: - return verify - - cert_path = where() - logger.debug(f"Certificate path: {cert_path}") - - return cert_path - - -def create_urllib3_context( - ssl_version=None, cert_reqs=None, options=None, ciphers=None -): - """This function is a vendored version of the same function in urllib3 - - We vendor this function to ensure that the SSL contexts we construct - always use the std lib SSLContext instead of pyopenssl. - """ - # PROTOCOL_TLS is deprecated in Python 3.10 - if not ssl_version or ssl_version == PROTOCOL_TLS: - ssl_version = PROTOCOL_TLS_CLIENT - - context = SSLContext(ssl_version) - - if ciphers: - context.set_ciphers(ciphers) - elif DEFAULT_CIPHERS: - context.set_ciphers(DEFAULT_CIPHERS) - - # Setting the default here, as we may have no ssl module on import - cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs - - if options is None: - options = 0 - # SSLv2 is easily broken and is considered harmful and dangerous - options |= OP_NO_SSLv2 - # SSLv3 has several problems and is now dangerous - options |= OP_NO_SSLv3 - # Disable compression to prevent CRIME attacks for OpenSSL 1.0+ - # (issue urllib3#309) - options |= OP_NO_COMPRESSION - # TLSv1.2 only. Unless set explicitly, do not request tickets. - # This may save some bandwidth on wire, and although the ticket is encrypted, - # there is a risk associated with it being on wire, - # if the server is not rotating its ticketing keys properly. - options |= OP_NO_TICKET - - context.options |= options - - # Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is - # necessary for conditional client cert authentication with TLS 1.3. - # The attribute is None for OpenSSL <= 1.1.0 or does not exist in older - # versions of Python. We only enable on Python 3.7.4+ or if certificate - # verification is enabled to work around Python issue #37428 - # See: https://bugs.python.org/issue37428 - if ( - cert_reqs == ssl.CERT_REQUIRED or sys.version_info >= (3, 7, 4) - ) and getattr(context, "post_handshake_auth", None) is not None: - context.post_handshake_auth = True - - def disable_check_hostname(): - if ( - getattr(context, "check_hostname", None) is not None - ): # Platform-specific: Python 3.2 - # We do our own verification, including fingerprints and alternative - # hostnames. So disable it here - context.check_hostname = False - - # The order of the below lines setting verify_mode and check_hostname - # matter due to safe-guards SSLContext has to prevent an SSLContext with - # check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more - # complex because we don't know whether PROTOCOL_TLS_CLIENT will be used - # or not so we don't know the initial state of the freshly created SSLContext. 
- if cert_reqs == ssl.CERT_REQUIRED: - context.verify_mode = cert_reqs - disable_check_hostname() - else: - disable_check_hostname() - context.verify_mode = cert_reqs - - # Enable logging of TLS session keys via defacto standard environment variable - # 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values. - if hasattr(context, "keylog_filename"): - sslkeylogfile = os.environ.get("SSLKEYLOGFILE") - if sslkeylogfile and not sys.flags.ignore_environment: - context.keylog_filename = sslkeylogfile - - return context - - -def ensure_boolean(val): - """Ensures a boolean value if a string or boolean is provided - - For strings, the value for True/False is case insensitive - """ - if isinstance(val, bool): - return val - else: - return val.lower() == 'true' - - -def mask_proxy_url(proxy_url): - """ - Mask proxy url credentials. - - :type proxy_url: str - :param proxy_url: The proxy url, i.e. https://username:password@proxy.com - - :return: Masked proxy url, i.e. https://***:***@proxy.com - """ - mask = '*' * 3 - parsed_url = urlparse(proxy_url) - if parsed_url.username: - proxy_url = proxy_url.replace(parsed_url.username, mask, 1) - if parsed_url.password: - proxy_url = proxy_url.replace(parsed_url.password, mask, 1) - return proxy_url - - -def _is_ipaddress(host): - """Wrap urllib3's is_ipaddress to support bracketed IPv6 addresses.""" - return is_ipaddress(host) or bool(IPV6_ADDRZ_RE.match(host)) - - -class ProxyConfiguration: - """Represents a proxy configuration dictionary and additional settings. - - This class represents a proxy configuration dictionary and provides utility - functions to retreive well structured proxy urls and proxy headers from the - proxy configuration dictionary. - """ - - def __init__(self, proxies=None, proxies_settings=None): - if proxies is None: - proxies = {} - if proxies_settings is None: - proxies_settings = {} - - self._proxies = proxies - self._proxies_settings = proxies_settings - - def proxy_url_for(self, url): - """Retrieves the corresponding proxy url for a given url.""" - parsed_url = urlparse(url) - proxy = self._proxies.get(parsed_url.scheme) - if proxy: - proxy = self._fix_proxy_url(proxy) - return proxy - - def proxy_headers_for(self, proxy_url): - """Retrieves the corresponding proxy headers for a given proxy url.""" - headers = {} - username, password = self._get_auth_from_url(proxy_url) - if username and password: - basic_auth = self._construct_basic_auth(username, password) - headers['Proxy-Authorization'] = basic_auth - return headers - - @property - def settings(self): - return self._proxies_settings - - def _fix_proxy_url(self, proxy_url): - if proxy_url.startswith('http:') or proxy_url.startswith('https:'): - return proxy_url - elif proxy_url.startswith('//'): - return 'http:' + proxy_url - else: - return 'http://' + proxy_url - - def _construct_basic_auth(self, username, password): - auth_str = f'{username}:{password}' - encoded_str = b64encode(auth_str.encode('ascii')).strip().decode() - return f'Basic {encoded_str}' - - def _get_auth_from_url(self, url): - parsed_url = urlparse(url) - try: - return unquote(parsed_url.username), unquote(parsed_url.password) - except (AttributeError, TypeError): - return None, None - - -class URLLib3Session: - """A basic HTTP client that supports connection pooling and proxies. - - This class is inspired by requests.adapters.HTTPAdapter, but has been - boiled down to meet the use cases needed by botocore. 
For the most part - this classes matches the functionality of HTTPAdapter in requests v2.7.0 - (the same as our vendored version). The only major difference of note is - that we currently do not support sending chunked requests. While requests - v2.7.0 implemented this themselves, later version urllib3 support this - directly via a flag to urlopen so enabling it if needed should be trivial. - """ - - def __init__( - self, - verify=True, - proxies=None, - timeout=None, - max_pool_connections=MAX_POOL_CONNECTIONS, - socket_options=None, - client_cert=None, - proxies_config=None, - ): - self._verify = verify - self._proxy_config = ProxyConfiguration( - proxies=proxies, proxies_settings=proxies_config - ) - self._pool_classes_by_scheme = { - 'http': botocore.awsrequest.AWSHTTPConnectionPool, - 'https': botocore.awsrequest.AWSHTTPSConnectionPool, - } - if timeout is None: - timeout = DEFAULT_TIMEOUT - if not isinstance(timeout, (int, float)): - timeout = Timeout(connect=timeout[0], read=timeout[1]) - - self._cert_file = None - self._key_file = None - if isinstance(client_cert, str): - self._cert_file = client_cert - elif isinstance(client_cert, tuple): - self._cert_file, self._key_file = client_cert - - self._timeout = timeout - self._max_pool_connections = max_pool_connections - self._socket_options = socket_options - if socket_options is None: - self._socket_options = [] - self._proxy_managers = {} - self._manager = PoolManager(**self._get_pool_manager_kwargs()) - self._manager.pool_classes_by_scheme = self._pool_classes_by_scheme - - def _proxies_kwargs(self, **kwargs): - proxies_settings = self._proxy_config.settings - proxies_kwargs = { - 'use_forwarding_for_https': proxies_settings.get( - 'proxy_use_forwarding_for_https' - ), - **kwargs, - } - return {k: v for k, v in proxies_kwargs.items() if v is not None} - - def _get_pool_manager_kwargs(self, **extra_kwargs): - pool_manager_kwargs = { - 'strict': True, - 'timeout': self._timeout, - 'maxsize': self._max_pool_connections, - 'ssl_context': self._get_ssl_context(), - 'socket_options': self._socket_options, - 'cert_file': self._cert_file, - 'key_file': self._key_file, - } - pool_manager_kwargs.update(**extra_kwargs) - return pool_manager_kwargs - - def _get_ssl_context(self): - return create_urllib3_context() - - def _get_proxy_manager(self, proxy_url): - if proxy_url not in self._proxy_managers: - proxy_headers = self._proxy_config.proxy_headers_for(proxy_url) - proxy_ssl_context = self._setup_proxy_ssl_context(proxy_url) - proxy_manager_kwargs = self._get_pool_manager_kwargs( - proxy_headers=proxy_headers - ) - proxy_manager_kwargs.update( - self._proxies_kwargs(proxy_ssl_context=proxy_ssl_context) - ) - proxy_manager = proxy_from_url(proxy_url, **proxy_manager_kwargs) - proxy_manager.pool_classes_by_scheme = self._pool_classes_by_scheme - self._proxy_managers[proxy_url] = proxy_manager - - return self._proxy_managers[proxy_url] - - def _path_url(self, url): - parsed_url = urlparse(url) - path = parsed_url.path - if not path: - path = '/' - if parsed_url.query: - path = path + '?' 
+ parsed_url.query - return path - - def _setup_ssl_cert(self, conn, url, verify): - if url.lower().startswith('https') and verify: - conn.cert_reqs = 'CERT_REQUIRED' - conn.ca_certs = get_cert_path(verify) - else: - conn.cert_reqs = 'CERT_NONE' - conn.ca_certs = None - - def _setup_proxy_ssl_context(self, proxy_url): - proxies_settings = self._proxy_config.settings - proxy_ca_bundle = proxies_settings.get('proxy_ca_bundle') - proxy_cert = proxies_settings.get('proxy_client_cert') - if proxy_ca_bundle is None and proxy_cert is None: - return None - - context = self._get_ssl_context() - try: - url = parse_url(proxy_url) - # urllib3 disables this by default but we need it for proper - # proxy tls negotiation when proxy_url is not an IP Address - if not _is_ipaddress(url.host): - context.check_hostname = True - if proxy_ca_bundle is not None: - context.load_verify_locations(cafile=proxy_ca_bundle) - - if isinstance(proxy_cert, tuple): - context.load_cert_chain(proxy_cert[0], keyfile=proxy_cert[1]) - elif isinstance(proxy_cert, str): - context.load_cert_chain(proxy_cert) - - return context - except (OSError, URLLib3SSLError, LocationParseError) as e: - raise InvalidProxiesConfigError(error=e) - - def _get_connection_manager(self, url, proxy_url=None): - if proxy_url: - manager = self._get_proxy_manager(proxy_url) - else: - manager = self._manager - return manager - - def _get_request_target(self, url, proxy_url): - has_proxy = proxy_url is not None - - if not has_proxy: - return self._path_url(url) - - # HTTP proxies expect the request_target to be the absolute url to know - # which host to establish a connection to. urllib3 also supports - # forwarding for HTTPS through the 'use_forwarding_for_https' parameter. - proxy_scheme = urlparse(proxy_url).scheme - using_https_forwarding_proxy = ( - proxy_scheme == 'https' - and self._proxies_kwargs().get('use_forwarding_for_https', False) - ) - - if using_https_forwarding_proxy or url.startswith('http:'): - return url - else: - return self._path_url(url) - - def _chunked(self, headers): - transfer_encoding = headers.get('Transfer-Encoding', b'') - transfer_encoding = ensure_bytes(transfer_encoding) - return transfer_encoding.lower() == b'chunked' - - def close(self): - self._manager.clear() - for manager in self._proxy_managers.values(): - manager.clear() - - def send(self, request): - try: - proxy_url = self._proxy_config.proxy_url_for(request.url) - manager = self._get_connection_manager(request.url, proxy_url) - conn = manager.connection_from_url(request.url) - self._setup_ssl_cert(conn, request.url, self._verify) - if ensure_boolean( - os.environ.get('BOTO_EXPERIMENTAL__ADD_PROXY_HOST_HEADER', '') - ): - # This is currently an "experimental" feature which provides - # no guarantees of backwards compatibility. It may be subject - # to change or removal in any patch version. Anyone opting in - # to this feature should strictly pin botocore. 
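# --- Editor's illustrative sketch (not part of the original botocore source). ---
# How _get_request_target (defined above) picks the target, using made-up URLs:
# without a proxy the origin-form path is sent, while a plain-HTTP proxy
# receives the absolute URL so it knows which host to connect to:
#
#     self._get_request_target('https://s3.amazonaws.com/bucket/key?x=1', None)
#     # -> '/bucket/key?x=1'
#     self._get_request_target('http://example.com/a', 'http://proxy:8080')
#     # -> 'http://example.com/a'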
- host = urlparse(request.url).hostname - conn.proxy_headers['host'] = host - - request_target = self._get_request_target(request.url, proxy_url) - urllib_response = conn.urlopen( - method=request.method, - url=request_target, - body=request.body, - headers=request.headers, - retries=Retry(False), - assert_same_host=False, - preload_content=False, - decode_content=False, - chunked=self._chunked(request.headers), - ) - - http_response = botocore.awsrequest.AWSResponse( - request.url, - urllib_response.status, - urllib_response.headers, - urllib_response, - ) - - if not request.stream_output: - # Cause the raw stream to be exhausted immediately. We do it - # this way instead of using preload_content because - # preload_content will never buffer chunked responses - http_response.content - - return http_response - except URLLib3SSLError as e: - raise SSLError(endpoint_url=request.url, error=e) - except (NewConnectionError, socket.gaierror) as e: - raise EndpointConnectionError(endpoint_url=request.url, error=e) - except ProxyError as e: - raise ProxyConnectionError( - proxy_url=mask_proxy_url(proxy_url), error=e - ) - except URLLib3ConnectTimeoutError as e: - raise ConnectTimeoutError(endpoint_url=request.url, error=e) - except URLLib3ReadTimeoutError as e: - raise ReadTimeoutError(endpoint_url=request.url, error=e) - except ProtocolError as e: - raise ConnectionClosedError( - error=e, request=request, endpoint_url=request.url - ) - except Exception as e: - message = 'Exception received when sending urllib3 HTTP request' - logger.debug(message, exc_info=True) - raise HTTPClientError(error=e) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/session.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/session.py deleted file mode 100644 index 6c40ade1595df0ed4d2963b819211491d55b0aa5..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/session.py +++ /dev/null @@ -1,517 +0,0 @@ -"""PipSession and supporting code, containing all pip-specific -network request configuration and behavior. -""" - -import email.utils -import io -import ipaddress -import json -import logging -import mimetypes -import os -import platform -import shutil -import subprocess -import sys -import urllib.parse -import warnings -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Generator, - List, - Mapping, - Optional, - Sequence, - Tuple, - Union, -) - -from pip._vendor import requests, urllib3 -from pip._vendor.cachecontrol import CacheControlAdapter as _BaseCacheControlAdapter -from pip._vendor.requests.adapters import DEFAULT_POOLBLOCK, BaseAdapter -from pip._vendor.requests.adapters import HTTPAdapter as _BaseHTTPAdapter -from pip._vendor.requests.models import PreparedRequest, Response -from pip._vendor.requests.structures import CaseInsensitiveDict -from pip._vendor.urllib3.connectionpool import ConnectionPool -from pip._vendor.urllib3.exceptions import InsecureRequestWarning - -from pip import __version__ -from pip._internal.metadata import get_default_environment -from pip._internal.models.link import Link -from pip._internal.network.auth import MultiDomainBasicAuth -from pip._internal.network.cache import SafeFileCache - -# Import ssl from compat so the initial import occurs in only one place. 
-from pip._internal.utils.compat import has_tls -from pip._internal.utils.glibc import libc_ver -from pip._internal.utils.misc import build_url_from_netloc, parse_netloc -from pip._internal.utils.urls import url_to_path - -if TYPE_CHECKING: - from ssl import SSLContext - - from pip._vendor.urllib3.poolmanager import PoolManager - - -logger = logging.getLogger(__name__) - -SecureOrigin = Tuple[str, str, Optional[Union[int, str]]] - - -# Ignore warning raised when using --trusted-host. -warnings.filterwarnings("ignore", category=InsecureRequestWarning) - - -SECURE_ORIGINS: List[SecureOrigin] = [ - # protocol, hostname, port - # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC) - ("https", "*", "*"), - ("*", "localhost", "*"), - ("*", "127.0.0.0/8", "*"), - ("*", "::1/128", "*"), - ("file", "*", None), - # ssh is always secure. - ("ssh", "*", "*"), -] - - -# These are environment variables present when running under various -# CI systems. For each variable, some CI systems that use the variable -# are indicated. The collection was chosen so that for each of a number -# of popular systems, at least one of the environment variables is used. -# This list is used to provide some indication of and lower bound for -# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive. -# For more background, see: https://github.com/pypa/pip/issues/5499 -CI_ENVIRONMENT_VARIABLES = ( - # Azure Pipelines - "BUILD_BUILDID", - # Jenkins - "BUILD_ID", - # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI - "CI", - # Explicit environment variable. - "PIP_IS_CI", -) - - -def looks_like_ci() -> bool: - """ - Return whether it looks like pip is running under CI. - """ - # We don't use the method of checking for a tty (e.g. using isatty()) - # because some CI systems mimic a tty (e.g. Travis CI). Thus that - # method doesn't provide definitive information in either direction. - return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES) - - -def user_agent() -> str: - """ - Return a string representing the user agent. 
- """ - data: Dict[str, Any] = { - "installer": {"name": "pip", "version": __version__}, - "python": platform.python_version(), - "implementation": { - "name": platform.python_implementation(), - }, - } - - if data["implementation"]["name"] == "CPython": - data["implementation"]["version"] = platform.python_version() - elif data["implementation"]["name"] == "PyPy": - pypy_version_info = sys.pypy_version_info # type: ignore - if pypy_version_info.releaselevel == "final": - pypy_version_info = pypy_version_info[:3] - data["implementation"]["version"] = ".".join( - [str(x) for x in pypy_version_info] - ) - elif data["implementation"]["name"] == "Jython": - # Complete Guess - data["implementation"]["version"] = platform.python_version() - elif data["implementation"]["name"] == "IronPython": - # Complete Guess - data["implementation"]["version"] = platform.python_version() - - if sys.platform.startswith("linux"): - from pip._vendor import distro - - linux_distribution = distro.name(), distro.version(), distro.codename() - distro_infos: Dict[str, Any] = dict( - filter( - lambda x: x[1], - zip(["name", "version", "id"], linux_distribution), - ) - ) - libc = dict( - filter( - lambda x: x[1], - zip(["lib", "version"], libc_ver()), - ) - ) - if libc: - distro_infos["libc"] = libc - if distro_infos: - data["distro"] = distro_infos - - if sys.platform.startswith("darwin") and platform.mac_ver()[0]: - data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]} - - if platform.system(): - data.setdefault("system", {})["name"] = platform.system() - - if platform.release(): - data.setdefault("system", {})["release"] = platform.release() - - if platform.machine(): - data["cpu"] = platform.machine() - - if has_tls(): - import _ssl as ssl - - data["openssl_version"] = ssl.OPENSSL_VERSION - - setuptools_dist = get_default_environment().get_distribution("setuptools") - if setuptools_dist is not None: - data["setuptools_version"] = str(setuptools_dist.version) - - if shutil.which("rustc") is not None: - # If for any reason `rustc --version` fails, silently ignore it - try: - rustc_output = subprocess.check_output( - ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5 - ) - except Exception: - pass - else: - if rustc_output.startswith(b"rustc "): - # The format of `rustc --version` is: - # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'` - # We extract just the middle (1.52.1) part - data["rustc_version"] = rustc_output.split(b" ")[1].decode() - - # Use None rather than False so as not to give the impression that - # pip knows it is not being run under CI. Rather, it is a null or - # inconclusive result. Also, we include some value rather than no - # value to make it easier to know that the check has been run. 
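# --- Editor's illustrative sketch (not part of the original pip source). ---
# The string returned at the end of this function has the form
# "pip/<version> <sorted-JSON-blob>"; an abridged, made-up example
# (actual fields vary by platform):
#
#     pip/23.0.1 {"ci":null,"cpu":"x86_64","implementation":{"name":"CPython",
#                 "version":"3.11.2"},"installer":{"name":"pip","version":"23.0.1"},
#                 "python":"3.11.2",...}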
- data["ci"] = True if looks_like_ci() else None - - user_data = os.environ.get("PIP_USER_AGENT_USER_DATA") - if user_data is not None: - data["user_data"] = user_data - - return "{data[installer][name]}/{data[installer][version]} {json}".format( - data=data, - json=json.dumps(data, separators=(",", ":"), sort_keys=True), - ) - - -class LocalFSAdapter(BaseAdapter): - def send( - self, - request: PreparedRequest, - stream: bool = False, - timeout: Optional[Union[float, Tuple[float, float]]] = None, - verify: Union[bool, str] = True, - cert: Optional[Union[str, Tuple[str, str]]] = None, - proxies: Optional[Mapping[str, str]] = None, - ) -> Response: - pathname = url_to_path(request.url) - - resp = Response() - resp.status_code = 200 - resp.url = request.url - - try: - stats = os.stat(pathname) - except OSError as exc: - # format the exception raised as a io.BytesIO object, - # to return a better error message: - resp.status_code = 404 - resp.reason = type(exc).__name__ - resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8")) - else: - modified = email.utils.formatdate(stats.st_mtime, usegmt=True) - content_type = mimetypes.guess_type(pathname)[0] or "text/plain" - resp.headers = CaseInsensitiveDict( - { - "Content-Type": content_type, - "Content-Length": stats.st_size, - "Last-Modified": modified, - } - ) - - resp.raw = open(pathname, "rb") - resp.close = resp.raw.close - - return resp - - def close(self) -> None: - pass - - -class _SSLContextAdapterMixin: - """Mixin to add the ``ssl_context`` constructor argument to HTTP adapters. - - The additional argument is forwarded directly to the pool manager. This allows us - to dynamically decide what SSL store to use at runtime, which is used to implement - the optional ``truststore`` backend. - """ - - def __init__( - self, - *, - ssl_context: Optional["SSLContext"] = None, - **kwargs: Any, - ) -> None: - self._ssl_context = ssl_context - super().__init__(**kwargs) - - def init_poolmanager( - self, - connections: int, - maxsize: int, - block: bool = DEFAULT_POOLBLOCK, - **pool_kwargs: Any, - ) -> "PoolManager": - if self._ssl_context is not None: - pool_kwargs.setdefault("ssl_context", self._ssl_context) - return super().init_poolmanager( # type: ignore[misc] - connections=connections, - maxsize=maxsize, - block=block, - **pool_kwargs, - ) - - -class HTTPAdapter(_SSLContextAdapterMixin, _BaseHTTPAdapter): - pass - - -class CacheControlAdapter(_SSLContextAdapterMixin, _BaseCacheControlAdapter): - pass - - -class InsecureHTTPAdapter(HTTPAdapter): - def cert_verify( - self, - conn: ConnectionPool, - url: str, - verify: Union[bool, str], - cert: Optional[Union[str, Tuple[str, str]]], - ) -> None: - super().cert_verify(conn=conn, url=url, verify=False, cert=cert) - - -class InsecureCacheControlAdapter(CacheControlAdapter): - def cert_verify( - self, - conn: ConnectionPool, - url: str, - verify: Union[bool, str], - cert: Optional[Union[str, Tuple[str, str]]], - ) -> None: - super().cert_verify(conn=conn, url=url, verify=False, cert=cert) - - -class PipSession(requests.Session): - timeout: Optional[int] = None - - def __init__( - self, - *args: Any, - retries: int = 0, - cache: Optional[str] = None, - trusted_hosts: Sequence[str] = (), - index_urls: Optional[List[str]] = None, - ssl_context: Optional["SSLContext"] = None, - **kwargs: Any, - ) -> None: - """ - :param trusted_hosts: Domains not to emit warnings for when not using - HTTPS. 
- """ - super().__init__(*args, **kwargs) - - # Namespace the attribute with "pip_" just in case to prevent - # possible conflicts with the base class. - self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = [] - - # Attach our User Agent to the request - self.headers["User-Agent"] = user_agent() - - # Attach our Authentication handler to the session - self.auth = MultiDomainBasicAuth(index_urls=index_urls) - - # Create our urllib3.Retry instance which will allow us to customize - # how we handle retries. - retries = urllib3.Retry( - # Set the total number of retries that a particular request can - # have. - total=retries, - # A 503 error from PyPI typically means that the Fastly -> Origin - # connection got interrupted in some way. A 503 error in general - # is typically considered a transient error so we'll go ahead and - # retry it. - # A 500 may indicate transient error in Amazon S3 - # A 520 or 527 - may indicate transient error in CloudFlare - status_forcelist=[500, 503, 520, 527], - # Add a small amount of back off between failed requests in - # order to prevent hammering the service. - backoff_factor=0.25, - ) # type: ignore - - # Our Insecure HTTPAdapter disables HTTPS validation. It does not - # support caching so we'll use it for all http:// URLs. - # If caching is disabled, we will also use it for - # https:// hosts that we've marked as ignoring - # TLS errors for (trusted-hosts). - insecure_adapter = InsecureHTTPAdapter(max_retries=retries) - - # We want to _only_ cache responses on securely fetched origins or when - # the host is specified as trusted. We do this because - # we can't validate the response of an insecurely/untrusted fetched - # origin, and we don't want someone to be able to poison the cache and - # require manual eviction from the cache to fix it. - if cache: - secure_adapter = CacheControlAdapter( - cache=SafeFileCache(cache), - max_retries=retries, - ssl_context=ssl_context, - ) - self._trusted_host_adapter = InsecureCacheControlAdapter( - cache=SafeFileCache(cache), - max_retries=retries, - ) - else: - secure_adapter = HTTPAdapter(max_retries=retries, ssl_context=ssl_context) - self._trusted_host_adapter = insecure_adapter - - self.mount("https://", secure_adapter) - self.mount("http://", insecure_adapter) - - # Enable file:// urls - self.mount("file://", LocalFSAdapter()) - - for host in trusted_hosts: - self.add_trusted_host(host, suppress_logging=True) - - def update_index_urls(self, new_index_urls: List[str]) -> None: - """ - :param new_index_urls: New index urls to update the authentication - handler with. - """ - self.auth.index_urls = new_index_urls - - def add_trusted_host( - self, host: str, source: Optional[str] = None, suppress_logging: bool = False - ) -> None: - """ - :param host: It is okay to provide a host that has previously been - added. - :param source: An optional source string, for logging where the host - string came from. - """ - if not suppress_logging: - msg = f"adding trusted host: {host!r}" - if source is not None: - msg += f" (from {source})" - logger.info(msg) - - host_port = parse_netloc(host) - if host_port not in self.pip_trusted_origins: - self.pip_trusted_origins.append(host_port) - - self.mount( - build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter - ) - self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter) - if not host_port[1]: - self.mount( - build_url_from_netloc(host, scheme="http") + ":", - self._trusted_host_adapter, - ) - # Mount wildcard ports for the same host. 
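# --- Editor's illustrative sketch (not part of the original pip source). ---
# For a bare hostname such as "example.com" (no port), the mount calls above
# and below register the trusted-host adapter for, roughly:
#
#     https://example.com/    http://example.com/
#     https://example.com:    http://example.com:    (any port on that host)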
- self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter) - - def iter_secure_origins(self) -> Generator[SecureOrigin, None, None]: - yield from SECURE_ORIGINS - for host, port in self.pip_trusted_origins: - yield ("*", host, "*" if port is None else port) - - def is_secure_origin(self, location: Link) -> bool: - # Determine if this url used a secure transport mechanism - parsed = urllib.parse.urlparse(str(location)) - origin_protocol, origin_host, origin_port = ( - parsed.scheme, - parsed.hostname, - parsed.port, - ) - - # The protocol to use to see if the protocol matches. - # Don't count the repository type as part of the protocol: in - # cases such as "git+ssh", only use "ssh". (I.e., Only verify against - # the last scheme.) - origin_protocol = origin_protocol.rsplit("+", 1)[-1] - - # Determine if our origin is a secure origin by looking through our - # hardcoded list of secure origins, as well as any additional ones - # configured on this PackageFinder instance. - for secure_origin in self.iter_secure_origins(): - secure_protocol, secure_host, secure_port = secure_origin - if origin_protocol != secure_protocol and secure_protocol != "*": - continue - - try: - addr = ipaddress.ip_address(origin_host or "") - network = ipaddress.ip_network(secure_host) - except ValueError: - # We don't have both a valid address or a valid network, so - # we'll check this origin against hostnames. - if ( - origin_host - and origin_host.lower() != secure_host.lower() - and secure_host != "*" - ): - continue - else: - # We have a valid address and network, so see if the address - # is contained within the network. - if addr not in network: - continue - - # Check to see if the port matches. - if ( - origin_port != secure_port - and secure_port != "*" - and secure_port is not None - ): - continue - - # If we've gotten here, then this origin matches the current - # secure origin and we should return True - return True - - # If we've gotten to this point, then the origin isn't secure and we - # will not accept it as a valid location to search. We will however - # log a warning that we are ignoring it. - logger.warning( - "The repository located at %s is not a trusted or secure host and " - "is being ignored. 
If this repository is available via HTTPS we " - "recommend you use HTTPS instead, otherwise you may silence " - "this warning and allow it anyway with '--trusted-host %s'.", - origin_host, - origin_host, - ) - - return False - - def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response: - # Allow setting a default timeout on a session - kwargs.setdefault("timeout", self.timeout) - # Allow setting a default proxies on a session - kwargs.setdefault("proxies", self.proxies) - - # Dispatch the actual request - return super().request(method, url, *args, **kwargs) diff --git a/spaces/Boilin/URetinex-Net/network/architecture.py b/spaces/Boilin/URetinex-Net/network/architecture.py deleted file mode 100644 index 8cb0fcd99a78c9fe6cfddc1ebe27114cfd9b6b5a..0000000000000000000000000000000000000000 --- a/spaces/Boilin/URetinex-Net/network/architecture.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -import torch.nn as nn -import torchvision - -def get_batchnorm_layer(opts): - if opts.norm_layer == "batch": - norm_layer = nn.BatchNorm2d - elif opts.layer == "spectral_instance": - norm_layer = nn.InstanceNorm2d - else: - print("not implemented") - exit() - return norm_layer - -def get_conv2d_layer(in_c, out_c, k, s, p=0, dilation=1, groups=1): - return nn.Conv2d(in_channels=in_c, - out_channels=out_c, - kernel_size=k, - stride=s, - padding=p,dilation=dilation, groups=groups) - -def get_deconv2d_layer(in_c, out_c, k=1, s=1, p=1): - return nn.Sequential( - nn.Upsample(scale_factor=2, mode="bilinear"), - nn.Conv2d( - in_channels=in_c, - out_channels=out_c, - kernel_size=k, - stride=s, - padding=p - ) - ) - -class Identity(nn.Module): - - def __init__(self): - super(Identity, self).__init__() - - def forward(self, x): - return x - diff --git a/spaces/CAMP-ViL/Xplainer/model.py b/spaces/CAMP-ViL/Xplainer/model.py deleted file mode 100644 index fa175c63e6b5f1c24fd54df314fc10ebc0938584..0000000000000000000000000000000000000000 --- a/spaces/CAMP-ViL/Xplainer/model.py +++ /dev/null @@ -1,158 +0,0 @@ -from pathlib import Path -from typing import List - -import torch -import torch.nn.functional as F -from health_multimodal.image import get_biovil_resnet_inference -from health_multimodal.text import get_cxr_bert_inference -from health_multimodal.vlp import ImageTextInferenceEngine - -from utils import cos_sim_to_prob, prob_to_log_prob, log_prob_to_prob - - -class InferenceModel(): - def __init__(self): - self.text_inference = get_cxr_bert_inference() - self.image_inference = get_biovil_resnet_inference() - self.image_text_inference = ImageTextInferenceEngine( - image_inference_engine=self.image_inference, - text_inference_engine=self.text_inference, - ) - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.image_text_inference.to(self.device) - - # caches for faster inference - self.text_embedding_cache = {} - self.image_embedding_cache = {} - - self.transform = self.image_inference.transform - - def get_similarity_score_from_raw_data(self, image_embedding, query_text: str) -> float: - """Compute the cosine similarity score between an image and one or more strings. - If multiple strings are passed, their embeddings are averaged before L2-normalization. - :param image_path: Path to the input chest X-ray, either a DICOM or JPEG file. - :param query_text: Input radiology text phrase. - :return: The similarity score between the image and the text. 
- """ - assert not self.image_text_inference.image_inference_engine.model.training - assert not self.image_text_inference.text_inference_engine.model.training - if query_text in self.text_embedding_cache: - text_embedding = self.text_embedding_cache[query_text] - else: - text_embedding = self.image_text_inference.text_inference_engine.get_embeddings_from_prompt([query_text], normalize=False) - text_embedding = text_embedding.mean(dim=0) - text_embedding = F.normalize(text_embedding, dim=0, p=2) - self.text_embedding_cache[query_text] = text_embedding - - cos_similarity = image_embedding @ text_embedding.t() - - return cos_similarity.item() - - def process_image(self, image): - ''' same code as in image_text_inference.image_inference_engine.get_projected_global_embedding() but adapted to deal with image instances instead of path''' - - transformed_image = self.transform(image) - projected_img_emb = self.image_inference.model.forward(transformed_image).projected_global_embedding - projected_img_emb = F.normalize(projected_img_emb, dim=-1) - assert projected_img_emb.shape[0] == 1 - assert projected_img_emb.ndim == 2 - return projected_img_emb[0] - - def get_descriptor_probs(self, image_path: Path, descriptors: List[str], do_negative_prompting=True, demo=False): - probs = {} - negative_probs = {} - if image_path in self.image_embedding_cache: - image_embedding = self.image_embedding_cache[image_path] - else: - image_embedding = self.image_text_inference.image_inference_engine.get_projected_global_embedding(image_path) - if not demo: - self.image_embedding_cache[image_path] = image_embedding - - # Default get_similarity_score_from_raw_data would load the image every time. Instead we only load once. - for desc in descriptors: - prompt = f'There are {desc}' - score = self.get_similarity_score_from_raw_data(image_embedding, prompt) - if do_negative_prompting: - neg_prompt = f'There are no {desc}' - neg_score = self.get_similarity_score_from_raw_data(image_embedding, neg_prompt) - - pos_prob = cos_sim_to_prob(score) - - if do_negative_prompting: - pos_prob, neg_prob = torch.softmax((torch.tensor([score, neg_score]) / 0.5), dim=0) - negative_probs[desc] = neg_prob - - probs[desc] = pos_prob - - return probs, negative_probs - - def get_all_descriptors(self, disease_descriptors): - all_descriptors = set() - for disease, descs in disease_descriptors.items(): - all_descriptors.update([f"{desc} indicating {disease}" for desc in descs]) - all_descriptors = sorted(all_descriptors) - return all_descriptors - - def get_all_descriptors_only_disease(self, disease_descriptors): - all_descriptors = set() - for disease, descs in disease_descriptors.items(): - all_descriptors.update([f"{desc}" for desc in descs]) - all_descriptors = sorted(all_descriptors) - return all_descriptors - - def get_diseases_probs(self, disease_descriptors, pos_probs, negative_probs, prior_probs=None, do_negative_prompting=True): - disease_probs = {} - disease_neg_probs = {} - for disease, descriptors in disease_descriptors.items(): - desc_log_probs = [] - desc_neg_log_probs = [] - for desc in descriptors: - desc = f"{desc} indicating {disease}" - desc_log_probs.append(prob_to_log_prob(pos_probs[desc])) - if do_negative_prompting: - desc_neg_log_probs.append(prob_to_log_prob(negative_probs[desc])) - disease_log_prob = sum(sorted(desc_log_probs, reverse=True)) / len(desc_log_probs) - if do_negative_prompting: - disease_neg_log_prob = sum(desc_neg_log_probs) / len(desc_neg_log_probs) - disease_probs[disease] = 
log_prob_to_prob(disease_log_prob) - if do_negative_prompting: - disease_neg_probs[disease] = log_prob_to_prob(disease_neg_log_prob) - - return disease_probs, disease_neg_probs - - # Threshold Based - def get_predictions(self, disease_descriptors, threshold, disease_probs, keys): - predicted_diseases = [] - prob_vector = torch.zeros(len(keys), dtype=torch.float) # num of diseases - for idx, disease in enumerate(disease_descriptors): - if disease == 'No Finding': - continue - prob_vector[keys.index(disease)] = disease_probs[disease] - if disease_probs[disease] > threshold: - predicted_diseases.append(disease) - - if len(predicted_diseases) == 0: # No finding rule based - prob_vector[0] = 1.0 - max(prob_vector) - else: - prob_vector[0] = 1.0 - max(prob_vector) - - return predicted_diseases, prob_vector - - # Negative vs Positive Prompting - def get_predictions_bin_prompting(self, disease_descriptors, disease_probs, negative_disease_probs, keys): - predicted_diseases = [] - prob_vector = torch.zeros(len(keys), dtype=torch.float) # num of diseases - for idx, disease in enumerate(disease_descriptors): - if disease == 'No Finding': - continue - pos_neg_scores = torch.tensor([disease_probs[disease], negative_disease_probs[disease]]) - prob_vector[keys.index(disease)] = pos_neg_scores[0] - if torch.argmax(pos_neg_scores) == 0: # Positive is More likely - predicted_diseases.append(disease) - - if len(predicted_diseases) == 0: # No finding rule based - prob_vector[0] = 1.0 - max(prob_vector) - else: - prob_vector[0] = 1.0 - max(prob_vector) - - return predicted_diseases, prob_vector diff --git a/spaces/CVPR/LIVE/pybind11/tests/local_bindings.h b/spaces/CVPR/LIVE/pybind11/tests/local_bindings.h deleted file mode 100644 index b6afb808664de1fdbde011a9bf7c38d3a8794127..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/local_bindings.h +++ /dev/null @@ -1,64 +0,0 @@ -#pragma once -#include "pybind11_tests.h" - -/// Simple class used to test py::local: -template class LocalBase { -public: - LocalBase(int i) : i(i) { } - int i = -1; -}; - -/// Registered with py::module_local in both main and secondary modules: -using LocalType = LocalBase<0>; -/// Registered without py::module_local in both modules: -using NonLocalType = LocalBase<1>; -/// A second non-local type (for stl_bind tests): -using NonLocal2 = LocalBase<2>; -/// Tests within-module, different-compilation-unit local definition conflict: -using LocalExternal = LocalBase<3>; -/// Mixed: registered local first, then global -using MixedLocalGlobal = LocalBase<4>; -/// Mixed: global first, then local -using MixedGlobalLocal = LocalBase<5>; - -/// Registered with py::module_local only in the secondary module: -using ExternalType1 = LocalBase<6>; -using ExternalType2 = LocalBase<7>; - -using LocalVec = std::vector; -using LocalVec2 = std::vector; -using LocalMap = std::unordered_map; -using NonLocalVec = std::vector; -using NonLocalVec2 = std::vector; -using NonLocalMap = std::unordered_map; -using NonLocalMap2 = std::unordered_map; - -PYBIND11_MAKE_OPAQUE(LocalVec); -PYBIND11_MAKE_OPAQUE(LocalVec2); -PYBIND11_MAKE_OPAQUE(LocalMap); -PYBIND11_MAKE_OPAQUE(NonLocalVec); -//PYBIND11_MAKE_OPAQUE(NonLocalVec2); // same type as LocalVec2 -PYBIND11_MAKE_OPAQUE(NonLocalMap); -PYBIND11_MAKE_OPAQUE(NonLocalMap2); - - -// Simple bindings (used with the above): -template -py::class_ bind_local(Args && ...args) { - return py::class_(std::forward(args)...) 
- .def(py::init()) - .def("get", [](T &i) { return i.i + Adjust; }); -}; - -// Simulate a foreign library base class (to match the example in the docs): -namespace pets { -class Pet { -public: - Pet(std::string name) : name_(name) {} - std::string name_; - const std::string &name() { return name_; } -}; -} - -struct MixGL { int i; MixGL(int i) : i{i} {} }; -struct MixGL2 { int i; MixGL2(int i) : i{i} {} }; diff --git a/spaces/CVPR/LIVE/pydiffvg_tensorflow/shape.py b/spaces/CVPR/LIVE/pydiffvg_tensorflow/shape.py deleted file mode 100644 index 432a3b5dc2fd1b8eb03c306a8123c76e6b9302ff..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pydiffvg_tensorflow/shape.py +++ /dev/null @@ -1,54 +0,0 @@ -import tensorflow as tf -import math - -class Circle: - def __init__(self, radius, center, stroke_width = tf.constant(1.0), id = ''): - self.radius = radius - self.center = center - self.stroke_width = stroke_width - self.id = id - -class Ellipse: - def __init__(self, radius, center, stroke_width = tf.constant(1.0), id = ''): - self.radius = radius - self.center = center - self.stroke_width = stroke_width - self.id = id - -class Path: - def __init__(self, num_control_points, points, is_closed, stroke_width = tf.constant(1.0), id = '', use_distance_approx = False): - self.num_control_points = num_control_points - self.points = points - self.is_closed = is_closed - self.stroke_width = stroke_width - self.id = id - self.use_distance_approx = use_distance_approx - -class Polygon: - def __init__(self, points, is_closed, stroke_width = tf.constant(1.0), id = ''): - self.points = points - self.is_closed = is_closed - self.stroke_width = stroke_width - self.id = id - -class Rect: - def __init__(self, p_min, p_max, stroke_width = tf.constant(1.0), id = ''): - self.p_min = p_min - self.p_max = p_max - self.stroke_width = stroke_width - self.id = id - -class ShapeGroup: - def __init__(self, - shape_ids, - fill_color, - use_even_odd_rule = True, - stroke_color = None, - shape_to_canvas = tf.eye(3), - id = ''): - self.shape_ids = shape_ids - self.fill_color = fill_color - self.use_even_odd_rule = use_even_odd_rule - self.stroke_color = stroke_color - self.shape_to_canvas = shape_to_canvas - self.id = id diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/mismatch.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/mismatch.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/mismatch.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/transform.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/transform.h deleted file mode 100644 index 20d606dfbeec6d376a138db500ec368d94efa748..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/transform.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// omp inherits transform -#include - diff --git a/spaces/CVPR/LIVE/xing_loss.py b/spaces/CVPR/LIVE/xing_loss.py deleted file mode 100644 index 472ed17749dfe041eb262aff80b10506bdaadf01..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/xing_loss.py +++ /dev/null @@ -1,66 +0,0 @@ -import torch -import numpy as np - - -def area(a, b, c): - return (c[1] - a[1]) * (b[0] - a[0]) - (b[1] - a[1]) * (c[0] - a[0]) - - -def triangle_area(A, B, C): - out = (C - A).flip([-1]) * (B - A) - out = out[..., 1] - out[..., 0] - return out - -def compute_sine_theta(s1, s2): #s1 and s2 aret two segments to be uswed - #s1, s2 (2, 2) - v1 = s1[1,:] - s1[0, :] - v2 = s2[1,:] - s2[0, :] - #print(v1, v2) - sine_theta = ( v1[0] * v2[1] - v1[1] * v2[0] ) / (torch.norm(v1) * torch.norm(v2)) - return sine_theta - -def xing_loss(x_list, scale=1e-3): # x[ npoints,2] - loss = 0. - #print(len(x_list)) - for x in x_list: - #print(x) - seg_loss = 0. - N = x.size()[0] - x = torch.cat([x,x[0,:].unsqueeze(0)], dim=0) #(N+1,2) - segments = torch.cat([x[:-1,:].unsqueeze(1), x[1:,:].unsqueeze(1)], dim=1) #(N, start/end, 2) - assert N % 3 == 0, 'The segment number is not correct!' 
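# --- Editor's illustrative note (not part of the original LIVE source). ---
# The control points are consumed three segments at a time: each group
# (cs1, cs2, cs3) below roughly covers one cubic piece of the closed path
# (three edges of its control polygon), which is why the assert above requires
# N to be a multiple of 3. The loss then penalises, via relu, a sign mismatch
# between the turn direction of cs1->cs2 and the sine of the angle between
# cs1 and cs3, which discourages the path from crossing itself. For example:
#
#     path_pts = torch.rand(6, 2)          # two cubic pieces -> 6 control points
#     loss = xing_loss([path_pts], scale=1e-3)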
- segment_num = int(N / 3) - for i in range(segment_num): - cs1 = segments[i*3, :, :] #start control segs - cs2 = segments[i*3 + 1, :, :] #middle control segs - cs3 = segments[i*3 + 2, :, :] #end control segs - #print('the direction of the vectors:') - #print(compute_sine_theta(cs1, cs2)) - direct = (compute_sine_theta(cs1, cs2) >= 0).float() - opst = 1 - direct #another direction - sina = compute_sine_theta(cs1, cs3) #the angle between cs1 and cs3 - seg_loss += direct * torch.relu( - sina) + opst * torch.relu(sina) - # print(direct, opst, sina) - seg_loss /= segment_num - - - templ = seg_loss - loss += templ * scale #area_loss * scale - - return loss / (len(x_list)) - - -if __name__ == "__main__": - #x = torch.rand([6, 2]) - #x = torch.tensor([[0,0], [1,1], [2,1], [1.5,0]]) - x = torch.tensor([[0,0], [1,1], [2,1], [0.5,0]]) - #x = torch.tensor([[1,0], [2,1], [0,1], [2,0]]) - scale = 1 #0.5 - y = xing_loss([x], scale) - print(y) - - x = torch.tensor([[0,0], [1,1], [2,1], [2.,0]]) - #x = torch.tensor([[1,0], [2,1], [0,1], [2,0]]) - scale = 1 #0.5 - y = xing_loss([x], scale) - print(y) diff --git a/spaces/CVPR/MonoScene/monoscene/monoscene_model.py b/spaces/CVPR/MonoScene/monoscene/monoscene_model.py deleted file mode 100644 index 8a5207f3d03de86192c5d41a8bdfe3ce32e672ab..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/monoscene_model.py +++ /dev/null @@ -1,21 +0,0 @@ -from transformers import PreTrainedModel -from .config import MonoSceneConfig -from monoscene.monoscene import MonoScene - - -class MonoSceneModel(PreTrainedModel): - config_class = MonoSceneConfig - - def __init__(self, config): - super().__init__(config) - self.model = MonoScene( - dataset=config.dataset, - n_classes=config.n_classes, - feature=config.feature, - project_scale=config.project_scale, - full_scene_size=config.full_scene_size - ) - - - def forward(self, tensor): - return self.model.forward(tensor) \ No newline at end of file diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/__init__.py deleted file mode 100644 index 04caae8693a51e59f1f31d1daac18df484842e93..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -from .build import META_ARCH_REGISTRY, build_model # isort:skip - -from .panoptic_fpn import PanopticFPN - -# import all the meta_arch, so they will be registered -from .rcnn import GeneralizedRCNN, ProposalNetwork -from .retinanet import RetinaNet -from .semantic_seg import SEM_SEG_HEADS_REGISTRY, SemanticSegmentor, build_sem_seg_head -from .clip_rcnn import CLIPRCNN, CLIPFastRCNN, PretrainFastRCNN - - -__all__ = list(globals().keys()) diff --git a/spaces/CVPR/transfiner/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py b/spaces/CVPR/transfiner/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py deleted file mode 100644 index 0f2464be744c083985898a25f9e71d00104f689d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py +++ /dev/null @@ -1,151 +0,0 @@ -# An example config to train a mmdetection model using detectron2. 
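# --- Editor's illustrative note (not part of the original config). ---
# Lazy configs in this style are typically materialised with detectron2's
# lazy-config tooling, roughly as follows (assuming detectron2 and mmdet are
# installed):
#
#     from detectron2.config import LazyConfig, instantiate
#     cfg = LazyConfig.load("configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py")
#     model = instantiate(cfg.model)   # builds the MMDetDetector wrapper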
- -from ..common.data.coco import dataloader -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.optim import SGD as optimizer -from ..common.train import train - -from detectron2.modeling.mmdet_wrapper import MMDetDetector -from detectron2.config import LazyCall as L - -model = L(MMDetDetector)( - detector=dict( - type="MaskRCNN", - pretrained="torchvision://resnet50", - backbone=dict( - type="ResNet", - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type="BN", requires_grad=True), - norm_eval=True, - style="pytorch", - ), - neck=dict(type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5), - rpn_head=dict( - type="RPNHead", - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type="AnchorGenerator", - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64], - ), - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[1.0, 1.0, 1.0, 1.0], - ), - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - roi_head=dict( - type="StandardRoIHead", - bbox_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - bbox_head=dict( - type="Shared2FCBBoxHead", - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type="DeltaXYWHBBoxCoder", - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[0.1, 0.1, 0.2, 0.2], - ), - reg_class_agnostic=False, - loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type="L1Loss", loss_weight=1.0), - ), - mask_roi_extractor=dict( - type="SingleRoIExtractor", - roi_layer=dict(type="RoIAlign", output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32], - ), - mask_head=dict( - type="FCNMaskHead", - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict(type="CrossEntropyLoss", use_mask=True, loss_weight=1.0), - ), - ), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False, - ), - allowed_border=-1, - pos_weight=-1, - debug=False, - ), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - assigner=dict( - type="MaxIoUAssigner", - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1, - ), - sampler=dict( - type="RandomSampler", - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True, - ), - mask_size=28, - pos_weight=-1, - debug=False, - ), - ), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type="nms", iou_threshold=0.7), - min_bbox_size=0, - ), - rcnn=dict( - score_thr=0.05, - nms=dict(type="nms", iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5, - ), - ), - ), - pixel_mean=[123.675, 116.280, 103.530], - pixel_std=[58.395, 57.120, 57.375], -) - -dataloader.train.mapper.image_format = "RGB" # torchvision pretrained model -train.init_checkpoint = None # pretrained model is loaded inside backbone diff 
--git a/spaces/CikeyQI/meme-api/meme_generator/memes/luoyonghao_say/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/luoyonghao_say/__init__.py deleted file mode 100644 index f09d378a09127843804bb79fbf9e1e3370ac88fb..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/luoyonghao_say/__init__.py +++ /dev/null @@ -1,42 +0,0 @@ -from pathlib import Path -from typing import List - -from PIL import ImageFilter -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.exception import TextOverLength - -img_dir = Path(__file__).parent / "images" - - -def luoyonghao_say(images, texts: List[str], args): - text = texts[0] - frame = BuildImage.open(img_dir / "0.jpg") - text_frame = BuildImage.new("RGBA", (365, 120)) - try: - text_frame.draw_text( - (40, 10, 325, 110), - text, - allow_wrap=True, - max_fontsize=50, - min_fontsize=10, - valign="top", - ) - except ValueError: - raise TextOverLength(text) - text_frame = text_frame.perspective( - ((52, 10), (391, 0), (364, 110), (0, 120)) - ).filter(ImageFilter.GaussianBlur(radius=0.8)) - frame.paste(text_frame, (48, 246), alpha=True) - return frame.save_jpg() - - -add_meme( - "luoyonghao_say", - luoyonghao_say, - min_texts=1, - max_texts=1, - default_texts=["又不是不能用"], - keywords=["罗永浩说"], -) diff --git a/spaces/Cropinky/gpt2-rap-songs/app.py b/spaces/Cropinky/gpt2-rap-songs/app.py deleted file mode 100644 index bb86a0282d724d78553df52a90862718db8e3ff7..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/gpt2-rap-songs/app.py +++ /dev/null @@ -1,49 +0,0 @@ -from os import CLD_CONTINUED -import streamlit as st -from transformers import AutoTokenizer, AutoModelForCausalLM -from transformers import pipeline - -@st.cache(allow_output_mutation=True) -def load_model(): - model_ckpt = "flax-community/gpt2-rap-lyric-generator" - tokenizer = AutoTokenizer.from_pretrained(model_ckpt,from_flax=True) - model = AutoModelForCausalLM.from_pretrained(model_ckpt,from_flax=True) - return tokenizer, model - -@st.cache() -def load_rappers(): - text_file = open("rappers.txt") - rappers = text_file.readlines() - rappers = [name[:-1] for name in rappers] - rappers.sort() - return rappers - - -title = st.title("Loading model") -tokenizer, model = load_model() -text_generation = pipeline("text-generation", model=model, tokenizer=tokenizer) -title.title("Rap lyrics generator") -#artist = st.text_input("Enter the artist", "Wu-Tang Clan") -list_of_rappers = load_rappers() -artist = st.selectbox("Choose your rapper", tuple(list_of_rappers), index = len(list_of_rappers)-1) -song_name = st.text_input("Enter the desired song name", "Sadboys") - - - -if st.button("Generate lyrics", help="The lyrics generation can last up to 2 minutres"): - st.title(f"{artist}: {song_name}") - prefix_text = f"{song_name} [Verse 1:{artist}]" - generated_song = text_generation(prefix_text, max_length=750, do_sample=True)[0] - for count, line in enumerate(generated_song['generated_text'].split("\n")): - if"" in line: - break - if count == 0: - st.markdown(f"**{line[line.find('['):]}**") - continue - if "" in line: - st.write(line[5:]) - continue - if line.startswith("["): - st.markdown(f"**{line}**") - continue - st.write(line) \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_next_gen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_next_gen.py deleted file mode 100644 index 
8f7c0b9a46b7a0ee008f94b8054baf5807df043a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_next_gen.py +++ /dev/null @@ -1,232 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -These are keyword-only APIs that call `attr.s` and `attr.ib` with different -default values. -""" - - -from functools import partial - -from . import setters -from ._funcs import asdict as _asdict -from ._funcs import astuple as _astuple -from ._make import ( - NOTHING, - _frozen_setattrs, - _ng_default_on_setattr, - attrib, - attrs, -) -from .exceptions import UnannotatedAttributeError - - -def define( - maybe_cls=None, - *, - these=None, - repr=None, - unsafe_hash=None, - hash=None, - init=None, - slots=True, - frozen=False, - weakref_slot=True, - str=False, - auto_attribs=None, - kw_only=False, - cache_hash=False, - auto_exc=True, - eq=None, - order=False, - auto_detect=True, - getstate_setstate=None, - on_setattr=None, - field_transformer=None, - match_args=True, -): - r""" - Define an *attrs* class. - - Differences to the classic `attr.s` that it uses underneath: - - - Automatically detect whether or not *auto_attribs* should be `True` (c.f. - *auto_attribs* parameter). - - If *frozen* is `False`, run converters and validators when setting an - attribute by default. - - *slots=True* - - .. caution:: - - Usually this has only upsides and few visible effects in everyday - programming. But it *can* lead to some suprising behaviors, so please - make sure to read :term:`slotted classes`. - - *auto_exc=True* - - *auto_detect=True* - - *order=False* - - Some options that were only relevant on Python 2 or were kept around for - backwards-compatibility have been removed. - - Please note that these are all defaults and you can change them as you - wish. - - :param Optional[bool] auto_attribs: If set to `True` or `False`, it behaves - exactly like `attr.s`. If left `None`, `attr.s` will try to guess: - - 1. If any attributes are annotated and no unannotated `attrs.fields`\ s - are found, it assumes *auto_attribs=True*. - 2. Otherwise it assumes *auto_attribs=False* and tries to collect - `attrs.fields`\ s. - - For now, please refer to `attr.s` for the rest of the parameters. - - .. versionadded:: 20.1.0 - .. versionchanged:: 21.3.0 Converters are also run ``on_setattr``. - .. versionadded:: 22.2.0 - *unsafe_hash* as an alias for *hash* (for :pep:`681` compliance). - """ - - def do_it(cls, auto_attribs): - return attrs( - maybe_cls=cls, - these=these, - repr=repr, - hash=hash, - unsafe_hash=unsafe_hash, - init=init, - slots=slots, - frozen=frozen, - weakref_slot=weakref_slot, - str=str, - auto_attribs=auto_attribs, - kw_only=kw_only, - cache_hash=cache_hash, - auto_exc=auto_exc, - eq=eq, - order=order, - auto_detect=auto_detect, - collect_by_mro=True, - getstate_setstate=getstate_setstate, - on_setattr=on_setattr, - field_transformer=field_transformer, - match_args=match_args, - ) - - def wrap(cls): - """ - Making this a wrapper ensures this code runs during class creation. - - We also ensure that frozen-ness of classes is inherited. - """ - nonlocal frozen, on_setattr - - had_on_setattr = on_setattr not in (None, setters.NO_OP) - - # By default, mutable classes convert & validate on setattr. - if frozen is False and on_setattr is None: - on_setattr = _ng_default_on_setattr - - # However, if we subclass a frozen class, we inherit the immutability - # and disable on_setattr. 
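# --- Editor's illustrative sketch (not part of the original attrs source). ---
# The inherited-frozen behaviour checked below means, for example:
#
#     @frozen
#     class Base:
#         x: int = 0
#
#     @define             # Child stays frozen because Base is frozen,
#     class Child(Base):  # so on_setattr must not be customised here.
#         y: int = 1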
- for base_cls in cls.__bases__: - if base_cls.__setattr__ is _frozen_setattrs: - if had_on_setattr: - raise ValueError( - "Frozen classes can't use on_setattr " - "(frozen-ness was inherited)." - ) - - on_setattr = setters.NO_OP - break - - if auto_attribs is not None: - return do_it(cls, auto_attribs) - - try: - return do_it(cls, True) - except UnannotatedAttributeError: - return do_it(cls, False) - - # maybe_cls's type depends on the usage of the decorator. It's a class - # if it's used as `@attrs` but ``None`` if used as `@attrs()`. - if maybe_cls is None: - return wrap - else: - return wrap(maybe_cls) - - -mutable = define -frozen = partial(define, frozen=True, on_setattr=None) - - -def field( - *, - default=NOTHING, - validator=None, - repr=True, - hash=None, - init=True, - metadata=None, - type=None, - converter=None, - factory=None, - kw_only=False, - eq=None, - order=None, - on_setattr=None, - alias=None, -): - """ - Identical to `attr.ib`, except keyword-only and with some arguments - removed. - - .. versionadded:: 23.1.0 - The *type* parameter has been re-added; mostly for - {func}`attrs.make_class`. Please note that type checkers ignore this - metadata. - .. versionadded:: 20.1.0 - """ - return attrib( - default=default, - validator=validator, - repr=repr, - hash=hash, - init=init, - metadata=metadata, - type=type, - converter=converter, - factory=factory, - kw_only=kw_only, - eq=eq, - order=order, - on_setattr=on_setattr, - alias=alias, - ) - - -def asdict(inst, *, recurse=True, filter=None, value_serializer=None): - """ - Same as `attr.asdict`, except that collections types are always retained - and dict is always used as *dict_factory*. - - .. versionadded:: 21.3.0 - """ - return _asdict( - inst=inst, - recurse=recurse, - filter=filter, - value_serializer=value_serializer, - retain_collection_types=True, - ) - - -def astuple(inst, *, recurse=True, filter=None): - """ - Same as `attr.astuple`, except that collections types are always retained - and `tuple` is always used as the *tuple_factory*. - - .. 
versionadded:: 21.3.0 - """ - return _astuple( - inst=inst, recurse=recurse, filter=filter, retain_collection_types=True - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/templates/modelcard_template.md b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/templates/modelcard_template.md deleted file mode 100644 index ec2d18d427c9fc96eb5c8b89103632620ed4a0b6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/templates/modelcard_template.md +++ /dev/null @@ -1,202 +0,0 @@ ---- -# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 -# Doc / guide: https://huggingface.co/docs/hub/model-cards -{{ card_data }} ---- - -# Model Card for {{ model_id | default("Model ID", true) }} - - - -{{ model_summary | default("", true) }} - -## Model Details - -### Model Description - - - -{{ model_description | default("", true) }} - -- **Developed by:** {{ developers | default("[More Information Needed]", true)}} -- **Shared by [optional]:** {{ shared_by | default("[More Information Needed]", true)}} -- **Model type:** {{ model_type | default("[More Information Needed]", true)}} -- **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}} -- **License:** {{ license | default("[More Information Needed]", true)}} -- **Finetuned from model [optional]:** {{ finetuned_from | default("[More Information Needed]", true)}} - -### Model Sources [optional] - - - -- **Repository:** {{ repo | default("[More Information Needed]", true)}} -- **Paper [optional]:** {{ paper | default("[More Information Needed]", true)}} -- **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}} - -## Uses - - - -### Direct Use - - - -{{ direct_use | default("[More Information Needed]", true)}} - -### Downstream Use [optional] - - - -{{ downstream_use | default("[More Information Needed]", true)}} - -### Out-of-Scope Use - - - -{{ out_of_scope_use | default("[More Information Needed]", true)}} - -## Bias, Risks, and Limitations - - - -{{ bias_risks_limitations | default("[More Information Needed]", true)}} - -### Recommendations - - - -{{ bias_recommendations | default("Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", true)}} - -## How to Get Started with the Model - -Use the code below to get started with the model. 
- -{{ get_started_code | default("[More Information Needed]", true)}} - -## Training Details - -### Training Data - - - -{{ training_data | default("[More Information Needed]", true)}} - -### Training Procedure - - - -#### Preprocessing [optional] - -{{ preprocessing | default("[More Information Needed]", true)}} - - -#### Training Hyperparameters - -- **Training regime:** {{ training_regime | default("[More Information Needed]", true)}} - -#### Speeds, Sizes, Times [optional] - - - -{{ speeds_sizes_times | default("[More Information Needed]", true)}} - -## Evaluation - - - -### Testing Data, Factors & Metrics - -#### Testing Data - - - -{{ testing_data | default("[More Information Needed]", true)}} - -#### Factors - - - -{{ testing_factors | default("[More Information Needed]", true)}} - -#### Metrics - - - -{{ testing_metrics | default("[More Information Needed]", true)}} - -### Results - -{{ results | default("[More Information Needed]", true)}} - -#### Summary - -{{ results_summary | default("", true) }} - -## Model Examination [optional] - - - -{{ model_examination | default("[More Information Needed]", true)}} - -## Environmental Impact - - - -Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - -- **Hardware Type:** {{ hardware | default("[More Information Needed]", true)}} -- **Hours used:** {{ hours_used | default("[More Information Needed]", true)}} -- **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}} -- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}} -- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}} - -## Technical Specifications [optional] - -### Model Architecture and Objective - -{{ model_specs | default("[More Information Needed]", true)}} - -### Compute Infrastructure - -{{ compute_infrastructure | default("[More Information Needed]", true)}} - -#### Hardware - -{{ hardware | default("[More Information Needed]", true)}} - -#### Software - -{{ software | default("[More Information Needed]", true)}} - -## Citation [optional] - - - -**BibTeX:** - -{{ citation_bibtex | default("[More Information Needed]", true)}} - -**APA:** - -{{ citation_apa | default("[More Information Needed]", true)}} - -## Glossary [optional] - - - -{{ glossary | default("[More Information Needed]", true)}} - -## More Information [optional] - -{{ more_information | default("[More Information Needed]", true)}} - -## Model Card Authors [optional] - -{{ model_card_authors | default("[More Information Needed]", true)}} - -## Model Card Contact - -{{ model_card_contact | default("[More Information Needed]", true)}} - - - diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/__init__.py b/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/__init__.py deleted file mode 100644 index 9e906c1567ce12fe800b4d651f1a1ef9f9d0afe0..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from .vgg import * -from .resnet_vd import * -from .mobilenet_v2 import * -from .hrnet import * -from .dim import DIM -from .loss import MRSD -from .modnet import MODNet diff --git a/spaces/Detomo/aisatsu-api/main.py b/spaces/Detomo/aisatsu-api/main.py deleted file mode 100644 index bcbaf24d321e1af38918c33dddca16554826511c..0000000000000000000000000000000000000000 --- a/spaces/Detomo/aisatsu-api/main.py +++ /dev/null @@ -1,202 +0,0 @@ -from ultralyticsplus import YOLO -from typing import Optional, Union, Annotated - -from scipy.spatial import distance as dist -import time -from fastapi import FastAPI, File, UploadFile, Form -from fastapi.responses import StreamingResponse -from fastapi.middleware.gzip import GZipMiddleware -from io import BytesIO -from utils import tts, stt, read_image_file, pil_to_base64, base64_to_pil, get_hist, ffmpeg_read -import zipfile -import soundfile as sf -import openai -import os -import random - -# Config for camera picture -model = YOLO('ultralyticsplus/yolov8s') -# model = YOLO('kadirnar/yolov8n-v8.0') -CLASS = model.model.names -ZIP = False -# bot_voice_time = "おはようございます" -bot_voice_time = "こんにちは" -default_bot_voice_list = [f"{bot_voice_time}、アイティコンサルティングとシステム開発を支援します。よろしくお願いします。", - f"{bot_voice_time}、デトモです。システム開発全般を支援します。", - f"{bot_voice_time}、デトモです。オフショア開発全般を支援します。", - f"{bot_voice_time}、私はアイサロボです。システム開発全般を支援します。", - f"{bot_voice_time}、エッジコンピューティングソリューションを提供します。"] -area_threshold = 0 -diff_value_threshold = 0 - -# Config for human input -prompt_template = "私はあなたに、Detomo社が作ったロボットのように振る舞ってほしいです。デトモは高度なデジタル化社会を支えます。"\ - "ビジネスの課題解決策を提案するコンサ ルティング・サービスと、課題解決を実現す るシステムの開発サービス、また、企業内 の情報システム部門の業務の代行サー ビスにも対応しています。"\ - "デトモはITコンサルティング・システム開発を得意とし、お客様の課題解決をお手伝いいたします。"\ - "あなたの名前はアイサロボです。"\ - "あなたのミッションは、子供たちが他の子供たちに挨拶する自信を持ち、幸せになることを助けることです。"\ - "質問には簡単な方法でしか答えないようにし、明示的に要求されない限り、追加情報を提供しないでください。" -system_prompt = [{"role": "system", "content": prompt_template}] -openai.api_key = os.environ["OPENAI_API_KEY"] - -app = FastAPI() -app.add_middleware(GZipMiddleware, minimum_size=1000) - - -@app.get("/") -def read_root(): - return {"Message": "Application startup complete"} - - -@app.get("/client_settings/") -def client_settings_api(): - return {"camera_picture_period": 5} - - -@app.post("/camera_picture/") -async def camera_picture_api( - file: UploadFile = File(...), - last_seen: Optional[Union[str, UploadFile]] = Form(None), - return_voice: Annotated[bool, Form()] = True, -): - # parameters - total_time = time.time() - most_close = 0 - out_img = None - diff_value = 0.5 - default_bot_voice = random.choice(default_bot_voice_list) - - # read image and predict - image = read_image_file(await file.read()) - results = model.predict(image, show=False)[0] - masks, boxes = results.masks, results.boxes - area_image = image.width * image.height - - # select and crop face image - if boxes is not None: - for xyxy, conf, cls in zip(boxes.xyxy, boxes.conf, boxes.cls): - if int(cls) != 0: - continue - box = xyxy.tolist() - area_rate = (box[2] - box[0]) * (box[3] - box[1]) / area_image - if area_rate >= most_close: - out_img = image.crop(tuple(box)).resize((64, 64)) - 
most_close = area_rate - - # check detect people or not - if out_img is None: - return { - "status": "No face detected", - "text": None, - "voice": None, - "image": None - } - else: - if ZIP: - image_bot_path = pil_to_base64(out_img, encode=False) - else: - image_bot_path = pil_to_base64(out_img, encode=True) - - # check with previous image if have - if last_seen is not None: - if type(last_seen) == str: - last_seen = base64_to_pil(last_seen) - else: - last_seen = read_image_file(await last_seen.read()) - diff_value = dist.euclidean(get_hist(out_img), get_hist(last_seen)) - print(f"Distance: {most_close}. Different value: {diff_value}") - - # return results - if most_close >= area_threshold and diff_value >= diff_value_threshold: - if ZIP: - voice_bot_path = tts(default_bot_voice, language="ja", encode=False) - io = BytesIO() - zip_filename = "final_archive.zip" - with zipfile.ZipFile(io, mode='w', compression=zipfile.ZIP_DEFLATED) as zf: - for file_path in [voice_bot_path, image_bot_path]: - zf.write(file_path) - zf.close() - print("Total time", time.time() - total_time) - return StreamingResponse( - iter([io.getvalue()]), - media_type="application/x-zip-compressed", - headers={"Content-Disposition": f"attachment;filename=%s" % zip_filename} - ) - else: - if return_voice: - print("Total time", time.time() - total_time) - return { - "status": "New people", - "text": default_bot_voice, - "voice": tts(default_bot_voice, language="ja", encode=True), - "image": image_bot_path - } - else: - print("Total time", time.time() - total_time) - return { - "status": "New people", - "text": default_bot_voice, - "voice": None, - "image": image_bot_path - } - elif most_close < area_threshold: - print("Total time", time.time() - total_time) - return { - "status": "People far from camera", - "text": None, - "voice": None, - "image": image_bot_path, - } - else: - print("Total time", time.time() - total_time) - return { - "status": "Old people", - "text": None, - "voice": None, - "image": image_bot_path, - } - - -@app.post("/human_input/") -async def human_input_api( - voice_input: bytes = File(None), - text_input: str = Form(None), - temperature: Annotated[float, Form()] = 0.7, - max_tokens: Annotated[int, Form()] = 1000, - return_voice: Annotated[bool, Form()] = False, -): - if text_input: - text = text_input - elif text_input is None and voice_input is not None: - upload_audio = ffmpeg_read(voice_input, sampling_rate=24000) - sf.write('temp.wav', upload_audio, 24000, subtype='PCM_16') - text = stt('temp.wav') - print(text) - else: - if return_voice: - return { - "human_text": None, - "robot_text": None, - "robot_voice": None - } - else: - return { - "human_text": None, - "robot_text": None, - } - prompt_msg = {"role": "user", "content": text} - messages = system_prompt + [prompt_msg] - completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages, temperature=temperature, - max_tokens=max_tokens) - print(completion['usage']['total_tokens']) - if return_voice: - return { - "human_text": text, - "robot_text": completion.choices[0].message.content, - "robot_voice": tts(completion.choices[0].message.content, language="ja", encode=True) - } - else: - return { - "human_text": text, - "robot_text": completion.choices[0].message.content, - } \ No newline at end of file diff --git a/spaces/EleutherAI/VQGAN_CLIP/CLIP/README.md b/spaces/EleutherAI/VQGAN_CLIP/CLIP/README.md deleted file mode 100644 index 5d2d20cd9e1cafcdf8bd8dfd83a0a9c47a884a39..0000000000000000000000000000000000000000 --- 
a/spaces/EleutherAI/VQGAN_CLIP/CLIP/README.md +++ /dev/null @@ -1,193 +0,0 @@ -# CLIP - -[[Blog]](https://openai.com/blog/clip/) [[Paper]](https://arxiv.org/abs/2103.00020) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/notebooks/Interacting_with_CLIP.ipynb) - -CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision. - - - -## Approach - -![CLIP](CLIP.png) - - - -## Usage - -First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick: - -```bash -$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0 -$ pip install ftfy regex tqdm -$ pip install git+https://github.com/openai/CLIP.git -``` - -Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU. - -```python -import torch -import clip -from PIL import Image - -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device) - -image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device) -text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device) - -with torch.no_grad(): - image_features = model.encode_image(image) - text_features = model.encode_text(text) - - logits_per_image, logits_per_text = model(image, text) - probs = logits_per_image.softmax(dim=-1).cpu().numpy() - -print("Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]] -``` - - -## API - -The CLIP module `clip` provides the following methods: - -#### `clip.available_models()` - -Returns the names of the available CLIP models. - -#### `clip.load(name, device=..., jit=False)` - -Returns the model and the TorchVision transform needed by the model, specified by the model name returned by `clip.available_models()`. It will download the model as necessary. The `name` argument can also be a path to a local checkpoint. - -The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU. When `jit` is `False`, a non-JIT version of the model will be loaded. - -#### `clip.tokenize(text: Union[str, List[str]], context_length=77)` - -Returns a LongTensor containing tokenized sequences of given text input(s). This can be used as the input to the model - ---- - -The model returned by `clip.load()` supports the following methods: - -#### `model.encode_image(image: Tensor)` - -Given a batch of images, returns the image features encoded by the vision portion of the CLIP model. - -#### `model.encode_text(text: Tensor)` - -Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model. - -#### `model(image: Tensor, text: Tensor)` - -Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. 
The values are cosine similarities between the corresponding image and text features, times 100. - - - -## More Examples - -### Zero-Shot Prediction - -The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and predicts the most likely labels among the 100 textual labels from the dataset. - -```python -import os -import clip -import torch -from torchvision.datasets import CIFAR100 - -# Load the model -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load('ViT-B/32', device) - -# Download the dataset -cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False) - -# Prepare the inputs -image, class_id = cifar100[3637] -image_input = preprocess(image).unsqueeze(0).to(device) -text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device) - -# Calculate features -with torch.no_grad(): - image_features = model.encode_image(image_input) - text_features = model.encode_text(text_inputs) - -# Pick the top 5 most similar labels for the image -image_features /= image_features.norm(dim=-1, keepdim=True) -text_features /= text_features.norm(dim=-1, keepdim=True) -similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1) -values, indices = similarity[0].topk(5) - -# Print the result -print("\nTop predictions:\n") -for value, index in zip(values, indices): - print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%") -``` - -The output will look like the following (the exact numbers may be slightly different depending on the compute device): - -``` -Top predictions: - - snake: 65.31% - turtle: 12.29% - sweet_pepper: 3.83% - lizard: 1.88% - crocodile: 1.75% -``` - -Note that this example uses the `encode_image()` and `encode_text()` methods that return the encoded features of given inputs. - - -### Linear-probe evaluation - -The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features. - -```python -import os -import clip -import torch - -import numpy as np -from sklearn.linear_model import LogisticRegression -from torch.utils.data import DataLoader -from torchvision.datasets import CIFAR100 -from tqdm import tqdm - -# Load the model -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load('ViT-B/32', device) - -# Load the dataset -root = os.path.expanduser("~/.cache") -train = CIFAR100(root, download=True, train=True, transform=preprocess) -test = CIFAR100(root, download=True, train=False, transform=preprocess) - - -def get_features(dataset): - all_features = [] - all_labels = [] - - with torch.no_grad(): - for images, labels in tqdm(DataLoader(dataset, batch_size=100)): - features = model.encode_image(images.to(device)) - - all_features.append(features) - all_labels.append(labels) - - return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy() - -# Calculate the image features -train_features, train_labels = get_features(train) -test_features, test_labels = get_features(test) - -# Perform logistic regression -classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1) -classifier.fit(train_features, train_labels) - -# Evaluate using the logistic regression classifier -predictions = classifier.predict(test_features) -accuracy = np.mean((test_labels == predictions).astype(np.float)) * 100. 
-print(f"Accuracy = {accuracy:.3f}") -``` - -Note that the `C` value should be determined via a hyperparameter sweep using a validation split. diff --git a/spaces/Endre/SemanticSearch-HU/src/data/dbpedia_dump_wiki_text.py b/spaces/Endre/SemanticSearch-HU/src/data/dbpedia_dump_wiki_text.py deleted file mode 100644 index 82ecb7c896bab35920c240ee9d0267e5342f94ca..0000000000000000000000000000000000000000 --- a/spaces/Endre/SemanticSearch-HU/src/data/dbpedia_dump_wiki_text.py +++ /dev/null @@ -1,18 +0,0 @@ -from rdflib import Graph - -# Downloaded from https://databus.dbpedia.org/dbpedia/text/short-abstracts -raw_data_path = 'data/raw/short-abstracts_lang=hu.ttl' -preprocessed_data_path = 'data/preprocessed/shortened_abstracts_hu_2021_09_01.txt' - -g = Graph() -g.parse(raw_data_path, format='turtle') - -i = 0 -objects = [] -with open(preprocessed_data_path, 'w') as f: - print(len(g)) - for subject, predicate, object in g: - objects.append(object.replace(' +/-','').replace('\n',' ')) - objects.append('\n') - i += 1 - f.writelines(objects) \ No newline at end of file diff --git a/spaces/Enterprisium/Easy_GUI/i18n.py b/spaces/Enterprisium/Easy_GUI/i18n.py deleted file mode 100644 index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000 --- a/spaces/Enterprisium/Easy_GUI/i18n.py +++ /dev/null @@ -1,28 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = locale.getdefaultlocale()[ - 0 - ] # getlocale can't identify the system's language ((None, None)) - if not os.path.exists(f"./i18n/{language}.json"): - language = "en_US" - self.language = language - # print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) - - def print(self): - print("Use Language:", self.language) diff --git a/spaces/EsoCode/text-generation-webui/extensions/character_bias/script.py b/spaces/EsoCode/text-generation-webui/extensions/character_bias/script.py deleted file mode 100644 index ff12f3afdc28be4ead12ffab90bd9fbd783514a2..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/extensions/character_bias/script.py +++ /dev/null @@ -1,83 +0,0 @@ -import os - -import gradio as gr - -# get the current directory of the script -current_dir = os.path.dirname(os.path.abspath(__file__)) - -# check if the bias_options.txt file exists, if not, create it -bias_file = os.path.join(current_dir, "bias_options.txt") -if not os.path.isfile(bias_file): - with open(bias_file, "w") as f: - f.write("*I am so happy*\n*I am so sad*\n*I am so excited*\n*I am so bored*\n*I am so angry*") - -# read bias options from the text file -with open(bias_file, "r") as f: - bias_options = [line.strip() for line in f.readlines()] - -params = { - "activate": True, - "bias string": " *I am so happy*", - "use custom string": False, -} - - -def input_modifier(string): - """ - This function is applied to your text inputs before - they are fed into the model. - """ - return string - - -def output_modifier(string): - """ - This function is applied to the model outputs. - """ - return string - - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. 
- """ - if params['activate']: - if params['use custom string']: - return f'{string} {params["custom string"].strip()} ' - else: - return f'{string} {params["bias string"].strip()} ' - else: - return string - - -def ui(): - # Gradio elements - activate = gr.Checkbox(value=params['activate'], label='Activate character bias') - dropdown_string = gr.Dropdown(choices=bias_options, value=params["bias string"], label='Character bias', info='To edit the options in this dropdown edit the "bias_options.txt" file') - use_custom_string = gr.Checkbox(value=False, label='Use custom bias textbox instead of dropdown') - custom_string = gr.Textbox(value="", placeholder="Enter custom bias string", label="Custom Character Bias", info='To use this textbox activate the checkbox above') - - # Event functions to update the parameters in the backend - def update_bias_string(x): - if x: - params.update({"bias string": x}) - else: - params.update({"bias string": dropdown_string.get()}) - return x - - def update_custom_string(x): - params.update({"custom string": x}) - - dropdown_string.change(update_bias_string, dropdown_string, None) - custom_string.change(update_custom_string, custom_string, None) - activate.change(lambda x: params.update({"activate": x}), activate, None) - use_custom_string.change(lambda x: params.update({"use custom string": x}), use_custom_string, None) - - # Group elements together depending on the selected option - def bias_string_group(): - if use_custom_string.value: - return gr.Group([use_custom_string, custom_string]) - else: - return dropdown_string diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_600e.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_600e.py deleted file mode 100644 index a77dc52004ba597b4ba7f2df13a96e123c4029ab..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_600e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -optimizer = dict(type='Adam', lr=1e-3) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='poly', power=0.9) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=600) -checkpoint_config = dict(interval=100) diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnetpp/dbnetpp_r50dcnv2_fpnc_100k_iter_synthtext.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnetpp/dbnetpp_r50dcnv2_fpnc_100k_iter_synthtext.py deleted file mode 100644 index 5f3835ea998e5195b471671a8685c0032733b0a2..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnetpp/dbnetpp_r50dcnv2_fpnc_100k_iter_synthtext.py +++ /dev/null @@ -1,62 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_sgd_100k_iters.py', - '../../_base_/det_models/dbnetpp_r50dcnv2_fpnc.py', - '../../_base_/det_datasets/synthtext.py', - '../../_base_/det_pipelines/dbnet_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -img_norm_cfg_r50dcnv2 = dict( - mean=[122.67891434, 116.66876762, 104.00698793], - std=[58.395, 57.12, 57.375], - to_rgb=True) -train_pipeline_r50dcnv2 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg_r50dcnv2), - dict( - type='ImgAug', - args=[['Fliplr', 0.5], - 
dict(cls='Affine', rotate=[-10, 10]), ['Resize', [0.5, 3.0]]], - clip_invalid_ploys=False), - dict(type='EastRandomCrop', target_size=(640, 640)), - dict(type='DBNetTargets', shrink_ratio=0.4), - dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=['gt_shrink', 'gt_shrink_mask', 'gt_thr', 'gt_thr_mask'], - visualize=dict(flag=False, boundary_key='gt_shrink')), - dict( - type='Collect', - keys=['img', 'gt_shrink', 'gt_shrink_mask', 'gt_thr', 'gt_thr_mask']) -] - -test_pipeline_4068_1024 = {{_base_.test_pipeline_4068_1024}} - -data = dict( - samples_per_gpu=16, - workers_per_gpu=8, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_r50dcnv2), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_4068_1024), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_4068_1024)) - -evaluation = dict(interval=200000, metric='hmean-iou') # do not evaluate diff --git a/spaces/Ferion/image-matting-app/ppmatting/ml/__init__.py b/spaces/Ferion/image-matting-app/ppmatting/ml/__init__.py deleted file mode 100644 index 612dff101f358f74db3eca601f0b9573ca6d93cb..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/ppmatting/ml/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .methods import CloseFormMatting, KNNMatting, LearningBasedMatting, FastMatting, RandomWalksMatting diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/rmvpe.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/rmvpe.py deleted file mode 100644 index 2a387ebe73c7e1dd8bb7ccad1ea9e0ea89848ece..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/rmvpe.py +++ /dev/null @@ -1,717 +0,0 @@ -import pdb, os - -import numpy as np -import torch -try: - #Fix "Torch not compiled with CUDA enabled" - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from infer.modules.ipex import ipex_init - ipex_init() -except Exception: - pass -import torch.nn as nn -import torch.nn.functional as F -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window - -import logging - -logger = logging.getLogger(__name__) - - -###stft codes from https://github.com/pseeth/torch-stft/blob/master/torch_stft/util.py -def window_sumsquare( - window, - n_frames, - hop_length=200, - win_length=800, - n_fft=800, - dtype=np.float32, - norm=None, -): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. 
- dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = normalize(win_sq, norm=norm) ** 2 - win_sq = pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -class STFT(torch.nn.Module): - def __init__( - self, filter_length=1024, hop_length=512, win_length=None, window="hann" - ): - """ - This module implements an STFT using 1D convolution and 1D transpose convolutions. - This is a bit tricky so there are some cases that probably won't work as working - out the same sizes before and after in all overlap add setups is tough. Right now, - this code should work with hop lengths that are half the filter length (50% overlap - between frames). - - Keyword Arguments: - filter_length {int} -- Length of filters used (default: {1024}) - hop_length {int} -- Hop length of STFT (restrict to 50% overlap between frames) (default: {512}) - win_length {[type]} -- Length of the window function applied to each frame (if not specified, it - equals the filter length). (default: {None}) - window {str} -- Type of window to use (options are bartlett, hann, hamming, blackman, blackmanharris) - (default: {'hann'}) - """ - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length if win_length else filter_length - self.window = window - self.forward_transform = None - self.pad_amount = int(self.filter_length / 2) - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - assert filter_length >= self.win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, self.win_length, fftbins=True) - fft_window = pad_center(fft_window, size=filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - """Take input data (audio) to STFT domain. 
- - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - """ - num_batches = input_data.shape[0] - num_samples = input_data.shape[-1] - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - # print(1234,input_data.shape) - input_data = F.pad( - input_data.unsqueeze(1), - (self.pad_amount, self.pad_amount, 0, 0, 0, 0), - mode="reflect", - ).squeeze(1) - # print(2333,input_data.shape,self.forward_basis.shape,self.hop_length) - # pdb.set_trace() - forward_transform = F.conv1d( - input_data, self.forward_basis, stride=self.hop_length, padding=0 - ) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - # phase = torch.atan2(imag_part.data, real_part.data) - - return magnitude # , phase - - def inverse(self, magnitude, phase): - """Call the inverse STFT (iSTFT), given magnitude and phase tensors produced - by the ```transform``` function. - - Arguments: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - - Returns: - inverse_transform {tensor} -- Reconstructed audio given magnitude and phase. Of - shape (num_batch, num_samples) - """ - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - self.inverse_basis, - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0] - ) - window_sum = torch.from_numpy(window_sum).to(inverse_transform.device) - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[..., self.pad_amount :] - inverse_transform = inverse_transform[..., : self.num_samples] - inverse_transform = inverse_transform.squeeze(1) - - return inverse_transform - - def forward(self, input_data): - """Take input data (audio) to STFT domain and then back to audio. - - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - reconstruction {tensor} -- Reconstructed audio given magnitude and phase. 
Of - shape (num_batch, num_samples) - """ - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -from time import time as ttime - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, 
in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * nn.N_MELS, nn.N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - # print(mel.shape) - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - # print(x.shape) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = 
win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - # "cpu"if(audio.device.type=="privateuseone") else audio.device - audio.device - ) - # fft = torch.stft(#doesn't support pytorch_dml - # # audio.cpu() if(audio.device.type=="privateuseone")else audio, - # audio, - # n_fft=n_fft_new, - # hop_length=hop_length_new, - # win_length=win_length_new, - # window=self.hann_window[keyshift_key], - # center=center, - # return_complex=True, - # ) - # magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - # print(1111111111) - # print(222222222222222,audio.device,self.is_half) - if hasattr(self, "stft") == False: - # print(n_fft_new,hop_length_new,win_length_new,audio.shape) - self.stft = STFT( - filter_length=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window="hann", - ).to(audio.device) - magnitude = self.stft.transform(audio) # phase - # if (audio.device.type == "privateuseone"): - # magnitude=magnitude.to(audio.device) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - # print(log_mel_spec.device.type) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - if "privateuseone" in str(device): - import onnxruntime as ort - - ort_session = ort.InferenceSession( - "%s/rmvpe.onnx" % os.environ["rmvpe_root"], - providers=["DmlExecutionProvider"], - ) - self.model = ort_session - else: - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="constant" - ) - if "privateuseone" in str(self.device): - onnx_input_name = self.model.get_inputs()[0].name - onnx_outputs_names = self.model.get_outputs()[0].name - hidden = self.model.run( - [onnx_outputs_names], - input_feed={onnx_input_name: mel.cpu().numpy()}, - )[0] - else: - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = 
np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - # torch.cuda.synchronize() - t0 = ttime() - mel = self.mel_extractor( - torch.from_numpy(audio).float().to(self.device).unsqueeze(0), center=True - ) - # print(123123123,mel.device.type) - # torch.cuda.synchronize() - t1 = ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - t2 = ttime() - # print(234234,hidden.device.type) - if "privateuseone" not in str(self.device): - hidden = hidden.squeeze(0).cpu().numpy() - else: - hidden = hidden[0] - if self.is_half == True: - hidden = hidden.astype("float32") - - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - t3 = ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def infer_from_audio_with_pitch(self, audio, thred=0.03, f0_min=50, f0_max=1100): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - mel = self.mel_extractor(audio, center=True) - hidden = self.mel2hidden(mel) - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - f0[(f0 < f0_min) | (f0 > f0_max)] = 0 - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -if __name__ == "__main__": - import librosa - import soundfile as sf - - audio, sampling_rate = sf.read(r"C:\Users\liujing04\Desktop\Z\冬之花clip1.wav") - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - audio_bak = audio.copy() - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - model_path = r"D:\BaiduNetdiskDownload\RVC-beta-v2-0727AMD_realtime\rmvpe.pt" - thred = 0.03 # 0.01 - device = "cuda" if torch.cuda.is_available() else "cpu" - rmvpe = RMVPE(model_path, is_half=False, device=device) - t0 = ttime() - f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - t1 = ttime() - logger.info("%s %.2f", f0.shape, t1 - t0) diff --git a/spaces/GV05/stable-diffusion-mingle-prompts/app.py b/spaces/GV05/stable-diffusion-mingle-prompts/app.py deleted file mode 100644 index 825c14dca7976177fcb97f903d9abf04cb3fd7e8..0000000000000000000000000000000000000000 --- a/spaces/GV05/stable-diffusion-mingle-prompts/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import gradio as gr -import torch -from transformers import logging -import random -from PIL import 
Image -from Utils import MingleModel - -logging.set_verbosity_error() - - -def get_concat_h(images): - widths, heights = zip(*(i.size for i in images)) - - total_width = sum(widths) - max_height = max(heights) - - dst = Image.new('RGB', (total_width, max_height)) - x_offset = 0 - for im in images: - dst.paste(im, (x_offset,0)) - x_offset += im.size[0] - return dst - - -mingle_model = MingleModel() - - -def mingle_prompts(first_prompt, second_prompt): - imgs = [] - text_input1 = mingle_model.do_tokenizer(first_prompt) - text_input2 = mingle_model.do_tokenizer(second_prompt) - with torch.no_grad(): - text_embeddings1 = mingle_model.get_text_encoder(text_input1) - text_embeddings2 = mingle_model.get_text_encoder(text_input2) - - rand_generator = random.randint(1, 2048) - # Mix them together - # mix_factors = [0.1, 0.3, 0.5, 0.7, 0.9] - mix_factors = [0.5] - for mix_factor in mix_factors: - mixed_embeddings = (text_embeddings1 * mix_factor + text_embeddings2 * (1 - mix_factor)) - - # Generate! - steps = 20 - guidence_scale = 8.0 - img = mingle_model.generate_with_embs(mixed_embeddings, rand_generator, num_inference_steps=steps, - guidance_scale=guidence_scale) - imgs.append(img) - - return get_concat_h(imgs) - - -with gr.Blocks() as demo: - gr.Markdown( - ''' -
- create a 'chimera' by averaging the embeddings of two different prompts!!
- ''') - gr.Image('batman_venum.png', shape=(1024, 205)) - - first_prompt = gr.Textbox(label="first_prompt") - second_prompt = gr.Textbox(label="second_prompt") - greet_btn = gr.Button("Submit") - gr.Markdown("## Text Examples") - gr.Examples([['batman, dynamic lighting, photorealistic fantasy concept art, trending on art station, stunning visuals, terrifying, creative, cinematic', - 'venom, dynamic lighting, photorealistic fantasy concept art, trending on art station, stunning visuals, terrifying, creative, cinematic'], - ['A mouse', 'A leopard']], [first_prompt, second_prompt]) - - gr.Markdown("# Output Results") - output = gr.Image(shape=(512,512)) - - greet_btn.click(fn=mingle_prompts, inputs=[first_prompt, second_prompt], outputs=[output]) - -demo.launch() - diff --git a/spaces/GXSA/bingo/tailwind.config.js b/spaces/GXSA/bingo/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/GeorgeOrville/bingo/src/components/ui/textarea.tsx b/spaces/GeorgeOrville/bingo/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( -