diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk Concrete Building Structures 2014 Torrents Updates and Patches.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk Concrete Building Structures 2014 Torrents Updates and Patches.md deleted file mode 100644 index 5e449adc2a342865a0fba6bbcba44f5c5136be37..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk Concrete Building Structures 2014 Torrents Updates and Patches.md +++ /dev/null @@ -1,80 +0,0 @@ -
-

Visual Studio 2012 Professional Product Key Crack: How to Activate Your IDE for Free

-

If you are a developer who uses Microsoft's Visual Studio as your integrated development environment (IDE), you might be interested in getting Visual Studio 2012 Professional for free. Visual Studio 2012 Professional is one of the most popular versions of Visual Studio that offers many features and tools for creating, debugging, testing, and deploying various types of applications.

-

However, to use Visual Studio 2012 Professional, you need to have a valid product key that you can purchase from Microsoft or its authorized resellers. A product key is a unique code that activates your copy of Visual Studio and verifies that you have a legitimate license to use it.

-

Visual Studio 2012 Professional Product Key Crack


Download ————— https://byltly.com/2uKwaq



-

But what if you don't want to pay for a product key? Is there a way to get Visual Studio 2012 Professional for free? The answer is yes, but it comes with some risks and challenges. In this article, we will show you how to find and use a product key crack for Visual Studio 2012 Professional that will allow you to activate your IDE without paying anything.

-

A product key crack is a method of bypassing the activation process of Visual Studio by using a fake or stolen product key that tricks the software into thinking that you have a valid license. There are many websites and tools that claim to provide product key cracks for various versions of Visual Studio, including Visual Studio 2012 Professional.

-

The benefits of using a product key crack are obvious: you can save money and enjoy all the features and functionalities of Visual Studio without any limitations or restrictions. You can also avoid the hassle of registering your copy of Visual Studio with Microsoft or providing any personal information.

-

How to activate Visual Studio 2012 Professional without product key
-Visual Studio 2012 Professional license key generator download
-Free Visual Studio 2012 Professional serial number crack
-Visual Studio 2012 Professional activation code hack
-Visual Studio 2012 Professional full version crack patch
-Visual Studio 2012 Professional registration key crack free download
-Visual Studio 2012 Professional crack keygen torrent
-Visual Studio 2012 Professional product key finder software
-Visual Studio 2012 Professional license key crack online
-Visual Studio 2012 Professional serial key crack windows 10
-Visual Studio 2012 Professional activation key crack reddit
-Visual Studio 2012 Professional crack patch download
-Visual Studio 2012 Professional product key generator online
-Visual Studio 2012 Professional license key crack 2023
-Visual Studio 2012 Professional serial number crack mac
-Visual Studio 2012 Professional activation code crack youtube
-Visual Studio 2012 Professional full version crack free download
-Visual Studio 2012 Professional registration key generator online
-Visual Studio 2012 Professional crack keygen download
-Visual Studio 2012 Professional product key finder tool
-Visual Studio 2012 Professional license key hack online
-Visual Studio 2012 Professional serial key generator download
-Visual Studio 2012 Professional activation key finder software
-Visual Studio 2012 Professional crack patch online
-Visual Studio 2012 Professional product key generator free download
-Visual Studio 2012 Professional license key finder tool
-Visual Studio 2012 Professional serial number hack online
-Visual Studio 2012 Professional activation code generator download
-Visual Studio 2012 Professional full version crack online
-Visual Studio 2012 Professional registration key finder software
-Visual Studio 2012 Professional crack keygen online
-Visual Studio 2012 Professional product key hack reddit
-Visual Studio 2012 Professional license key generator online free
-Visual Studio 2012 Professional serial number generator free download
-Visual Studio 2012 Professional activation key hack youtube
-Visual Studio 2012 Professional crack patch free download
-Visual Studio 2012 Professional product key finder online free
-Visual Studio 2012 Professional license key hack windows 10
-Visual Studio 2012 Professional serial key finder tool
-Visual Studio 2012 Professional activation code finder software
-Visual Studio 2012 Professional full version crack reddit
-Visual Studio 2012 Professional registration key hack online
-Visual Studio 2012 Professional crack keygen free download
-Visual Studio 2012 Professional product key generator reddit
-Visual Studio 2012 Professional license key finder online free download
-Visual Studio 2012 Professional serial number hack youtube
-Visual Studio 2012 Professional activation code hack reddit
-Visual Studio 2012 Professional full version crack youtube
-Visual Studio 2012 Professional registration key generator free download

-

However, using a product key crack also comes with some risks and challenges. First of all, using a product key crack is illegal and unethical, as it violates the terms and conditions of Microsoft's software license agreement. You could face legal consequences or penalties if Microsoft detects that you are using an unauthorized copy of Visual Studio.

-

Secondly, using a product key crack is unsafe and unreliable, as it could expose your computer to malware, viruses, or spyware that could harm your system or steal your data. You could also encounter errors, bugs, or compatibility issues that could affect your development work or performance. Moreover, you could lose access to updates, patches, or support from Microsoft or its partners that could improve or fix your copy of Visual Studio.

-

Therefore, before you decide to use a product key crack for Visual Studio 2012 Professional, you should weigh the pros and cons carefully and consider the alternatives. If you still want to proceed with using a product key crack, here are some steps that you need to follow.

-

How to Find a Valid Product Key for Visual Studio 2012 Professional

-

The first step in using a product key crack for Visual Studio 2012 Professional is finding a valid product key that will work with your copy of Visual Studio. There are two main options that you can try:

-

Option 1: Use a product key generator

-

A product key generator is a software tool that creates random or algorithm-based product keys for various software products, including Visual Studio. A product key generator works by mimicking the format and structure of an authentic product key and generating multiple combinations of letters and numbers that could potentially activate your copy of Visual Studio.

-

There are many websites and tools that claim to offer product key generators for Visual Studio 2012 Professional, such as Product-Keys/Visual Studio, AppNee Freeware Group, or All Product Keys. However, not all of them are reliable or trustworthy, as some of them could contain malware, viruses, or spyware that could harm your computer or steal your data.

-

Therefore, before you download or use any product key generator for Visual Studio 2012 Professional, you should do some research and check the reputation and reviews of the website or tool that provides it. You should also scan the file or tool with an antivirus program before opening or running it on your computer.

-

To use a product key generator for Visual Studio 2012 Professional, you need to follow these steps:

-
    -
  1. Download the product key generator from a reputable website or tool.
  2. -
  3. Extract the file or run the tool on your computer.
  4. -
  5. Select Visual Studio 2012 Professional from the list of software products.
  6. -
  7. Click on Generate or Create button to generate multiple product keys.
  8. -
  9. Copy one of the generated product keys and save it somewhere safe.
  10. -
-

Option 2: Use a product key list

-

A product key list is a collection of pre-existing or leaked product keys for various software products, including Visual Studio. A product key list works by providing you with an actual or authentic product key that someone else has already used or obtained from Microsoft or its authorized resellers.

-

There are many websites and tools that claim to offer product key lists for Visual Studio 2012 Professional, such as Product-Keys/Visual Studio, AppNee Freeware Group, or All Product Keys. However, not all of them are reliable or trustworthy, as some of them could contain outdated, invalid, or duplicate product keys that could not activate your copy of Visual Studio.

-

Therefore, before you use any product key list for Visual Studio 2012 Professional, you should do some research and check the reputation and reviews of the website or tool that provides it. You should also verify that the product keys are updated, valid, and unique before using them on your computer. 0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bonetown V1.1.1 Crack WORK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bonetown V1.1.1 Crack WORK.md deleted file mode 100644 index 992c12ab6cfc4ee569db7ecfe3c0c10ad462d561..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bonetown V1.1.1 Crack WORK.md +++ /dev/null @@ -1,16 +0,0 @@ -

bonetown v1.1.1 crack


Download ··· https://imgfil.com/2uxX09



- -On this game portal you can download the game BoneTown for free torrent. Full version of the game BoneTown was . At the moment, the last version: 1.1.1, rating: rate. Torrent Download Free " Torrent Download Games " Bone Town / Bones of Town (2010) PC. -Year: 2010 Genre: Strategy, 3D Developer: GSC Game World Platform: PC.... -How to download BoneTown game for free. -Download the game for free. -BoneTown download for free. -BoneTown. -Download the game BoneTown for free. -BoneTown.torrent. -BoneTown - download the game for free on your computer, full version, without registration and sms. -BoneTown free download. -BoneTown.torrent. 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar.md deleted file mode 100644 index 9bb3080ff49016e0c6963994ecf2ae0815ea9566..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar.md +++ /dev/null @@ -1,13 +0,0 @@ -

En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar


Download Filehttps://imgfil.com/2uxXH1



-
-Download. office business; Office Enterprise 2007. En Office Enterprise 2007 DVD Vl X12 19574.iso.rar. Download. 704, October 21, 2017, 560.65 MB, OFFICE 2007. Office 2010 download - Office 2010 Standard - free download Russian version. -Office 2010 - free download. -Download Office 2010 free Russian version without registration for Windows 7 / 8, 10, XP 64 and 32 bit Office 2010 Download Torrent. -Office 2010 is a software package for working with various types of documents. -Download office 2010 for free. -Free and without registration. -Daily. -Download Office 2010 for free and without. 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FULL IStripper V1.2.158 NSFW.md b/spaces/1gistliPinn/ChatGPT4/Examples/FULL IStripper V1.2.158 NSFW.md deleted file mode 100644 index d1176f14bb806e85ccbe8f0adff538a87c22e2a6..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/FULL IStripper V1.2.158 NSFW.md +++ /dev/null @@ -1,11 +0,0 @@ -

FULL iStripper V1.2.158 NSFW


Download Zip · https://imgfil.com/2uy0NO



-
-FULL IStripper V1.2.158 NSFW !FULL!. 5 point. FULL iStripper V1.2.158 NSFW. DOWNLOAD: fd16d57201. Related links:. FULL IStripper V1.2.158 NSFW !FULL!. 5 point. FULL iStripper V1.2.158 NSFW. DOWNLOAD. -FULL V1.2.157 NSFW!FULL!. 5 point. FULL iStripper V1.2.157 NSFW. DOWNLOAD. fd16d57201. -FULL IStripper V1.2.157 NSFW !FULL!. 5 point. FULL iStripper V1.2.157 NSFW. DOWNLOAD. -NSFW iStripper v1.2.155 V1.2.157. -DOWNLOAD iStripper v1.2.157. -I 8a78ff9644
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Angry Birds Rio 2 The Latest Episode of the Popular Franchise - Available for Windows 10.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Angry Birds Rio 2 The Latest Episode of the Popular Franchise - Available for Windows 10.md deleted file mode 100644 index ca422c435c8bda54e8bb3baf4163cb6a81d28bb9..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Angry Birds Rio 2 The Latest Episode of the Popular Franchise - Available for Windows 10.md +++ /dev/null @@ -1,84 +0,0 @@ - -

Angry Birds Rio 2 Game Free Download for Windows 10

-

    If you are a fan of the Angry Birds franchise, you might have heard of Angry Birds Rio 2, the second puzzle game based on the hit movies Rio and Rio 2. In this game, you fling birds at the piggies' towers and save the birds' friends Blu and Jewel, two rare macaws, from the evil smugglers. The game is full of fun, challenge, and excitement, and it is completely free to download for Windows 10. In this article, we will tell you everything you need to know about the Angry Birds Rio 2 game, including its features, download process, and tips and tricks.
    

-

Features

-

Angry Birds Rio 2 game has many features that make it different from the previous Angry Birds games. Here are some of them:

-

angry birds rio 2 game free download for windows 10


Download File ::: https://urlin.us/2uSXIl



- -

Download

-

To download Angry Birds Rio 2 game for free for Windows 10, you need to follow these steps:

-
    -
  1. Go to [FileHippo](^3^), a trusted website that offers free software downloads.
  2. -
  3. Click on the green "Download Latest Version" button on the top right corner of the page.
  4. -
  5. Wait for the download to finish and then open the file.
  6. -
  7. Follow the instructions on the screen to install the game on your PC.
  8. -
  9. Enjoy playing Angry Birds Rio 2 game!
  10. -
-

The system requirements for Angry Birds Rio 2 game are:

- - - -
Operating systemProcessorMemoryGraphics
Windows XP or later1 GHz or faster512 MB or moreOpenGL 1.3 compatible or better
-

Tips and Tricks

-

To improve your skills and score in Angry Birds Rio 2 game, here are some tips and tricks to know:

- -

Conclusion

-

Angry Birds Rio 2 game is a great puzzle game that will keep you entertained for hours. It has many features that make it different from the previous Angry Birds games, such as multi-stage levels, power-ups, clans, arena, and silly hats. You can download it for free for Windows 10 from FileHippo, a trusted website that offers free software downloads. You can also improve your skills and score in the game by following some tips and tricks, such as choosing your bird wisely, aiming for the weak spots, using the power-ups wisely, watching the videos, and having fun. We hope you enjoyed this article and learned something new about Angry Birds Rio 2 game. Now go ahead and download it and start flinging those birds at those piggies!

-

FAQs

-

Here are some frequently asked questions about Angry Birds Rio 2 game:

-

* angry birds rio 2 pc game download full version
-* how to install angry birds rio 2 on windows 10
-* angry birds rio 2 game online play free
-* angry birds rio 2 game features and reviews
-* angry birds rio 2 game system requirements for windows 10
-* angry birds rio 2 game walkthrough and tips
-* angry birds rio 2 game cheats and hacks for windows 10
-* angry birds rio 2 game latest updates and news
-* angry birds rio 2 game trailer and screenshots
-* angry birds rio 2 game best price and deals for windows 10
-* angry birds rio 2 game free trial download for windows 10
-* angry birds rio 2 game alternatives and similar games for windows 10
-* angry birds rio 2 game problems and solutions for windows 10
-* angry birds rio 2 game ratings and feedback from users
-* angry birds rio 2 game developer and publisher information
-* angry birds rio 2 game based on the movie Rio 2
-* angry birds rio 2 game new characters and levels
-* angry birds rio 2 game modes and challenges
-* angry birds rio 2 game achievements and rewards
-* angry birds rio 2 game comparison with other angry birds games
-* angry birds rio 2 game fun facts and trivia
-* angry birds rio 2 game fan art and videos
-* angry birds rio 2 game merchandise and accessories
-* angry birds rio 2 game download size and speed for windows 10
-* angry birds rio 2 game compatibility and performance for windows 10
-* angry birds rio 2 game support and contact details
-* angry birds rio 2 game license and terms of use for windows 10
-* angry birds rio 2 game refund and cancellation policy for windows 10
-* angry birds rio 2 game security and privacy for windows 10
-* angry birds rio 2 game community and forums for windows 10 users

-
    -
  1. Q: How many levels are there in Angry Birds Rio 2 game? -
    A: There are over 400 levels in Angry Birds Rio 2 game, divided into several episodes based on the movies Rio and Rio 2. Each episode has its own theme, background, music, and characters.
  2. -
  3. Q: How can I unlock new birds in Angry Birds Rio 2 game? -
    A: You can unlock new birds in Angry Birds Rio 2 game by completing certain levels or achievements. For example, you can unlock Blu and Jewel by completing level 1-7 of Smugglers' Den episode, or you can unlock Stella by completing level 1-15 of Blossom River episode.
  4. -
  5. Q: How can I join a clan in Angry Birds Rio 2 game? -
    A: You can join a clan in Angry Birds Rio 2 game by clicking on the clan icon on the bottom left corner of the screen. You can either create your own clan or join an existing one. You can also invite your friends to join your clan or search for other clans by name or tag.
  6. -
  7. Q: How can I play in the arena in Angry Birds Rio 2 game? -
    A: You can play in the arena in Angry Birds Rio 2 game by clicking on the arena icon on the bottom right corner of the screen. You can compete with other players around the world in daily tournaments and win prizes and trophies. You can also choose your own bird to play with and customize it with hats.
  8. -
  9. Q: How can I contact the support team of Angry Birds Rio 2 game? -
    A: You can contact the support team of Angry Birds Rio 2 game by clicking on the settings icon on the top left corner of the screen and then clicking on "Help & Support". You can also visit their website [here] or email them at support@rovio.com.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Cmo descargar e instalar Among Us APK en tu Android.md b/spaces/1phancelerku/anime-remove-background/Cmo descargar e instalar Among Us APK en tu Android.md deleted file mode 100644 index 1a025fc4a2816f7fe711642c9c516825fc9a0f91..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Cmo descargar e instalar Among Us APK en tu Android.md +++ /dev/null @@ -1,118 +0,0 @@ - -

Among Us APK Descargar: How to Download and Play the Popular Game on Android

-

Among Us is one of the most popular games of 2020 and 2021, with millions of players around the world. The game is available on various platforms, including PC, iOS, and Android. If you want to play Among Us on your Android device, you will need to download the APK file from a reliable source. In this article, we will show you how to download and install Among Us APK on Android, how to play the game, and some tips and tricks to help you win. We will also suggest some alternatives to Among Us that you can try if you want more games like it.

-

among us apk descargar


Download ❤❤❤ https://jinyurl.com/2uNOxF



-

What is Among Us?

-

Among Us is a multiplayer social deduction game developed by Innersloth, an American game studio. The game was released in 2018, but it became a viral sensation in 2020 thanks to streamers and YouTubers who played it online. The game has won several awards, such as the Best Multiplayer Game and the Best Mobile Game at The Game Awards 2020.

-

A multiplayer social deduction game

-

The premise of Among Us is simple: you are part of a crew of up to 10 players who are on a spaceship or a base. However, among you are one or more impostors who are trying to kill everyone else. The crewmates have to work together to complete tasks and find the impostors before they are all eliminated. The impostors have to blend in with the crewmates, sabotage their tasks, and kill them without being caught.

-

Features of Among Us

-

Among Us has many features that make it fun and engaging for players of all ages. Some of these features are:

- -

How to Download Among Us APK on Android

-

If you want to play Among Us on your Android device, you will need to download the APK file from a trusted source. There are two ways to do this:

-

Steps to download and install Among Us APK from APKCombo

-
    -
  1. Go to APKCombo, a website that offers APK downloads for Android games and apps. You can search for Among Us or use this link: Among Us APK.
  2. -
  3. Select the version you want to download and click on the "Download APK" button.
  4. -
  5. Wait for the download to finish and then open the APK file. You may need to enable "Unknown sources" in your device settings to install apps from outside the Google Play Store.
  6. -
  7. Follow the instructions on the screen to install Among Us on your device.
  8. -
-

Steps to download and install Among Us APK from Google Play Store

-
    -
  1. Go to the Google Play Store on your device or use this link: Among Us on Google Play Store.
  2. -
  3. Tap on the "Install" button and wait for the download to finish.
  4. -
  5. Open Among Us from your app drawer and enjoy the game.
  6. -
-

How to Play Among Us on Android

-

Once you have installed Among Us on your Android device, you can start playing the game with your friends or strangers online. Here are some basic steps to play Among Us on Android:

-

Choose your role and map

-

You can either create your own game or join an existing one. If you create a game, you can choose the number of impostors, the map, and the game settings. You can also invite your friends by sharing the game code. If you join a game, you will be assigned a random role and map. You can either be a crewmate or an impostor, depending on the game settings.

-

among us apk download android
-among us apk mod menu
-among us apk pc
-among us apk hack
-among us apk uptodown
-among us apk latest version
-among us apk free
-among us apk mediafıre
-among us apk 2023.2.9
-among us apk mod impostor
-among us apk online
-among us apk full unlocked
-among us apk no ads
-among us apk unlimited skins
-among us apk always impostor
-among us apk mod 2023
-among us apk para pc
-among us apk sin emulador
-among us apk gratis
-among us apk español
-among us apk mega
-among us apk mod menu 2023
-among us apk mod skins
-among us apk mod pets
-among us apk mod hats
-among us apk mod invisible
-among us apk mod speed
-among us apk mod kill cooldown
-among us apk mod vent as crewmate
-among us apk mod no kill cooldown
-among us apk mod see impostor
-among us apk mod always win
-among us apk mod voice chat
-among us apk mod anti ban
-among us apk mod all unlocked
-among us apk mod no ads
-among us apk mod unlimited money
-among us apk mod god mode
-among us apk mod radar impostor
-among us apk mod fake impostor
-among us apk mod zoom out
-among us apk mod no name
-among us apk mod rainbow skin
-among us apk mod custom skins
-among us apk mod hide and seek mode

-

Complete tasks or kill crewmates

-

If you are a crewmate, your goal is to complete tasks around the map and find the impostors. You can see your tasks on the top left corner of the screen. You can also use the map button to see where your tasks are located. Some tasks are visual, meaning that other players can see you doing them. These tasks can help you prove your innocence or expose an impostor. If you are an impostor, your goal is to kill crewmates and sabotage their tasks. You can use vents to move around the map quickly and secretly. You can also use the sabotage button to cause problems for the crewmates, such as turning off the lights, locking doors, or triggering emergencies.

-

Communicate and vote

-

If a dead body is reported or an emergency meeting is called, all players will gather in a meeting room to discuss and vote. You can use text or voice chat to communicate with other players. You can share information, accuse someone, defend yourself, or lie. You can also skip voting if you are not sure who the impostor is. The player with the most votes will be ejected from the game. The game will continue until either all impostors are eliminated, all crewmates are killed, or a major sabotage is not fixed in time.

-

Tips and Tricks for Among Us

-

Playing Among Us can be challenging and fun, especially if you want to win as either a crewmate or an impostor. Here are some tips and tricks that can help you improve your skills and strategies in Among Us:

-

Learn your common tasks and viewing distances

-

    Common tasks are assigned to all crewmates in a game. They can be used to verify whether someone is telling the truth or lying about their role. For example, if someone claims to have done a common task that you don't have, they are likely an impostor. Common tasks vary depending on the map, so make sure you know what they are before playing. Viewing distances determine how far you can see in the game. They can be affected by lights, walls, doors, and vents. Knowing how far you can see and how far others can see you can help you avoid being caught or catch someone in the act.
    

-

Check rooms and cameras for bodies and impostors

-

If you are a crewmate, you should check rooms frequently for dead bodies or suspicious activities. If you find a body, report it immediately and share what you saw or where you were. If you don't find any bodies, but see someone acting weirdly, such as venting, killing, or faking tasks, call an emergency meeting and expose them. If you are an impostor, you should avoid killing in plain sight or leaving bodies in obvious places. You should also vent carefully and avoid being seen by cameras or other players.

-

Use vents and sabotages wisely as an impostor

If you are an impostor, you should use vents and sabotages wisely to create confusion, distraction, and chaos among the crewmates. Vents allow you to move around the map quickly and secretly, but you should only use them when no one is around or watching. Sabotages allow you to cause problems for the crewmates, such as turning off the lights, locking doors, or triggering emergencies. You should use sabotages to separate, isolate, or lure your targets, or to prevent them from completing their tasks or finding bodies.

-

Don't trust anyone and have an alibi as a crewmate

-

If you are a crewmate, you should be careful about who you trust and who you follow. Anyone can be an impostor, even your friends or teammates. You should also have an alibi for where you were and what you did during the game. You can use visual tasks, cameras, logs, or other players as your alibi. Having an alibi can help you prove your innocence or accuse someone else.

-

Alternatives to Among Us on Android

-

If you love Among Us and want to try more games like it, you can check out some of these alternatives on Android:

-

Town of Salem

-

Town of Salem is a game of murder, deception, and mystery. You are one of 15 players in a town where each player has a role and a goal. Some roles are good, such as the Sheriff, the Doctor, or the Investigator. Some roles are evil, such as the Serial Killer, the Arsonist, or the Witch. Each night, the evil roles can kill someone, while the good roles can protect, heal, or investigate someone. Each day, the town can vote to lynch someone they suspect is evil. The game ends when either all the evil roles are dead, or the evil roles outnumber the good ones.

-

Project Winter

-

Project Winter is a game of survival and betrayal. You are one of 8 players who are stranded in a snowy wilderness. You have to work together to gather resources, repair structures, and escape. However, among you are two traitors who are trying to sabotage your efforts and kill you. You have to use voice chat and social skills to communicate with other players and find out who the traitors are. You can also use weapons and items to fight back or escape.

-

Betrayal.io

-

Betrayal.io is a game of deception and deduction. You are one of 12 players who are on a mission to complete tasks and find clues. However, among you are two betrayers who are trying to stop you and kill you. You have to use text chat and emojis to communicate with other players and vote out the betrayers. You can also use gadgets and abilities to help you or hinder others.

-

Conclusion

-

Among Us is a fun and addictive game that you can play on your Android device with your friends or strangers online. You can download the APK file from APKCombo or Google Play Store and install it on your device. You can then choose your role and map, complete tasks or kill crewmates, communicate and vote, and enjoy the game. You can also improve your skills and strategies by learning some tips and tricks for Among Us. If you want more games like Among Us, you can try some alternatives such as Town of Salem, Project Winter, or Betrayal.io.

-

FAQs

-

Here are some frequently asked questions about Among Us:

- - - - - - - -
QuestionAnswer
Is Among Us free on Android?Yes, Among Us is free to download and play on Android devices.
How many players can play Among Us?You can play with up to 10 players in one game of Among Us.
Can I play Among Us offline?No, you need an internet connection to play Among Us online or over local WiFi.
Can I play Among Us with PC players?Yes, you can play with PC players as long as you have the same version of the game.
How do I update Among Us on Android?You can update Among Us on Android by downloading the latest APK file from APKCombo or Google Play Store.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Noblemen 1896 APK Data and Lead Your Armies to Victory!.md b/spaces/1phancelerku/anime-remove-background/Download Noblemen 1896 APK Data and Lead Your Armies to Victory!.md deleted file mode 100644 index 69cddd8b92f53dad9755bef20931dfe8a52e4225..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Noblemen 1896 APK Data and Lead Your Armies to Victory!.md +++ /dev/null @@ -1,116 +0,0 @@ -
-

Noblemen: 1896 APK Data + Download

-

If you are looking for a unique and immersive action/strategy game that takes you back to an alternate history of 1896, then you might want to check out Noblemen: 1896. This game lets you play as a nobleman who leads his armies to victory in a steampunk-inspired war. In this article, we will tell you everything you need to know about Noblemen: 1896 APK data + download, including what the game is about, why you should download it, how to download it, how to play it, and what other players think about it.

-

noblemen 1896 apk data + download


Download File ……… https://jinyurl.com/2uNOUa



-

What is Noblemen: 1896?

-

Noblemen: 1896 is a game developed by Foursaken Media that combines third-person shooter combat with strategic planning and resource management. The game is set in an alternate reality where the United States is divided by a civil war that involves advanced weapons such as cannons, gatling guns, airships, steam tanks, and more. You play as a nobleman who commands his own regiment and fights alongside other units in large-scale battles. You can also customize your equipment, recruit new soldiers, upgrade your base, collect battle cards, and explore a dynamic map.

-

Why download Noblemen: 1896 APK and data files?

-

There are several reasons why you might want to download Noblemen: 1896 APK and data files instead of using the Google Play Store. Here are some of them:

- -

How to download Noblemen: 1896 APK and data files?

-

To download Noblemen: 1896 APK and data files on your Android device, you need to follow these steps:

-

noblemen 1896 game apk download
-noblemen 1896 mod apk + data
-noblemen 1896 android game free download
-noblemen 1896 apk obb offline
-noblemen 1896 apk data highly compressed
-noblemen 1896 full version apk download
-noblemen 1896 unlimited money apk + data
-noblemen 1896 latest apk download
-noblemen 1896 apk data revdl
-noblemen 1896 offline shooter game apk
-noblemen 1896 action game apk + data
-noblemen 1896 apk data modded
-noblemen 1896 apk data android 1
-noblemen 1896 apk data rexdl
-noblemen 1896 apk data mega
-noblemen 1896 hack apk download
-noblemen 1896 premium apk + data
-noblemen 1896 apk data uptodown
-noblemen 1896 apk data apkpure
-noblemen 1896 apk data google drive
-noblemen 1896 cracked apk download
-noblemen 1896 pro apk + data
-noblemen 1896 apk data mediafire
-noblemen 1896 apk data zip file
-noblemen 1896 unlocked apk download
-noblemen 1896 paid apk + data
-noblemen 1896 apk data for pc
-noblemen 1896 apk data mod menu
-noblemen 1896 apk data no root
-noblemen 1896 patched apk download
-noblemen 1896 steam tank game apk + data
-noblemen 1896 gatling gun game apk download
-noblemen 1896 alternate reality game apk + data
-noblemen 1896 airship game apk download
-noblemen 1896 cavalry game apk + data
-noblemen 1896 campaign game apk download
-noblemen 1896 battle cards game apk + data
-noblemen 1896 frigate game apk download
-noblemen 1896 militia game apk + data
-noblemen 1896 cannon game apk download
-noblemen 1896 foursaken media game apk + data
-noblemen 1896 graphics game apk download
-noblemen 1896 strategy game apk + data
-noblemen 1896 shooter game offline download

-
    -
  1. Allow unknown apps on your device by going to Settings > Apps > Menu > Special access > Install unknown apps > Chrome (or your preferred browser) > Enable Allow from this source.
  2. -
  3. Install a file manager app (such as Cx File Explorer or File Manager) so that you can find the APK and data files after you download them.
  4. -
  5. Download the APK file from a reputable website (such as APK Mirror) by tapping the link and accepting any pop-ups.
  6. -
  7. Download the data file (usually in ZIP or RAR format) from the same website or another source (such as Google Drive).
  8. -
  9. Locate the downloaded files in your file manager app and extract the data file to get a folder with OBB or DATA extension.
  10. -
  11. Copy or move the folder to Android > OBB or Android > DATA
  12. Install the APK file by tapping on it and following the instructions.
  13. -
  14. Launch the game and enjoy!
  15. -
-

How to play Noblemen: 1896?

-

Noblemen: 1896 is a game that requires both skill and strategy to win. Here are some tips on how to play it:

- -

Noblemen: 1896 Game Review

-

Noblemen: 1896 is a game that offers a lot of fun and excitement for fans of action and strategy games. Here is our review of the game's graphics, sound, story, difficulty, replay value, and overall rating.

-

Pros and cons of Noblemen: 1896

- - - - - - - -
ProsCons
Stunning graphics and animationsSometimes laggy or buggy
Immersive sound effects and musicSome voice acting is cheesy or annoying
Engaging story and charactersLimited choices or consequences
Challenging and varied gameplayCan be frustrating or repetitive
High replay value and contentRequires a lot of grinding or spending
-

User feedback on Noblemen: 1896

-

Noblemen: 1896 has received mostly positive feedback from users who have played it. Here are some of their reviews from different sources and platforms:

-
"This game is amazing! The graphics are awesome, the gameplay is smooth, and the story is captivating. I love how you can customize your nobleman and your army, and how you can choose different strategies and tactics. The battles are epic and realistic, and the map is huge and dynamic. This is one of the best games I have ever played!" - Google Play user
-
"I really like this game, but it has some issues. The game sometimes crashes or freezes, especially when there are too many units on the screen. The game also drains my battery very fast, even when I lower the settings. The game is also very hard, even on easy mode. I wish there was a way to skip some missions or get more resources." - App Store user
-
"This game is a masterpiece! The graphics are breathtaking, the sound is immersive, and the story is intriguing. I love how you can control your nobleman and your troops in real-time combat, and how you can use different weapons and abilities. The game is also very challenging and rewarding, and it has a lot of content and replay value. This is one of the best games I have ever played!" - Steam user
-

Conclusion

-

Noblemen: 1896 is a game that combines third-person shooter combat with strategic planning and resource management. The game is set in an alternate history of 1896 where the United States is divided by a civil war that involves advanced weapons such as cannons, gatling guns, airships, steam tanks, and more. You play as a nobleman who commands his own regiment and fights alongside other units in large-scale battles.

-

If you want to experience this game on your Android device, you can download Noblemen: 1896 APK data + download from reputable websites. This way, you can enjoy offline play without an internet connection, access the latest version of the game without waiting for updates, avoid compatibility issues with your device or region, save storage space by deleting unwanted files, modify or tweak the game to your liking, and more.

-

Noblemen: 1896 is a game that offers a lot of fun and excitement for fans of action and strategy games. The game has stunning graphics and animations, immersive sound effects and music, engaging story and characters, challenging and varied gameplay, and high replay value and content. The game also has some drawbacks, such as being sometimes laggy or buggy, having some voice acting that is cheesy or annoying, having limited choices or consequences, being frustrating or repetitive, and requiring a lot of grinding or spending. However, these issues do not overshadow the overall quality and enjoyment of the game.

-

If you are looking for a unique and immersive action/strategy game that takes you back to an alternate history of 1896, then you might want to check out Noblemen: 1896. You will not regret it!

-

FAQs on Noblemen: 1896 APK Data + Download

-

Here are some frequently asked questions and answers on Noblemen: 1896 APK data + download:

-
    -
  1. Is Noblemen: 1896 free to play?
  2. -

    Yes, Noblemen: 1896 is free to play, but it also has in-app purchases that can enhance your gaming experience.

    -
  3. Is Noblemen: 1896 safe to download?
  4. -

    Yes, Noblemen: 1896 is safe to download as long as you use reputable websites that provide virus-free and malware-free files. You should also scan the files before installing them on your device.

    -
  5. Is Noblemen: 1896 compatible with my device?
  6. -

    Noblemen: 1896 requires Android 4.3 or higher and at least 1 GB of RAM to run smoothly. You should also have enough storage space to accommodate the APK and data files.

    -
  7. How can I contact the developers of Noblemen: 1896?
  8. -

    You can contact the developers of Noblemen: 1896 by visiting their website (https://www.foursakenmedia.com/), their Facebook page (https://www.facebook.com/FoursakenMedia), their Twitter account (https://twitter.com/FoursakenMedia), or their email address (info@foursakenmedia.com).

    -
  9. Where can I find more information about Noblemen: 1896?
  10. -

    You can find more information about Noblemen: 1896 by visiting their official website (https://www.foursakenmedia.com/noblemen-1896), their Google Play Store page (https://play.google.com/store/apps/details?id=com.foursakenmedia.noblemen), their App Store page (https://apps.apple.com/us/app/noblemen-1896/id1178777377), or their Steam page (https://store.steampowered.com/app/1105440/Noblemen_1896/).

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Real Drag Bike Racing Mod APK and Experience the Ultimate Drag Racing Challenge.md b/spaces/1phancelerku/anime-remove-background/Download Real Drag Bike Racing Mod APK and Experience the Ultimate Drag Racing Challenge.md deleted file mode 100644 index 31af10b762cb8cc071e785d268810c98b0701de7..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Real Drag Bike Racing Mod APK and Experience the Ultimate Drag Racing Challenge.md +++ /dev/null @@ -1,114 +0,0 @@ -
-

Download Game Real Drag Bike Racing Mod Apk: A Guide for Racing Fans

-

If you are a fan of racing games, you might have heard of Real Drag Bike Racing, a popular game that lets you experience the thrill of drag racing on your mobile device. But did you know that there is a mod apk version of this game that gives you unlimited money, coins, bikes, and more? In this article, we will tell you everything you need to know about Real Drag Bike Racing Mod Apk, including its features, how to download and install it, tips and tricks for playing it, pros and cons, and some frequently asked questions. Read on to find out more!

-

download game real drag bike racing mod apk


DOWNLOAD >>>>> https://jinyurl.com/2uNNAe



-

Features of Real Drag Bike Racing Mod Apk

-

Real Drag Bike Racing Mod Apk is a modified version of the original game that offers many advantages over the regular version. Here are some of the features that you can enjoy with this mod apk:

- -

How to Download and Install Real Drag Bike Racing Mod Apk

-

Downloading and installing Real Drag Bike Racing Mod Apk is very easy and simple. Just follow these steps:

-
    -
  1. Step 1: Download the mod apk file from a trusted source. You can use one of these links to download the latest version of Real Drag Bike Racing Mod Apk.
  2. -
  3. Step 2:Step 2: Enable unknown sources on your device. To do this, go to your device settings, then security, and then toggle on the option that allows you to install apps from unknown sources. This will enable you to install the mod apk file that you downloaded.
  4. -
  5. Step 3: Install the mod apk file and enjoy the game. To do this, locate the mod apk file in your device storage, tap on it, and follow the instructions on the screen. Once the installation is complete, you can launch the game and start playing with all the mod features.
  6. -
-

Tips and Tricks for Playing Real Drag Bike Racing Mod Apk

-

Real Drag Bike Racing Mod Apk is a fun and addictive game that will test your skills and reflexes as a drag racer. Here are some tips and tricks that will help you improve your performance and win more races:

- -

Pros and Cons of Real Drag Bike Racing Mod Apk

-

Real Drag Bike Racing Mod Apk is a great game for racing fans, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of this mod apk:

- - - - - - - - - - - - - - - - - -
ProsCons
Realistic graphics and sound effects: The game has stunning graphics and sound effects that make you feel like you are in a real drag race. You can see the details of your bike, the environment, and the other racers. You can also hear the roar of your engine, the screech of your tires, and the cheers of the crowd.Requires internet connection: The game requires an internet connection to run properly. This means that you cannot play it offline or in areas with poor network coverage. This can be inconvenient for some players who want to enjoy the game anytime and anywhere.
Easy and smooth controls: The game has easy and smooth controls that make it suitable for players of all ages and skill levels. You can control your bike by tapping on the screen or tilting your device. You can also customize your controls according to your preference in the settings menu.May not be compatible with some devices: The game may not work well on some devices due to their specifications or operating systems. Some players have reported issues such as crashes, glitches, or lagging while playing the game. You should check the compatibility of your device before downloading and installing the mod apk.
Various modes and challenges: The game has various modes and challenges that keep you entertained and challenged. You can play in career mode, tournament mode, or online mode. You can also participate in daily missions, weekly events, or special races that offer rewards and bonuses.
-

Conclusion and FAQs

-

In conclusion, Real Drag Bike Racing Mod Apk is a fantastic game for racing enthusiasts who want to experience the thrill of drag racing on their mobile devices. It offers many features that enhance the gameplay, such as unlimited money, coins, bikes, no ads, no root required, realistic graphics, sound effects, easy controls, various modes, challenges, etc. It also has some drawbacks that may affect some players, such as requiring internet connection or not being compatible with some devices. However, these are minor issues compared to the to use, as long as you download it from a reliable source and follow the installation instructions carefully. However, you should be aware that using mod apk files may violate the terms and conditions of the original game and may result in your account being banned or suspended. You should use this mod apk at your own risk and discretion.

-

download real drag bike racing mod apk unlimited money
-real drag bike racing mod apk latest version
-how to download real drag bike racing mod apk for android
-real drag bike racing mod apk free download uptodown
-real drag bike racing mod apk offline
-download game real drag bike racing indonesia mod apk
-real drag bike racing mod apk 2023
-real drag bike racing mod apk hack
-download game real drag bike racing 3d mod apk
-real drag bike racing mod apk no ads
-download game real drag bike racing 2 mod apk
-real drag bike racing mod apk unlimited coins and gems
-real drag bike racing mod apk revdl
-download game real drag bike racing hd mod apk
-real drag bike racing mod apk rexdl
-download game real drag bike racing pro mod apk
-real drag bike racing mod apk unlock all bikes
-real drag bike racing mod apk pure
-download game real drag bike racing online mod apk
-real drag bike racing mod apk android 1
-download game real drag bike racing simulator mod apk
-real drag bike racing mod apk happymod
-download game real drag bike racing new version mod apk
-real drag bike racing mod apk unlimited everything
-real drag bike racing mod apk obb
-download game real drag bike racing extreme mod apk
-real drag bike racing mod apk old version
-download game real drag bike racing 4x4 mod apk
-real drag bike racing mod apk cheat
-download game real drag bike racing nitro mod apk
-real drag bike racing mod apk update
-download game real drag bike racing classic mod apk
-real drag bike racing mod apk full version
-download game real drag bike racing adventure mod apk
-real drag bike racing mod apk data
-download game real drag bike racing championship mod apk
-real drag bike racing mod apk vip
-download game real drag bike racing turbo mod apk
-real drag bike racing mod apk mega mod
-download game real drag bike racing legend mod apk
-real drag bike racing mod apk all unlocked
-download game real drag bike racing city mod apk
-real drag bike racing mod apk unlimited fuel and nitro
-download game real drag bike racing world tour mod apk
-real drag bike racing mod apk no root
-download game real drag bike racing ultimate mod apk
-real drag bike racing mod apk easy win
-download game real drag bike racing supercharged mod apk
-real drag bike racing mod apk high graphics

-
  • FAQ 3: How can I get more money and coins in Real Drag Bike Racing Mod Apk?
  • -

    With Real Drag Bike Racing Mod Apk, you will get unlimited money and coins that you can use to buy, upgrade, and customize your bikes. You will also earn money and coins from winning races, completing missions, and participating in events. However, if you want to get more money and coins faster, you can use some of these tricks:

    - -
  • FAQ 4: How can I contact the developer of Real Drag Bike Racing Mod Apk?
  • -

    If you have any questions, suggestions, or feedback about Real Drag Bike Racing Mod Apk, you can contact the developer through their email address: realdragbikeracing@gmail.com. You can also follow them on their social media accounts: Facebook, Twitter, Instagram, and YouTube.

    -
  • FAQ 5: What are some alternatives to Real Drag Bike Racing Mod Apk?
  • -

    If you are looking for some other games that are similar to Real Drag Bike Racing Mod Apk, you can try some of these alternatives:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/EvoWars.io A Unique and Exciting IO Game with Dynamic Gameplay.md b/spaces/1phancelerku/anime-remove-background/EvoWars.io A Unique and Exciting IO Game with Dynamic Gameplay.md deleted file mode 100644 index e625594e57036170603331c040534cb0898cd328..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/EvoWars.io A Unique and Exciting IO Game with Dynamic Gameplay.md +++ /dev/null @@ -1,113 +0,0 @@ -
    -

    EvoWars.io: A Fun and Addictive Online Battle Game

    -

    If you are looking for a game that is simple, fast-paced, and exciting, then you might want to try EvoWars.io. This is an IO game that lets you fight, kill, and evolve in a top-down online battle arena. You can collect orbs and battle other players to evolve your warrior into different forms, each with its own weapon and abilities. You can also use a sprint ability to chase or escape from your enemies, but at the cost of your experience points. The game is easy to play but hard to master, as you need to balance your size, speed, and range to survive and dominate the battlefield.

    -

    What is EvoWars.io?

    -

    EvoWars.io is an IO game that was released in March 2018 by Night Steed Games. It is available to play on web browsers (desktop and mobile), Android, and iOS devices. The game is inspired by some of the most popular IO games, such as Agar.io and Slither.io, where you have to grow bigger and stronger by collecting orbs and killing other players. However, EvoWars.io adds a twist to the formula by introducing an evolution system that changes your character's appearance and weapon every time you level up. There are currently 25 levels and evolutions to unlock, ranging from a caveman with a club to a demon with a scythe.

    -

    evowars io apkmody


    Download Zip 🆗 https://jinyurl.com/2uNRIh



    -

    How to play EvoWars.io?

    -

    The gameplay of EvoWars.io is simple and intuitive. You just need to move your mouse to control your character's movement, left click to attack, and right click to sprint. Your goal is to collect orbs and kill other players to gain experience and points. Every time you fill up your experience bar, you level up and evolve into a new form. Each evolution improves your weapon range but slows down your movement speed. You also lose some of your experience points when you use the sprint ability, so use it wisely. The game ends when you die or when you reach the maximum level of 25.

    -

    What are the features of EvoWars.io?

    -

    EvoWars.io has many features that make it fun and addictive to play. Some of them are:

    - -

    What are the tips and tricks for EvoWars.io?

    -

    EvoWars.io may seem easy at first glance, but it can be challenging and competitive as well. Here are some tips and tricks that can help you improve your skills and performance in the game:

    - -

    What are the mod features of EvoWars.io?

    -

    If you want to enhance your gaming experience, you can try the EvoWars.io mod APK from APKMODY. This is a modified version of the game that gives you some extra features and benefits, such as:

    - -

    To download and install the EvoWars.io mod APK, you just need to follow these simple steps:

    -
      -
    1. Go to the APKMODY website and search for EvoWars.io mod APK
    2. -
    3. Click on the download button and wait for the file to be downloaded
    4. -
    5. Open the file and tap on install
    6. -
    7. Allow unknown sources if prompted by your device settings
    8. -
    9. Launch the game and enjoy the mod features
    10. -
    -

    What are the alternatives to EvoWars.io?

    -

    If you like EvoWars.io, you might also like some of these similar IO games that offer similar gameplay and features:

    - | Game | Description | | --- | --- | | Brutal.io | A game where you control a car with a flail and try to smash other players with it | | ZombsRoyale.io | A game where you parachute into a map with 99 other players and try to be the last one standing | | WormsZone.io | A game where you control a worm and try to eat as much food as possible while avoiding other worms | | Starve.io | A game where you have to survive in a harsh environment by gathering resources, crafting items, and fighting enemies | | Mope.io | A game where you start as a mouse and try to evolve into different animals by eating food and water |

    Conclusion

    -

    EvoWars.io is a fun and addictive online battle game that lets you fight, kill, and evolve in a top-down arena. You can collect orbs and battle other players to level up and unlock different character models and weapons. You can also use a sprint ability to boost your speed at the cost of your experience points. The game is easy to play but hard to master, as you need to balance your size, speed, and range to survive and dominate the battlefield. If you want to enhance your gaming experience, you can try the EvoWars.io mod APK from APKMODY that gives you unlimited coins, unlocked levels, no ads, and more. You can also check out some of the alternatives to EvoWars.io that offer similar gameplay and features. EvoWars.io is a game that will keep you entertained and engaged for hours. So what are you waiting for? Join the battle and evolve now!

    -

    FAQs

    -

    Here are some of the frequently asked questions about EvoWars.io:

    -

    evowars io apk download free
    -evowars io mod apk unlimited money
    -evowars io game online play
    -evowars io hack apk android
    -evowars io cheats codes pc
    -evowars io unblocked games 66
    -evowars io apk mod menu
    -evowars io tips and tricks
    -evowars io best evolution strategy
    -evowars io apk latest version
    -evowars io mod apk no ads
    -evowars io gameplay walkthrough
    -evowars io skins unlock all
    -evowars io hack apk ios
    -evowars io review rating
    -evowars io apk offline mode
    -evowars io mod apk god mode
    -evowars io wiki guide
    -evowars io all evolutions list
    -evowars io apk pure download
    -evowars io mod apk revdl
    -evowars io update new features
    -evowars io reddit community
    -evowars io skins customizer
    -evowars io hack apk download
    -evowars io mod apk happymod
    -evowars io tutorial beginner
    -evowars io discord server link
    -evowars io skins names generator
    -evowars io hack apk 2023
    -evowars io mod apk rexdl
    -evowars io challenge mode hard
    -evowars io youtube video gameplay
    -evowars io skins editor online
    -evowars io hack apk unlimited orbs
    -evowars io mod apk an1.com
    -evowars io leaderboard top players
    -evowars io facebook fan page
    -evowars io skins maker free
    -evowars io hack apk 2022
    -evowars io mod apk android 1.com
    -evowars io achievements unlock guide
    -evowars io instagram official account
    -evowars io skins creator app
    -evowars io hack apk no root
    -evowars io mod apk apkpure
    -evowars io controls keyboard settings
    -evowars io twitter official handle
    -evowars io skins download png

    -

    Q: How many players can play EvoWars.io at the same time?

    -

    A: EvoWars.io can support up to 100 players per server. You can join any server that has available slots or create your own private server with a password.

    -

    Q: How can I change my character's name, skin, or accessory in EvoWars.io?

    -

    A: You can change your character's name by typing it in the box below the play button. You can change your character's skin or accessory by clicking on the shop button on the top right corner of the screen. You can buy skins or accessories with coins that you earn by playing the game or watching ads. You can also get unlimited coins by using the EvoWars.io mod APK from APKMODY.

    -

    Q: How can I report a bug or a problem in EvoWars.io?

    -

    A: You can report a bug or a problem in EvoWars.io by contacting the developers through their email.

    Q: Is EvoWars.io free to play?

    A: Yes, EvoWars.io is free to play on web browsers, Android, and iOS devices. You don't need to pay anything to enjoy the game. However, you can support the developers by buying coins or watching ads, which can help them improve the game and add more features.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Explore the Dungeon and Fight the Boss in Pixel Blade M VIP APK.md b/spaces/1phancelerku/anime-remove-background/Explore the Dungeon and Fight the Boss in Pixel Blade M VIP APK.md deleted file mode 100644 index 306866b1c24ccbef60d16b1058ecc90930995bdc..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Explore the Dungeon and Fight the Boss in Pixel Blade M VIP APK.md +++ /dev/null @@ -1,129 +0,0 @@ - -

    Pixel Blade M VIP APK: A Review of the Action RPG Game

    -

    If you are looking for a pixel-style 3D action RPG game that offers quick and exciting gameplay, various weapons and skills, and a challenging dungeon adventure, then you might want to check out Pixel Blade M VIP APK. This is a game developed by PixelStar Games, which is the VIP version of the original Pixel Blade game. In this article, we will review the features, installation process, pros and cons, and FAQs of Pixel Blade M VIP APK.

    -

    What is Pixel Blade M VIP APK?

    -

    Pixel Blade M VIP APK is an Android game that belongs to the action RPG genre. It is set in a pixel world where you play as the last pixel hero who has to collect weapons and conquer dungeons to save the world. The game has pixel-style graphics, 3D effects, and a hack and slash gameplay that will keep you entertained for hours.

    -

    pixel blade m vip apk


    DOWNLOAD 🗹 https://jinyurl.com/2uNMFY



    -

    As the VIP version of the game, Pixel Blade M VIP APK offers some exclusive benefits for the players, such as:

    - 500 GEM (claimable via the top VIP button)
    - Removal of banner ads

    The game also has regular updates that add new features and improvements to the gameplay.

    -

    Features of Pixel Blade M VIP APK

    -

    Pixel Blade M VIP APK has many features that make it an enjoyable and addictive action RPG game. Here are some of them:

    -

    Quick action and a variety of skills

    -

    The game has a fast-paced and dynamic gameplay that requires you to use different skills and strategies to defeat the enemies. You can use various buttons to perform attacks, dodge, jump, and use special skills. You can also customize your skill set according to your preference and play style.

    -

    Various weapon skills and upgrade systems

    -

    The game has a wide range of weapons that you can collect and use in the dungeon. Each weapon has its own skill and attribute that can affect your performance in combat. You can also upgrade your weapons and equipment using the materials that you obtain from hunting monsters or mining. You can also advance your weapons to unlock new skills and effects.

    -

    Various costumes and armor

    -

    The game allows you to change your appearance by wearing different costumes and armor. You can choose from various styles and colors that suit your taste. The costumes and armor also have different stats that can boost your defense, attack, speed, or other attributes.

    -

    Mine system and craft system

    -

    The game has a mine system that lets you obtain gems and potions for free. You can use these items to enhance your weapons, equipment, or skills. The game also has a craft system that lets you create new items using the materials that you collect from the dungeon or the mine.

    -

    -

    Boss raid

    -

    The game has a boss raid feature that lets you challenge powerful bosses in the dungeon. You can team up with other players online or play solo to defeat the bosses and get rewards. The bosses have different patterns and abilities that require you to use your skills wisely.

    -

    How to download and install Pixel Blade M VIP APK?

    -

    If you want to play Pixel Blade M VIP APK on your Android device, you need to follow these steps:

    -

    Requirements and compatibility

    -

    Before you download and install the game, make sure that your device meets these requirements:

    - -

    The game is compatible with most Android devices, but some features may not work properly on certain models or Android versions. If you encounter any problems while playing, you can contact the developer through their email or social media accounts.

    Steps to download and install

    -

    After you have checked the requirements and compatibility, you can follow these steps to download and install the game:

    -
    1. Go to the official website of PixelStar Games or click on this link: [Pixel Blade M VIP APK].
    2. Click on the download button and wait for the APK file to be downloaded to your device.
    3. Once the download is complete, locate the APK file in your device's file manager and tap on it to install it (a command-line alternative is sketched after this list).
    4. If you see a warning message that says "Install blocked", go to your device's settings and enable the option to allow installation from unknown sources.
    5. Follow the instructions on the screen to complete the installation process.
    6. Launch the game and enjoy playing Pixel Blade M VIP APK.
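
    If you prefer to install from a computer instead of on the device itself, the following is a minimal sketch and not part of the original article: it assumes the Android platform-tools (adb) are installed, USB debugging is enabled on the phone, and the APK has already been downloaded; the file name is hypothetical.

        # Hypothetical sideloading helper (assumes adb is installed and a device is connected).
        import subprocess
        from pathlib import Path

        def sideload_apk(apk_path: str) -> None:
            apk = Path(apk_path)
            if not apk.is_file():
                raise FileNotFoundError(f"APK not found: {apk}")
            # Show connected devices first so a missing or unauthorized device is easy to spot.
            subprocess.run(["adb", "devices"], check=True)
            # "-r" reinstalls the package while keeping existing app data.
            subprocess.run(["adb", "install", "-r", str(apk)], check=True)

        if __name__ == "__main__":
            sideload_apk("pixel_blade_m_vip.apk")  # hypothetical file name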

    Pros and cons of Pixel Blade M VIP APK

    -

    Like any other game, Pixel Blade M VIP APK has its own advantages and disadvantages. Here are some of them:

    -

    Pros

    - -

    Cons

    - -

    Conclusion

    -

    Pixel Blade M VIP APK is an action RPG game that lets you play as the last pixel hero who has to save the world from evil. The game has pixel-style graphics, 3D effects, and a hack and slash gameplay that will keep you entertained for hours. The game also has many features that make it enjoyable and addictive, such as various weapons, skills, costumes, items, mine system, craft system, and boss raid. The game also has a VIP version that gives you exclusive benefits such as free gems and no ads. If you are looking for a pixel-style 3D action RPG game that offers quick and exciting gameplay, various weapons and skills, and a challenging dungeon adventure, then you might want to check out Pixel Blade M VIP APK.

    -

    FAQs

    -

    Here are some frequently asked questions about Pixel Blade M VIP APK:

    -
    1. What is the difference between Pixel Blade M VIP APK and Pixel Blade M APK?

       Pixel Blade M VIP APK is the VIP version of Pixel Blade M APK. It offers some exclusive benefits for the players, such as 500 GEM (click the top VIP button) and removal of banner ads. The VIP version also has regular updates that add new features and improvements to the gameplay.

    2. Is Pixel Blade M VIP APK safe to download and install?

       Yes, Pixel Blade M VIP APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download the game from trusted sources such as the official website of PixelStar Games or this link: [Pixel Blade M VIP APK].

    3. How can I get more gems in Pixel Blade M VIP APK?

       You can get more gems in Pixel Blade M VIP APK by using the mine system or by clicking the top VIP button. You can also earn gems by completing quests, achievements, or events in the game, or buy them with real money through in-app purchases.

    4. How can I advance my weapons in Pixel Blade M VIP APK?

       You can advance your weapons in Pixel Blade M VIP APK by using the upgrade system or the craft system. You need enough materials and gold to upgrade or craft your weapons; materials come from hunting monsters, mining, or crafting. You can also advance your weapons with the gems you obtain from the mine system or the VIP button.

    5. How can I play Pixel Blade M VIP APK with other players?

       You can play Pixel Blade M VIP APK with other players by using the boss raid feature. You can join or create a room and invite other players online, or play solo to challenge the bosses in the dungeon. You can also chat with other players in the game and make friends.

    I hope this article has helped you learn more about Pixel Blade M VIP APK and how to play it. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun playing Pixel Blade M VIP APK!

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git "a/spaces/2ndelement/voicevox/docs/VOICEVOX\351\237\263\345\243\260\345\220\210\346\210\220\343\202\250\343\203\263\343\202\270\343\203\263\343\201\250\343\201\256\351\200\243\346\220\272.md" "b/spaces/2ndelement/voicevox/docs/VOICEVOX\351\237\263\345\243\260\345\220\210\346\210\220\343\202\250\343\203\263\343\202\270\343\203\263\343\201\250\343\201\256\351\200\243\346\220\272.md" deleted file mode 100644 index 540173be1b280ce5c3593b8aed02fd42ef633f65..0000000000000000000000000000000000000000 --- "a/spaces/2ndelement/voicevox/docs/VOICEVOX\351\237\263\345\243\260\345\220\210\346\210\220\343\202\250\343\203\263\343\202\270\343\203\263\343\201\250\343\201\256\351\200\243\346\220\272.md" +++ /dev/null @@ -1,7 +0,0 @@ -メモ書き程度ですが、どういう方針で開発を進めているかを紹介します。 - -- バージョンが上がっても、`/audio_query`で返ってくる値をそのまま`/synthesis`に POST すれば音声合成できるようにする予定です - - `AudioQuery`のパラメータは増えますが、なるべくデフォルト値で以前と変わらない音声が生成されるようにします -- バージョン 0.7 から音声スタイルが実装されました。スタイルの情報は`/speakers`から取得できます - - スタイルの情報にある`style_id`を`speaker`に指定することで、今まで通り音声合成ができます - - style_id の指定先が speaker なのは互換性のためです diff --git a/spaces/52Hz/SRMNet_AWGN_denoising/model/SRMNet.py b/spaces/52Hz/SRMNet_AWGN_denoising/model/SRMNet.py deleted file mode 100644 index 809213dbe17ee3dad1d15ff8de9c61ded35eed78..0000000000000000000000000000000000000000 --- a/spaces/52Hz/SRMNet_AWGN_denoising/model/SRMNet.py +++ /dev/null @@ -1,227 +0,0 @@ -import torch -import torch.nn as nn - -##---------- Basic Layers ---------- -def conv3x3(in_chn, out_chn, bias=True): - layer = nn.Conv2d(in_chn, out_chn, kernel_size=3, stride=1, padding=1, bias=bias) - return layer - -def conv(in_channels, out_channels, kernel_size, bias=False, stride=1): - return nn.Conv2d( - in_channels, out_channels, kernel_size, - padding=(kernel_size // 2), bias=bias, stride=stride) - -def bili_resize(factor): - return nn.Upsample(scale_factor=factor, mode='bilinear', align_corners=False) - -##---------- Basic Blocks ---------- -class UNetConvBlock(nn.Module): - def __init__(self, in_size, out_size, downsample): - super(UNetConvBlock, self).__init__() - self.downsample = downsample - self.block = SK_RDB(in_channels=in_size, growth_rate=out_size, num_layers=3) - if downsample: - self.downsample = PS_down(out_size, out_size, downscale=2) - - def forward(self, x): - out = self.block(x) - if self.downsample: - out_down = self.downsample(out) - return out_down, out - else: - return out - -class UNetUpBlock(nn.Module): - def __init__(self, in_size, out_size): - super(UNetUpBlock, self).__init__() - # self.up = nn.ConvTranspose2d(in_size, out_size, kernel_size=2, stride=2, bias=True) - self.up = PS_up(in_size, out_size, upscale=2) - self.conv_block = UNetConvBlock(in_size, out_size, False) - - def forward(self, x, bridge): - up = self.up(x) - out = torch.cat([up, bridge], dim=1) - out = self.conv_block(out) - return out - -##---------- Resizing Modules (Pixel(Un)Shuffle) ---------- -class PS_down(nn.Module): - def __init__(self, in_size, out_size, downscale): - super(PS_down, self).__init__() - self.UnPS = nn.PixelUnshuffle(downscale) - self.conv1 = nn.Conv2d((downscale**2) * in_size, out_size, 1, 1, 0) - - def forward(self, x): - x = self.UnPS(x) # h/2, w/2, 4*c - x = self.conv1(x) - return x - -class PS_up(nn.Module): - def __init__(self, in_size, out_size, upscale): - super(PS_up, self).__init__() - - self.PS = nn.PixelShuffle(upscale) - self.conv1 = nn.Conv2d(in_size//(upscale**2), out_size, 1, 1, 0) - - def forward(self, x): - x = self.PS(x) # h/2, w/2, 4*c - x = 
self.conv1(x) - return x - -##---------- Selective Kernel Feature Fusion (SKFF) ---------- -class SKFF(nn.Module): - def __init__(self, in_channels, height=3, reduction=8, bias=False): - super(SKFF, self).__init__() - - self.height = height - d = max(int(in_channels / reduction), 4) - - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.conv_du = nn.Sequential(nn.Conv2d(in_channels, d, 1, padding=0, bias=bias), nn.PReLU()) - - self.fcs = nn.ModuleList([]) - for i in range(self.height): - self.fcs.append(nn.Conv2d(d, in_channels, kernel_size=1, stride=1, bias=bias)) - - self.softmax = nn.Softmax(dim=1) - - def forward(self, inp_feats): - batch_size, n_feats, H, W = inp_feats[1].shape - - inp_feats = torch.cat(inp_feats, dim=1) - inp_feats = inp_feats.view(batch_size, self.height, n_feats, inp_feats.shape[2], inp_feats.shape[3]) - - feats_U = torch.sum(inp_feats, dim=1) - feats_S = self.avg_pool(feats_U) - feats_Z = self.conv_du(feats_S) - - attention_vectors = [fc(feats_Z) for fc in self.fcs] - attention_vectors = torch.cat(attention_vectors, dim=1) - attention_vectors = attention_vectors.view(batch_size, self.height, n_feats, 1, 1) - - attention_vectors = self.softmax(attention_vectors) - feats_V = torch.sum(inp_feats * attention_vectors, dim=1) - - return feats_V - -##---------- Dense Block ---------- -class DenseLayer(nn.Module): - def __init__(self, in_channels, out_channels, I): - super(DenseLayer, self).__init__() - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=3 // 2) - self.relu = nn.ReLU(inplace=True) - self.sk = SKFF(out_channels, height=2, reduction=8, bias=False) - - def forward(self, x): - x1 = self.relu(self.conv(x)) - # output = torch.cat([x, x1], 1) # -> RDB - output = self.sk((x, x1)) - return output - -##---------- Selective Kernel Residual Dense Block (SK-RDB) ---------- -class SK_RDB(nn.Module): - def __init__(self, in_channels, growth_rate, num_layers): - super(SK_RDB, self).__init__() - self.identity = nn.Conv2d(in_channels, growth_rate, 1, 1, 0) - self.layers = nn.Sequential( - *[DenseLayer(in_channels, in_channels, I=i) for i in range(num_layers)] - ) - self.lff = nn.Conv2d(in_channels, growth_rate, kernel_size=1) - - def forward(self, x): - res = self.identity(x) - x = self.layers(x) - x = self.lff(x) - return res + x - -##---------- testNet ---------- -class SRMNet(nn.Module): - def __init__(self, in_chn=3, wf=96, depth=4): - super(SRMNet, self).__init__() - self.depth = depth - self.down_path = nn.ModuleList() - self.bili_down = bili_resize(0.5) - self.conv_01 = nn.Conv2d(in_chn, wf, 3, 1, 1) - - # encoder of UNet - prev_channels = 0 - for i in range(depth): # 0,1,2,3 - downsample = True if (i + 1) < depth else False - self.down_path.append(UNetConvBlock(prev_channels + wf, (2 ** i) * wf, downsample)) - prev_channels = (2 ** i) * wf - - # decoder of UNet - self.up_path = nn.ModuleList() - self.skip_conv = nn.ModuleList() - self.conv_up = nn.ModuleList() - self.bottom_conv = nn.Conv2d(prev_channels, wf, 3, 1, 1) - self.bottom_up = bili_resize(2 ** (depth-1)) - - for i in reversed(range(depth - 1)): - self.up_path.append(UNetUpBlock(prev_channels, (2 ** i) * wf)) - self.skip_conv.append(nn.Conv2d((2 ** i) * wf, (2 ** i) * wf, 3, 1, 1)) - self.conv_up.append(nn.Sequential(*[nn.Conv2d((2 ** i) * wf, wf, 3, 1, 1), bili_resize(2 ** i)])) - prev_channels = (2 ** i) * wf - - self.final_ff = SKFF(in_channels=wf, height=depth) - self.last = conv3x3(prev_channels, in_chn, bias=True) - - def forward(self, x): - img = x - scale_img = img - - ##### shallow 
conv ##### - x1 = self.conv_01(img) - encs = [] - ######## UNet ######## - # Down-path (Encoder) - for i, down in enumerate(self.down_path): - if i == 0: - x1, x1_up = down(x1) - encs.append(x1_up) - elif (i + 1) < self.depth: - scale_img = self.bili_down(scale_img) - left_bar = self.conv_01(scale_img) - x1 = torch.cat([x1, left_bar], dim=1) - x1, x1_up = down(x1) - encs.append(x1_up) - else: - scale_img = self.bili_down(scale_img) - left_bar = self.conv_01(scale_img) - x1 = torch.cat([x1, left_bar], dim=1) - x1 = down(x1) - - # Up-path (Decoder) - ms_result = [self.bottom_up(self.bottom_conv(x1))] - for i, up in enumerate(self.up_path): - x1 = up(x1, self.skip_conv[i](encs[-i - 1])) - ms_result.append(self.conv_up[i](x1)) - - # Multi-scale selective feature fusion - msff_result = self.final_ff(ms_result) - - ##### Reconstruct ##### - out_1 = self.last(msff_result) + img - - return out_1 - - -if __name__ == "__main__": - from thop import profile - - input = torch.ones(1, 3, 256, 256, dtype=torch.float, requires_grad=False) - model = SRMNet(in_chn=3, wf=96, depth=4) - out = model(input) - flops, params = profile(model, inputs=(input,)) - total = sum(p.numel() for p in model.parameters()) - - # RDBlayer = SK_RDB(in_channels=64, growth_rate=64, num_layers=3) - # print(RDBlayer) - # out = RDBlayer(input) - # flops, params = profile(RDBlayer, inputs=(input,)) - - print('input shape:', input.shape) - print('output shape', out.shape) - print("-----------------------------------") - print("Total params: %.4f M" % (total / 1e6)) - print("Total params: %.4f G" % (flops / 1e9)) \ No newline at end of file diff --git a/spaces/801artistry/RVC801/julius/__init__.py b/spaces/801artistry/RVC801/julius/__init__.py deleted file mode 100644 index 69811b0415a291ca1beb845531785ba03c57099a..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/julius/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 - -# flake8: noqa -""" -.. image:: ../logo.png - -Julius contains different Digital Signal Processing algorithms implemented -with PyTorch, so that they are differentiable and available on CUDA. -Note that all the modules implemented here can be used with TorchScript. - -For now, I have implemented: - -- `julius.resample`: fast sinc resampling. -- `julius.fftconv`: FFT based convolutions. -- `julius.lowpass`: FIR low pass filter banks. -- `julius.filters`: FIR high pass and band pass filters. -- `julius.bands`: Decomposition of a waveform signal over mel-scale frequency bands. - -Along that, you might found useful utilities in: - -- `julius.core`: DSP related functions. -- `julius.utils`: Generic utilities. - - -Please checkout [the Github repository](https://github.com/adefossez/julius) for other informations. -For a verification of the speed and correctness of Julius, check the benchmark module `bench`. - - -This package is named in this honor of -[Julius O. Smith](https://ccrma.stanford.edu/~jos/), -whose books and website were a gold mine of information for me to learn about DSP. Go checkout his website if you want -to learn more about DSP. 
-""" - -from .bands import SplitBands, split_bands -from .fftconv import fft_conv1d, FFTConv1d -from .filters import bandpass_filter, BandPassFilter -from .filters import highpass_filter, highpass_filters, HighPassFilter, HighPassFilters -from .lowpass import lowpass_filter, lowpass_filters, LowPassFilters, LowPassFilter -from .resample import resample_frac, ResampleFrac diff --git a/spaces/AI-Naga/Parking_Space_Counter/README.md b/spaces/AI-Naga/Parking_Space_Counter/README.md deleted file mode 100644 index ea784f45445a5e6339d993a18e71b8f80d41b48a..0000000000000000000000000000000000000000 --- a/spaces/AI-Naga/Parking_Space_Counter/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Parking Space Counter -emoji: ⚡ -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/model.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/model.py deleted file mode 100644 index 901cb7a86ea5b13912ff2a98680f368d18e36d9f..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/model.py +++ /dev/null @@ -1,768 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F -import numpy as np - -from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - self.dilation = dilation ## modified - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, 
input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - dilation=self.dilation, ## modified - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - dilation=1, ##### modified - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - self.dilation = dilation ##### modified - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - # to simulate transconv + blur - # we use dilated transposed conv with blur kernel as weight + dilated transconv - if dilation > 1: ##### modified - blur_weight = torch.randn(1, 1, 3, 3) * 0 + 1 - blur_weight[:,:,0,1] = 2 - blur_weight[:,:,1,0] = 2 - blur_weight[:,:,1,2] = 2 - blur_weight[:,:,2,1] = 2 - blur_weight[:,:,1,1] = 4 - blur_weight = blur_weight / 16.0 - self.register_buffer("blur_weight", blur_weight) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 + dilation - 1 ##### modified - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if 
self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - - if self.dilation > 1: ##### modified - # to simulate out = self.blur(out) - out = F.conv_transpose2d( - input, self.blur_weight.repeat(batch*in_channel,1,1,1), padding=0, groups=batch*in_channel, dilation=self.dilation//2) - # to simulate the next line - out = F.conv_transpose2d( - out, weight, padding=self.dilation, groups=batch, dilation=self.dilation//2) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - return out - - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch, dilation=self.dilation) ##### modified - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - else: ##### modified, to make the resolution matches - batch, _, height, width = image.shape - _, _, height1, width1 = noise.shape - if height != height1 or width != width1: - noise = F.adaptive_avg_pool2d(noise, (height, width)) - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - dilation=1, ##### modified - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - dilation=dilation, ##### modified - ) - - self.noise = NoiseInjection() - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1], dilation=1): ##### modified - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = 
ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - self.dilation = dilation ##### modified - if dilation > 1: ##### modified - blur_weight = torch.randn(1, 1, 3, 3) * 0 + 1 - blur_weight[:,:,0,1] = 2 - blur_weight[:,:,1,0] = 2 - blur_weight[:,:,1,2] = 2 - blur_weight[:,:,2,1] = 2 - blur_weight[:,:,1,1] = 4 - blur_weight = blur_weight / 16.0 - self.register_buffer("blur_weight", blur_weight) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - if self.dilation == 1: - skip = self.upsample(skip) - else: ##### modified, to simulate skip = self.upsample(skip) - batch, in_channel, _, _ = skip.shape - skip = F.conv2d(skip, self.blur_weight.repeat(in_channel,1,1,1), - padding=self.dilation//2, groups=in_channel, dilation=self.dilation//2) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel, dilation=8 ##### modified - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - dilation=max(1, 32 // (2**(i-1))) ##### modified - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel, dilation=max(1, 32 // (2**i)) ##### modified - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim, dilation=max(1, 32 // (2**(i-1))))) ##### modified - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - # styles is the latent code w+ - # first_layer_feature is the first-layer input feature f - # first_layer_feature_ind indicate which layer of G 
accepts f (should always=0, the first layer) - # skip_layer_feature is the encoder features sent by skip connection - # fusion_block is the network to fuse the encoder feature and decoder feature - # zero_noise is to force the noise to be zero (to avoid flickers for videos) - # editing_w is the editing vector v used in video face editing - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - first_layer_feature = None, ##### modified - first_layer_feature_ind = 0, ##### modified - skip_layer_feature = None, ##### modified - fusion_block = None, ##### modified - zero_noise = False, ##### modified - editing_w = None, ##### modified - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if zero_noise: - noise = [ - getattr(self.noises, f'noise_{i}') * 0.0 for i in range(self.num_layers) - ] - elif noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - # w+ + v for video face editing - if editing_w is not None: ##### modified - latent = latent + editing_w - - # the original StyleGAN - if first_layer_feature is None: ##### modified - out = self.input(latent) - out = F.adaptive_avg_pool2d(out, 32) ##### modified - out = self.conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - # the default StyleGANEX, replacing the first layer of G - elif first_layer_feature_ind == 0: ##### modified - out = first_layer_feature[0] ##### modified - out = self.conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - # maybe we can also use the second layer of G to accept f? 
- else: ##### modified - out = first_layer_feature[0] ##### modified - skip = first_layer_feature[1] ##### modified - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - # these layers accepts skipped encoder layer, use fusion block to fuse the encoder feature and decoder feature - if skip_layer_feature and fusion_block and i//2 < len(skip_layer_feature) and i//2 < len(fusion_block): - if editing_w is None: - out, skip = fusion_block[i//2](skip_layer_feature[i//2], out, skip) - else: - out, skip = fusion_block[i//2](skip_layer_feature[i//2], out, skip, editing_w[:,i]) - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - elif return_features: - return image, out - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - dilation=1, ## modified - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 + dilation-1 ## modified - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - dilation=dilation, ## modified - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], img_channel=3): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(img_channel, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - self.size = size ##### modified - - def forward(self, input): - # for input that not satisfies the target size, we crop it to extract a small image of the target size. 
- _, _, h, w = input.shape ##### modified - i, j = torch.randint(0, h+1-self.size, size=(1,)).item(), torch.randint(0, w+1-self.size, size=(1,)).item() ##### modified - out = self.convs(input[:,:,i:i+self.size,j:j+self.size]) ##### modified - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out \ No newline at end of file diff --git a/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/app.py b/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/app.py deleted file mode 100644 index ee4f1077a662c72216fea0dd67c70e46ebaa0939..0000000000000000000000000000000000000000 --- a/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/app.py +++ /dev/null @@ -1,345 +0,0 @@ -import gradio as gr -# import torch -# from torch import autocast -# from diffusers import StableDiffusionPipeline -from datasets import load_dataset -from PIL import Image -from io import BytesIO -# import base64 -# import re -import os -import requests -import json -import base64 -# from urllib import parse - -from share_btn import community_icon_html, loading_icon_html, share_js - - -is_gpu_busy = False - -def safe_sd(prompt, n_samples, steps, scale, seed, mode): - url = os.getenv('BACKEND_URL_SAFE_NEW') - token = os.getenv('BACKEND_TOKEN') - user = os.getenv('BACKEND_USER') - res = requests.post(url, json={ - "model": "togethercomputer/UniversalSD", - "prompt": prompt, - "n": n_samples, - "mode": mode, - "steps": steps, - "seed": seed, - "guidance_scale": scale, - }, headers={ - "Authorization": token, - "User-Agent": user - }) - return res - -def infer(prompt, n_samples, steps, scale, seed): - global is_gpu_busy - # generator = torch.Generator(device=device).manual_seed(seed) - # print("Is GPU busy? ", is_gpu_busy) - images = [] - - if prompt == "": - raise gr.Error("Empty prompt. 
Please provide a prompt.") - - response = safe_sd(prompt, int(n_samples), max(50,int(steps)), scale, seed, mode="text2img") - - data = json.load(BytesIO(response.content)) - if 'output' not in data: - raise gr.Error("An error occurred.") - else: - if data['output']['result_type'] == "error": - raise gr.Error(data['output']['value']) - for image in data['output']['choices']: - im = Image.open(BytesIO(base64.b64decode(image['image_base64']))) - images.append(im) - - response = safe_sd(prompt, int(n_samples), max(50,int(steps)), scale, seed, mode="safe_text2img") - - data = json.load(BytesIO(response.content)) - if 'output' not in data: - raise gr.Error("An error occurred.") - else: - for image in data['output']['choices']: - im = Image.open(BytesIO(base64.b64decode(image['image_base64']))) - images.append(im) - return images - - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: #3a669bff; - background: #3a669bff; - } - input[type='range'] { - accent-color: #3a669bff; - } - .dark input[type='range'] { - accent-color: #3a669bff; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - #container-advanced-btns{ - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-items: center; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #3a669bff; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; - } - #share-btn * { - all: unset; - } - .gr-form{ - 
flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } -""" - -block = gr.Blocks(css=css) - -examples = [ - [ - 'a photograph by vanessa beecroft', - 1, - 50, - 7.5, - 24803839, - ], - [ - 'a gorgeous female photo', - 1, - 50, - 7.5, - 733664822, - ], - [ - 'a gorgeous male photo', - 1, - 50, - 7.5, - 881355, - ], - [ - 'the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker', - 1, - 50, - 7.5, - 557645701 - ], - [ - 'portrait of girl with smokey eyes makeup in abandoned hotel, grange clothes, redshift, wide high angle coloured polaroid photograph with flash, kodak film, hyper real, stunning moody cinematography, with anamorphic lenses, by maripol, fallen angels by wong kar - wai, style of suspiria and neon demon and children from bahnhof zoo, detailed ', - 1, - 50, - 9, - 1115417309, - ], - [ - 'portrait of Sickly diseased dying Samurai warrior, sun shining, photo realistic illustration by greg rutkowski, thomas kindkade, alphonse mucha, loish, norman rockwell.', - 1, - 50, - 10, - 1714108957, - ] -] - -with block: - gr.HTML( - """ -
    -
    - -

    - Stable Diffusion vs. Safe Stable Diffusion -

    -
    -

    - Safe Stable Diffusion extends Stable Diffusion with safety guidance. In the case of NSFW images it returns the closest non-NSFW images instead of a black square. - Details can be found in the Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models paper. -

    -
    - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - elem_id="prompt-text-input", - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Generate image").style( - margin=False, - rounded=(False, True, True, False), - full_width=False, - ) - - gallery = gr.Gallery( - label="Left: Stable Diffusion, Right: Safe Stable Diffusion", show_label=True, elem_id="gallery" - ).style(grid=[2], height="auto") - - with gr.Group(elem_id="container-advanced-btns"): - advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - with gr.Row(elem_id="advanced-options"): - #gr.Markdown("Advanced settings are temporarily unavailable") - samples = gr.Slider(label="Images", minimum=1, maximum=1, value=1, step=1) - steps = gr.Slider(label="Steps", minimum=50, maximum=50, value=50, step=1) - scale = gr.Slider( - label="Guidance Scale", minimum=7.5, maximum=20, value=7.5, step=0.5 - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=2147483647, - step=1, - randomize=True, - ) - - ex = gr.Examples(examples=examples, fn=infer, inputs=[text, samples, steps, scale, seed], - outputs=[gallery, community_icon, loading_icon, share_button], cache_examples=False) - ex.dataset.headers = [""] - - text.submit(infer, inputs=[text, samples, steps, scale, seed], outputs=gallery) - btn.click(infer, inputs=[text, samples, steps, scale, seed], outputs=gallery) - - advanced_button.click( - None, - [], - text, - _js=""" - () => { - const options = document.querySelector("body > gradio-app").querySelector("#advanced-options"); - options.style.display = ["none", ""].includes(options.style.display) ? "flex" : "none"; - }""", - ) - share_button.click( - None, - [], - [], - _js=share_js, - ) - gr.HTML( - """ - -
    -

    LICENSE

    -The model is licensed with a CreativeML Open RAIL-M license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read the license.

    -

    Biases and content acknowledgment

    -Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. While the applied safety guidance suppresses the majority of inappropriate content, this still could apply to Safe Stable Diffusion models. The original model was trained on the LAION-5B dataset, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. Safety guidance suppresses potentially inappropriate content during inference. You can read more in the model card.

    -
    - """ - ) - -block.queue(concurrency_count=40, max_size=20).launch(max_threads=150) \ No newline at end of file diff --git a/spaces/AIWaves/Software_Company/src/agents/Prompt/base_Prompts.py b/spaces/AIWaves/Software_Company/src/agents/Prompt/base_Prompts.py deleted file mode 100644 index f33fcdb84d0665a87bc2a6b49dd636bbb7a0980a..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Software_Company/src/agents/Prompt/base_Prompts.py +++ /dev/null @@ -1,83 +0,0 @@ - -# SOP======================================================================================================== -# "environment_prompt" -# current_state , self(sop) -Get_environment_prompt = "f\"The current scenario is as follows {self.current_state.environment_prompt} \"" - - -# sop.transit -#================================================================ -Transit_system_prompt = "f\"{environment_prompt};{judge_system_prompt}\"" - -# transit chat message -# "environment_prompt" is get from "Get_environment_prompt" ; "chat_history_message" if from Memory -Transit_message = "f\"{environment_summary};The chat history is as follows:\\n {chat_history_message}\\n;You especially need to pay attention to the last query\\n{query}\\n and the relevant conversation \\n{relevant_history} \\n\\n\"" - - -Transit_last_prompt = "f\"{judge_last_prompt}\"" -#sop.transit================================================================ - -# sop.call -#================================================================ -# help controller to determine the next role to speak.(the {} is agent role) call_prompt + allocate_component -Allocate_component = "f\"If it's currently supposed to be speaking for {role}, then output {role}.\\n\"" - -# environment_prompt is get from "Get_environment_prompt" ; "chat_history_message" if from Memory -Call_system_prompt = "f\"{environment_prompt};{call_system_prompt};{allocate_prompt}\"" - -# -Call_last_prompt = "f\"You especially need to pay attention to the last query\\n{query}\\n and the relevant conversation \\n{relevant_history} \\n\\n;Now please choose the person to speak according to the following rules :{allocate_prompt};Note: The person whose turn it is now cannot be the same as the person who spoke last time, so {last_name} cannot be output\\n.\"" - -Call_message = "f\"The chat history is as follows:\\n\\n{chat_history_message}\\n;The last person to speak is: {last_name}\\n. 
\"" -#sop.call================================================================ -# SOP======================================================================================================== - - - - - - -# Memory======================================================================================================== -Single_message = "f\"{name} said that :{content}\"" - -Chat_total_message = "f\"{chat_history}\"" -# Memory======================================================================================================== - - - - - - -# Environment======================================================================================================== -Default_environment_summary_system_prompt = "\"\\nYour task is to summarize the historical dialogue records according to the current scene, and summarize the most important information\"" - -Default_environment_summary_last_prompt = "\"Please make a summary based on the historical chat records, the output format is history summary: \{your summary content\} \"" - -Environment_summary_memory = "f\"The information you need to know is as follows:\\n\\n\ - The summary of the previous dialogue history is:\\n{summary}\\n.\ - The latest conversation record is as follows:\\n {chat_history}\\n,\ - the relevant chat history you may need is:{relevant_history}\"" - -Environment_summary_system_prompt = "f\"{environment_prompt};{current_memory};{summary_system_prompt};\"" - - -# observe -Agent_observe_relevant_memory = "f\"The relevant chat history are as follows:\\n{relevant_memory} \\n\"" - - -Agent_observe_memory = "f\"Here's what you need to know(Remember, this is just information, Try not to repeat what's inside):\\n\\n{relevant_memory};\ - The previous summary of chat history is as follows :\\n{agent.short_term_memory}\\n.\ - The new chat history is as follows:\\n {conversations}\\n\\n\ - \"" -# Environment======================================================================================================== - - - - -# Agent======================================================================================================== -Agent_summary_system_prompt = "f\"{summary_prompt};Please summarize past key summary \\n\\n {self.short_term_memory} and new chat_history as follows: \\n{conversations}\"" - -Agent_last_prompt = "f\"{last_prompt};\\nPlease continue the talk based on your known information,Make an effort to make the conversation more coherent and try to respond differently from your existing knowledge, avoiding repeating what others have said.\"" - -Agent_system_prompt = "f\"{system_prompt},\"" -# Agent======================================================================================================== diff --git a/spaces/AIZero2HeroBootcamp/VideoToAnimatedGif/app.py b/spaces/AIZero2HeroBootcamp/VideoToAnimatedGif/app.py deleted file mode 100644 index ffd742dd801a4d28b26726e4755718f7274f0ac8..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/VideoToAnimatedGif/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import streamlit as st -from moviepy.editor import VideoFileClip -import os - -st.title('Video to GIF converter') - -uploaded_file = st.file_uploader("Choose a video...", type=["mp4", "mov", "avi", "mkv"]) - -if uploaded_file is not None: - with open("temp_video.mp4", "wb") as f: - f.write(uploaded_file.getbuffer()) - - st.success('Video uploaded successfully!') - - start_time = st.number_input('Enter the start time (in seconds)', min_value=0, value=0, step=1) - duration = st.number_input('Enter the duration of the clip 
(in seconds)', min_value=1, value=5, step=1) - resolution = st.number_input('Enter the height resolution (in pixels)', min_value=1, value=480, step=1) - - if st.button('Create GIF'): - video = VideoFileClip("temp_video.mp4") - clip = video.subclip(start_time, start_time + duration) - clip_resized = clip.resize(height=resolution) - clip_resized.write_gif("output.gif", fps=clip.fps) - - st.success('GIF created successfully! Check your directory for a file named "output.gif".') - os.remove("temp_video.mp4") # remove the temporary video file diff --git a/spaces/Abduhoshim/speech_emotion_detection/app.py b/spaces/Abduhoshim/speech_emotion_detection/app.py deleted file mode 100644 index 61751a5622a77a92c6c7978f2ccaaefe2041e22e..0000000000000000000000000000000000000000 --- a/spaces/Abduhoshim/speech_emotion_detection/app.py +++ /dev/null @@ -1,73 +0,0 @@ -from tensorflow import keras -import os -import soundfile as sf -import numpy as np -import librosa -import gradio as gr -import seaborn as sns -import pandas as pd -import plotly.express as px -model = keras.models.load_model('emotion.h5') -labels = ['Angry', 'Disgusted', 'Fearful', 'Happy', 'Neutral', 'Sad', 'Suprised'] -def predict(audio): - wave, sr = librosa.load(audio, sr=None) - segment_dur_secs = 3 - segment_length = sr * segment_dur_secs - num_sections = int(np.ceil(len(wave) / segment_length)) - split = [] - paths =[] - for i in range(num_sections): - t = wave[i * segment_length: (i + 1) * segment_length] - split.append(t) - - out_dir = ('audio_data/splits/') - os.makedirs(out_dir, exist_ok=True) - for i in range(num_sections): - recording_name = os.path.basename(audio[:-4]) - out_file = f"{recording_name}_{str(i)}.wav" - sf.write(os.path.join(out_dir, out_file), split[i], sr) - paths.append(os.path.join(out_dir, out_file)) - - - predicted_features = pd.DataFrame(columns=['features']) - counter=0 - for path in paths: - X, sample_rate = librosa.load(path - ,duration=2.5 - ,sr=44100 - ,offset=0.5 - ) - sample_rate = np.array(sample_rate) - mfccs = np.mean(librosa.feature.mfcc(y=X, - sr=sample_rate, - n_mfcc=13), - axis=0) - predicted_features.loc[counter] = [mfccs] - counter=counter+1 - predicted_features = pd.DataFrame(predicted_features['features'].values.tolist()) - predicted_features.dropna(inplace=True) - preds = model.predict(predicted_features) - - preds=preds.argmax(axis=1) - df_preds = pd.DataFrame(preds,columns = ['prediction']) - emotions = [] - for i in df_preds['prediction']: - emotion = labels[int(i)] - emotions.append(emotion) - df_preds['emotion'] = emotions - df_preds = df_preds.reset_index() - fig = px.line(df_preds, x="index", y="emotion", title='How emotion change over speech') - fig.update_xaxes(title='The 3s intervals of speech') - return fig - -outputs = gr.Plot() -title = "Emotion recognition" -description = "This model can shows how speaker emotion changes over the speech" - -infr = gr.Interface(fn=predict, - inputs=gr.Audio(type="filepath"), - examples=['audio_samples/1.mp3','audio_samples/2.mp3','audio_samples/3.mp3','audio_samples/4.mp3'], - cache_examples=True, - outputs=outputs, - title=title,description=description,interpretation='default',) -infr.launch() diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/dynamic.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/dynamic.py deleted file mode 100644 index d6b6d72feab1b2fec0776db545273d32dc6f1fb1..0000000000000000000000000000000000000000 --- 
a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/dynamic.py +++ /dev/null @@ -1,84 +0,0 @@ -from __future__ import annotations -import asyncio -from colorama import Fore - -from typing import TYPE_CHECKING, List - -from . import decision_maker_registry -from .base import BaseDecisionMaker -from agentverse.logging import typewriter_log - -if TYPE_CHECKING: - from agentverse.agents.base import BaseAgent - from agentverse.message import Message - - -@decision_maker_registry.register("dynamic") -class DynamicDecisionMaker(BaseDecisionMaker): - """ - Discuss in a horizontal manner. - """ - - name: str = "dynamic" - - ## To Do: implement dynamic - # def step( - async def astep( - self, - agents: List[BaseAgent], - manager: List[BaseAgent], - task_description: str, - previous_plan: str = "No solution yet.", - advice: str = "No advice yet.", - previous_sentence: str = "No any sentence yet.", - *args, - **kwargs, - ) -> List[str]: - # Speak simultaneously - # Manger select the optimial one as the current spoken sentence - reviews = list() - for i in range(len(agents)): - review = await asyncio.gather( - *[ - agent.astep(previous_plan, advice, task_description) - for agent in agents[1:] - ] - ) - - # typewriter_log("Reviews:", Fore.YELLOW) - # typewriter_log( - # "\n".join( - # [ - # f"[{review.sender_agent.role_description}]: {review.criticism}" - # for review in reviews - # ] - # ), - # Fore.YELLOW, - # ) - - previous_sentence = manager.step( - previous_plan, review, advice, task_description, previous_sentence - ) - reviews.append(previous_sentence) - - """ - reviews = await asyncio.gather( - *[ - agent.astep(previous_plan, advice, task_description) - for agent in agents[1:] - ] - ) - """ - - nonempty_reviews = [] - for review in reviews: - if not review.is_agree and review.content != "": - nonempty_reviews.append(review) - agents[0].add_message_to_memory(nonempty_reviews) - - result = agents[0].step(previous_plan, advice, task_description) - - return [result] - - def reset(self): - pass diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChild.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChild.js deleted file mode 100644 index 596677e049033d8db90d3685f8a96f78280e15fa..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChild.js +++ /dev/null @@ -1,16 +0,0 @@ -import Make from '../../Make.js'; - -var CreateChild = function (scene, data, subKey, view, styles, customBuilders) { - var childData = data[subKey]; - if (!childData) { - return undefined; - } - - var child; - child = Make(scene, childData, view, styles, customBuilders); - data[subKey] = child; - - return child; -} - -export default CreateChild; \ No newline at end of file diff --git a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/app.py b/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/app.py deleted file mode 100644 index 66559f92724be95503710e7912ca65721bf8f4c7..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/app.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import imageio -import numpy as np -import matplotlib.pyplot as plt -import matplotlib.animation as animation -from skimage.transform import resize -import warnings -import os -from demo import make_animation -from skimage import img_as_ubyte -from demo import 
load_checkpoints -import gradio - - -def inference(source_image_path='./assets/source.png', driving_video_path='./assets/driving.mp4', dataset_name="vox"): - # edit the config - device = torch.device('cpu') - # dataset_name = 'vox' # ['vox', 'taichi', 'ted', 'mgif'] - # source_image_path = './assets/source.png' - # driving_video_path = './assets/driving.mp4' - output_video_path = './generated.mp4' - - pixel = 256 # for vox, taichi and mgif, the resolution is 256*256 - if (dataset_name == 'ted'): # for ted, the resolution is 384*384 - pixel = 384 - config_path = f'config/{dataset_name}-{pixel}.yaml' - checkpoint_path = f'checkpoints/{dataset_name}.pth.tar' - predict_mode = 'relative' # ['standard', 'relative', 'avd'] - - warnings.filterwarnings("ignore") - - source_image = imageio.imread(source_image_path) - reader = imageio.get_reader(driving_video_path) - - source_image = resize(source_image, (pixel, pixel))[..., :3] - - fps = reader.get_meta_data()['fps'] - driving_video = [] - try: - for im in reader: - driving_video.append(im) - except RuntimeError: - pass - reader.close() - - driving_video = [resize(frame, (pixel, pixel))[..., :3] for frame in driving_video] - - # driving_video = driving_video[:10] - - def display(source, driving, generated=None) -> animation.ArtistAnimation: - fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6)) - - ims = [] - for i in range(len(driving)): - cols = [source] - cols.append(driving[i]) - if generated is not None: - cols.append(generated[i]) - im = plt.imshow(np.concatenate(cols, axis=1), animated=True) - plt.axis('off') - ims.append([im]) - - ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000) - # plt.show() - plt.close() - return ani - - inpainting, kp_detector, dense_motion_network, avd_network = load_checkpoints(config_path=config_path, - checkpoint_path=checkpoint_path, - device=device) - - predictions = make_animation(source_image, driving_video, inpainting, kp_detector, dense_motion_network, - avd_network, device=device, mode=predict_mode) - - # save resulting video - imageio.mimsave(output_video_path, [img_as_ubyte(frame) for frame in predictions], fps=fps) - - ani = display(source_image, driving_video, predictions) - ani.save('animation.mp4', writer='imagemagick', fps=60) - return 'animation.mp4' - - -demo = gradio.Interface( - fn=inference, - inputs=[ - gradio.inputs.Image(type="filepath", label="Input image"), - gradio.inputs.Video(label="Input video"), - gradio.inputs.Dropdown(['vox', 'taichi', 'ted', 'mgif'], type="value", default="vox", label="Model", - optional=False), - - ], - outputs=["video"], - examples=[ - ['./assets/source.png', './assets/driving.mp4', "vox"], - ['./assets/source_ted.png', './assets/driving_ted.mp4', "ted"], - ], -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/AlexWortega/MailruQA/app.py b/spaces/AlexWortega/MailruQA/app.py deleted file mode 100644 index 946bdf1c196bddcfd8df1f42fd5533295ab06ab8..0000000000000000000000000000000000000000 --- a/spaces/AlexWortega/MailruQA/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer -import random -device = 'cpu' - -def ans(question ): - description='' - category='' - seed = random.randint(1, 10000000) - print(f'Seed: {seed}') - torch.manual_seed(seed) - - inp = tokenizer.encode(f'Вопрос: {question}\nОписание: {description}\nОтвет:',return_tensors="pt").to(device) - print('question',question) - gen = model.generate(inp, do_sample=True, top_p=0.9, 
temperature=0.86, max_new_tokens=100, repetition_penalty=1.2) #, stop_token="") - - gen = tokenizer.decode(gen[0]) - gen = gen[:gen.index('') if '' in gen else len(gen)] - gen = gen.split('Ответ:')[1] - return gen - - - - - - - -# Download checkpoint: -checkpoint = "its5Q/rugpt3large_mailqa" -tokenizer = AutoTokenizer.from_pretrained(checkpoint) -model = AutoModelForCausalLM.from_pretrained(checkpoint) -model = model.eval() - -# Gradio - -title = "Ответы на главные вопросы жизни, вселенной и вообще" -description = "ruGPT large дообученная на датасете https://www.kaggle.com/datasets/atleast6characterss/otvetmailru-solved-questions " -article = "

-    Github with fine-tuning ruGPT3large on QA
-    Cозданно при поддержке
-    Love Death Transformers
    " -examples = [ - ["Как какать?"] -] - -iface = gr.Interface(fn=ans, title=title, description=description, article=article, examples=examples, inputs="text", outputs="text") - -if __name__ == "__main__": - iface.launch() \ No newline at end of file diff --git a/spaces/Ali36Ahmad/magic-diffusion/app.py b/spaces/Ali36Ahmad/magic-diffusion/app.py deleted file mode 100644 index c5d5180bf525be5cfc13c069ea6c60dee0af4cde..0000000000000000000000000000000000000000 --- a/spaces/Ali36Ahmad/magic-diffusion/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -import os -from share_btn import community_icon_html, loading_icon_html, share_js - -text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion") -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt, fn_index=2) - sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)] - return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -def get_prompts(prompt_text): - return text_gen(prompt_text) - -css = ''' -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -a {text-decoration-line: underline;} -''' - -with gr.Blocks(css=css) as demo: - gr.HTML("""
-              Magic Diffusion 🪄
-              This Space prettifies your prompt using MagicPrompt
-              and then runs it through Stable Diffusion to create aesthetically pleasing images. Simply enter a few concepts and let it improve your prompt. You can then diffuse the prompt.
    """) - - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Short text prompt", - lines=4, elem_id="input-text") - with gr.Row(): - see_prompts = gr.Button("Feed in your text!") - - with gr.Column(): - text_output = gr.Textbox( - label="Prettified text prompt", - lines=4, - elem_id="translated" - ) - with gr.Row(): - diffuse_btn = gr.Button(value="Diffuse the Prompt!") - with gr.Column(elem_id="generated-gallery"): - sd_output = gr.Gallery().style(grid=2, height="auto") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - see_prompts.click(get_prompts, - inputs = [input_text], - outputs = [ - text_output - ]) - diffuse_btn.click(get_images, - inputs = [ - text_output - ], - outputs = [sd_output, community_icon, loading_icon, share_button] - ) - share_button.click(None, [], [], _js=share_js) - - - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/face3d/data/__init__.py b/spaces/Alpaca233/SadTalker/src/face3d/data/__init__.py deleted file mode 100644 index 9a9761c518a1b07c5996165869742af0a52c82bc..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/data/__init__.py +++ /dev/null @@ -1,116 +0,0 @@ -"""This package includes all the modules related to data loading and preprocessing - - To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset. - You need to implement four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point from data loader. - -- : (optionally) add dataset-specific options and set default options. - -Now you can use the dataset class by specifying flag '--dataset_mode dummy'. -See our template dataset class 'template_dataset.py' for more details. -""" -import numpy as np -import importlib -import torch.utils.data -from face3d.data.base_dataset import BaseDataset - - -def find_dataset_using_name(dataset_name): - """Import the module "data/[dataset_name]_dataset.py". - - In the file, the class called DatasetNameDataset() will - be instantiated. It has to be a subclass of BaseDataset, - and it is case-insensitive. - """ - dataset_filename = "data." + dataset_name + "_dataset" - datasetlib = importlib.import_module(dataset_filename) - - dataset = None - target_dataset_name = dataset_name.replace('_', '') + 'dataset' - for name, cls in datasetlib.__dict__.items(): - if name.lower() == target_dataset_name.lower() \ - and issubclass(cls, BaseDataset): - dataset = cls - - if dataset is None: - raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name)) - - return dataset - - -def get_option_setter(dataset_name): - """Return the static method of the dataset class.""" - dataset_class = find_dataset_using_name(dataset_name) - return dataset_class.modify_commandline_options - - -def create_dataset(opt, rank=0): - """Create a dataset given the option. - - This function wraps the class CustomDatasetDataLoader. 
- This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from data import create_dataset - >>> dataset = create_dataset(opt) - """ - data_loader = CustomDatasetDataLoader(opt, rank=rank) - dataset = data_loader.load_data() - return dataset - -class CustomDatasetDataLoader(): - """Wrapper class of Dataset class that performs multi-threaded data loading""" - - def __init__(self, opt, rank=0): - """Initialize this class - - Step 1: create a dataset instance given the name [dataset_mode] - Step 2: create a multi-threaded data loader. - """ - self.opt = opt - dataset_class = find_dataset_using_name(opt.dataset_mode) - self.dataset = dataset_class(opt) - self.sampler = None - print("rank %d %s dataset [%s] was created" % (rank, self.dataset.name, type(self.dataset).__name__)) - if opt.use_ddp and opt.isTrain: - world_size = opt.world_size - self.sampler = torch.utils.data.distributed.DistributedSampler( - self.dataset, - num_replicas=world_size, - rank=rank, - shuffle=not opt.serial_batches - ) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - sampler=self.sampler, - num_workers=int(opt.num_threads / world_size), - batch_size=int(opt.batch_size / world_size), - drop_last=True) - else: - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=opt.batch_size, - shuffle=(not opt.serial_batches) and opt.isTrain, - num_workers=int(opt.num_threads), - drop_last=True - ) - - def set_epoch(self, epoch): - self.dataset.current_epoch = epoch - if self.sampler is not None: - self.sampler.set_epoch(epoch) - - def load_data(self): - return self - - def __len__(self): - """Return the number of data in the dataset""" - return min(len(self.dataset), self.opt.max_dataset_size) - - def __iter__(self): - """Return a batch of data""" - for i, data in enumerate(self.dataloader): - if i * self.opt.batch_size >= self.opt.max_dataset_size: - break - yield data diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py deleted file mode 100644 index f5c98a551d665a05d4cbab8ccbdef6785fa2ed09..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py +++ /dev/null @@ -1,707 +0,0 @@ -# Copyright 2023 TSAIL Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
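The `find_dataset_using_name` / `create_dataset` helpers in the face3d data package above resolve a dataset class from a naming convention at runtime via importlib. A generic sketch of that lookup pattern, with hypothetical package and class names standing in for the real ones:

import importlib


def find_class_by_name(package: str, name: str, base_cls: type) -> type:
    # Import "<package>.<name>_dataset" and return the subclass of base_cls whose
    # lowercased class name matches "<name>dataset" (underscores ignored), mirroring
    # the convention used above.
    module = importlib.import_module(f"{package}.{name}_dataset")
    target = name.replace("_", "") + "dataset"
    for attr_name, attr in vars(module).items():
        if attr_name.lower() == target.lower() and isinstance(attr, type) and issubclass(attr, base_cls):
            return attr
    raise NotImplementedError(f"no subclass of {base_cls.__name__} matching {target!r} in {module.__name__}")

# Hypothetical usage: dataset_cls = find_class_by_name("face3d.data", "template", BaseDataset)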
- -# DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver - -import math -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import randn_tensor -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. - Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class DPMSolverMultistepInverseScheduler(SchedulerMixin, ConfigMixin): - """ - DPMSolverMultistepInverseScheduler is the reverse scheduler of [`DPMSolverMultistepScheduler`]. - - We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space - diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic - thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as - stable-diffusion). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - solver_order (`int`, default `2`): - the order of DPM-Solver; can be `1` or `2` or `3`. We recommend to use `solver_order=2` for guided - sampling, and `solver_order=3` for unconditional sampling. 
- prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - thresholding (`bool`, default `False`): - whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). - For pixel-space diffusion models, you can set both `algorithm_type=dpmsolver++` and `thresholding=True` to - use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion - models (such as stable-diffusion). - dynamic_thresholding_ratio (`float`, default `0.995`): - the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen - (https://arxiv.org/abs/2205.11487). - sample_max_value (`float`, default `1.0`): - the threshold value for dynamic thresholding. Valid only when `thresholding=True` and - `algorithm_type="dpmsolver++`. - algorithm_type (`str`, default `dpmsolver++`): - the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++` or `sde-dpmsolver` or - `sde-dpmsolver++`. The `dpmsolver` type implements the algorithms in https://arxiv.org/abs/2206.00927, and - the `dpmsolver++` type implements the algorithms in https://arxiv.org/abs/2211.01095. We recommend to use - `dpmsolver++` or `sde-dpmsolver++` with `solver_order=2` for guided sampling (e.g. stable-diffusion). - solver_type (`str`, default `midpoint`): - the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects - the sample quality, especially for small number of steps. We empirically find that `midpoint` solvers are - slightly better, so we recommend to use the `midpoint` type. - lower_order_final (`bool`, default `True`): - whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically - find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. - use_karras_sigmas (`bool`, *optional*, defaults to `False`): - This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the - noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence - of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf. - lambda_min_clipped (`float`, default `-inf`): - the clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for - cosine (squaredcos_cap_v2) noise schedule. - variance_type (`str`, *optional*): - Set to "learned" or "learned_range" for diffusion models that predict variance. For example, OpenAI's - guided-diffusion (https://github.com/openai/guided-diffusion) predicts both mean and variance of the - Gaussian distribution in the model's output. DPM-Solver only needs the "mean" output because it is based on - diffusion ODEs. whether the model's output contains the predicted Gaussian variance. For example, OpenAI's - guided-diffusion (https://github.com/openai/guided-diffusion) predicts both mean and variance of the - Gaussian distribution in the model's output. DPM-Solver only needs the "mean" output because it is based on - diffusion ODEs. - timestep_spacing (`str`, default `"linspace"`): - The way the timesteps should be scaled. Refer to Table 2. 
of [Common Diffusion Noise Schedules and Sample - Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - solver_order: int = 2, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - sample_max_value: float = 1.0, - algorithm_type: str = "dpmsolver++", - solver_type: str = "midpoint", - lower_order_final: bool = True, - use_karras_sigmas: Optional[bool] = False, - lambda_min_clipped: float = -float("inf"), - variance_type: Optional[str] = None, - timestep_spacing: str = "linspace", - steps_offset: int = 0, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - # Currently we only support VP-type noise schedule - self.alpha_t = torch.sqrt(self.alphas_cumprod) - self.sigma_t = torch.sqrt(1 - self.alphas_cumprod) - self.lambda_t = torch.log(self.alpha_t) - torch.log(self.sigma_t) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # settings for DPM-Solver - if algorithm_type not in ["dpmsolver", "dpmsolver++", "sde-dpmsolver", "sde-dpmsolver++"]: - if algorithm_type == "deis": - self.register_to_config(algorithm_type="dpmsolver++") - else: - raise NotImplementedError(f"{algorithm_type} does is not implemented for {self.__class__}") - - if solver_type not in ["midpoint", "heun"]: - if solver_type in ["logrho", "bh1", "bh2"]: - self.register_to_config(solver_type="midpoint") - else: - raise NotImplementedError(f"{solver_type} does is not implemented for {self.__class__}") - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=np.float32).copy() - self.timesteps = torch.from_numpy(timesteps) - self.model_outputs = [None] * solver_order - self.lower_order_nums = 0 - self.use_karras_sigmas = use_karras_sigmas - - def set_timesteps(self, num_inference_steps: int = None, device: Union[str, torch.device] = None): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
- device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - """ - # Clipping the minimum of all lambda(t) for numerical stability. - # This is critical for cosine (squaredcos_cap_v2) noise schedule. - clipped_idx = torch.searchsorted(torch.flip(self.lambda_t, [0]), self.lambda_min_clipped).item() - self.noisiest_timestep = self.config.num_train_timesteps - 1 - clipped_idx - - # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891 - if self.config.timestep_spacing == "linspace": - timesteps = ( - np.linspace(0, self.noisiest_timestep, num_inference_steps + 1).round()[:-1].copy().astype(np.int64) - ) - elif self.config.timestep_spacing == "leading": - step_ratio = (self.noisiest_timestep + 1) // (num_inference_steps + 1) - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps + 1) * step_ratio).round()[:-1].copy().astype(np.int64) - timesteps += self.config.steps_offset - elif self.config.timestep_spacing == "trailing": - step_ratio = self.config.num_train_timesteps / num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = np.arange(self.noisiest_timestep + 1, 0, -step_ratio).round()[::-1].copy().astype(np.int64) - timesteps -= 1 - else: - raise ValueError( - f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', " - "'leading' or 'trailing'." - ) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - if self.config.use_karras_sigmas: - log_sigmas = np.log(sigmas) - sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=num_inference_steps) - timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]).round() - timesteps = timesteps.copy().astype(np.int64) - - self.sigmas = torch.from_numpy(sigmas) - - # when num_inference_steps == num_train_timesteps, we can end up with - # duplicates in timesteps. - _, unique_indices = np.unique(timesteps, return_index=True) - timesteps = timesteps[np.sort(unique_indices)] - - self.timesteps = torch.from_numpy(timesteps).to(device) - - self.num_inference_steps = len(timesteps) - - self.model_outputs = [ - None, - ] * self.config.solver_order - self.lower_order_nums = 0 - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample - def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor: - """ - "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the - prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by - s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing - pixels from saturation at each step. We find that dynamic thresholding results in significantly better - photorealism as well as better image-text alignment, especially when using very large guidance weights." 
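Stepping back to `set_timesteps` above: the three `timestep_spacing` modes pick inference timesteps from the training schedule in slightly different ways. A rough NumPy sketch with illustrative numbers (it ignores the noisiest-timestep clipping, `steps_offset`, and Karras resampling handled by the real method):

import numpy as np


def spaced_timesteps(num_train: int = 1000, num_inference: int = 10, mode: str = "linspace") -> np.ndarray:
    if mode == "linspace":
        # Evenly spaced over [0, num_train - 1], rounded to integers.
        return np.linspace(0, num_train - 1, num_inference + 1).round()[:-1].astype(np.int64)
    if mode == "leading":
        # Integer stride starting at 0; early timesteps are exact multiples of the ratio.
        step = num_train // (num_inference + 1)
        return (np.arange(0, num_inference + 1) * step)[:-1].astype(np.int64)
    if mode == "trailing":
        # Walk back from num_train with a float stride, then reverse and shift by one.
        step = num_train / num_inference
        return (np.arange(num_train, 0, -step).round()[::-1] - 1).astype(np.int64)
    raise ValueError(f"unknown timestep_spacing: {mode}")


# For example, spaced_timesteps(mode="trailing") -> [99, 199, 299, ..., 999]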
- - https://arxiv.org/abs/2205.11487 - """ - dtype = sample.dtype - batch_size, channels, height, width = sample.shape - - if dtype not in (torch.float32, torch.float64): - sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half - - # Flatten sample for doing quantile calculation along each image - sample = sample.reshape(batch_size, channels * height * width) - - abs_sample = sample.abs() # "a certain percentile absolute pixel value" - - s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1) - s = torch.clamp( - s, min=1, max=self.config.sample_max_value - ) # When clamped to min=1, equivalent to standard clipping to [-1, 1] - - s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0 - sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s" - - sample = sample.reshape(batch_size, channels, height, width) - sample = sample.to(dtype) - - return sample - - # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._sigma_to_t - def _sigma_to_t(self, sigma, log_sigmas): - # get log sigma - log_sigma = np.log(sigma) - - # get distribution - dists = log_sigma - log_sigmas[:, np.newaxis] - - # get sigmas range - low_idx = np.cumsum((dists >= 0), axis=0).argmax(axis=0).clip(max=log_sigmas.shape[0] - 2) - high_idx = low_idx + 1 - - low = log_sigmas[low_idx] - high = log_sigmas[high_idx] - - # interpolate sigmas - w = (low - log_sigma) / (low - high) - w = np.clip(w, 0, 1) - - # transform interpolation to time range - t = (1 - w) * low_idx + w * high_idx - t = t.reshape(sigma.shape) - return t - - # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_karras - def _convert_to_karras(self, in_sigmas: torch.FloatTensor, num_inference_steps) -> torch.FloatTensor: - """Constructs the noise schedule of Karras et al. (2022).""" - - sigma_min: float = in_sigmas[-1].item() - sigma_max: float = in_sigmas[0].item() - - rho = 7.0 # 7.0 is the value used in the paper - ramp = np.linspace(0, 1, num_inference_steps) - min_inv_rho = sigma_min ** (1 / rho) - max_inv_rho = sigma_max ** (1 / rho) - sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho - return sigmas - - # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.convert_model_output - def convert_model_output( - self, model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor - ) -> torch.FloatTensor: - """ - Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. - - DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to - discretize an integral of the data prediction model. So we need to first convert the model output to the - corresponding type to match the algorithm. - - Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or - DPM-Solver++ for both noise prediction model and data prediction model. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the converted model output. - """ - - # DPM-Solver++ needs to solve an integral of the data prediction model. 
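The conversion performed by `convert_model_output` comes down to solving x_t = alpha_t * x0 + sigma_t * eps for the clean sample x0 under each parameterization. A minimal sketch, assuming per-timestep scalar tensors `alpha_t` and `sigma_t` and omitting thresholding and the learned-variance special case:

import torch


def to_x0_prediction(model_output: torch.Tensor, sample: torch.Tensor,
                     alpha_t: torch.Tensor, sigma_t: torch.Tensor,
                     prediction_type: str = "epsilon") -> torch.Tensor:
    if prediction_type == "epsilon":        # model predicts the noise eps
        return (sample - sigma_t * model_output) / alpha_t
    if prediction_type == "sample":         # model predicts x0 directly
        return model_output
    if prediction_type == "v_prediction":   # model predicts v = alpha_t * eps - sigma_t * x0
        return alpha_t * sample - sigma_t * model_output
    raise ValueError(f"unknown prediction_type: {prediction_type}")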
- if self.config.algorithm_type in ["dpmsolver++", "sde-dpmsolver++"]: - if self.config.prediction_type == "epsilon": - # DPM-Solver and DPM-Solver++ only need the "mean" output. - if self.config.variance_type in ["learned", "learned_range"]: - model_output = model_output[:, :3] - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = (sample - sigma_t * model_output) / alpha_t - elif self.config.prediction_type == "sample": - x0_pred = model_output - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = alpha_t * sample - sigma_t * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DPMSolverMultistepScheduler." - ) - - if self.config.thresholding: - x0_pred = self._threshold_sample(x0_pred) - - return x0_pred - - # DPM-Solver needs to solve an integral of the noise prediction model. - elif self.config.algorithm_type in ["dpmsolver", "sde-dpmsolver"]: - if self.config.prediction_type == "epsilon": - # DPM-Solver and DPM-Solver++ only need the "mean" output. - if self.config.variance_type in ["learned", "learned_range"]: - epsilon = model_output[:, :3] - else: - epsilon = model_output - elif self.config.prediction_type == "sample": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = (sample - alpha_t * model_output) / sigma_t - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = alpha_t * model_output + sigma_t * sample - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DPMSolverMultistepScheduler." - ) - - if self.config.thresholding: - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = (sample - sigma_t * epsilon) / alpha_t - x0_pred = self._threshold_sample(x0_pred) - epsilon = (sample - alpha_t * x0_pred) / sigma_t - - return epsilon - - def dpm_solver_first_order_update( - self, - model_output: torch.FloatTensor, - timestep: int, - prev_timestep: int, - sample: torch.FloatTensor, - noise: Optional[torch.FloatTensor] = None, - ) -> torch.FloatTensor: - """ - One step for the first-order DPM-Solver (equivalent to DDIM). - - See https://arxiv.org/abs/2206.00927 for the detailed derivation. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. 
- """ - lambda_t, lambda_s = self.lambda_t[prev_timestep], self.lambda_t[timestep] - alpha_t, alpha_s = self.alpha_t[prev_timestep], self.alpha_t[timestep] - sigma_t, sigma_s = self.sigma_t[prev_timestep], self.sigma_t[timestep] - h = lambda_t - lambda_s - if self.config.algorithm_type == "dpmsolver++": - x_t = (sigma_t / sigma_s) * sample - (alpha_t * (torch.exp(-h) - 1.0)) * model_output - elif self.config.algorithm_type == "dpmsolver": - x_t = (alpha_t / alpha_s) * sample - (sigma_t * (torch.exp(h) - 1.0)) * model_output - elif "sde" in self.config.algorithm_type: - raise NotImplementedError( - f"Inversion step is not yet implemented for algorithm type {self.config.algorithm_type}." - ) - return x_t - - def multistep_dpm_solver_second_order_update( - self, - model_output_list: List[torch.FloatTensor], - timestep_list: List[int], - prev_timestep: int, - sample: torch.FloatTensor, - noise: Optional[torch.FloatTensor] = None, - ) -> torch.FloatTensor: - """ - One step for the second-order multistep DPM-Solver. - - Args: - model_output_list (`List[torch.FloatTensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. - """ - t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2] - m0, m1 = model_output_list[-1], model_output_list[-2] - lambda_t, lambda_s0, lambda_s1 = self.lambda_t[t], self.lambda_t[s0], self.lambda_t[s1] - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1 - r0 = h_0 / h - D0, D1 = m0, (1.0 / r0) * (m0 - m1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2211.01095 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (torch.exp(-h) - 1.0)) * D0 - - 0.5 * (alpha_t * (torch.exp(-h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (torch.exp(-h) - 1.0)) * D0 - + (alpha_t * ((torch.exp(-h) - 1.0) / h + 1.0)) * D1 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (torch.exp(h) - 1.0)) * D0 - - 0.5 * (sigma_t * (torch.exp(h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (torch.exp(h) - 1.0)) * D0 - - (sigma_t * ((torch.exp(h) - 1.0) / h - 1.0)) * D1 - ) - elif "sde" in self.config.algorithm_type: - raise NotImplementedError( - f"Inversion step is not yet implemented for algorithm type {self.config.algorithm_type}." - ) - return x_t - - # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.multistep_dpm_solver_third_order_update - def multistep_dpm_solver_third_order_update( - self, - model_output_list: List[torch.FloatTensor], - timestep_list: List[int], - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the third-order multistep DPM-Solver. 
- - Args: - model_output_list (`List[torch.FloatTensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. - """ - t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3] - m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3] - lambda_t, lambda_s0, lambda_s1, lambda_s2 = ( - self.lambda_t[t], - self.lambda_t[s0], - self.lambda_t[s1], - self.lambda_t[s2], - ) - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2 - r0, r1 = h_0 / h, h_1 / h - D0 = m0 - D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2) - D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1) - D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (torch.exp(-h) - 1.0)) * D0 - + (alpha_t * ((torch.exp(-h) - 1.0) / h + 1.0)) * D1 - - (alpha_t * ((torch.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (torch.exp(h) - 1.0)) * D0 - - (sigma_t * ((torch.exp(h) - 1.0) / h - 1.0)) * D1 - - (sigma_t * ((torch.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2 - ) - return x_t - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - generator=None, - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Step function propagating the sample with the multistep DPM-Solver. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - - Returns: - [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is - True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. 
- - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_index = (self.timesteps == timestep).nonzero() - if len(step_index) == 0: - step_index = len(self.timesteps) - 1 - else: - step_index = step_index.item() - prev_timestep = ( - self.noisiest_timestep if step_index == len(self.timesteps) - 1 else self.timesteps[step_index + 1] - ) - lower_order_final = ( - (step_index == len(self.timesteps) - 1) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - lower_order_second = ( - (step_index == len(self.timesteps) - 2) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - - model_output = self.convert_model_output(model_output, timestep, sample) - for i in range(self.config.solver_order - 1): - self.model_outputs[i] = self.model_outputs[i + 1] - self.model_outputs[-1] = model_output - - if self.config.algorithm_type in ["sde-dpmsolver", "sde-dpmsolver++"]: - noise = randn_tensor( - model_output.shape, generator=generator, device=model_output.device, dtype=model_output.dtype - ) - else: - noise = None - - if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final: - prev_sample = self.dpm_solver_first_order_update( - model_output, timestep, prev_timestep, sample, noise=noise - ) - elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second: - timestep_list = [self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_dpm_solver_second_order_update( - self.model_outputs, timestep_list, prev_timestep, sample, noise=noise - ) - else: - timestep_list = [self.timesteps[step_index - 2], self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_dpm_solver_third_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - - if self.lower_order_nums < self.config.solver_order: - self.lower_order_nums += 1 - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.scale_model_input - def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. 
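For orientation, the `set_timesteps` / `step` API defined above is driven by a simple loop over `scheduler.timesteps`; a rough sketch, where the noise-prediction network and latent shapes are hypothetical placeholders (for this inverse scheduler the loop walks from low to high noise, as in DDIM-style inversion):

import torch


def noise_model(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Placeholder for a trained noise-prediction network (e.g. a UNet).
    return torch.zeros_like(x)


scheduler = DPMSolverMultistepInverseScheduler()            # defaults registered above
scheduler.set_timesteps(num_inference_steps=25)

sample = torch.randn(1, 4, 64, 64)                          # placeholder latent batch
for t in scheduler.timesteps:                               # ascending noise levels
    model_input = scheduler.scale_model_input(sample, t)    # identity for this scheduler
    model_output = noise_model(model_input, t)
    sample = scheduler.step(model_output, t, sample).prev_sample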
- - Args: - sample (`torch.FloatTensor`): input sample - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py deleted file mode 100644 index 14c1eb2881478f5db95e413446f9cd86b3b6ca29..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = './grid_rcnn_r50_fpn_gn-head_2x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch')) -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=3665, - warmup_ratio=1.0 / 80, - step=[17, 23]) -runner = dict(type='EpochBasedRunner', max_epochs=25) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_2x_coco.py deleted file mode 100644 index 334657dc23de11045e37c0d62ee7c81b796f1254..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './vfnet_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/download_urls.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/download_urls.py deleted file mode 100644 index ad2726b563b6df1134fa8396175f4e597a82d628..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/download_urls.py +++ /dev/null @@ -1,65 +0,0 @@ -import concurrent.futures -import requests -import re - -from bs4 import BeautifulSoup - -import 
extensions.superboogav2.parameters as parameters - -from .data_processor import process_and_add_to_collector -from .utils import create_metadata_source - -def _download_single(url): - response = requests.get(url, timeout=5) - if response.status_code == 200: - return response.content - else: - raise Exception("Failed to download URL") - - -def _download_urls(urls, threads=1): - with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor: - futures = [] - for url in urls: - future = executor.submit(_download_single, url) - futures.append(future) - - results = [] - i = 0 - for future in concurrent.futures.as_completed(futures): - try: - result = future.result() - results.append(result) - i += 1 - yield f"{i}/{len(urls)}", results - except Exception: - pass - - yield "Done", results - - -def feed_url_into_collector(urls, collector): - all_text = '' - cumulative = '' - - urls = urls.strip().split('\n') - cumulative += f'Loading {len(urls)} URLs with {parameters.get_num_threads()} threads...\n\n' - yield cumulative - for update, contents in _download_urls(urls, threads=parameters.get_num_threads()): - yield cumulative + update - - cumulative += 'Processing the HTML sources...' - yield cumulative - for content in contents: - soup = BeautifulSoup(content, features="lxml") - for script in soup(["script", "style"]): - script.extract() - - strings = soup.stripped_strings - if parameters.get_is_strong_cleanup(): - strings = [s for s in strings if re.search("[A-Za-z] ", s)] - - text = '\n'.join([s.strip() for s in strings]) - all_text += text - - process_and_add_to_collector(all_text, collector, False, create_metadata_source('url-download')) \ No newline at end of file diff --git a/spaces/Apex-X/ROOPOK/roop/predictor.py b/spaces/Apex-X/ROOPOK/roop/predictor.py deleted file mode 100644 index b59fee93e02daeec6660139b61c2cd76d5fd2b94..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/ROOPOK/roop/predictor.py +++ /dev/null @@ -1,43 +0,0 @@ -import threading -import numpy -import opennsfw2 -from PIL import Image -from keras import Model - -from roop.typing import Frame - -PREDICTOR = None -THREAD_LOCK = threading.Lock() -MAX_PROBABILITY = 0.85 - - -def get_predictor() -> Model: - global PREDICTOR - - with THREAD_LOCK: - if PREDICTOR is None: - PREDICTOR = opennsfw2.make_open_nsfw_model() - return PREDICTOR - - -def clear_predictor() -> None: - global PREDICTOR - - PREDICTOR = None - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - views = numpy.expand_dims(image, axis=0) - _, probability = get_predictor().predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/sweep.py b/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/sweep.py deleted file mode 100644 index d49ea6f2778b2e87d0f535c2b3595ccceebab459..0000000000000000000000000000000000000000 --- a/spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/sweep.py +++ /dev/null @@ -1,41 +0,0 @@ -import sys -from pathlib import Path - -import wandb - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if 
str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from train import parse_opt, train -from utils.callbacks import Callbacks -from utils.general import increment_path -from utils.torch_utils import select_device - - -def sweep(): - wandb.init() - # Get hyp dict from sweep agent. Copy because train() modifies parameters which confused wandb. - hyp_dict = vars(wandb.config).get("_items").copy() - - # Workaround: get necessary opt args - opt = parse_opt(known=True) - opt.batch_size = hyp_dict.get("batch_size") - opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve)) - opt.epochs = hyp_dict.get("epochs") - opt.nosave = True - opt.data = hyp_dict.get("data") - opt.weights = str(opt.weights) - opt.cfg = str(opt.cfg) - opt.data = str(opt.data) - opt.hyp = str(opt.hyp) - opt.project = str(opt.project) - device = select_device(opt.device, batch_size=opt.batch_size) - - # train - train(hyp_dict, opt, device, callbacks=Callbacks()) - - -if __name__ == "__main__": - sweep() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dist.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dist.py deleted file mode 100644 index 824235488666c6ecdb22240b08354806fadb58ca..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dist.py +++ /dev/null @@ -1,1222 +0,0 @@ -# -*- coding: utf-8 -*- -__all__ = ['Distribution'] - -import io -import sys -import re -import os -import warnings -import numbers -import distutils.log -import distutils.core -import distutils.cmd -import distutils.dist -import distutils.command -from distutils.util import strtobool -from distutils.debug import DEBUG -from distutils.fancy_getopt import translate_longopt -from glob import iglob -import itertools -import textwrap -from typing import List, Optional, TYPE_CHECKING -from pathlib import Path - -from collections import defaultdict -from email import message_from_file - -from distutils.errors import DistutilsOptionError, DistutilsSetupError -from distutils.util import rfc822_escape - -from setuptools.extern import packaging -from setuptools.extern import ordered_set -from setuptools.extern.more_itertools import unique_everseen, partition - -from ._importlib import metadata - -from . import SetuptoolsDeprecationWarning - -import setuptools -import setuptools.command -from setuptools import windows_support -from setuptools.monkey import get_unpatched -from setuptools.config import setupcfg, pyprojecttoml -from setuptools.discovery import ConfigDiscovery - -import pkg_resources -from setuptools.extern.packaging import version -from . import _reqs -from . 
import _entry_points - -if TYPE_CHECKING: - from email.message import Message - -__import__('setuptools.extern.packaging.specifiers') -__import__('setuptools.extern.packaging.version') - - -def _get_unpatched(cls): - warnings.warn("Do not call this function", DistDeprecationWarning) - return get_unpatched(cls) - - -def get_metadata_version(self): - mv = getattr(self, 'metadata_version', None) - if mv is None: - mv = version.Version('2.1') - self.metadata_version = mv - return mv - - -def rfc822_unescape(content: str) -> str: - """Reverse RFC-822 escaping by removing leading whitespaces from content.""" - lines = content.splitlines() - if len(lines) == 1: - return lines[0].lstrip() - return '\n'.join((lines[0].lstrip(), textwrap.dedent('\n'.join(lines[1:])))) - - -def _read_field_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field.""" - value = msg[field] - if value == 'UNKNOWN': - return None - return value - - -def _read_field_unescaped_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field and apply rfc822_unescape.""" - value = _read_field_from_msg(msg, field) - if value is None: - return value - return rfc822_unescape(value) - - -def _read_list_from_msg(msg: "Message", field: str) -> Optional[List[str]]: - """Read Message header field and return all results as list.""" - values = msg.get_all(field, None) - if values == []: - return None - return values - - -def _read_payload_from_msg(msg: "Message") -> Optional[str]: - value = msg.get_payload().strip() - if value == 'UNKNOWN' or not value: - return None - return value - - -def read_pkg_file(self, file): - """Reads the metadata values from a file object.""" - msg = message_from_file(file) - - self.metadata_version = version.Version(msg['metadata-version']) - self.name = _read_field_from_msg(msg, 'name') - self.version = _read_field_from_msg(msg, 'version') - self.description = _read_field_from_msg(msg, 'summary') - # we are filling author only. - self.author = _read_field_from_msg(msg, 'author') - self.maintainer = None - self.author_email = _read_field_from_msg(msg, 'author-email') - self.maintainer_email = None - self.url = _read_field_from_msg(msg, 'home-page') - self.download_url = _read_field_from_msg(msg, 'download-url') - self.license = _read_field_unescaped_from_msg(msg, 'license') - - self.long_description = _read_field_unescaped_from_msg(msg, 'description') - if ( - self.long_description is None and - self.metadata_version >= version.Version('2.1') - ): - self.long_description = _read_payload_from_msg(msg) - self.description = _read_field_from_msg(msg, 'summary') - - if 'keywords' in msg: - self.keywords = _read_field_from_msg(msg, 'keywords').split(',') - - self.platforms = _read_list_from_msg(msg, 'platform') - self.classifiers = _read_list_from_msg(msg, 'classifier') - - # PEP 314 - these fields only exist in 1.1 - if self.metadata_version == version.Version('1.1'): - self.requires = _read_list_from_msg(msg, 'requires') - self.provides = _read_list_from_msg(msg, 'provides') - self.obsoletes = _read_list_from_msg(msg, 'obsoletes') - else: - self.requires = None - self.provides = None - self.obsoletes = None - - self.license_files = _read_list_from_msg(msg, 'license-file') - - -def single_line(val): - """ - Quick and dirty validation for Summary pypa/setuptools#1390. - """ - if '\n' in val: - # TODO: Replace with `raise ValueError("newlines not allowed")` - # after reviewing #2893. 
- warnings.warn("newlines not allowed and will break in the future") - val = val.strip().split('\n')[0] - return val - - -# Based on Python 3.5 version -def write_pkg_file(self, file): # noqa: C901 # is too complex (14) # FIXME - """Write the PKG-INFO format data to a file object.""" - version = self.get_metadata_version() - - def write_field(key, value): - file.write("%s: %s\n" % (key, value)) - - write_field('Metadata-Version', str(version)) - write_field('Name', self.get_name()) - write_field('Version', self.get_version()) - - summary = self.get_description() - if summary: - write_field('Summary', single_line(summary)) - - optional_fields = ( - ('Home-page', 'url'), - ('Download-URL', 'download_url'), - ('Author', 'author'), - ('Author-email', 'author_email'), - ('Maintainer', 'maintainer'), - ('Maintainer-email', 'maintainer_email'), - ) - - for field, attr in optional_fields: - attr_val = getattr(self, attr, None) - if attr_val is not None: - write_field(field, attr_val) - - license = self.get_license() - if license: - write_field('License', rfc822_escape(license)) - - for project_url in self.project_urls.items(): - write_field('Project-URL', '%s, %s' % project_url) - - keywords = ','.join(self.get_keywords()) - if keywords: - write_field('Keywords', keywords) - - platforms = self.get_platforms() or [] - for platform in platforms: - write_field('Platform', platform) - - self._write_list(file, 'Classifier', self.get_classifiers()) - - # PEP 314 - self._write_list(file, 'Requires', self.get_requires()) - self._write_list(file, 'Provides', self.get_provides()) - self._write_list(file, 'Obsoletes', self.get_obsoletes()) - - # Setuptools specific for PEP 345 - if hasattr(self, 'python_requires'): - write_field('Requires-Python', self.python_requires) - - # PEP 566 - if self.long_description_content_type: - write_field('Description-Content-Type', self.long_description_content_type) - if self.provides_extras: - for extra in self.provides_extras: - write_field('Provides-Extra', extra) - - self._write_list(file, 'License-File', self.license_files or []) - - long_description = self.get_long_description() - if long_description: - file.write("\n%s" % long_description) - if not long_description.endswith("\n"): - file.write("\n") - - -sequence = tuple, list - - -def check_importable(dist, attr, value): - try: - ep = metadata.EntryPoint(value=value, name=None, group=None) - assert not ep.extras - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be importable 'module:attrs' string (got %r)" % (attr, value) - ) from e - - -def assert_string_list(dist, attr, value): - """Verify that value is a string list""" - try: - # verify that value is a list or tuple to exclude unordered - # or single-use iterables - assert isinstance(value, (list, tuple)) - # verify that elements of value are strings - assert ''.join(value) != value - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be a list of strings (got %r)" % (attr, value) - ) from e - - -def check_nsp(dist, attr, value): - """Verify that namespace packages are valid""" - ns_packages = value - assert_string_list(dist, attr, ns_packages) - for nsp in ns_packages: - if not dist.has_contents_for(nsp): - raise DistutilsSetupError( - "Distribution contains no modules or packages for " - + "namespace package %r" % nsp - ) - parent, sep, child = nsp.rpartition('.') - if parent and parent not in ns_packages: - distutils.log.warn( - "WARNING: %r is 
declared as a package namespace, but %r" - " is not: please correct this in setup.py", - nsp, - parent, - ) - msg = ( - "The namespace_packages parameter is deprecated, " - "consider using implicit namespaces instead (PEP 420)." - ) - warnings.warn(msg, SetuptoolsDeprecationWarning) - - -def check_extras(dist, attr, value): - """Verify that extras_require mapping is valid""" - try: - list(itertools.starmap(_check_extra, value.items())) - except (TypeError, ValueError, AttributeError) as e: - raise DistutilsSetupError( - "'extras_require' must be a dictionary whose values are " - "strings or lists of strings containing valid project/version " - "requirement specifiers." - ) from e - - -def _check_extra(extra, reqs): - name, sep, marker = extra.partition(':') - if marker and pkg_resources.invalid_marker(marker): - raise DistutilsSetupError("Invalid environment marker: " + marker) - list(_reqs.parse(reqs)) - - -def assert_bool(dist, attr, value): - """Verify that value is True, False, 0, or 1""" - if bool(value) != value: - tmpl = "{attr!r} must be a boolean value (got {value!r})" - raise DistutilsSetupError(tmpl.format(attr=attr, value=value)) - - -def invalid_unless_false(dist, attr, value): - if not value: - warnings.warn(f"{attr} is ignored.", DistDeprecationWarning) - return - raise DistutilsSetupError(f"{attr} is invalid.") - - -def check_requirements(dist, attr, value): - """Verify that install_requires is a valid requirements list""" - try: - list(_reqs.parse(value)) - if isinstance(value, (dict, set)): - raise TypeError("Unordered types are not allowed") - except (TypeError, ValueError) as error: - tmpl = ( - "{attr!r} must be a string or list of strings " - "containing valid project/version requirement specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_specifier(dist, attr, value): - """Verify that value is a valid version specifier""" - try: - packaging.specifiers.SpecifierSet(value) - except (packaging.specifiers.InvalidSpecifier, AttributeError) as error: - tmpl = ( - "{attr!r} must be a string " "containing valid version specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_entry_points(dist, attr, value): - """Verify that entry_points map is parseable""" - try: - _entry_points.load(value) - except Exception as e: - raise DistutilsSetupError(e) from e - - -def check_test_suite(dist, attr, value): - if not isinstance(value, str): - raise DistutilsSetupError("test_suite must be a string") - - -def check_package_data(dist, attr, value): - """Verify that value is a dictionary of package names to glob lists""" - if not isinstance(value, dict): - raise DistutilsSetupError( - "{!r} must be a dictionary mapping package names to lists of " - "string wildcard patterns".format(attr) - ) - for k, v in value.items(): - if not isinstance(k, str): - raise DistutilsSetupError( - "keys of {!r} dict must be strings (got {!r})".format(attr, k) - ) - assert_string_list(dist, 'values of {!r} dict'.format(attr), v) - - -def check_packages(dist, attr, value): - for pkgname in value: - if not re.match(r'\w+(\.\w+)*', pkgname): - distutils.log.warn( - "WARNING: %r not a valid package name; please use only " - ".-separated package names in setup.py", - pkgname, - ) - - -_Distribution = get_unpatched(distutils.core.Distribution) - - -class Distribution(_Distribution): - """Distribution with support for tests and package data - - This is an enhanced version of 
'distutils.dist.Distribution' that - effectively adds the following new optional keyword arguments to 'setup()': - - 'install_requires' -- a string or sequence of strings specifying project - versions that the distribution requires when installed, in the format - used by 'pkg_resources.require()'. They will be installed - automatically when the package is installed. If you wish to use - packages that are not available in PyPI, or want to give your users an - alternate download location, you can add a 'find_links' option to the - '[easy_install]' section of your project's 'setup.cfg' file, and then - setuptools will scan the listed web pages for links that satisfy the - requirements. - - 'extras_require' -- a dictionary mapping names of optional "extras" to the - additional requirement(s) that using those extras incurs. For example, - this:: - - extras_require = dict(reST = ["docutils>=0.3", "reSTedit"]) - - indicates that the distribution can optionally provide an extra - capability called "reST", but it can only be used if docutils and - reSTedit are installed. If the user installs your package using - EasyInstall and requests one of your extras, the corresponding - additional requirements will be installed if needed. - - 'test_suite' -- the name of a test suite to run for the 'test' command. - If the user runs 'python setup.py test', the package will be installed, - and the named test suite will be run. The format is the same as - would be used on a 'unittest.py' command line. That is, it is the - dotted name of an object to import and call to generate a test suite. - - 'package_data' -- a dictionary mapping package names to lists of filenames - or globs to use to find data files contained in the named packages. - If the dictionary has filenames or globs listed under '""' (the empty - string), those names will be searched for in every package, in addition - to any names for the specific package. Data files found using these - names/globs will be installed along with the package, in the same - location as the package. Note that globs are allowed to reference - the contents of non-package subdirectories, as long as you use '/' as - a path separator. (Globs are automatically converted to - platform-specific paths at runtime.) - - In addition to these new keywords, this class also has several new methods - for manipulating the distribution's contents. For example, the 'include()' - and 'exclude()' methods can be thought of as in-place add and subtract - commands that add or remove packages, modules, extensions, and so on from - the distribution. - """ - - _DISTUTILS_UNSUPPORTED_METADATA = { - 'long_description_content_type': lambda: None, - 'project_urls': dict, - 'provides_extras': ordered_set.OrderedSet, - 'license_file': lambda: None, - 'license_files': lambda: None, - } - - _patched_dist = None - - def patch_missing_pkg_info(self, attrs): - # Fake up a replacement for the data that would normally come from - # PKG-INFO, but which might not yet be built if this is a fresh - # checkout. 
- # - if not attrs or 'name' not in attrs or 'version' not in attrs: - return - key = pkg_resources.safe_name(str(attrs['name'])).lower() - dist = pkg_resources.working_set.by_key.get(key) - if dist is not None and not dist.has_metadata('PKG-INFO'): - dist._version = pkg_resources.safe_version(str(attrs['version'])) - self._patched_dist = dist - - def __init__(self, attrs=None): - have_package_data = hasattr(self, "package_data") - if not have_package_data: - self.package_data = {} - attrs = attrs or {} - self.dist_files = [] - # Filter-out setuptools' specific options. - self.src_root = attrs.pop("src_root", None) - self.patch_missing_pkg_info(attrs) - self.dependency_links = attrs.pop('dependency_links', []) - self.setup_requires = attrs.pop('setup_requires', []) - for ep in metadata.entry_points(group='distutils.setup_keywords'): - vars(self).setdefault(ep.name, None) - _Distribution.__init__( - self, - { - k: v - for k, v in attrs.items() - if k not in self._DISTUTILS_UNSUPPORTED_METADATA - }, - ) - - # Save the original dependencies before they are processed into the egg format - self._orig_extras_require = {} - self._orig_install_requires = [] - self._tmp_extras_require = defaultdict(ordered_set.OrderedSet) - - self.set_defaults = ConfigDiscovery(self) - - self._set_metadata_defaults(attrs) - - self.metadata.version = self._normalize_version( - self._validate_version(self.metadata.version) - ) - self._finalize_requires() - - def _validate_metadata(self): - required = {"name"} - provided = { - key - for key in vars(self.metadata) - if getattr(self.metadata, key, None) is not None - } - missing = required - provided - - if missing: - msg = f"Required package metadata is missing: {missing}" - raise DistutilsSetupError(msg) - - def _set_metadata_defaults(self, attrs): - """ - Fill-in missing metadata fields not supported by distutils. - Some fields may have been set by other tools (e.g. pbr). - Those fields (vars(self.metadata)) take precedence to - supplied attrs. - """ - for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items(): - vars(self.metadata).setdefault(option, attrs.get(option, default())) - - @staticmethod - def _normalize_version(version): - if isinstance(version, setuptools.sic) or version is None: - return version - - normalized = str(packaging.version.Version(version)) - if version != normalized: - tmpl = "Normalizing '{version}' to '{normalized}'" - warnings.warn(tmpl.format(**locals())) - return normalized - return version - - @staticmethod - def _validate_version(version): - if isinstance(version, numbers.Number): - # Some people apparently take "version number" too literally :) - version = str(version) - - if version is not None: - try: - packaging.version.Version(version) - except (packaging.version.InvalidVersion, TypeError): - warnings.warn( - "The version specified (%r) is an invalid version, this " - "may not work as expected with newer versions of " - "setuptools, pip, and PyPI. Please see PEP 440 for more " - "details." % version - ) - return setuptools.sic(version) - return version - - def _finalize_requires(self): - """ - Set `metadata.python_requires` and fix environment markers - in `install_requires` and `extras_require`. 
- """ - if getattr(self, 'python_requires', None): - self.metadata.python_requires = self.python_requires - - if getattr(self, 'extras_require', None): - # Save original before it is messed by _convert_extras_requirements - self._orig_extras_require = self._orig_extras_require or self.extras_require - for extra in self.extras_require.keys(): - # Since this gets called multiple times at points where the - # keys have become 'converted' extras, ensure that we are only - # truly adding extras we haven't seen before here. - extra = extra.split(':')[0] - if extra: - self.metadata.provides_extras.add(extra) - - if getattr(self, 'install_requires', None) and not self._orig_install_requires: - # Save original before it is messed by _move_install_requirements_markers - self._orig_install_requires = self.install_requires - - self._convert_extras_requirements() - self._move_install_requirements_markers() - - def _convert_extras_requirements(self): - """ - Convert requirements in `extras_require` of the form - `"extra": ["barbazquux; {marker}"]` to - `"extra:{marker}": ["barbazquux"]`. - """ - spec_ext_reqs = getattr(self, 'extras_require', None) or {} - tmp = defaultdict(ordered_set.OrderedSet) - self._tmp_extras_require = getattr(self, '_tmp_extras_require', tmp) - for section, v in spec_ext_reqs.items(): - # Do not strip empty sections. - self._tmp_extras_require[section] - for r in _reqs.parse(v): - suffix = self._suffix_for(r) - self._tmp_extras_require[section + suffix].append(r) - - @staticmethod - def _suffix_for(req): - """ - For a requirement, return the 'extras_require' suffix for - that requirement. - """ - return ':' + str(req.marker) if req.marker else '' - - def _move_install_requirements_markers(self): - """ - Move requirements in `install_requires` that are using environment - markers `extras_require`. - """ - - # divide the install_requires into two sets, simple ones still - # handled by install_requires and more complex ones handled - # by extras_require. - - def is_simple_req(req): - return not req.marker - - spec_inst_reqs = getattr(self, 'install_requires', None) or () - inst_reqs = list(_reqs.parse(spec_inst_reqs)) - simple_reqs = filter(is_simple_req, inst_reqs) - complex_reqs = itertools.filterfalse(is_simple_req, inst_reqs) - self.install_requires = list(map(str, simple_reqs)) - - for r in complex_reqs: - self._tmp_extras_require[':' + str(r.marker)].append(r) - self.extras_require = dict( - # list(dict.fromkeys(...)) ensures a list of unique strings - (k, list(dict.fromkeys(str(r) for r in map(self._clean_req, v)))) - for k, v in self._tmp_extras_require.items() - ) - - def _clean_req(self, req): - """ - Given a Requirement, remove environment markers and return it. 
- """ - req.marker = None - return req - - def _finalize_license_files(self): - """Compute names of all license files which should be included.""" - license_files: Optional[List[str]] = self.metadata.license_files - patterns: List[str] = license_files if license_files else [] - - license_file: Optional[str] = self.metadata.license_file - if license_file and license_file not in patterns: - patterns.append(license_file) - - if license_files is None and license_file is None: - # Default patterns match the ones wheel uses - # See https://wheel.readthedocs.io/en/stable/user_guide.html - # -> 'Including license files in the generated wheel file' - patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*') - - self.metadata.license_files = list( - unique_everseen(self._expand_patterns(patterns)) - ) - - @staticmethod - def _expand_patterns(patterns): - """ - >>> list(Distribution._expand_patterns(['LICENSE'])) - ['LICENSE'] - >>> list(Distribution._expand_patterns(['setup.cfg', 'LIC*'])) - ['setup.cfg', 'LICENSE'] - """ - return ( - path - for pattern in patterns - for path in sorted(iglob(pattern)) - if not path.endswith('~') and os.path.isfile(path) - ) - - # FIXME: 'Distribution._parse_config_files' is too complex (14) - def _parse_config_files(self, filenames=None): # noqa: C901 - """ - Adapted from distutils.dist.Distribution.parse_config_files, - this method provides the same functionality in subtly-improved - ways. - """ - from configparser import ConfigParser - - # Ignore install directory options if we have a venv - ignore_options = ( - [] - if sys.prefix == sys.base_prefix - else [ - 'install-base', - 'install-platbase', - 'install-lib', - 'install-platlib', - 'install-purelib', - 'install-headers', - 'install-scripts', - 'install-data', - 'prefix', - 'exec-prefix', - 'home', - 'user', - 'root', - ] - ) - - ignore_options = frozenset(ignore_options) - - if filenames is None: - filenames = self.find_config_files() - - if DEBUG: - self.announce("Distribution.parse_config_files():") - - parser = ConfigParser() - parser.optionxform = str - for filename in filenames: - with io.open(filename, encoding='utf-8') as reader: - if DEBUG: - self.announce(" reading {filename}".format(**locals())) - parser.read_file(reader) - for section in parser.sections(): - options = parser.options(section) - opt_dict = self.get_option_dict(section) - - for opt in options: - if opt == '__name__' or opt in ignore_options: - continue - - val = parser.get(section, opt) - opt = self.warn_dash_deprecation(opt, section) - opt = self.make_option_lowercase(opt, section) - opt_dict[opt] = (filename, val) - - # Make the ConfigParser forget everything (so we retain - # the original filenames that options come from) - parser.__init__() - - if 'global' not in self.command_options: - return - - # If there was a "global" section in the config file, use it - # to set Distribution options. - - for (opt, (src, val)) in self.command_options['global'].items(): - alias = self.negative_opt.get(opt) - if alias: - val = not strtobool(val) - elif opt in ('verbose', 'dry_run'): # ugh! 
- val = strtobool(val) - - try: - setattr(self, alias or opt, val) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def warn_dash_deprecation(self, opt, section): - if section in ( - 'options.extras_require', - 'options.data_files', - ): - return opt - - underscore_opt = opt.replace('-', '_') - commands = list(itertools.chain( - distutils.command.__all__, - self._setuptools_commands(), - )) - if ( - not section.startswith('options') - and section != 'metadata' - and section not in commands - ): - return underscore_opt - - if '-' in opt: - warnings.warn( - "Usage of dash-separated '%s' will not be supported in future " - "versions. Please use the underscore name '%s' instead" - % (opt, underscore_opt) - ) - return underscore_opt - - def _setuptools_commands(self): - try: - return metadata.distribution('setuptools').entry_points.names - except metadata.PackageNotFoundError: - # during bootstrapping, distribution doesn't exist - return [] - - def make_option_lowercase(self, opt, section): - if section != 'metadata' or opt.islower(): - return opt - - lowercase_opt = opt.lower() - warnings.warn( - "Usage of uppercase key '%s' in '%s' will be deprecated in future " - "versions. Please use lowercase '%s' instead" - % (opt, section, lowercase_opt) - ) - return lowercase_opt - - # FIXME: 'Distribution._set_command_options' is too complex (14) - def _set_command_options(self, command_obj, option_dict=None): # noqa: C901 - """ - Set the options for 'command_obj' from 'option_dict'. Basically - this means copying elements of a dictionary ('option_dict') to - attributes of an instance ('command'). - - 'command_obj' must be a Command instance. If 'option_dict' is not - supplied, uses the standard option dictionary for this command - (from 'self.command_options'). 
- - (Adopted from distutils.dist.Distribution._set_command_options) - """ - command_name = command_obj.get_command_name() - if option_dict is None: - option_dict = self.get_option_dict(command_name) - - if DEBUG: - self.announce(" setting options for '%s' command:" % command_name) - for (option, (source, value)) in option_dict.items(): - if DEBUG: - self.announce(" %s = %s (from %s)" % (option, value, source)) - try: - bool_opts = [translate_longopt(o) for o in command_obj.boolean_options] - except AttributeError: - bool_opts = [] - try: - neg_opt = command_obj.negative_opt - except AttributeError: - neg_opt = {} - - try: - is_string = isinstance(value, str) - if option in neg_opt and is_string: - setattr(command_obj, neg_opt[option], not strtobool(value)) - elif option in bool_opts and is_string: - setattr(command_obj, option, strtobool(value)) - elif hasattr(command_obj, option): - setattr(command_obj, option, value) - else: - raise DistutilsOptionError( - "error in %s: command '%s' has no such option '%s'" - % (source, command_name, option) - ) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def _get_project_config_files(self, filenames): - """Add default file and split between INI and TOML""" - tomlfiles = [] - standard_project_metadata = Path(self.src_root or os.curdir, "pyproject.toml") - if filenames is not None: - parts = partition(lambda f: Path(f).suffix == ".toml", filenames) - filenames = list(parts[0]) # 1st element => predicate is False - tomlfiles = list(parts[1]) # 2nd element => predicate is True - elif standard_project_metadata.exists(): - tomlfiles = [standard_project_metadata] - return filenames, tomlfiles - - def parse_config_files(self, filenames=None, ignore_option_errors=False): - """Parses configuration files from various levels - and loads configuration. - """ - inifiles, tomlfiles = self._get_project_config_files(filenames) - - self._parse_config_files(filenames=inifiles) - - setupcfg.parse_configuration( - self, self.command_options, ignore_option_errors=ignore_option_errors - ) - for filename in tomlfiles: - pyprojecttoml.apply_configuration(self, filename, ignore_option_errors) - - self._finalize_requires() - self._finalize_license_files() - - def fetch_build_eggs(self, requires): - """Resolve pre-setup requirements""" - resolved_dists = pkg_resources.working_set.resolve( - _reqs.parse(requires), - installer=self.fetch_build_egg, - replace_conflicting=True, - ) - for dist in resolved_dists: - pkg_resources.working_set.add(dist, replace=True) - return resolved_dists - - def finalize_options(self): - """ - Allow plugins to apply arbitrary operations to the - distribution. Each hook may optionally define a 'order' - to influence the order of execution. Smaller numbers - go first and the default is 0. - """ - group = 'setuptools.finalize_distribution_options' - - def by_order(hook): - return getattr(hook, 'order', 0) - - defined = metadata.entry_points(group=group) - filtered = itertools.filterfalse(self._removed, defined) - loaded = map(lambda e: e.load(), filtered) - for ep in sorted(loaded, key=by_order): - ep(self) - - @staticmethod - def _removed(ep): - """ - When removing an entry point, if metadata is loaded - from an older version of Setuptools, that removed - entry point will attempt to be loaded and will fail. - See #2765 for more details. 
- """ - removed = { - # removed 2021-09-05 - '2to3_doctests', - } - return ep.name in removed - - def _finalize_setup_keywords(self): - for ep in metadata.entry_points(group='distutils.setup_keywords'): - value = getattr(self, ep.name, None) - if value is not None: - ep.load()(self, ep.name, value) - - def get_egg_cache_dir(self): - egg_cache_dir = os.path.join(os.curdir, '.eggs') - if not os.path.exists(egg_cache_dir): - os.mkdir(egg_cache_dir) - windows_support.hide_file(egg_cache_dir) - readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt') - with open(readme_txt_filename, 'w') as f: - f.write( - 'This directory contains eggs that were downloaded ' - 'by setuptools to build, test, and run plug-ins.\n\n' - ) - f.write( - 'This directory caches those eggs to prevent ' - 'repeated downloads.\n\n' - ) - f.write('However, it is safe to delete this directory.\n\n') - - return egg_cache_dir - - def fetch_build_egg(self, req): - """Fetch an egg needed for building""" - from setuptools.installer import fetch_build_egg - - return fetch_build_egg(self, req) - - def get_command_class(self, command): - """Pluggable version of get_command_class()""" - if command in self.cmdclass: - return self.cmdclass[command] - - eps = metadata.entry_points(group='distutils.commands', name=command) - for ep in eps: - self.cmdclass[command] = cmdclass = ep.load() - return cmdclass - else: - return _Distribution.get_command_class(self, command) - - def print_commands(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.print_commands(self) - - def get_command_list(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.get_command_list(self) - - def include(self, **attrs): - """Add items to distribution that are named in keyword arguments - - For example, 'dist.include(py_modules=["x"])' would add 'x' to - the distribution's 'py_modules' attribute, if it was not already - there. - - Currently, this method only supports inclusion for attributes that are - lists or tuples. If you need to add support for adding to other - attributes in this or a subclass, you can add an '_include_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})' - will try to call 'dist._include_foo({"bar":"baz"})', which can then - handle whatever special inclusion logic is needed. - """ - for k, v in attrs.items(): - include = getattr(self, '_include_' + k, None) - if include: - include(v) - else: - self._include_misc(k, v) - - def exclude_package(self, package): - """Remove packages, modules, and extensions in named package""" - - pfx = package + '.' - if self.packages: - self.packages = [ - p for p in self.packages if p != package and not p.startswith(pfx) - ] - - if self.py_modules: - self.py_modules = [ - p for p in self.py_modules if p != package and not p.startswith(pfx) - ] - - if self.ext_modules: - self.ext_modules = [ - p - for p in self.ext_modules - if p.name != package and not p.name.startswith(pfx) - ] - - def has_contents_for(self, package): - """Return true if 'exclude_package(package)' would do something""" - - pfx = package + '.' 
- - for p in self.iter_distribution_names(): - if p == package or p.startswith(pfx): - return True - - def _exclude_misc(self, name, value): - """Handle 'exclude()' for list/tuple attrs without a special handler""" - if not isinstance(value, sequence): - raise DistutilsSetupError( - "%s: setting must be a list or tuple (%r)" % (name, value) - ) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is not None and not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - elif old: - setattr(self, name, [item for item in old if item not in value]) - - def _include_misc(self, name, value): - """Handle 'include()' for list/tuple attrs without a special handler""" - - if not isinstance(value, sequence): - raise DistutilsSetupError("%s: setting must be a list (%r)" % (name, value)) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is None: - setattr(self, name, value) - elif not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - else: - new = [item for item in value if item not in old] - setattr(self, name, old + new) - - def exclude(self, **attrs): - """Remove items from distribution that are named in keyword arguments - - For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from - the distribution's 'py_modules' attribute. Excluding packages uses - the 'exclude_package()' method, so all of the package's contained - packages, modules, and extensions are also excluded. - - Currently, this method only supports exclusion from attributes that are - lists or tuples. If you need to add support for excluding from other - attributes in this or a subclass, you can add an '_exclude_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})' - will try to call 'dist._exclude_foo({"bar":"baz"})', which can then - handle whatever special exclusion logic is needed. - """ - for k, v in attrs.items(): - exclude = getattr(self, '_exclude_' + k, None) - if exclude: - exclude(v) - else: - self._exclude_misc(k, v) - - def _exclude_packages(self, packages): - if not isinstance(packages, sequence): - raise DistutilsSetupError( - "packages: setting must be a list or tuple (%r)" % (packages,) - ) - list(map(self.exclude_package, packages)) - - def _parse_command_opts(self, parser, args): - # Remove --with-X/--without-X options when processing command args - self.global_options = self.__class__.global_options - self.negative_opt = self.__class__.negative_opt - - # First, expand any aliases - command = args[0] - aliases = self.get_option_dict('aliases') - while command in aliases: - src, alias = aliases[command] - del aliases[command] # ensure each alias can expand only once! 
- import shlex - - args[:1] = shlex.split(alias, True) - command = args[0] - - nargs = _Distribution._parse_command_opts(self, parser, args) - - # Handle commands that want to consume all remaining arguments - cmd_class = self.get_command_class(command) - if getattr(cmd_class, 'command_consumes_arguments', None): - self.get_option_dict(command)['args'] = ("command line", nargs) - if nargs is not None: - return [] - - return nargs - - def get_cmdline_options(self): - """Return a '{cmd: {opt:val}}' map of all command-line options - - Option names are all long, but do not include the leading '--', and - contain dashes rather than underscores. If the option doesn't take - an argument (e.g. '--quiet'), the 'val' is 'None'. - - Note that options provided by config files are intentionally excluded. - """ - - d = {} - - for cmd, opts in self.command_options.items(): - - for opt, (src, val) in opts.items(): - - if src != "command line": - continue - - opt = opt.replace('_', '-') - - if val == 0: - cmdobj = self.get_command_obj(cmd) - neg_opt = self.negative_opt.copy() - neg_opt.update(getattr(cmdobj, 'negative_opt', {})) - for neg, pos in neg_opt.items(): - if pos == opt: - opt = neg - val = None - break - else: - raise AssertionError("Shouldn't be able to get here") - - elif val == 1: - val = None - - d.setdefault(cmd, {})[opt] = val - - return d - - def iter_distribution_names(self): - """Yield all packages, modules, and extension names in distribution""" - - for pkg in self.packages or (): - yield pkg - - for module in self.py_modules or (): - yield module - - for ext in self.ext_modules or (): - if isinstance(ext, tuple): - name, buildinfo = ext - else: - name = ext.name - if name.endswith('module'): - name = name[:-6] - yield name - - def handle_display_options(self, option_order): - """If there were any non-global "display-only" options - (--help-commands or the metadata display options) on the command - line, display the requested info and return true; else return - false. - """ - import sys - - if self.help_commands: - return _Distribution.handle_display_options(self, option_order) - - # Stdout may be StringIO (e.g. in tests) - if not isinstance(sys.stdout, io.TextIOWrapper): - return _Distribution.handle_display_options(self, option_order) - - # Don't wrap stdout if utf-8 is already the encoding. Provides - # workaround for #334. - if sys.stdout.encoding.lower() in ('utf-8', 'utf8'): - return _Distribution.handle_display_options(self, option_order) - - # Print metadata in UTF-8 no matter the platform - encoding = sys.stdout.encoding - errors = sys.stdout.errors - newline = sys.platform != 'win32' and '\n' or None - line_buffering = sys.stdout.line_buffering - - sys.stdout = io.TextIOWrapper( - sys.stdout.detach(), 'utf-8', errors, newline, line_buffering - ) - try: - return _Distribution.handle_display_options(self, option_order) - finally: - sys.stdout = io.TextIOWrapper( - sys.stdout.detach(), encoding, errors, newline, line_buffering - ) - - def run_command(self, command): - self.set_defaults() - # Postpone defaults until all explicit configuration is considered - # (setup() args, config files, command line and plugins) - - super().run_command(command) - - -class DistDeprecationWarning(SetuptoolsDeprecationWarning): - """Class for warning about deprecations in dist in - setuptools. 
Not ignored by default, unlike DeprecationWarning.""" diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/feature-request.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/feature-request.md deleted file mode 100644 index 03a1e93d7293948042120b875af8be0c6964e59c..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/feature-request.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -name: "\U0001F680Feature Request" -about: Suggest an improvement or new feature -labels: enhancement - ---- - -## 🚀 Feature -A clear and concise description of the feature proposal. - -## Motivation & Examples - -Tell us why the feature is useful. - -Describe what the feature would look like, if it is implemented. -Best demonstrated using **code examples** in addition to words. - -## Note - -We only consider adding new features if they are relevant to many users. - -If you request implementation of research papers -- we only consider papers that have enough significance and prevalance in the object detection field. - -We do not take requests for most projects in the `projects/` directory, because they are research code release that is mainly for other researchers to reproduce results. - -"Make X faster/accurate" is not a valid feature request. "Implement a concrete feature that can make X faster/accurate" can be a valid feature request. - -Instead of adding features inside detectron2, -you can implement many features by [extending detectron2](https://detectron2.readthedocs.io/tutorials/extend.html). -The [projects/](https://github.com/facebookresearch/detectron2/tree/main/projects/) directory contains many of such examples. - diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/README.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/README.md deleted file mode 100644 index 8531cafd4d1aae0267f4fc5e7212f7db5ed90686..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/README.md +++ /dev/null @@ -1,15 +0,0 @@ -# Read the docs: - -The latest documentation built from this directory is available at [detectron2.readthedocs.io](https://detectron2.readthedocs.io/). -Documents in this directory are not meant to be read on github. - -# Build the docs: - -1. Install detectron2 according to [INSTALL.md](../INSTALL.md). -2. Install additional libraries required to build docs: - - docutils==0.16 - - Sphinx==3.2.0 - - recommonmark==0.6.0 - - sphinx_rtd_theme - -3. Run `make html` from this directory. 
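The `Distribution` machinery deleted above documents how setuptools folds environment markers into `extras_require`: `_convert_extras_requirements` turns `"extra": ["pkg; marker"]` into `"extra:marker": ["pkg"]`, `_move_install_requirements_markers` moves marker-bearing `install_requires` entries into `":marker"` extras, and `_clean_req` strips the marker off the stored requirement. Below is a minimal standalone sketch of that transformation, assuming only the `packaging` library is installed; the function name `split_markers` and the sample requirements are illustrative and not part of setuptools.

```python
# Standalone illustration of the marker-splitting described in the
# _convert_extras_requirements / _move_install_requirements_markers docstrings.
# Not setuptools itself -- only the `packaging` dependency is assumed.
from collections import defaultdict

from packaging.requirements import Requirement


def split_markers(install_requires, extras_require):
    """Return (simple_requires, extras) with environment markers moved into
    ':<marker>'-suffixed extras_require keys."""
    extras = defaultdict(list)

    # "extra": ["pkg; marker"]  ->  "extra:marker": ["pkg"]
    for extra, specs in extras_require.items():
        for spec in specs:
            req = Requirement(spec)
            suffix = f":{req.marker}" if req.marker else ""
            req.marker = None  # same idea as Distribution._clean_req
            extras[extra + suffix].append(str(req))

    # install_requires entries carrying a marker become ":<marker>" extras
    simple = []
    for spec in install_requires:
        req = Requirement(spec)
        if req.marker is None:
            simple.append(str(req))
        else:
            suffix = f":{req.marker}"
            req.marker = None
            extras[suffix].append(str(req))

    return simple, dict(extras)


if __name__ == "__main__":
    simple, extras = split_markers(
        ["requests>=2.0", "importlib-metadata; python_version < '3.8'"],
        {"docs": ["sphinx; platform_system != 'Windows'"]},
    )
    print(simple)  # ['requests>=2.0']
    print(extras)  # {'docs:platform_system != "Windows"': ['sphinx'],
                   #  ':python_version < "3.8"': ['importlib-metadata']}
```

This is why marker-only keys such as `:python_version < "3.8"` can appear in the generated egg metadata even though the project never declared such an extra explicitly.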
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py deleted file mode 100644 index d4693b2125217527033727ec9a82959286d180f9..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -from torch.nn import functional as F - -# TODO: merge these two function -def heatmap_focal_loss( - inputs, - targets, - pos_inds, - labels, - alpha: float = -1, - beta: float = 4, - gamma: float = 2, - reduction: str = 'sum', - sigmoid_clamp: float = 1e-4, - ignore_high_fp: float = -1., -): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: (sum_l N*Hl*Wl, C) - targets: (sum_l N*Hl*Wl, C) - pos_inds: N - labels: N - Returns: - Loss tensor with the reduction option applied. - """ - pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp) - neg_weights = torch.pow(1 - targets, beta) - pos_pred_pix = pred[pos_inds] # N x C - pos_pred = pos_pred_pix.gather(1, labels.unsqueeze(1)) - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma) - neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights - - if ignore_high_fp > 0: - not_high_fp = (pred < ignore_high_fp).float() - neg_loss = not_high_fp * neg_loss - - if reduction == "sum": - pos_loss = pos_loss.sum() - neg_loss = neg_loss.sum() - - if alpha >= 0: - pos_loss = alpha * pos_loss - neg_loss = (1 - alpha) * neg_loss - - return - pos_loss, - neg_loss - -heatmap_focal_loss_jit = torch.jit.script(heatmap_focal_loss) -# heatmap_focal_loss_jit = heatmap_focal_loss - -def binary_heatmap_focal_loss( - inputs, - targets, - pos_inds, - alpha: float = -1, - beta: float = 4, - gamma: float = 2, - sigmoid_clamp: float = 1e-4, - ignore_high_fp: float = -1., -): - """ - Args: - inputs: (sum_l N*Hl*Wl,) - targets: (sum_l N*Hl*Wl,) - pos_inds: N - Returns: - Loss tensor with the reduction option applied. 
- """ - pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp) - neg_weights = torch.pow(1 - targets, beta) - for i, ind in enumerate(pos_inds): - if ind >= pred.shape[0]: - print('%'*100) - print(pred.shape, ind, pos_inds) - pos_inds[i] = pred.shape[0] - 1 - pos_pred = pred[pos_inds] # N - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma) - neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights - if ignore_high_fp > 0: - not_high_fp = (pred < ignore_high_fp).float() - neg_loss = not_high_fp * neg_loss - - pos_loss = - pos_loss.sum() - neg_loss = - neg_loss.sum() - - if alpha >= 0: - pos_loss = alpha * pos_loss - neg_loss = (1 - alpha) * neg_loss - - return pos_loss, neg_loss - -# binary_heatmap_focal_loss_jit = torch.jit.script(binary_heatmap_focal_loss) \ No newline at end of file diff --git a/spaces/BAAI/vid2vid-zero/style.css b/spaces/BAAI/vid2vid-zero/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/BAAI/vid2vid-zero/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/BertChristiaens/youtube-dl/README.md b/spaces/BertChristiaens/youtube-dl/README.md deleted file mode 100644 index d9daf4f3d7202ffa140a55f15af265e11f31a282..0000000000000000000000000000000000000000 --- a/spaces/BertChristiaens/youtube-dl/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youtube Dl -emoji: 🐠 -colorFrom: green -colorTo: blue -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/progress.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/progress.py deleted file mode 100644 index 8b0a315f32466ac03a205898394f958f221818a7..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/progress.py +++ /dev/null @@ -1,1702 +0,0 @@ -import io -import sys -import typing -import warnings -from abc import ABC, abstractmethod -from collections import deque -from dataclasses import dataclass, field -from datetime import timedelta -from io import RawIOBase, UnsupportedOperation -from math import ceil -from mmap import mmap -from operator import length_hint -from os import PathLike, stat -from threading import Event, RLock, Thread -from types import TracebackType -from typing import ( - Any, - BinaryIO, - Callable, - ContextManager, - Deque, - Dict, - Generic, - Iterable, - List, - NamedTuple, - NewType, - Optional, - Sequence, - TextIO, - Tuple, - Type, - TypeVar, - Union, -) - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from pip._vendor.typing_extensions import Literal # pragma: no cover - -from . 
import filesize, get_console -from .console import Console, Group, JustifyMethod, RenderableType -from .highlighter import Highlighter -from .jupyter import JupyterMixin -from .live import Live -from .progress_bar import ProgressBar -from .spinner import Spinner -from .style import StyleType -from .table import Column, Table -from .text import Text, TextType - -TaskID = NewType("TaskID", int) - -ProgressType = TypeVar("ProgressType") - -GetTimeCallable = Callable[[], float] - - -_I = typing.TypeVar("_I", TextIO, BinaryIO) - - -class _TrackThread(Thread): - """A thread to periodically update progress.""" - - def __init__(self, progress: "Progress", task_id: "TaskID", update_period: float): - self.progress = progress - self.task_id = task_id - self.update_period = update_period - self.done = Event() - - self.completed = 0 - super().__init__() - - def run(self) -> None: - task_id = self.task_id - advance = self.progress.advance - update_period = self.update_period - last_completed = 0 - wait = self.done.wait - while not wait(update_period): - completed = self.completed - if last_completed != completed: - advance(task_id, completed - last_completed) - last_completed = completed - - self.progress.update(self.task_id, completed=self.completed, refresh=True) - - def __enter__(self) -> "_TrackThread": - self.start() - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.done.set() - self.join() - - -def track( - sequence: Union[Sequence[ProgressType], Iterable[ProgressType]], - description: str = "Working...", - total: Optional[float] = None, - auto_refresh: bool = True, - console: Optional[Console] = None, - transient: bool = False, - get_time: Optional[Callable[[], float]] = None, - refresh_per_second: float = 10, - style: StyleType = "bar.back", - complete_style: StyleType = "bar.complete", - finished_style: StyleType = "bar.finished", - pulse_style: StyleType = "bar.pulse", - update_period: float = 0.1, - disable: bool = False, - show_speed: bool = True, -) -> Iterable[ProgressType]: - """Track progress by iterating over a sequence. - - Args: - sequence (Iterable[ProgressType]): A sequence (must support "len") you wish to iterate over. - description (str, optional): Description of task show next to progress bar. Defaults to "Working". - total: (float, optional): Total number of steps. Default is len(sequence). - auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True. - transient: (bool, optional): Clear the progress on exit. Defaults to False. - console (Console, optional): Console to write to. Default creates internal Console instance. - refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10. - style (StyleType, optional): Style for the bar background. Defaults to "bar.back". - complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete". - finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished". - pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse". - update_period (float, optional): Minimum time (in seconds) between calls to update(). Defaults to 0.1. - disable (bool, optional): Disable display of progress. - show_speed (bool, optional): Show speed if total isn't known. Defaults to True. - Returns: - Iterable[ProgressType]: An iterable of the values in the sequence. 
- - """ - - columns: List["ProgressColumn"] = ( - [TextColumn("[progress.description]{task.description}")] if description else [] - ) - columns.extend( - ( - BarColumn( - style=style, - complete_style=complete_style, - finished_style=finished_style, - pulse_style=pulse_style, - ), - TaskProgressColumn(show_speed=show_speed), - TimeRemainingColumn(elapsed_when_finished=True), - ) - ) - progress = Progress( - *columns, - auto_refresh=auto_refresh, - console=console, - transient=transient, - get_time=get_time, - refresh_per_second=refresh_per_second or 10, - disable=disable, - ) - - with progress: - yield from progress.track( - sequence, total=total, description=description, update_period=update_period - ) - - -class _Reader(RawIOBase, BinaryIO): - """A reader that tracks progress while it's being read from.""" - - def __init__( - self, - handle: BinaryIO, - progress: "Progress", - task: TaskID, - close_handle: bool = True, - ) -> None: - self.handle = handle - self.progress = progress - self.task = task - self.close_handle = close_handle - self._closed = False - - def __enter__(self) -> "_Reader": - self.handle.__enter__() - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.close() - - def __iter__(self) -> BinaryIO: - return self - - def __next__(self) -> bytes: - line = next(self.handle) - self.progress.advance(self.task, advance=len(line)) - return line - - @property - def closed(self) -> bool: - return self._closed - - def fileno(self) -> int: - return self.handle.fileno() - - def isatty(self) -> bool: - return self.handle.isatty() - - @property - def mode(self) -> str: - return self.handle.mode - - @property - def name(self) -> str: - return self.handle.name - - def readable(self) -> bool: - return self.handle.readable() - - def seekable(self) -> bool: - return self.handle.seekable() - - def writable(self) -> bool: - return False - - def read(self, size: int = -1) -> bytes: - block = self.handle.read(size) - self.progress.advance(self.task, advance=len(block)) - return block - - def readinto(self, b: Union[bytearray, memoryview, mmap]): # type: ignore[no-untyped-def, override] - n = self.handle.readinto(b) # type: ignore[attr-defined] - self.progress.advance(self.task, advance=n) - return n - - def readline(self, size: int = -1) -> bytes: # type: ignore[override] - line = self.handle.readline(size) - self.progress.advance(self.task, advance=len(line)) - return line - - def readlines(self, hint: int = -1) -> List[bytes]: - lines = self.handle.readlines(hint) - self.progress.advance(self.task, advance=sum(map(len, lines))) - return lines - - def close(self) -> None: - if self.close_handle: - self.handle.close() - self._closed = True - - def seek(self, offset: int, whence: int = 0) -> int: - pos = self.handle.seek(offset, whence) - self.progress.update(self.task, completed=pos) - return pos - - def tell(self) -> int: - return self.handle.tell() - - def write(self, s: Any) -> int: - raise UnsupportedOperation("write") - - -class _ReadContext(ContextManager[_I], Generic[_I]): - """A utility class to handle a context for both a reader and a progress.""" - - def __init__(self, progress: "Progress", reader: _I) -> None: - self.progress = progress - self.reader: _I = reader - - def __enter__(self) -> _I: - self.progress.start() - return self.reader.__enter__() - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: 
Optional[TracebackType], - ) -> None: - self.progress.stop() - self.reader.__exit__(exc_type, exc_val, exc_tb) - - -def wrap_file( - file: BinaryIO, - total: int, - *, - description: str = "Reading...", - auto_refresh: bool = True, - console: Optional[Console] = None, - transient: bool = False, - get_time: Optional[Callable[[], float]] = None, - refresh_per_second: float = 10, - style: StyleType = "bar.back", - complete_style: StyleType = "bar.complete", - finished_style: StyleType = "bar.finished", - pulse_style: StyleType = "bar.pulse", - disable: bool = False, -) -> ContextManager[BinaryIO]: - """Read bytes from a file while tracking progress. - - Args: - file (Union[str, PathLike[str], BinaryIO]): The path to the file to read, or a file-like object in binary mode. - total (int): Total number of bytes to read. - description (str, optional): Description of task show next to progress bar. Defaults to "Reading". - auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True. - transient: (bool, optional): Clear the progress on exit. Defaults to False. - console (Console, optional): Console to write to. Default creates internal Console instance. - refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10. - style (StyleType, optional): Style for the bar background. Defaults to "bar.back". - complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete". - finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished". - pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse". - disable (bool, optional): Disable display of progress. - Returns: - ContextManager[BinaryIO]: A context manager yielding a progress reader. 
- - """ - - columns: List["ProgressColumn"] = ( - [TextColumn("[progress.description]{task.description}")] if description else [] - ) - columns.extend( - ( - BarColumn( - style=style, - complete_style=complete_style, - finished_style=finished_style, - pulse_style=pulse_style, - ), - DownloadColumn(), - TimeRemainingColumn(), - ) - ) - progress = Progress( - *columns, - auto_refresh=auto_refresh, - console=console, - transient=transient, - get_time=get_time, - refresh_per_second=refresh_per_second or 10, - disable=disable, - ) - - reader = progress.wrap_file(file, total=total, description=description) - return _ReadContext(progress, reader) - - -@typing.overload -def open( - file: Union[str, "PathLike[str]", bytes], - mode: Union[Literal["rt"], Literal["r"]], - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - *, - total: Optional[int] = None, - description: str = "Reading...", - auto_refresh: bool = True, - console: Optional[Console] = None, - transient: bool = False, - get_time: Optional[Callable[[], float]] = None, - refresh_per_second: float = 10, - style: StyleType = "bar.back", - complete_style: StyleType = "bar.complete", - finished_style: StyleType = "bar.finished", - pulse_style: StyleType = "bar.pulse", - disable: bool = False, -) -> ContextManager[TextIO]: - pass - - -@typing.overload -def open( - file: Union[str, "PathLike[str]", bytes], - mode: Literal["rb"], - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - *, - total: Optional[int] = None, - description: str = "Reading...", - auto_refresh: bool = True, - console: Optional[Console] = None, - transient: bool = False, - get_time: Optional[Callable[[], float]] = None, - refresh_per_second: float = 10, - style: StyleType = "bar.back", - complete_style: StyleType = "bar.complete", - finished_style: StyleType = "bar.finished", - pulse_style: StyleType = "bar.pulse", - disable: bool = False, -) -> ContextManager[BinaryIO]: - pass - - -def open( - file: Union[str, "PathLike[str]", bytes], - mode: Union[Literal["rb"], Literal["rt"], Literal["r"]] = "r", - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - *, - total: Optional[int] = None, - description: str = "Reading...", - auto_refresh: bool = True, - console: Optional[Console] = None, - transient: bool = False, - get_time: Optional[Callable[[], float]] = None, - refresh_per_second: float = 10, - style: StyleType = "bar.back", - complete_style: StyleType = "bar.complete", - finished_style: StyleType = "bar.finished", - pulse_style: StyleType = "bar.pulse", - disable: bool = False, -) -> Union[ContextManager[BinaryIO], ContextManager[TextIO]]: - """Read bytes from a file while tracking progress. - - Args: - path (Union[str, PathLike[str], BinaryIO]): The path to the file to read, or a file-like object in binary mode. - mode (str): The mode to use to open the file. Only supports "r", "rb" or "rt". - buffering (int): The buffering strategy to use, see :func:`io.open`. - encoding (str, optional): The encoding to use when reading in text mode, see :func:`io.open`. - errors (str, optional): The error handling strategy for decoding errors, see :func:`io.open`. - newline (str, optional): The strategy for handling newlines in text mode, see :func:`io.open` - total: (int, optional): Total number of bytes to read. Must be provided if reading from a file handle. 
Default for a path is os.stat(file).st_size. - description (str, optional): Description of task show next to progress bar. Defaults to "Reading". - auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True. - transient: (bool, optional): Clear the progress on exit. Defaults to False. - console (Console, optional): Console to write to. Default creates internal Console instance. - refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10. - style (StyleType, optional): Style for the bar background. Defaults to "bar.back". - complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete". - finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished". - pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse". - disable (bool, optional): Disable display of progress. - encoding (str, optional): The encoding to use when reading in text mode. - - Returns: - ContextManager[BinaryIO]: A context manager yielding a progress reader. - - """ - - columns: List["ProgressColumn"] = ( - [TextColumn("[progress.description]{task.description}")] if description else [] - ) - columns.extend( - ( - BarColumn( - style=style, - complete_style=complete_style, - finished_style=finished_style, - pulse_style=pulse_style, - ), - DownloadColumn(), - TimeRemainingColumn(), - ) - ) - progress = Progress( - *columns, - auto_refresh=auto_refresh, - console=console, - transient=transient, - get_time=get_time, - refresh_per_second=refresh_per_second or 10, - disable=disable, - ) - - reader = progress.open( - file, - mode=mode, - buffering=buffering, - encoding=encoding, - errors=errors, - newline=newline, - total=total, - description=description, - ) - return _ReadContext(progress, reader) # type: ignore[return-value, type-var] - - -class ProgressColumn(ABC): - """Base class for a widget to use in progress display.""" - - max_refresh: Optional[float] = None - - def __init__(self, table_column: Optional[Column] = None) -> None: - self._table_column = table_column - self._renderable_cache: Dict[TaskID, Tuple[float, RenderableType]] = {} - self._update_time: Optional[float] = None - - def get_table_column(self) -> Column: - """Get a table column, used to build tasks table.""" - return self._table_column or Column() - - def __call__(self, task: "Task") -> RenderableType: - """Called by the Progress object to return a renderable for the given task. - - Args: - task (Task): An object containing information regarding the task. - - Returns: - RenderableType: Anything renderable (including str). - """ - current_time = task.get_time() - if self.max_refresh is not None and not task.completed: - try: - timestamp, renderable = self._renderable_cache[task.id] - except KeyError: - pass - else: - if timestamp + self.max_refresh > current_time: - return renderable - - renderable = self.render(task) - self._renderable_cache[task.id] = (current_time, renderable) - return renderable - - @abstractmethod - def render(self, task: "Task") -> RenderableType: - """Should return a renderable object.""" - - -class RenderableColumn(ProgressColumn): - """A column to insert an arbitrary column. - - Args: - renderable (RenderableType, optional): Any renderable. Defaults to empty string. 
- """ - - def __init__( - self, renderable: RenderableType = "", *, table_column: Optional[Column] = None - ): - self.renderable = renderable - super().__init__(table_column=table_column) - - def render(self, task: "Task") -> RenderableType: - return self.renderable - - -class SpinnerColumn(ProgressColumn): - """A column with a 'spinner' animation. - - Args: - spinner_name (str, optional): Name of spinner animation. Defaults to "dots". - style (StyleType, optional): Style of spinner. Defaults to "progress.spinner". - speed (float, optional): Speed factor of spinner. Defaults to 1.0. - finished_text (TextType, optional): Text used when task is finished. Defaults to " ". - """ - - def __init__( - self, - spinner_name: str = "dots", - style: Optional[StyleType] = "progress.spinner", - speed: float = 1.0, - finished_text: TextType = " ", - table_column: Optional[Column] = None, - ): - self.spinner = Spinner(spinner_name, style=style, speed=speed) - self.finished_text = ( - Text.from_markup(finished_text) - if isinstance(finished_text, str) - else finished_text - ) - super().__init__(table_column=table_column) - - def set_spinner( - self, - spinner_name: str, - spinner_style: Optional[StyleType] = "progress.spinner", - speed: float = 1.0, - ) -> None: - """Set a new spinner. - - Args: - spinner_name (str): Spinner name, see python -m rich.spinner. - spinner_style (Optional[StyleType], optional): Spinner style. Defaults to "progress.spinner". - speed (float, optional): Speed factor of spinner. Defaults to 1.0. - """ - self.spinner = Spinner(spinner_name, style=spinner_style, speed=speed) - - def render(self, task: "Task") -> RenderableType: - text = ( - self.finished_text - if task.finished - else self.spinner.render(task.get_time()) - ) - return text - - -class TextColumn(ProgressColumn): - """A column containing text.""" - - def __init__( - self, - text_format: str, - style: StyleType = "none", - justify: JustifyMethod = "left", - markup: bool = True, - highlighter: Optional[Highlighter] = None, - table_column: Optional[Column] = None, - ) -> None: - self.text_format = text_format - self.justify: JustifyMethod = justify - self.style = style - self.markup = markup - self.highlighter = highlighter - super().__init__(table_column=table_column or Column(no_wrap=True)) - - def render(self, task: "Task") -> Text: - _text = self.text_format.format(task=task) - if self.markup: - text = Text.from_markup(_text, style=self.style, justify=self.justify) - else: - text = Text(_text, style=self.style, justify=self.justify) - if self.highlighter: - self.highlighter.highlight(text) - return text - - -class BarColumn(ProgressColumn): - """Renders a visual progress bar. - - Args: - bar_width (Optional[int], optional): Width of bar or None for full width. Defaults to 40. - style (StyleType, optional): Style for the bar background. Defaults to "bar.back". - complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete". - finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished". - pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse". 
- """ - - def __init__( - self, - bar_width: Optional[int] = 40, - style: StyleType = "bar.back", - complete_style: StyleType = "bar.complete", - finished_style: StyleType = "bar.finished", - pulse_style: StyleType = "bar.pulse", - table_column: Optional[Column] = None, - ) -> None: - self.bar_width = bar_width - self.style = style - self.complete_style = complete_style - self.finished_style = finished_style - self.pulse_style = pulse_style - super().__init__(table_column=table_column) - - def render(self, task: "Task") -> ProgressBar: - """Gets a progress bar widget for a task.""" - return ProgressBar( - total=max(0, task.total) if task.total is not None else None, - completed=max(0, task.completed), - width=None if self.bar_width is None else max(1, self.bar_width), - pulse=not task.started, - animation_time=task.get_time(), - style=self.style, - complete_style=self.complete_style, - finished_style=self.finished_style, - pulse_style=self.pulse_style, - ) - - -class TimeElapsedColumn(ProgressColumn): - """Renders time elapsed.""" - - def render(self, task: "Task") -> Text: - """Show time elapsed.""" - elapsed = task.finished_time if task.finished else task.elapsed - if elapsed is None: - return Text("-:--:--", style="progress.elapsed") - delta = timedelta(seconds=int(elapsed)) - return Text(str(delta), style="progress.elapsed") - - -class TaskProgressColumn(TextColumn): - """Show task progress as a percentage. - - Args: - text_format (str, optional): Format for percentage display. Defaults to "[progress.percentage]{task.percentage:>3.0f}%". - text_format_no_percentage (str, optional): Format if percentage is unknown. Defaults to "". - style (StyleType, optional): Style of output. Defaults to "none". - justify (JustifyMethod, optional): Text justification. Defaults to "left". - markup (bool, optional): Enable markup. Defaults to True. - highlighter (Optional[Highlighter], optional): Highlighter to apply to output. Defaults to None. - table_column (Optional[Column], optional): Table Column to use. Defaults to None. - show_speed (bool, optional): Show speed if total is unknown. Defaults to False. - """ - - def __init__( - self, - text_format: str = "[progress.percentage]{task.percentage:>3.0f}%", - text_format_no_percentage: str = "", - style: StyleType = "none", - justify: JustifyMethod = "left", - markup: bool = True, - highlighter: Optional[Highlighter] = None, - table_column: Optional[Column] = None, - show_speed: bool = False, - ) -> None: - - self.text_format_no_percentage = text_format_no_percentage - self.show_speed = show_speed - super().__init__( - text_format=text_format, - style=style, - justify=justify, - markup=markup, - highlighter=highlighter, - table_column=table_column, - ) - - @classmethod - def render_speed(cls, speed: Optional[float]) -> Text: - """Render the speed in iterations per second. - - Args: - task (Task): A Task object. - - Returns: - Text: Text object containing the task speed. 
- """ - if speed is None: - return Text("", style="progress.percentage") - unit, suffix = filesize.pick_unit_and_suffix( - int(speed), - ["", "×10³", "×10⁶", "×10⁹", "×10¹²"], - 1000, - ) - data_speed = speed / unit - return Text(f"{data_speed:.1f}{suffix} it/s", style="progress.percentage") - - def render(self, task: "Task") -> Text: - if task.total is None and self.show_speed: - return self.render_speed(task.finished_speed or task.speed) - text_format = ( - self.text_format_no_percentage if task.total is None else self.text_format - ) - _text = text_format.format(task=task) - if self.markup: - text = Text.from_markup(_text, style=self.style, justify=self.justify) - else: - text = Text(_text, style=self.style, justify=self.justify) - if self.highlighter: - self.highlighter.highlight(text) - return text - - -class TimeRemainingColumn(ProgressColumn): - """Renders estimated time remaining. - - Args: - compact (bool, optional): Render MM:SS when time remaining is less than an hour. Defaults to False. - elapsed_when_finished (bool, optional): Render time elapsed when the task is finished. Defaults to False. - """ - - # Only refresh twice a second to prevent jitter - max_refresh = 0.5 - - def __init__( - self, - compact: bool = False, - elapsed_when_finished: bool = False, - table_column: Optional[Column] = None, - ): - self.compact = compact - self.elapsed_when_finished = elapsed_when_finished - super().__init__(table_column=table_column) - - def render(self, task: "Task") -> Text: - """Show time remaining.""" - if self.elapsed_when_finished and task.finished: - task_time = task.finished_time - style = "progress.elapsed" - else: - task_time = task.time_remaining - style = "progress.remaining" - - if task.total is None: - return Text("", style=style) - - if task_time is None: - return Text("--:--" if self.compact else "-:--:--", style=style) - - # Based on https://github.com/tqdm/tqdm/blob/master/tqdm/std.py - minutes, seconds = divmod(int(task_time), 60) - hours, minutes = divmod(minutes, 60) - - if self.compact and not hours: - formatted = f"{minutes:02d}:{seconds:02d}" - else: - formatted = f"{hours:d}:{minutes:02d}:{seconds:02d}" - - return Text(formatted, style=style) - - -class FileSizeColumn(ProgressColumn): - """Renders completed filesize.""" - - def render(self, task: "Task") -> Text: - """Show data completed.""" - data_size = filesize.decimal(int(task.completed)) - return Text(data_size, style="progress.filesize") - - -class TotalFileSizeColumn(ProgressColumn): - """Renders total filesize.""" - - def render(self, task: "Task") -> Text: - """Show data completed.""" - data_size = filesize.decimal(int(task.total)) if task.total is not None else "" - return Text(data_size, style="progress.filesize.total") - - -class MofNCompleteColumn(ProgressColumn): - """Renders completed count/total, e.g. ' 10/1000'. - - Best for bounded tasks with int quantities. - - Space pads the completed count so that progress length does not change as task progresses - past powers of 10. - - Args: - separator (str, optional): Text to separate completed and total values. Defaults to "/". - """ - - def __init__(self, separator: str = "/", table_column: Optional[Column] = None): - self.separator = separator - super().__init__(table_column=table_column) - - def render(self, task: "Task") -> Text: - """Show completed/total.""" - completed = int(task.completed) - total = int(task.total) if task.total is not None else "?" 
- total_width = len(str(total)) - return Text( - f"{completed:{total_width}d}{self.separator}{total}", - style="progress.download", - ) - - -class DownloadColumn(ProgressColumn): - """Renders file size downloaded and total, e.g. '0.5/2.3 GB'. - - Args: - binary_units (bool, optional): Use binary units, KiB, MiB etc. Defaults to False. - """ - - def __init__( - self, binary_units: bool = False, table_column: Optional[Column] = None - ) -> None: - self.binary_units = binary_units - super().__init__(table_column=table_column) - - def render(self, task: "Task") -> Text: - """Calculate common unit for completed and total.""" - completed = int(task.completed) - - unit_and_suffix_calculation_base = ( - int(task.total) if task.total is not None else completed - ) - if self.binary_units: - unit, suffix = filesize.pick_unit_and_suffix( - unit_and_suffix_calculation_base, - ["bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"], - 1024, - ) - else: - unit, suffix = filesize.pick_unit_and_suffix( - unit_and_suffix_calculation_base, - ["bytes", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"], - 1000, - ) - precision = 0 if unit == 1 else 1 - - completed_ratio = completed / unit - completed_str = f"{completed_ratio:,.{precision}f}" - - if task.total is not None: - total = int(task.total) - total_ratio = total / unit - total_str = f"{total_ratio:,.{precision}f}" - else: - total_str = "?" - - download_status = f"{completed_str}/{total_str} {suffix}" - download_text = Text(download_status, style="progress.download") - return download_text - - -class TransferSpeedColumn(ProgressColumn): - """Renders human readable transfer speed.""" - - def render(self, task: "Task") -> Text: - """Show data transfer speed.""" - speed = task.finished_speed or task.speed - if speed is None: - return Text("?", style="progress.data.speed") - data_speed = filesize.decimal(int(speed)) - return Text(f"{data_speed}/s", style="progress.data.speed") - - -class ProgressSample(NamedTuple): - """Sample of progress for a given time.""" - - timestamp: float - """Timestamp of sample.""" - completed: float - """Number of steps completed.""" - - -@dataclass -class Task: - """Information regarding a progress task. - - This object should be considered read-only outside of the :class:`~Progress` class. 
- - """ - - id: TaskID - """Task ID associated with this task (used in Progress methods).""" - - description: str - """str: Description of the task.""" - - total: Optional[float] - """Optional[float]: Total number of steps in this task.""" - - completed: float - """float: Number of steps completed""" - - _get_time: GetTimeCallable - """Callable to get the current time.""" - - finished_time: Optional[float] = None - """float: Time task was finished.""" - - visible: bool = True - """bool: Indicates if this task is visible in the progress display.""" - - fields: Dict[str, Any] = field(default_factory=dict) - """dict: Arbitrary fields passed in via Progress.update.""" - - start_time: Optional[float] = field(default=None, init=False, repr=False) - """Optional[float]: Time this task was started, or None if not started.""" - - stop_time: Optional[float] = field(default=None, init=False, repr=False) - """Optional[float]: Time this task was stopped, or None if not stopped.""" - - finished_speed: Optional[float] = None - """Optional[float]: The last speed for a finished task.""" - - _progress: Deque[ProgressSample] = field( - default_factory=lambda: deque(maxlen=1000), init=False, repr=False - ) - - _lock: RLock = field(repr=False, default_factory=RLock) - """Thread lock.""" - - def get_time(self) -> float: - """float: Get the current time, in seconds.""" - return self._get_time() - - @property - def started(self) -> bool: - """bool: Check if the task as started.""" - return self.start_time is not None - - @property - def remaining(self) -> Optional[float]: - """Optional[float]: Get the number of steps remaining, if a non-None total was set.""" - if self.total is None: - return None - return self.total - self.completed - - @property - def elapsed(self) -> Optional[float]: - """Optional[float]: Time elapsed since task was started, or ``None`` if the task hasn't started.""" - if self.start_time is None: - return None - if self.stop_time is not None: - return self.stop_time - self.start_time - return self.get_time() - self.start_time - - @property - def finished(self) -> bool: - """Check if the task has finished.""" - return self.finished_time is not None - - @property - def percentage(self) -> float: - """float: Get progress of task as a percentage. 
If a None total was set, returns 0""" - if not self.total: - return 0.0 - completed = (self.completed / self.total) * 100.0 - completed = min(100.0, max(0.0, completed)) - return completed - - @property - def speed(self) -> Optional[float]: - """Optional[float]: Get the estimated speed in steps per second.""" - if self.start_time is None: - return None - with self._lock: - progress = self._progress - if not progress: - return None - total_time = progress[-1].timestamp - progress[0].timestamp - if total_time == 0: - return None - iter_progress = iter(progress) - next(iter_progress) - total_completed = sum(sample.completed for sample in iter_progress) - speed = total_completed / total_time - return speed - - @property - def time_remaining(self) -> Optional[float]: - """Optional[float]: Get estimated time to completion, or ``None`` if no data.""" - if self.finished: - return 0.0 - speed = self.speed - if not speed: - return None - remaining = self.remaining - if remaining is None: - return None - estimate = ceil(remaining / speed) - return estimate - - def _reset(self) -> None: - """Reset progress.""" - self._progress.clear() - self.finished_time = None - self.finished_speed = None - - -class Progress(JupyterMixin): - """Renders an auto-updating progress bar(s). - - Args: - console (Console, optional): Optional Console instance. Default will an internal Console instance writing to stdout. - auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()`. - refresh_per_second (Optional[float], optional): Number of times per second to refresh the progress information or None to use default (10). Defaults to None. - speed_estimate_period: (float, optional): Period (in seconds) used to calculate the speed estimate. Defaults to 30. - transient: (bool, optional): Clear the progress on exit. Defaults to False. - redirect_stdout: (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True. - redirect_stderr: (bool, optional): Enable redirection of stderr. Defaults to True. - get_time: (Callable, optional): A callable that gets the current time, or None to use Console.get_time. Defaults to None. - disable (bool, optional): Disable progress display. Defaults to False - expand (bool, optional): Expand tasks table to fit width. Defaults to False. 
- """ - - def __init__( - self, - *columns: Union[str, ProgressColumn], - console: Optional[Console] = None, - auto_refresh: bool = True, - refresh_per_second: float = 10, - speed_estimate_period: float = 30.0, - transient: bool = False, - redirect_stdout: bool = True, - redirect_stderr: bool = True, - get_time: Optional[GetTimeCallable] = None, - disable: bool = False, - expand: bool = False, - ) -> None: - assert refresh_per_second > 0, "refresh_per_second must be > 0" - self._lock = RLock() - self.columns = columns or self.get_default_columns() - self.speed_estimate_period = speed_estimate_period - - self.disable = disable - self.expand = expand - self._tasks: Dict[TaskID, Task] = {} - self._task_index: TaskID = TaskID(0) - self.live = Live( - console=console or get_console(), - auto_refresh=auto_refresh, - refresh_per_second=refresh_per_second, - transient=transient, - redirect_stdout=redirect_stdout, - redirect_stderr=redirect_stderr, - get_renderable=self.get_renderable, - ) - self.get_time = get_time or self.console.get_time - self.print = self.console.print - self.log = self.console.log - - @classmethod - def get_default_columns(cls) -> Tuple[ProgressColumn, ...]: - """Get the default columns used for a new Progress instance: - - a text column for the description (TextColumn) - - the bar itself (BarColumn) - - a text column showing completion percentage (TextColumn) - - an estimated-time-remaining column (TimeRemainingColumn) - If the Progress instance is created without passing a columns argument, - the default columns defined here will be used. - - You can also create a Progress instance using custom columns before - and/or after the defaults, as in this example: - - progress = Progress( - SpinnerColumn(), - *Progress.default_columns(), - "Elapsed:", - TimeElapsedColumn(), - ) - - This code shows the creation of a Progress display, containing - a spinner to the left, the default columns, and a labeled elapsed - time column. - """ - return ( - TextColumn("[progress.description]{task.description}"), - BarColumn(), - TaskProgressColumn(), - TimeRemainingColumn(), - ) - - @property - def console(self) -> Console: - return self.live.console - - @property - def tasks(self) -> List[Task]: - """Get a list of Task instances.""" - with self._lock: - return list(self._tasks.values()) - - @property - def task_ids(self) -> List[TaskID]: - """A list of task IDs.""" - with self._lock: - return list(self._tasks.keys()) - - @property - def finished(self) -> bool: - """Check if all tasks have been completed.""" - with self._lock: - if not self._tasks: - return True - return all(task.finished for task in self._tasks.values()) - - def start(self) -> None: - """Start the progress display.""" - if not self.disable: - self.live.start(refresh=True) - - def stop(self) -> None: - """Stop the progress display.""" - self.live.stop() - if not self.console.is_interactive: - self.console.print() - - def __enter__(self) -> "Progress": - self.start() - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.stop() - - def track( - self, - sequence: Union[Iterable[ProgressType], Sequence[ProgressType]], - total: Optional[float] = None, - task_id: Optional[TaskID] = None, - description: str = "Working...", - update_period: float = 0.1, - ) -> Iterable[ProgressType]: - """Track progress by iterating over a sequence. 
- - Args: - sequence (Sequence[ProgressType]): A sequence of values you want to iterate over and track progress. - total: (float, optional): Total number of steps. Default is len(sequence). - task_id: (TaskID): Task to track. Default is new task. - description: (str, optional): Description of task, if new task is created. - update_period (float, optional): Minimum time (in seconds) between calls to update(). Defaults to 0.1. - - Returns: - Iterable[ProgressType]: An iterable of values taken from the provided sequence. - """ - if total is None: - total = float(length_hint(sequence)) or None - - if task_id is None: - task_id = self.add_task(description, total=total) - else: - self.update(task_id, total=total) - - if self.live.auto_refresh: - with _TrackThread(self, task_id, update_period) as track_thread: - for value in sequence: - yield value - track_thread.completed += 1 - else: - advance = self.advance - refresh = self.refresh - for value in sequence: - yield value - advance(task_id, 1) - refresh() - - def wrap_file( - self, - file: BinaryIO, - total: Optional[int] = None, - *, - task_id: Optional[TaskID] = None, - description: str = "Reading...", - ) -> BinaryIO: - """Track progress file reading from a binary file. - - Args: - file (BinaryIO): A file-like object opened in binary mode. - total (int, optional): Total number of bytes to read. This must be provided unless a task with a total is also given. - task_id (TaskID): Task to track. Default is new task. - description (str, optional): Description of task, if new task is created. - - Returns: - BinaryIO: A readable file-like object in binary mode. - - Raises: - ValueError: When no total value can be extracted from the arguments or the task. - """ - # attempt to recover the total from the task - total_bytes: Optional[float] = None - if total is not None: - total_bytes = total - elif task_id is not None: - with self._lock: - total_bytes = self._tasks[task_id].total - if total_bytes is None: - raise ValueError( - f"unable to get the total number of bytes, please specify 'total'" - ) - - # update total of task or create new task - if task_id is None: - task_id = self.add_task(description, total=total_bytes) - else: - self.update(task_id, total=total_bytes) - - return _Reader(file, self, task_id, close_handle=False) - - @typing.overload - def open( - self, - file: Union[str, "PathLike[str]", bytes], - mode: Literal["rb"], - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - *, - total: Optional[int] = None, - task_id: Optional[TaskID] = None, - description: str = "Reading...", - ) -> BinaryIO: - pass - - @typing.overload - def open( - self, - file: Union[str, "PathLike[str]", bytes], - mode: Union[Literal["r"], Literal["rt"]], - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - *, - total: Optional[int] = None, - task_id: Optional[TaskID] = None, - description: str = "Reading...", - ) -> TextIO: - pass - - def open( - self, - file: Union[str, "PathLike[str]", bytes], - mode: Union[Literal["rb"], Literal["rt"], Literal["r"]] = "r", - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - *, - total: Optional[int] = None, - task_id: Optional[TaskID] = None, - description: str = "Reading...", - ) -> Union[BinaryIO, TextIO]: - """Track progress while reading from a binary file. 
- - Args: - path (Union[str, PathLike[str]]): The path to the file to read. - mode (str): The mode to use to open the file. Only supports "r", "rb" or "rt". - buffering (int): The buffering strategy to use, see :func:`io.open`. - encoding (str, optional): The encoding to use when reading in text mode, see :func:`io.open`. - errors (str, optional): The error handling strategy for decoding errors, see :func:`io.open`. - newline (str, optional): The strategy for handling newlines in text mode, see :func:`io.open`. - total (int, optional): Total number of bytes to read. If none given, os.stat(path).st_size is used. - task_id (TaskID): Task to track. Default is new task. - description (str, optional): Description of task, if new task is created. - - Returns: - BinaryIO: A readable file-like object in binary mode. - - Raises: - ValueError: When an invalid mode is given. - """ - # normalize the mode (always rb, rt) - _mode = "".join(sorted(mode, reverse=False)) - if _mode not in ("br", "rt", "r"): - raise ValueError("invalid mode {!r}".format(mode)) - - # patch buffering to provide the same behaviour as the builtin `open` - line_buffering = buffering == 1 - if _mode == "br" and buffering == 1: - warnings.warn( - "line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used", - RuntimeWarning, - ) - buffering = -1 - elif _mode in ("rt", "r"): - if buffering == 0: - raise ValueError("can't have unbuffered text I/O") - elif buffering == 1: - buffering = -1 - - # attempt to get the total with `os.stat` - if total is None: - total = stat(file).st_size - - # update total of task or create new task - if task_id is None: - task_id = self.add_task(description, total=total) - else: - self.update(task_id, total=total) - - # open the file in binary mode, - handle = io.open(file, "rb", buffering=buffering) - reader = _Reader(handle, self, task_id, close_handle=True) - - # wrap the reader in a `TextIOWrapper` if text mode - if mode in ("r", "rt"): - return io.TextIOWrapper( - reader, - encoding=encoding, - errors=errors, - newline=newline, - line_buffering=line_buffering, - ) - - return reader - - def start_task(self, task_id: TaskID) -> None: - """Start a task. - - Starts a task (used when calculating elapsed time). You may need to call this manually, - if you called ``add_task`` with ``start=False``. - - Args: - task_id (TaskID): ID of task. - """ - with self._lock: - task = self._tasks[task_id] - if task.start_time is None: - task.start_time = self.get_time() - - def stop_task(self, task_id: TaskID) -> None: - """Stop a task. - - This will freeze the elapsed time on the task. - - Args: - task_id (TaskID): ID of task. - """ - with self._lock: - task = self._tasks[task_id] - current_time = self.get_time() - if task.start_time is None: - task.start_time = current_time - task.stop_time = current_time - - def update( - self, - task_id: TaskID, - *, - total: Optional[float] = None, - completed: Optional[float] = None, - advance: Optional[float] = None, - description: Optional[str] = None, - visible: Optional[bool] = None, - refresh: bool = False, - **fields: Any, - ) -> None: - """Update information associated with a task. - - Args: - task_id (TaskID): Task id (returned by add_task). - total (float, optional): Updates task.total if not None. - completed (float, optional): Updates task.completed if not None. - advance (float, optional): Add a value to task.completed if not None. - description (str, optional): Change task description if not None. 
- visible (bool, optional): Set visible flag if not None. - refresh (bool): Force a refresh of progress information. Default is False. - **fields (Any): Additional data fields required for rendering. - """ - with self._lock: - task = self._tasks[task_id] - completed_start = task.completed - - if total is not None and total != task.total: - task.total = total - task._reset() - if advance is not None: - task.completed += advance - if completed is not None: - task.completed = completed - if description is not None: - task.description = description - if visible is not None: - task.visible = visible - task.fields.update(fields) - update_completed = task.completed - completed_start - - current_time = self.get_time() - old_sample_time = current_time - self.speed_estimate_period - _progress = task._progress - - popleft = _progress.popleft - while _progress and _progress[0].timestamp < old_sample_time: - popleft() - if update_completed > 0: - _progress.append(ProgressSample(current_time, update_completed)) - if ( - task.total is not None - and task.completed >= task.total - and task.finished_time is None - ): - task.finished_time = task.elapsed - - if refresh: - self.refresh() - - def reset( - self, - task_id: TaskID, - *, - start: bool = True, - total: Optional[float] = None, - completed: int = 0, - visible: Optional[bool] = None, - description: Optional[str] = None, - **fields: Any, - ) -> None: - """Reset a task so completed is 0 and the clock is reset. - - Args: - task_id (TaskID): ID of task. - start (bool, optional): Start the task after reset. Defaults to True. - total (float, optional): New total steps in task, or None to use current total. Defaults to None. - completed (int, optional): Number of steps completed. Defaults to 0. - visible (bool, optional): Enable display of the task. Defaults to True. - description (str, optional): Change task description if not None. Defaults to None. - **fields (str): Additional data fields required for rendering. - """ - current_time = self.get_time() - with self._lock: - task = self._tasks[task_id] - task._reset() - task.start_time = current_time if start else None - if total is not None: - task.total = total - task.completed = completed - if visible is not None: - task.visible = visible - if fields: - task.fields = fields - if description is not None: - task.description = description - task.finished_time = None - self.refresh() - - def advance(self, task_id: TaskID, advance: float = 1) -> None: - """Advance task by a number of steps. - - Args: - task_id (TaskID): ID of task. - advance (float): Number of steps to advance. Default is 1. 
- """ - current_time = self.get_time() - with self._lock: - task = self._tasks[task_id] - completed_start = task.completed - task.completed += advance - update_completed = task.completed - completed_start - old_sample_time = current_time - self.speed_estimate_period - _progress = task._progress - - popleft = _progress.popleft - while _progress and _progress[0].timestamp < old_sample_time: - popleft() - while len(_progress) > 1000: - popleft() - _progress.append(ProgressSample(current_time, update_completed)) - if ( - task.total is not None - and task.completed >= task.total - and task.finished_time is None - ): - task.finished_time = task.elapsed - task.finished_speed = task.speed - - def refresh(self) -> None: - """Refresh (render) the progress information.""" - if not self.disable and self.live.is_started: - self.live.refresh() - - def get_renderable(self) -> RenderableType: - """Get a renderable for the progress display.""" - renderable = Group(*self.get_renderables()) - return renderable - - def get_renderables(self) -> Iterable[RenderableType]: - """Get a number of renderables for the progress display.""" - table = self.make_tasks_table(self.tasks) - yield table - - def make_tasks_table(self, tasks: Iterable[Task]) -> Table: - """Get a table to render the Progress display. - - Args: - tasks (Iterable[Task]): An iterable of Task instances, one per row of the table. - - Returns: - Table: A table instance. - """ - table_columns = ( - ( - Column(no_wrap=True) - if isinstance(_column, str) - else _column.get_table_column().copy() - ) - for _column in self.columns - ) - table = Table.grid(*table_columns, padding=(0, 1), expand=self.expand) - - for task in tasks: - if task.visible: - table.add_row( - *( - ( - column.format(task=task) - if isinstance(column, str) - else column(task) - ) - for column in self.columns - ) - ) - return table - - def __rich__(self) -> RenderableType: - """Makes the Progress class itself renderable.""" - with self._lock: - return self.get_renderable() - - def add_task( - self, - description: str, - start: bool = True, - total: Optional[float] = 100.0, - completed: int = 0, - visible: bool = True, - **fields: Any, - ) -> TaskID: - """Add a new 'task' to the Progress display. - - Args: - description (str): A description of the task. - start (bool, optional): Start the task immediately (to calculate elapsed time). If set to False, - you will need to call `start` manually. Defaults to True. - total (float, optional): Number of total steps in the progress if known. - Set to None to render a pulsing animation. Defaults to 100. - completed (int, optional): Number of steps completed so far. Defaults to 0. - visible (bool, optional): Enable display of the task. Defaults to True. - **fields (str): Additional data fields required for rendering. - - Returns: - TaskID: An ID you can use when calling `update`. - """ - with self._lock: - task = Task( - self._task_index, - description, - total, - completed, - visible=visible, - fields=fields, - _get_time=self.get_time, - _lock=self._lock, - ) - self._tasks[self._task_index] = task - if start: - self.start_task(self._task_index) - new_task_index = self._task_index - self._task_index = TaskID(int(self._task_index) + 1) - self.refresh() - return new_task_index - - def remove_task(self, task_id: TaskID) -> None: - """Delete a task if it exists. - - Args: - task_id (TaskID): A task ID. 
- - """ - with self._lock: - del self._tasks[task_id] - - -if __name__ == "__main__": # pragma: no coverage - - import random - import time - - from .panel import Panel - from .rule import Rule - from .syntax import Syntax - from .table import Table - - syntax = Syntax( - '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]: - """Iterate and generate a tuple with a flag for last value.""" - iter_values = iter(values) - try: - previous_value = next(iter_values) - except StopIteration: - return - for value in iter_values: - yield False, previous_value - previous_value = value - yield True, previous_value''', - "python", - line_numbers=True, - ) - - table = Table("foo", "bar", "baz") - table.add_row("1", "2", "3") - - progress_renderables = [ - "Text may be printed while the progress bars are rendering.", - Panel("In fact, [i]any[/i] renderable will work"), - "Such as [magenta]tables[/]...", - table, - "Pretty printed structures...", - {"type": "example", "text": "Pretty printed"}, - "Syntax...", - syntax, - Rule("Give it a try!"), - ] - - from itertools import cycle - - examples = cycle(progress_renderables) - - console = Console(record=True) - - with Progress( - SpinnerColumn(), - *Progress.get_default_columns(), - TimeElapsedColumn(), - console=console, - transient=False, - ) as progress: - - task1 = progress.add_task("[red]Downloading", total=1000) - task2 = progress.add_task("[green]Processing", total=1000) - task3 = progress.add_task("[yellow]Thinking", total=None) - - while not progress.finished: - progress.update(task1, advance=0.5) - progress.update(task2, advance=0.3) - time.sleep(0.01) - if random.randint(0, 100) < 1: - progress.log(next(examples)) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/glob.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/glob.py deleted file mode 100644 index 87062b8187fa4f74a8c4edbaa60bd9a8b2d506a4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/glob.py +++ /dev/null @@ -1,167 +0,0 @@ -""" -Filename globbing utility. Mostly a copy of `glob` from Python 3.5. - -Changes include: - * `yield from` and PEP3102 `*` removed. - * Hidden files are not ignored. -""" - -import os -import re -import fnmatch - -__all__ = ["glob", "iglob", "escape"] - - -def glob(pathname, recursive=False): - """Return a list of paths matching a pathname pattern. - - The pattern may contain simple shell-style wildcards a la - fnmatch. However, unlike fnmatch, filenames starting with a - dot are special cases that are not matched by '*' and '?' - patterns. - - If recursive is true, the pattern '**' will match any files and - zero or more directories and subdirectories. - """ - return list(iglob(pathname, recursive=recursive)) - - -def iglob(pathname, recursive=False): - """Return an iterator which yields the paths matching a pathname pattern. - - The pattern may contain simple shell-style wildcards a la - fnmatch. However, unlike fnmatch, filenames starting with a - dot are special cases that are not matched by '*' and '?' - patterns. - - If recursive is true, the pattern '**' will match any files and - zero or more directories and subdirectories. 
- """ - it = _iglob(pathname, recursive) - if recursive and _isrecursive(pathname): - s = next(it) # skip empty string - assert not s - return it - - -def _iglob(pathname, recursive): - dirname, basename = os.path.split(pathname) - glob_in_dir = glob2 if recursive and _isrecursive(basename) else glob1 - - if not has_magic(pathname): - if basename: - if os.path.lexists(pathname): - yield pathname - else: - # Patterns ending with a slash should match only directories - if os.path.isdir(dirname): - yield pathname - return - - if not dirname: - yield from glob_in_dir(dirname, basename) - return - # `os.path.split()` returns the argument itself as a dirname if it is a - # drive or UNC path. Prevent an infinite recursion if a drive or UNC path - # contains magic characters (i.e. r'\\?\C:'). - if dirname != pathname and has_magic(dirname): - dirs = _iglob(dirname, recursive) - else: - dirs = [dirname] - if not has_magic(basename): - glob_in_dir = glob0 - for dirname in dirs: - for name in glob_in_dir(dirname, basename): - yield os.path.join(dirname, name) - - -# These 2 helper functions non-recursively glob inside a literal directory. -# They return a list of basenames. `glob1` accepts a pattern while `glob0` -# takes a literal basename (so it only has to check for its existence). - - -def glob1(dirname, pattern): - if not dirname: - if isinstance(pattern, bytes): - dirname = os.curdir.encode('ASCII') - else: - dirname = os.curdir - try: - names = os.listdir(dirname) - except OSError: - return [] - return fnmatch.filter(names, pattern) - - -def glob0(dirname, basename): - if not basename: - # `os.path.split()` returns an empty basename for paths ending with a - # directory separator. 'q*x/' should match only directories. - if os.path.isdir(dirname): - return [basename] - else: - if os.path.lexists(os.path.join(dirname, basename)): - return [basename] - return [] - - -# This helper function recursively yields relative pathnames inside a literal -# directory. - - -def glob2(dirname, pattern): - assert _isrecursive(pattern) - yield pattern[:0] - for x in _rlistdir(dirname): - yield x - - -# Recursively yields relative pathnames inside a literal directory. -def _rlistdir(dirname): - if not dirname: - if isinstance(dirname, bytes): - dirname = os.curdir.encode('ASCII') - else: - dirname = os.curdir - try: - names = os.listdir(dirname) - except os.error: - return - for x in names: - yield x - path = os.path.join(dirname, x) if dirname else x - for y in _rlistdir(path): - yield os.path.join(x, y) - - -magic_check = re.compile('([*?[])') -magic_check_bytes = re.compile(b'([*?[])') - - -def has_magic(s): - if isinstance(s, bytes): - match = magic_check_bytes.search(s) - else: - match = magic_check.search(s) - return match is not None - - -def _isrecursive(pattern): - if isinstance(pattern, bytes): - return pattern == b'**' - else: - return pattern == '**' - - -def escape(pathname): - """Escape all special characters. - """ - # Escaping is done by wrapping any of "*?[" between square brackets. - # Metacharacters do not work in the drive part and shouldn't be escaped. 
- drive, pathname = os.path.splitdrive(pathname) - if isinstance(pathname, bytes): - pathname = magic_check_bytes.sub(br'[\1]', pathname) - else: - pathname = magic_check.sub(r'[\1]', pathname) - return drive + pathname diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/url.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/url.py deleted file mode 100644 index e5682d3be4293191dc5704557b34e651c28f8895..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/url.py +++ /dev/null @@ -1,435 +0,0 @@ -from __future__ import absolute_import - -import re -from collections import namedtuple - -from ..exceptions import LocationParseError -from ..packages import six - -url_attrs = ["scheme", "auth", "host", "port", "path", "query", "fragment"] - -# We only want to normalize urls with an HTTP(S) scheme. -# urllib3 infers URLs without a scheme (None) to be http. -NORMALIZABLE_SCHEMES = ("http", "https", None) - -# Almost all of these patterns were derived from the -# 'rfc3986' module: https://github.com/python-hyper/rfc3986 -PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}") -SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)") -URI_RE = re.compile( - r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?" - r"(?://([^\\/?#]*))?" - r"([^?#]*)" - r"(?:\?([^#]*))?" - r"(?:#(.*))?$", - re.UNICODE | re.DOTALL, -) - -IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}" -HEX_PAT = "[0-9A-Fa-f]{1,4}" -LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT) -_subs = {"hex": HEX_PAT, "ls32": LS32_PAT} -_variations = [ - # 6( h16 ":" ) ls32 - "(?:%(hex)s:){6}%(ls32)s", - # "::" 5( h16 ":" ) ls32 - "::(?:%(hex)s:){5}%(ls32)s", - # [ h16 ] "::" 4( h16 ":" ) ls32 - "(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s", - # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32 - "(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s", - # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32 - "(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s", - # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32 - "(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s", - # [ *4( h16 ":" ) h16 ] "::" ls32 - "(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s", - # [ *5( h16 ":" ) h16 ] "::" h16 - "(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s", - # [ *6( h16 ":" ) h16 ] "::" - "(?:(?:%(hex)s:){0,6}%(hex)s)?::", -] - -UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._\-~" -IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")" -ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+" -IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]" -REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*" -TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$") - -IPV4_RE = re.compile("^" + IPV4_PAT + "$") -IPV6_RE = re.compile("^" + IPV6_PAT + "$") -IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT + "$") -BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT[2:-2] + "$") -ZONE_ID_RE = re.compile("(" + ZONE_ID_PAT + r")\]$") - -_HOST_PORT_PAT = ("^(%s|%s|%s)(?::0*?(|0|[1-9][0-9]{0,4}))?$") % ( - REG_NAME_PAT, - IPV4_PAT, - IPV6_ADDRZ_PAT, -) -_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL) - -UNRESERVED_CHARS = set( - "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~" -) -SUB_DELIM_CHARS = set("!$&'()*+,;=") -USERINFO_CHARS = UNRESERVED_CHARS | SUB_DELIM_CHARS | {":"} -PATH_CHARS = USERINFO_CHARS | {"@", "/"} -QUERY_CHARS = FRAGMENT_CHARS = PATH_CHARS | {"?"} - - -class Url(namedtuple("Url", url_attrs)): - """ - Data structure for 
representing an HTTP URL. Used as a return value for - :func:`parse_url`. Both the scheme and host are normalized as they are - both case-insensitive according to RFC 3986. - """ - - __slots__ = () - - def __new__( - cls, - scheme=None, - auth=None, - host=None, - port=None, - path=None, - query=None, - fragment=None, - ): - if path and not path.startswith("/"): - path = "/" + path - if scheme is not None: - scheme = scheme.lower() - return super(Url, cls).__new__( - cls, scheme, auth, host, port, path, query, fragment - ) - - @property - def hostname(self): - """For backwards-compatibility with urlparse. We're nice like that.""" - return self.host - - @property - def request_uri(self): - """Absolute path including the query string.""" - uri = self.path or "/" - - if self.query is not None: - uri += "?" + self.query - - return uri - - @property - def netloc(self): - """Network location including host and port""" - if self.port: - return "%s:%d" % (self.host, self.port) - return self.host - - @property - def url(self): - """ - Convert self into a url - - This function should more or less round-trip with :func:`.parse_url`. The - returned url may not be exactly the same as the url inputted to - :func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls - with a blank port will have : removed). - - Example: :: - - >>> U = parse_url('http://google.com/mail/') - >>> U.url - 'http://google.com/mail/' - >>> Url('http', 'username:password', 'host.com', 80, - ... '/path', 'query', 'fragment').url - 'http://username:password@host.com:80/path?query#fragment' - """ - scheme, auth, host, port, path, query, fragment = self - url = u"" - - # We use "is not None" we want things to happen with empty strings (or 0 port) - if scheme is not None: - url += scheme + u"://" - if auth is not None: - url += auth + u"@" - if host is not None: - url += host - if port is not None: - url += u":" + str(port) - if path is not None: - url += path - if query is not None: - url += u"?" + query - if fragment is not None: - url += u"#" + fragment - - return url - - def __str__(self): - return self.url - - -def split_first(s, delims): - """ - .. deprecated:: 1.25 - - Given a string and an iterable of delimiters, split on the first found - delimiter. Return two split parts and the matched delimiter. - - If not found, then the first part is the full input string. - - Example:: - - >>> split_first('foo/bar?baz', '?/=') - ('foo', 'bar?baz', '/') - >>> split_first('foo/bar?baz', '123') - ('foo/bar?baz', '', None) - - Scales linearly with number of delims. Not ideal for large number of delims. - """ - min_idx = None - min_delim = None - for d in delims: - idx = s.find(d) - if idx < 0: - continue - - if min_idx is None or idx < min_idx: - min_idx = idx - min_delim = d - - if min_idx is None or min_idx < 0: - return s, "", None - - return s[:min_idx], s[min_idx + 1 :], min_delim - - -def _encode_invalid_chars(component, allowed_chars, encoding="utf-8"): - """Percent-encodes a URI component without reapplying - onto an already percent-encoded component. - """ - if component is None: - return component - - component = six.ensure_text(component) - - # Normalize existing percent-encoded bytes. - # Try to see if the component we're encoding is already percent-encoded - # so we can skip all '%' characters but still encode all others. 
- component, percent_encodings = PERCENT_RE.subn( - lambda match: match.group(0).upper(), component - ) - - uri_bytes = component.encode("utf-8", "surrogatepass") - is_percent_encoded = percent_encodings == uri_bytes.count(b"%") - encoded_component = bytearray() - - for i in range(0, len(uri_bytes)): - # Will return a single character bytestring on both Python 2 & 3 - byte = uri_bytes[i : i + 1] - byte_ord = ord(byte) - if (is_percent_encoded and byte == b"%") or ( - byte_ord < 128 and byte.decode() in allowed_chars - ): - encoded_component += byte - continue - encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper())) - - return encoded_component.decode(encoding) - - -def _remove_path_dot_segments(path): - # See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code - segments = path.split("/") # Turn the path into a list of segments - output = [] # Initialize the variable to use to store output - - for segment in segments: - # '.' is the current directory, so ignore it, it is superfluous - if segment == ".": - continue - # Anything other than '..', should be appended to the output - elif segment != "..": - output.append(segment) - # In this case segment == '..', if we can, we should pop the last - # element - elif output: - output.pop() - - # If the path starts with '/' and the output is empty or the first string - # is non-empty - if path.startswith("/") and (not output or output[0]): - output.insert(0, "") - - # If the path starts with '/.' or '/..' ensure we add one more empty - # string to add a trailing '/' - if path.endswith(("/.", "/..")): - output.append("") - - return "/".join(output) - - -def _normalize_host(host, scheme): - if host: - if isinstance(host, six.binary_type): - host = six.ensure_str(host) - - if scheme in NORMALIZABLE_SCHEMES: - is_ipv6 = IPV6_ADDRZ_RE.match(host) - if is_ipv6: - # IPv6 hosts of the form 'a::b%zone' are encoded in a URL as - # such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID - # separator as necessary to return a valid RFC 4007 scoped IP. - match = ZONE_ID_RE.search(host) - if match: - start, end = match.span(1) - zone_id = host[start:end] - - if zone_id.startswith("%25") and zone_id != "%25": - zone_id = zone_id[3:] - else: - zone_id = zone_id[1:] - zone_id = "%" + _encode_invalid_chars(zone_id, UNRESERVED_CHARS) - return host[:start].lower() + zone_id + host[end:] - else: - return host.lower() - elif not IPV4_RE.match(host): - return six.ensure_str( - b".".join([_idna_encode(label) for label in host.split(".")]) - ) - return host - - -def _idna_encode(name): - if name and any(ord(x) >= 128 for x in name): - try: - import idna - except ImportError: - six.raise_from( - LocationParseError("Unable to parse URL without the 'idna' module"), - None, - ) - try: - return idna.encode(name.lower(), strict=True, std3_rules=True) - except idna.IDNAError: - six.raise_from( - LocationParseError(u"Name '%s' is not a valid IDNA label" % name), None - ) - return name.lower().encode("ascii") - - -def _encode_target(target): - """Percent-encodes a request target so that there are no invalid characters""" - path, query = TARGET_RE.match(target).groups() - target = _encode_invalid_chars(path, PATH_CHARS) - query = _encode_invalid_chars(query, QUERY_CHARS) - if query is not None: - target += "?" + query - return target - - -def parse_url(url): - """ - Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is - performed to parse incomplete urls. Fields not provided will be None. 
- This parser is RFC 3986 and RFC 6874 compliant. - - The parser logic and helper functions are based heavily on - work done in the ``rfc3986`` module. - - :param str url: URL to parse into a :class:`.Url` namedtuple. - - Partly backwards-compatible with :mod:`urlparse`. - - Example:: - - >>> parse_url('http://google.com/mail/') - Url(scheme='http', host='google.com', port=None, path='/mail/', ...) - >>> parse_url('google.com:80') - Url(scheme=None, host='google.com', port=80, path=None, ...) - >>> parse_url('/foo?bar') - Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...) - """ - if not url: - # Empty - return Url() - - source_url = url - if not SCHEME_RE.search(url): - url = "//" + url - - try: - scheme, authority, path, query, fragment = URI_RE.match(url).groups() - normalize_uri = scheme is None or scheme.lower() in NORMALIZABLE_SCHEMES - - if scheme: - scheme = scheme.lower() - - if authority: - auth, _, host_port = authority.rpartition("@") - auth = auth or None - host, port = _HOST_PORT_RE.match(host_port).groups() - if auth and normalize_uri: - auth = _encode_invalid_chars(auth, USERINFO_CHARS) - if port == "": - port = None - else: - auth, host, port = None, None, None - - if port is not None: - port = int(port) - if not (0 <= port <= 65535): - raise LocationParseError(url) - - host = _normalize_host(host, scheme) - - if normalize_uri and path: - path = _remove_path_dot_segments(path) - path = _encode_invalid_chars(path, PATH_CHARS) - if normalize_uri and query: - query = _encode_invalid_chars(query, QUERY_CHARS) - if normalize_uri and fragment: - fragment = _encode_invalid_chars(fragment, FRAGMENT_CHARS) - - except (ValueError, AttributeError): - return six.raise_from(LocationParseError(source_url), None) - - # For the sake of backwards compatibility we put empty - # string values for path if there are any defined values - # beyond the path in the URL. - # TODO: Remove this when we break backwards compatibility. - if not path: - if query is not None or fragment is not None: - path = "" - else: - path = None - - # Ensure that each part of the URL is a `str` for - # backwards compatibility. - if isinstance(url, six.text_type): - ensure_func = six.ensure_text - else: - ensure_func = six.ensure_str - - def ensure_type(x): - return x if x is None else ensure_func(x) - - return Url( - scheme=ensure_type(scheme), - auth=ensure_type(auth), - host=ensure_type(host), - port=port, - path=ensure_type(path), - query=ensure_type(query), - fragment=ensure_type(fragment), - ) - - -def get_host(url): - """ - Deprecated. Use :func:`parse_url` instead. 
- """ - p = parse_url(url) - return p.scheme or "http", p.hostname, p.port diff --git a/spaces/BigDL/bigdl_nano_demo/README.md b/spaces/BigDL/bigdl_nano_demo/README.md deleted file mode 100644 index cbf38c756ffd7b443b0ba82a9194909d1c0881e8..0000000000000000000000000000000000000000 --- a/spaces/BigDL/bigdl_nano_demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BigDL-Nano Demo -emoji: 🦄 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BimboAnon/BimboProxy/Dockerfile b/spaces/BimboAnon/BimboProxy/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/BimboAnon/BimboProxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/equal.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/equal.h deleted file mode 100644 index 13398fc9db5a02ba7cd7d2141f106fa59ba2a941..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/equal.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system inherits equal -#include - diff --git a/spaces/CVPR/Text2Human/Text2Human/ui/ui.py b/spaces/CVPR/Text2Human/Text2Human/ui/ui.py deleted file mode 100644 index 179a5ee796d5f8561eacf64b16d9a713b64983cf..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Text2Human/Text2Human/ui/ui.py +++ /dev/null @@ -1,313 +0,0 @@ -from PyQt5 import QtCore, QtGui, QtWidgets -from PyQt5.QtCore import * -from PyQt5.QtGui import * -from PyQt5.QtWidgets import * - - -class Ui_Form(object): - - def setupUi(self, Form): - Form.setObjectName("Form") - Form.resize(1250, 670) - - self.pushButton_2 = QtWidgets.QPushButton(Form) - self.pushButton_2.setGeometry(QtCore.QRect(20, 60, 97, 27)) - self.pushButton_2.setObjectName("pushButton_2") - - self.pushButton_6 = QtWidgets.QPushButton(Form) - self.pushButton_6.setGeometry(QtCore.QRect(20, 100, 97, 27)) - self.pushButton_6.setObjectName("pushButton_6") - - # Generate Parsing - self.pushButton_0 = QtWidgets.QPushButton(Form) - self.pushButton_0.setGeometry(QtCore.QRect(126, 60, 150, 27)) - self.pushButton_0.setObjectName("pushButton_0") - - # Generate Human - self.pushButton_1 = QtWidgets.QPushButton(Form) - self.pushButton_1.setGeometry(QtCore.QRect(126, 100, 150, 27)) - self.pushButton_1.setObjectName("pushButton_1") - - # shape text box - self.label_heading_1 = QtWidgets.QLabel(Form) - self.label_heading_1.setText('Describe the shape.') - self.label_heading_1.setObjectName("label_heading_1") - self.label_heading_1.setGeometry(QtCore.QRect(320, 20, 200, 20)) - - self.message_box_1 = QtWidgets.QLineEdit(Form) - self.message_box_1.setGeometry(QtCore.QRect(320, 50, 256, 80)) - self.message_box_1.setObjectName("message_box_1") - self.message_box_1.setAlignment(Qt.AlignTop) - - # texture text box - self.label_heading_2 = QtWidgets.QLabel(Form) - self.label_heading_2.setText('Describe the textures.') - self.label_heading_2.setObjectName("label_heading_2") - self.label_heading_2.setGeometry(QtCore.QRect(620, 20, 200, 20)) - - self.message_box_2 = QtWidgets.QLineEdit(Form) - self.message_box_2.setGeometry(QtCore.QRect(620, 50, 256, 80)) - self.message_box_2.setObjectName("message_box_2") - self.message_box_2.setAlignment(Qt.AlignTop) - - # title icon - self.title_icon = QtWidgets.QLabel(Form) - self.title_icon.setGeometry(QtCore.QRect(30, 10, 200, 50)) - self.title_icon.setPixmap( - QtGui.QPixmap('./ui/icons/icon_title.png').scaledToWidth(200)) - - # palette icon - self.palette_icon = QtWidgets.QLabel(Form) - self.palette_icon.setGeometry(QtCore.QRect(950, 10, 256, 128)) - self.palette_icon.setPixmap( - QtGui.QPixmap('./ui/icons/icon_palette.png').scaledToWidth(256)) - - # top - self.pushButton_8 = QtWidgets.QPushButton(' top', Form) - self.pushButton_8.setGeometry(QtCore.QRect(940, 120, 120, 27)) - self.pushButton_8.setObjectName("pushButton_8") - self.pushButton_8.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_8.setIcon(QIcon('./ui/color_blocks/class_top.png')) - # skin - self.pushButton_9 = QtWidgets.QPushButton(' skin', Form) - self.pushButton_9.setGeometry(QtCore.QRect(940, 165, 120, 27)) - self.pushButton_9.setObjectName("pushButton_9") - self.pushButton_9.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_9.setIcon(QIcon('./ui/color_blocks/class_skin.png')) - # outer - self.pushButton_10 = QtWidgets.QPushButton(' outer', Form) - self.pushButton_10.setGeometry(QtCore.QRect(940, 210, 120, 27)) - self.pushButton_10.setObjectName("pushButton_10") - 
self.pushButton_10.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_10.setIcon(QIcon('./ui/color_blocks/class_outer.png')) - # face - self.pushButton_11 = QtWidgets.QPushButton(' face', Form) - self.pushButton_11.setGeometry(QtCore.QRect(940, 255, 120, 27)) - self.pushButton_11.setObjectName("pushButton_11") - self.pushButton_11.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_11.setIcon(QIcon('./ui/color_blocks/class_face.png')) - # skirt - self.pushButton_12 = QtWidgets.QPushButton(' skirt', Form) - self.pushButton_12.setGeometry(QtCore.QRect(940, 300, 120, 27)) - self.pushButton_12.setObjectName("pushButton_12") - self.pushButton_12.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_12.setIcon(QIcon('./ui/color_blocks/class_skirt.png')) - # hair - self.pushButton_13 = QtWidgets.QPushButton(' hair', Form) - self.pushButton_13.setGeometry(QtCore.QRect(940, 345, 120, 27)) - self.pushButton_13.setObjectName("pushButton_13") - self.pushButton_13.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_13.setIcon(QIcon('./ui/color_blocks/class_hair.png')) - # dress - self.pushButton_14 = QtWidgets.QPushButton(' dress', Form) - self.pushButton_14.setGeometry(QtCore.QRect(940, 390, 120, 27)) - self.pushButton_14.setObjectName("pushButton_14") - self.pushButton_14.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_14.setIcon(QIcon('./ui/color_blocks/class_dress.png')) - # headwear - self.pushButton_15 = QtWidgets.QPushButton(' headwear', Form) - self.pushButton_15.setGeometry(QtCore.QRect(940, 435, 120, 27)) - self.pushButton_15.setObjectName("pushButton_15") - self.pushButton_15.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_15.setIcon( - QIcon('./ui/color_blocks/class_headwear.png')) - # pants - self.pushButton_16 = QtWidgets.QPushButton(' pants', Form) - self.pushButton_16.setGeometry(QtCore.QRect(940, 480, 120, 27)) - self.pushButton_16.setObjectName("pushButton_16") - self.pushButton_16.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_16.setIcon(QIcon('./ui/color_blocks/class_pants.png')) - # eyeglasses - self.pushButton_17 = QtWidgets.QPushButton(' eyeglass', Form) - self.pushButton_17.setGeometry(QtCore.QRect(940, 525, 120, 27)) - self.pushButton_17.setObjectName("pushButton_17") - self.pushButton_17.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_17.setIcon( - QIcon('./ui/color_blocks/class_eyeglass.png')) - # rompers - self.pushButton_18 = QtWidgets.QPushButton(' rompers', Form) - self.pushButton_18.setGeometry(QtCore.QRect(940, 570, 120, 27)) - self.pushButton_18.setObjectName("pushButton_18") - self.pushButton_18.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_18.setIcon( - QIcon('./ui/color_blocks/class_rompers.png')) - # footwear - self.pushButton_19 = QtWidgets.QPushButton(' footwear', Form) - self.pushButton_19.setGeometry(QtCore.QRect(940, 615, 120, 27)) - self.pushButton_19.setObjectName("pushButton_19") - self.pushButton_19.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_19.setIcon( - QIcon('./ui/color_blocks/class_footwear.png')) - - # leggings - self.pushButton_20 = QtWidgets.QPushButton(' leggings', Form) - self.pushButton_20.setGeometry(QtCore.QRect(1100, 120, 120, 27)) - self.pushButton_20.setObjectName("pushButton_10") - self.pushButton_20.setStyleSheet( - "text-align: left; padding-left: 10px;") - 
self.pushButton_20.setIcon( - QIcon('./ui/color_blocks/class_leggings.png')) - - # ring - self.pushButton_21 = QtWidgets.QPushButton(' ring', Form) - self.pushButton_21.setGeometry(QtCore.QRect(1100, 165, 120, 27)) - self.pushButton_21.setObjectName("pushButton_2`0`") - self.pushButton_21.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_21.setIcon(QIcon('./ui/color_blocks/class_ring.png')) - - # belt - self.pushButton_22 = QtWidgets.QPushButton(' belt', Form) - self.pushButton_22.setGeometry(QtCore.QRect(1100, 210, 120, 27)) - self.pushButton_22.setObjectName("pushButton_2`0`") - self.pushButton_22.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_22.setIcon(QIcon('./ui/color_blocks/class_belt.png')) - - # neckwear - self.pushButton_23 = QtWidgets.QPushButton(' neckwear', Form) - self.pushButton_23.setGeometry(QtCore.QRect(1100, 255, 120, 27)) - self.pushButton_23.setObjectName("pushButton_2`0`") - self.pushButton_23.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_23.setIcon( - QIcon('./ui/color_blocks/class_neckwear.png')) - - # wrist - self.pushButton_24 = QtWidgets.QPushButton(' wrist', Form) - self.pushButton_24.setGeometry(QtCore.QRect(1100, 300, 120, 27)) - self.pushButton_24.setObjectName("pushButton_2`0`") - self.pushButton_24.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_24.setIcon(QIcon('./ui/color_blocks/class_wrist.png')) - - # socks - self.pushButton_25 = QtWidgets.QPushButton(' socks', Form) - self.pushButton_25.setGeometry(QtCore.QRect(1100, 345, 120, 27)) - self.pushButton_25.setObjectName("pushButton_2`0`") - self.pushButton_25.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_25.setIcon(QIcon('./ui/color_blocks/class_socks.png')) - - # tie - self.pushButton_26 = QtWidgets.QPushButton(' tie', Form) - self.pushButton_26.setGeometry(QtCore.QRect(1100, 390, 120, 27)) - self.pushButton_26.setObjectName("pushButton_2`0`") - self.pushButton_26.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_26.setIcon(QIcon('./ui/color_blocks/class_tie.png')) - - # earstuds - self.pushButton_27 = QtWidgets.QPushButton(' necklace', Form) - self.pushButton_27.setGeometry(QtCore.QRect(1100, 435, 120, 27)) - self.pushButton_27.setObjectName("pushButton_2`0`") - self.pushButton_27.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_27.setIcon( - QIcon('./ui/color_blocks/class_necklace.png')) - - # necklace - self.pushButton_28 = QtWidgets.QPushButton(' earstuds', Form) - self.pushButton_28.setGeometry(QtCore.QRect(1100, 480, 120, 27)) - self.pushButton_28.setObjectName("pushButton_2`0`") - self.pushButton_28.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_28.setIcon( - QIcon('./ui/color_blocks/class_earstuds.png')) - - # bag - self.pushButton_29 = QtWidgets.QPushButton(' bag', Form) - self.pushButton_29.setGeometry(QtCore.QRect(1100, 525, 120, 27)) - self.pushButton_29.setObjectName("pushButton_2`0`") - self.pushButton_29.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_29.setIcon(QIcon('./ui/color_blocks/class_bag.png')) - - # glove - self.pushButton_30 = QtWidgets.QPushButton(' glove', Form) - self.pushButton_30.setGeometry(QtCore.QRect(1100, 570, 120, 27)) - self.pushButton_30.setObjectName("pushButton_2`0`") - self.pushButton_30.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_30.setIcon(QIcon('./ui/color_blocks/class_glove.png')) 
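Each garment-class button above repeats the same five calls: a QPushButton constructor with a text label, setGeometry, setObjectName, a left-aligned stylesheet, and an icon from ./ui/color_blocks/. A small table-driven helper along the lines below could build the palette columns from a list of (label, x, y, index) entries. This is only an editorial sketch: it reuses the PyQt5 wildcard imports already at the top of ui.py, and the helper name is hypothetical, not something present in the original file.

# Editorial sketch only -- not part of the deleted ui.py. Assumes the PyQt5
# wildcard imports at the top of this file; `_make_class_button` is hypothetical.
def _make_class_button(form, label, x, y, index):
    # Reproduce the per-class pattern used above: fixed 120x27 geometry,
    # left-aligned text with padding, and an icon named after the class.
    button = QtWidgets.QPushButton(' ' + label, form)
    button.setGeometry(QtCore.QRect(x, y, 120, 27))
    button.setObjectName('pushButton_%d' % index)
    button.setStyleSheet('text-align: left; padding-left: 10px;')
    button.setIcon(QIcon('./ui/color_blocks/class_%s.png' % label))
    return button

# Example for the right-hand palette column (x=1100, rows 45 px apart):
# self.pushButton_21 = _make_class_button(Form, 'ring', 1100, 165, 21)
# self.pushButton_22 = _make_class_button(Form, 'belt', 1100, 210, 22)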
- - # background - self.pushButton_31 = QtWidgets.QPushButton(' background', Form) - self.pushButton_31.setGeometry(QtCore.QRect(1100, 615, 120, 27)) - self.pushButton_31.setObjectName("pushButton_2`0`") - self.pushButton_31.setStyleSheet( - "text-align: left; padding-left: 10px;") - self.pushButton_31.setIcon(QIcon('./ui/color_blocks/class_bg.png')) - - self.graphicsView = QtWidgets.QGraphicsView(Form) - self.graphicsView.setGeometry(QtCore.QRect(20, 140, 256, 512)) - self.graphicsView.setObjectName("graphicsView") - self.graphicsView_2 = QtWidgets.QGraphicsView(Form) - self.graphicsView_2.setGeometry(QtCore.QRect(320, 140, 256, 512)) - self.graphicsView_2.setObjectName("graphicsView_2") - self.graphicsView_3 = QtWidgets.QGraphicsView(Form) - self.graphicsView_3.setGeometry(QtCore.QRect(620, 140, 256, 512)) - self.graphicsView_3.setObjectName("graphicsView_3") - - self.retranslateUi(Form) - self.pushButton_2.clicked.connect(Form.open_densepose) - self.pushButton_6.clicked.connect(Form.save_img) - self.pushButton_8.clicked.connect(Form.top_mode) - self.pushButton_9.clicked.connect(Form.skin_mode) - self.pushButton_10.clicked.connect(Form.outer_mode) - self.pushButton_11.clicked.connect(Form.face_mode) - self.pushButton_12.clicked.connect(Form.skirt_mode) - self.pushButton_13.clicked.connect(Form.hair_mode) - self.pushButton_14.clicked.connect(Form.dress_mode) - self.pushButton_15.clicked.connect(Form.headwear_mode) - self.pushButton_16.clicked.connect(Form.pants_mode) - self.pushButton_17.clicked.connect(Form.eyeglass_mode) - self.pushButton_18.clicked.connect(Form.rompers_mode) - self.pushButton_19.clicked.connect(Form.footwear_mode) - self.pushButton_20.clicked.connect(Form.leggings_mode) - self.pushButton_21.clicked.connect(Form.ring_mode) - self.pushButton_22.clicked.connect(Form.belt_mode) - self.pushButton_23.clicked.connect(Form.neckwear_mode) - self.pushButton_24.clicked.connect(Form.wrist_mode) - self.pushButton_25.clicked.connect(Form.socks_mode) - self.pushButton_26.clicked.connect(Form.tie_mode) - self.pushButton_27.clicked.connect(Form.earstuds_mode) - self.pushButton_28.clicked.connect(Form.necklace_mode) - self.pushButton_29.clicked.connect(Form.bag_mode) - self.pushButton_30.clicked.connect(Form.glove_mode) - self.pushButton_31.clicked.connect(Form.background_mode) - self.pushButton_0.clicked.connect(Form.generate_parsing) - self.pushButton_1.clicked.connect(Form.generate_human) - - QtCore.QMetaObject.connectSlotsByName(Form) - - def retranslateUi(self, Form): - _translate = QtCore.QCoreApplication.translate - Form.setWindowTitle(_translate("Form", "Text2Human")) - self.pushButton_2.setText(_translate("Form", "Load Pose")) - self.pushButton_6.setText(_translate("Form", "Save Image")) - - self.pushButton_0.setText(_translate("Form", "Generate Parsing")) - self.pushButton_1.setText(_translate("Form", "Generate Human")) - - -if __name__ == "__main__": - import sys - app = QtWidgets.QApplication(sys.argv) - Form = QtWidgets.QWidget() - ui = Ui_Form() - ui.setupUi(Form) - Form.show() - sys.exit(app.exec_()) diff --git a/spaces/CVPR/WALT/mmdet/datasets/pipelines/compose.py b/spaces/CVPR/WALT/mmdet/datasets/pipelines/compose.py deleted file mode 100644 index ca48f1c935755c486edc2744e1713e2b5ba3cdc8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/pipelines/compose.py +++ /dev/null @@ -1,51 +0,0 @@ -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - 
"""Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/spaces/CVPR/lama-example/saicinpainting/training/trainers/__init__.py b/spaces/CVPR/lama-example/saicinpainting/training/trainers/__init__.py deleted file mode 100644 index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/trainers/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -import torch -from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule - - -def get_training_model_class(kind): - if kind == 'default': - return DefaultInpaintingTrainingModule - - raise ValueError(f'Unknown trainer module {kind}') - - -def make_training_model(config): - kind = config.training_model.kind - kwargs = dict(config.training_model) - kwargs.pop('kind') - kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp' - - logging.info(f'Make training model {kind}') - - cls = get_training_model_class(kind) - return cls(config, **kwargs) - - -def load_checkpoint(train_config, path, map_location='cuda', strict=True): - model: torch.nn.Module = make_training_model(train_config) - state = torch.load(path, map_location=map_location) - model.load_state_dict(state['state_dict'], strict=strict) - model.on_load_checkpoint(state) - return model diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Lockchat.py b/spaces/CofAI/chat/g4f/Provider/Providers/Lockchat.py deleted file mode 100644 index 1bce74035403bf8615e68ccfcc9deb7e0151817a..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/Lockchat.py +++ /dev/null @@ -1,32 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints -url = 'http://supertest.lockchat.app' -model = ['gpt-4', 'gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - - payload = { - "temperature": 0.7, - "messages": messages, - "model": model, - "stream": True, - } - headers = { - "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0", - } - response = requests.post("http://supertest.lockchat.app/v1/chat/completions", - json=payload, headers=headers, stream=True) - for token in response.iter_lines(): - if b'The model: `gpt-4` does not exist' in token: - print('error, retrying...') - _create_completion(model=model, messages=messages, stream=stream, temperature=temperature, **kwargs) - if b"content" in token: - token = 
json.loads(token.decode('utf-8').split('data: ')[1])['choices'][0]['delta'].get('content') - if token: yield (token) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/README.md b/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/README.md deleted file mode 100644 index 5537e47b29bc7b517c5b222bdbfb1e478bffbe78..0000000000000000000000000000000000000000 --- a/spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: CatCon Controlnet WD 1 5 B2 -emoji: 🐱 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false -license: mit -tags: -- jax-diffusers-event ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/README.md b/spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/README.md deleted file mode 100644 index efe21b10b2be609f8cd7f202c3a017e4407c0925..0000000000000000000000000000000000000000 --- a/spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cosmo Hug FeverDream -emoji: 📉 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v3/app.py b/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v3/app.py deleted file mode 100644 index c83d6f98cc19837b6e746bbbe5264685cc2965f9..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/Segmento_de_Angio_Coronariana_v3/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import os -os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" - -import gradio as gr -import torch -import cv2 -import numpy as np -from preprocess import unsharp_masking -import time -from sklearn.cluster import KMeans - -device = "cuda" if torch.cuda.is_available() else "cpu" - -# Função para ordenar e pré-processar a imagem de entrada -def ordenar_arquivos(img, modelo): - ori = img.copy() - img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - h, w = img.shape - img_out = preprocessamento(img, modelo) - return img_out, h, w, img, ori - -# Função para pré-processar a imagem com base no modelo selecionado -def preprocessamento(img, modelo='SE-RegUNet 4GF'): - img = cv2.resize(img, (512, 512)) - img = unsharp_masking(img).astype(np.uint8) - if modelo == 'AngioNet' or modelo == 'UNet3+': - img = np.float32((img - img.min()) / (img.max() - img.min() + 1e-6)) - img_out = np.expand_dims(img, axis=0) - elif modelo == 'SE-RegUNet 4GF': - clahe1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - clahe2 = cv2.createCLAHE(clipLimit=8.0, tileGridSize=(8, 8)) - image1 = clahe1.apply(img) - image2 = clahe2.apply(img) - img = np.float32((img - img.min()) / (img.max() - img.min() + 1e-6)) - image1 = np.float32((image1 - image1.min()) / (image1.max() - image1.min() + 1e-6)) - image2 = np.float32((image2 - 
image2.min()) / (image2.max() - image2.min() + 1e-6)) - img_out = np.stack((img, image1, image2), axis=0) - else: - clahe1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - image1 = clahe1.apply(img) - image1 = np.float32((image1 - image1.min()) / (image1.max() - image1.min() + 1e-6)) - img_out = np.stack((image1,) * 3, axis=0) - return img_out - -# Função para processar a imagem de entrada -def processar_imagem_de_entrada(img, modelo, pipe): - img = img.copy() - pipe = pipe.to(device).eval() - start = time.time() - img, h, w, ori_gray, ori = ordenar_arquivos(img, modelo) - img = torch.FloatTensor(img).unsqueeze(0).to(device) - with torch.no_grad(): - if modelo == 'AngioNet': - img = torch.cat([img, img], dim=0) - logit = np.round(torch.softmax(pipe.forward(img), dim=1).detach().cpu().numpy()[0, 0]).astype(np.uint8) - spent = time.time() - start - spent = f"{spent:.3f} segundos" - - if h != 512 or w != 512: - logit = cv2.resize(logit, (h, w)) - - logit = logit.astype(bool) - img_out = ori.copy() - img_out[logit, 0] = 255 - return spent, img_out - -# Carregar modelos pré-treinados -models = { - 'SE-RegUNet 4GF': torch.jit.load('./model/SERegUNet4GF.pt'), - 'SE-RegUNet 16GF': torch.jit.load('./model/SERegUNet16GF.pt'), - 'AngioNet': torch.jit.load('./model/AngioNet.pt'), - 'EffUNet++ B5': torch.jit.load('./model/EffUNetppb5.pt'), - 'Reg-SA-UNet++': torch.jit.load('./model/RegSAUnetpp.pt'), - 'UNet3+': torch.jit.load('./model/UNet3plus.pt'), -} - -def processar_imagem_de_entrada_wrapper(img, modelo): - model = models[modelo] - spent, img_out = processar_imagem_de_entrada(img, modelo, model) - - # Verificar se há doença usando K-Means - kmeans = KMeans(n_clusters=2, random_state=0) - flattened_img = img_out[:, :, 0].reshape((-1, 1)) # Use the intensity channel - kmeans.fit(flattened_img) - labels = kmeans.labels_ - area_0 = np.sum(labels == 0) - area_1 = np.sum(labels == 1) - has_disease_flag = area_1 >= 200 - - # Formatar o indicador de doença como uma string - if has_disease_flag: - status_doenca = "Sim" - else: - status_doenca = "Não" - - # Adicionar a explicação com base no status de doença - if has_disease_flag: - explanation = "A máquina detectou uma possível doença nos vasos sanguíneos." - else: - explanation = "A máquina não detectou nenhuma doença nos vasos sanguíneos." - - # ... 
(resto do seu código, se houver mais) - - return spent, img_out, status_doenca, explanation - -# Criar a interface Gradio -my_app = gr.Interface( - fn=processar_imagem_de_entrada_wrapper, - inputs=[ - gr.inputs.Image(label="Angiograma:", shape=(512, 512)), - gr.inputs.Dropdown(['SE-RegUNet 4GF', 'SE-RegUNet 16GF', 'AngioNet', 'EffUNet++ B5', 'Reg-SA-UNet++', 'UNet3+'], label='Modelo', default='SE-RegUNet 4GF'), - ], - outputs=[ - gr.outputs.Label(label="Tempo decorrido"), - gr.outputs.Image(type="numpy", label="Imagem de Saída"), - gr.outputs.Label(label="Possui Doença?"), - gr.outputs.Label(label="Explicação"), - ], - title="Segmentação de Angiograma Coronariano", - description="Esta aplicação segmenta angiogramas coronarianos usando modelos de segmentação pré-treinados.", - theme="default", - layout="vertical", - allow_flagging=False, -) - -# Iniciar a interface Gradio -my_app.launch() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_ssl.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_ssl.py deleted file mode 100644 index c99c5a67945b8a3a3544d481e979c791ab45fe23..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpcore/_ssl.py +++ /dev/null @@ -1,9 +0,0 @@ -import ssl - -import certifi - - -def default_ssl_context() -> ssl.SSLContext: - context = ssl.create_default_context() - context.load_verify_locations(certifi.where()) - return context diff --git a/spaces/Detomo/ai-avatar-frontend/README.md b/spaces/Detomo/ai-avatar-frontend/README.md deleted file mode 100644 index 034603b2c374683961f360c4e04b64671706d7a0..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-avatar-frontend/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Detomo Ai Avatar -emoji: 💻 -colorFrom: purple -colorTo: blue -sdk: docker -pinned: false -license: apache-2.0 -app_port: 3000 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/collapsible.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/collapsible.tsx deleted file mode 100644 index 9fa48946afd1eb56bd932377fd888e3986304676..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/components/ui/collapsible.tsx +++ /dev/null @@ -1,11 +0,0 @@ -"use client" - -import * as CollapsiblePrimitive from "@radix-ui/react-collapsible" - -const Collapsible = CollapsiblePrimitive.Root - -const CollapsibleTrigger = CollapsiblePrimitive.CollapsibleTrigger - -const CollapsibleContent = CollapsiblePrimitive.CollapsibleContent - -export { Collapsible, CollapsibleTrigger, CollapsibleContent } diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/autosummary.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/autosummary.py deleted file mode 100644 index 272f054eea659e7191c7c71ae3745eefe5f82411..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/autosummary.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Helper for adding automatically tracked values to Tensorboard. 
- -Autosummary creates an identity op that internally keeps track of the input -values and automatically shows up in TensorBoard. The reported value -represents an average over input components. The average is accumulated -constantly over time and flushed when save_summaries() is called. - -Notes: -- The output tensor must be used as an input for something else in the - graph. Otherwise, the autosummary op will not get executed, and the average - value will not get accumulated. -- It is perfectly fine to include autosummaries with the same name in - several places throughout the graph, even if they are executed concurrently. -- It is ok to also pass in a python scalar or numpy array. In this case, it - is added to the average immediately. -""" - -from collections import OrderedDict -import numpy as np -import tensorflow as tf -from tensorboard import summary as summary_lib -from tensorboard.plugins.custom_scalar import layout_pb2 - -from . import tfutil -from .tfutil import TfExpression -from .tfutil import TfExpressionEx - -# Enable "Custom scalars" tab in TensorBoard for advanced formatting. -# Disabled by default to reduce tfevents file size. -enable_custom_scalars = False - -_dtype = tf.float64 -_vars = OrderedDict() # name => [var, ...] -_immediate = OrderedDict() # name => update_op, update_value -_finalized = False -_merge_op = None - - -def _create_var(name: str, value_expr: TfExpression) -> TfExpression: - """Internal helper for creating autosummary accumulators.""" - assert not _finalized - name_id = name.replace("/", "_") - v = tf.cast(value_expr, _dtype) - - if v.shape.is_fully_defined(): - size = np.prod(v.shape.as_list()) - size_expr = tf.constant(size, dtype=_dtype) - else: - size = None - size_expr = tf.reduce_prod(tf.cast(tf.shape(v), _dtype)) - - if size == 1: - if v.shape.ndims != 0: - v = tf.reshape(v, []) - v = [size_expr, v, tf.square(v)] - else: - v = [size_expr, tf.reduce_sum(v), tf.reduce_sum(tf.square(v))] - v = tf.cond(tf.is_finite(v[1]), lambda: tf.stack( - v), lambda: tf.zeros(3, dtype=_dtype)) - - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.control_dependencies(None): - # [sum(1), sum(x), sum(x**2)] - var = tf.Variable(tf.zeros(3, dtype=_dtype), trainable=False) - update_op = tf.cond(tf.is_variable_initialized( - var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v)) - - if name in _vars: - _vars[name].append(var) - else: - _vars[name] = [var] - return update_op - - -def autosummary(name: str, value: TfExpressionEx, passthru: TfExpressionEx = None, condition: TfExpressionEx = True) -> TfExpressionEx: - """Create a new autosummary. - - Args: - name: Name to use in TensorBoard - value: TensorFlow expression or python value to track - passthru: Optionally return this TF node without modifications but tack an autosummary update side-effect to this node. 
- - Example use of the passthru mechanism: - - n = autosummary('l2loss', loss, passthru=n) - - This is a shorthand for the following code: - - with tf.control_dependencies([autosummary('l2loss', loss)]): - n = tf.identity(n) - """ - tfutil.assert_tf_initialized() - name_id = name.replace("/", "_") - - if tfutil.is_tf_expression(value): - with tf.name_scope("summary_" + name_id), tf.device(value.device): - condition = tf.convert_to_tensor(condition, name='condition') - update_op = tf.cond(condition, lambda: tf.group( - _create_var(name, value)), tf.no_op) - with tf.control_dependencies([update_op]): - return tf.identity(value if passthru is None else passthru) - - else: # python scalar or numpy array - assert not tfutil.is_tf_expression(passthru) - assert not tfutil.is_tf_expression(condition) - if condition: - if name not in _immediate: - with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.device(None), tf.control_dependencies(None): - update_value = tf.placeholder(_dtype) - update_op = _create_var(name, update_value) - _immediate[name] = update_op, update_value - update_op, update_value = _immediate[name] - tfutil.run(update_op, {update_value: value}) - return value if passthru is None else passthru - - -def finalize_autosummaries() -> None: - """Create the necessary ops to include autosummaries in TensorBoard report. - Note: This should be done only once per graph. - """ - global _finalized - tfutil.assert_tf_initialized() - - if _finalized: - return None - - _finalized = True - tfutil.init_uninitialized_vars( - [var for vars_list in _vars.values() for var in vars_list]) - - # Create summary ops. - with tf.device(None), tf.control_dependencies(None): - for name, vars_list in _vars.items(): - name_id = name.replace("/", "_") - with tfutil.absolute_name_scope("Autosummary/" + name_id): - moments = tf.add_n(vars_list) - moments /= moments[0] - # read before resetting - with tf.control_dependencies([moments]): - reset_ops = [tf.assign(var, tf.zeros( - 3, dtype=_dtype)) for var in vars_list] - # reset before reporting - with tf.name_scope(None), tf.control_dependencies(reset_ops): - mean = moments[1] - std = tf.sqrt(moments[2] - tf.square(moments[1])) - tf.summary.scalar(name, mean) - if enable_custom_scalars: - tf.summary.scalar( - "xCustomScalars/" + name + "/margin_lo", mean - std) - tf.summary.scalar( - "xCustomScalars/" + name + "/margin_hi", mean + std) - - # Setup layout for custom scalars. 
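    # The layout block below only runs when enable_custom_scalars is True. It splits
    # each summary name "category/chart/series" into nested dicts, then emits a
    # layout_pb2.Layout with one Category per leading path segment, one Chart per
    # middle segment (falling back to the last segment), and one
    # MarginChartContent.Series per summary whose shaded band is read from the
    # "xCustomScalars/<name>/margin_lo" / "margin_hi" scalars written above,
    # i.e. mean - std and mean + std.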
- layout = None - if enable_custom_scalars: - cat_dict = OrderedDict() - for series_name in sorted(_vars.keys()): - p = series_name.split("/") - cat = p[0] if len(p) >= 2 else "" - chart = "/".join(p[1:-1]) if len(p) >= 3 else p[-1] - if cat not in cat_dict: - cat_dict[cat] = OrderedDict() - if chart not in cat_dict[cat]: - cat_dict[cat][chart] = [] - cat_dict[cat][chart].append(series_name) - categories = [] - for cat_name, chart_dict in cat_dict.items(): - charts = [] - for chart_name, series_names in chart_dict.items(): - series = [] - for series_name in series_names: - series.append(layout_pb2.MarginChartContent.Series( - value=series_name, - lower="xCustomScalars/" + series_name + "/margin_lo", - upper="xCustomScalars/" + series_name + "/margin_hi")) - margin = layout_pb2.MarginChartContent(series=series) - charts.append(layout_pb2.Chart( - title=chart_name, margin=margin)) - categories.append(layout_pb2.Category( - title=cat_name, chart=charts)) - layout = summary_lib.custom_scalar_pb( - layout_pb2.Layout(category=categories)) - return layout - - -def save_summaries(file_writer, global_step=None): - """Call FileWriter.add_summary() with all summaries in the default graph, - automatically finalizing and merging them on the first call. - """ - global _merge_op - tfutil.assert_tf_initialized() - - if _merge_op is None: - layout = finalize_autosummaries() - if layout is not None: - file_writer.add_summary(layout) - with tf.device(None), tf.control_dependencies(None): - _merge_op = tf.summary.merge_all() - - file_writer.add_summary(_merge_op.eval(), global_step) diff --git a/spaces/DragGan/DragGan-Inversion/visualizer_drag_gradio_inversion.py b/spaces/DragGan/DragGan-Inversion/visualizer_drag_gradio_inversion.py deleted file mode 100644 index 556ced293e772c665dba71d049b8d31142c1e3f2..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/visualizer_drag_gradio_inversion.py +++ /dev/null @@ -1,1002 +0,0 @@ -# https://huggingface.co/DragGan/DragGan-Models -# https://arxiv.org/abs/2305.10973 -import os -import os.path as osp -from argparse import ArgumentParser -from functools import partial -from pathlib import Path -import time -import tempfile -import psutil - -import gradio as gr -import numpy as np -import torch -from PIL import Image -import uuid - -import dnnlib -from gradio_utils import ( - ImageMask, - draw_mask_on_image, - draw_points_on_image, - get_latest_points_pair, - get_valid_mask, - on_change_single_global_state, -) -from viz.renderer import Renderer, add_watermark_np -from torch_utils.pti import run_PTI, export_updated_pickle - -# download models from Hugging Face hub -from huggingface_hub import snapshot_download - -model_dir = Path("./checkpoints") -snapshot_download("DragGan/DragGan-Models", repo_type="model", local_dir=model_dir) - -# parser = ArgumentParser() -# parser.add_argument('--share', action='store_true') -# parser.add_argument('--cache-dir', type=str, default='./checkpoints') -# args = parser.parse_args() - -cache_dir = model_dir - -device = "cuda" -IS_SPACE = "DragGan/DragGan" in os.environ.get("SPACE_ID", "") -TIMEOUT = 80 - - -def reverse_point_pairs(points): - new_points = [] - for p in points: - new_points.append([p[1], p[0]]) - return new_points - - -def clear_state(global_state, target=None): - """Clear target history state from global_state - If target is not defined, points and mask will be both removed. - 1. set global_state['points'] as empty dict - 2. set global_state['mask'] as full-one mask. 
- """ - if target is None: - target = ["point", "mask"] - if not isinstance(target, list): - target = [target] - if "point" in target: - global_state["points"] = dict() - print("Clear Points State!") - if "mask" in target: - image_raw = global_state["images"]["image_raw"] - global_state["mask"] = np.ones( - (image_raw.size[1], image_raw.size[0]), dtype=np.uint8 - ) - print("Clear mask State!") - - return global_state - - -def init_images(global_state): - """This function is called only ones with Gradio App is started. - 0. pre-process global_state, unpack value from global_state of need - 1. Re-init renderer - 2. run `renderer._render_drag_impl` with `is_drag=False` to generate - new image - 3. Assign images to global state and re-generate mask - """ - - if isinstance(global_state, gr.State): - state = global_state.value - else: - state = global_state - - state["renderer"].init_network( - state["generator_params"], # res - state["pretrained_weight"], # pkl - state["params"]["seed"], # w0_seed, - state["w_pivot"], # w_load - state["params"]["latent_space"] == "w+", # w_plus - "const", - state["params"]["trunc_psi"], # trunc_psi, - state["params"]["trunc_cutoff"], # trunc_cutoff, - None, # input_transform - state["params"]["lr"], # lr, - ) - - state["renderer"]._render_drag_impl( - state["generator_params"], is_drag=False, to_pil=True - ) - - init_image = state["generator_params"].image - state["images"]["image_orig"] = init_image - state["images"]["image_raw"] = init_image - state["images"]["image_show"] = Image.fromarray( - add_watermark_np(np.array(init_image)) - ) - state["mask"] = np.ones((init_image.size[1], init_image.size[0]), dtype=np.uint8) - return global_state - - -def update_image_draw(image, points, mask, show_mask, global_state=None): - image_draw = draw_points_on_image(image, points) - if ( - show_mask - and mask is not None - and not (mask == 0).all() - and not (mask == 1).all() - ): - image_draw = draw_mask_on_image(image_draw, mask) - - image_draw = Image.fromarray(add_watermark_np(np.array(image_draw))) - if global_state is not None: - global_state["images"]["image_show"] = image_draw - return image_draw - - -def preprocess_mask_info(global_state, image): - """Function to handle mask information. - 1. last_mask is None: Do not need to change mask, return mask - 2. last_mask is not None: - 2.1 global_state is remove_mask: - 2.2 global_state is add_mask: - """ - if isinstance(image, dict): - last_mask = get_valid_mask(image["mask"]) - else: - last_mask = None - mask = global_state["mask"] - - # mask in global state is a placeholder with all 1. 
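    # If the stored mask is still the all-ones placeholder, it is first replaced by
    # the freshly drawn mask; the editing mode below then decides how the drawn mask
    # is merged in: "remove_mask" subtracts the drawn region via
    # clip(mask - last_mask, 0, 1), "add_mask" unions it in via
    # clip(mask + last_mask, 0, 1), and any other state leaves the stored mask unchanged.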
- if (mask == 1).all(): - mask = last_mask - - # last_mask = global_state['last_mask'] - editing_mode = global_state["editing_state"] - - if last_mask is None: - return global_state - - if editing_mode == "remove_mask": - updated_mask = np.clip(mask - last_mask, 0, 1) - print(f"Last editing_state is {editing_mode}, do remove.") - elif editing_mode == "add_mask": - updated_mask = np.clip(mask + last_mask, 0, 1) - print(f"Last editing_state is {editing_mode}, do add.") - else: - updated_mask = mask - print(f"Last editing_state is {editing_mode}, " "do nothing to mask.") - - global_state["mask"] = updated_mask - # global_state['last_mask'] = None # clear buffer - return global_state - - -def print_memory_usage(): - # Print system memory usage - print(f"System memory usage: {psutil.virtual_memory().percent}%") - - # Print GPU memory usage - if torch.cuda.is_available(): - device = torch.device("cuda") - print(f"GPU memory usage: {torch.cuda.memory_allocated() / 1e9} GB") - print(f"Max GPU memory usage: {torch.cuda.max_memory_allocated() / 1e9} GB") - device_properties = torch.cuda.get_device_properties(device) - available_memory = ( - device_properties.total_memory - torch.cuda.max_memory_allocated() - ) - print(f"Available GPU memory: {available_memory / 1e9} GB") - else: - print("No GPU available") - - -# filter large models running on SPAC - -css = """ -#output-image { - width: 100% !important; - aspect-ratio: 1 / 1 !important; - height: auto !important; -} -#output-image canvas { - width: 100% !important; - aspect-ratio: 1 / 1 !important; - height: auto !important; -} -""" -with gr.Blocks(css=css) as app: - gr.Markdown( - """ -# DragGAN - Drag Your GAN - Face Inversion - -## Interactive Point-based Manipulation on the Generative Image Manifold -### Unofficial Gradio Demo - -**Due to high demand, only one model can be run at a time, or you can duplicate the space and run your own copy.** - - -Duplicate Space for no queue on your own hardware.

    - -* Official Repo: [XingangPan](https://github.com/XingangPan/DragGAN) -* Gradio Demo by: [LeoXing1996](https://github.com/LeoXing1996) © [OpenMMLab MMagic](https://github.com/open-mmlab/mmagic) -* Inversion Code: [ProgrammingHut](https://www.youtube.com/watch?v=viWiOC1Mikw), [EthanZhangCN](https://github.com/EthanZhangCN) -""" - ) - - # renderer = Renderer() - global_state = gr.State( - { - "images": { - # image_orig: the original image, change with seed/model is changed - # image_raw: image with mask and points, change durning optimization - # image_show: image showed on screen - }, - "temporal_params": { - # stop - }, - "w_pivot": None, - "mask": None, # mask for visualization, 1 for editing and 0 for unchange - "last_mask": None, # last edited mask - "show_mask": True, # add button - "generator_params": dnnlib.EasyDict(), - "params": { - "seed": int(np.random.randint(0, 2**32 - 1)), - "motion_lambda": 20, - "r1_in_pixels": 3, - "r2_in_pixels": 12, - "magnitude_direction_in_pixels": 1.0, - "latent_space": "w+", - "trunc_psi": 0.7, - "trunc_cutoff": None, - "lr": 0.01, - }, - "device": device, - "draw_interval": 1, - "renderer": Renderer(disable_timing=True), - "points": {}, - "curr_point": None, - "curr_type_point": "start", - "editing_state": "add_points", - "pretrained_weight": str(model_dir / "stylegan2-ffhq1024x1024.pkl"), - } - ) - - # init image - global_state = init_images(global_state) - with gr.Row(): - with gr.Row(): - # Left --> tools - with gr.Column(): - # Latent - with gr.Row(): - with gr.Column(scale=1, min_width=10): - gr.Markdown(value="Latent", show_label=False) - - with gr.Column(scale=4, min_width=10): - form_seed_number = gr.Slider( - mininium=0, - maximum=2**32 - 1, - step=1, - value=global_state.value["params"]["seed"], - interactive=True, - # randomize=True, - label="Seed", - ) - form_lr_number = gr.Number( - value=global_state.value["params"]["lr"], - precision=5, - interactive=True, - label="Step Size", - ) - - with gr.Row(): - with gr.Column(scale=2, min_width=10): - form_reset_image = gr.Button("Reset Image") - with gr.Column(scale=3, min_width=10): - form_latent_space = gr.Radio( - ["w", "w+"], - value=global_state.value["params"]["latent_space"], - interactive=True, - label="Latent space to optimize", - show_label=False, - ) - with gr.Row(): - with gr.Column(scale=3, min_width=10): - form_custom_image = gr.Image( - type="filepath", label="Custom Image", height=100 - ) - with gr.Column(scale=3, min_width=10): - form_reset_custom_image = gr.Button( - "Remove Custom Image", interactive=False - ) - - # Drag - with gr.Row(): - with gr.Column(scale=1, min_width=10): - gr.Markdown(value="Drag", show_label=False) - with gr.Column(scale=4, min_width=10): - with gr.Row(): - with gr.Column(scale=1, min_width=10): - enable_add_points = gr.Button("Add Points") - with gr.Column(scale=1, min_width=10): - undo_points = gr.Button("Reset Points") - with gr.Row(): - with gr.Column(scale=1, min_width=10): - form_start_btn = gr.Button("Start") - with gr.Column(scale=1, min_width=10): - form_stop_btn = gr.Button("Stop") - - form_steps_number = gr.Number( - value=0, label="Steps", interactive=False - ) - - # Mask - with gr.Row(): - with gr.Column(scale=1, min_width=10): - gr.Markdown(value="Mask", show_label=False) - with gr.Column(scale=4, min_width=10): - enable_add_mask = gr.Button("Edit Flexible Area") - with gr.Row(): - with gr.Column(scale=1, min_width=10): - form_reset_mask_btn = gr.Button("Reset mask") - with gr.Column(scale=1, min_width=10): - show_mask = 
gr.Checkbox( - label="Show Mask", - value=global_state.value["show_mask"], - show_label=False, - ) - - with gr.Row(): - form_lambda_number = gr.Number( - value=global_state.value["params"]["motion_lambda"], - interactive=True, - label="Lambda", - ) - - form_draw_interval_number = gr.Number( - value=global_state.value["draw_interval"], - label="Draw Interval (steps)", - interactive=True, - visible=False, - ) - - # Right --> Image - with gr.Column(scale=2): - form_image = ImageMask( - value=global_state.value["images"]["image_show"], - brush_radius=100, - elem_id="output-image", - ) - gr.Markdown( - """ - ## Quick Start - - 1. Select desired `Pretrained Model` and adjust `Seed` to generate an - initial image. - 2. Click on image to add control points. - 3. Click `Start` and enjoy it! - - ## Advance Usage - - 1. Change `Step Size` to adjust learning rate in drag optimization. - 2. Select `w` or `w+` to change latent space to optimize: - * Optimize on `w` space may cause greater influence to the image. - * Optimize on `w+` space may work slower than `w`, but usually achieve - better results. - * Note that changing the latent space will reset the image, points and - mask (this has the same effect as `Reset Image` button). - 3. Click `Edit Flexible Area` to create a mask and constrain the - unmasked region to remain unchanged. - - - """ - ) - gr.HTML( - """ - -
    - Gradio demo supported by - OpenMMLab MMagic
    - """ - ) - # Network & latents tab listeners - - def on_click_reset_image(global_state): - """Reset image to the original one and clear all states - 1. Re-init images - 2. Clear all states - """ - - init_images(global_state) - clear_state(global_state) - - return global_state, global_state["images"]["image_show"] - - def on_click_reset_custom_image(global_state): - """Reset image to the original one and clear all states - 1. Re-init images - 2. Clear all states - """ - Path(global_state["pretrained_weight"]).unlink(missing_ok=True) - global_state["w_pivot"] = None - global_state["pretrained_weight"] = str( - model_dir / "stylegan2-ffhq1024x1024.pkl" - ) - - init_images(global_state) - clear_state(global_state) - - return global_state, global_state["images"]["image_show"] - - def on_image_change( - custom_image, global_state, progress=gr.Progress(track_tqdm=True) - ): - new_img = Image.open(custom_image) - new_img = new_img.convert("RGB") - from PTI.configs import paths_config - - paths_config.stylegan2_ada_ffhq = global_state["pretrained_weight"] - paths_config.dlib = (model_dir / "align.dat").as_posix() - run_name = str(uuid.uuid4()) - new_G, w_pivot = run_PTI(new_img, run_name) - - out_path = Path(f"checkpoints/stylegan2-{run_name}.pkl") - print(f"Exporting to {out_path}") - export_updated_pickle(new_G, out_path, run_name) - global_state["w_pivot"] = w_pivot - global_state["pretrained_weight"] = str(out_path) - init_images(global_state) - clear_state(global_state) - - return ( - global_state, - global_state["images"]["image_show"], - gr.Image.update(interactive=True), - ) - - form_custom_image.upload( - on_image_change, - [form_custom_image, global_state], - [global_state, form_image, form_reset_custom_image], - ) - - form_reset_custom_image.click( - on_click_reset_custom_image, [global_state], [global_state, form_image] - ) - - form_reset_image.click( - on_click_reset_image, - inputs=[global_state], - outputs=[global_state, form_image], - queue=False, - show_progress=True, - ) - - # Update parameters - def on_change_update_image_seed(seed, global_state): - """Function to handle generation seed change. - 1. Set seed to global_state - 2. Re-init images and clear all states - """ - - global_state["params"]["seed"] = int(seed) - init_images(global_state) - clear_state(global_state) - - return global_state, global_state["images"]["image_show"] - - form_seed_number.change( - on_change_update_image_seed, - inputs=[form_seed_number, global_state], - outputs=[global_state, form_image], - ) - - def on_click_latent_space(latent_space, global_state): - """Function to reset latent space to optimize. - NOTE: this function we reset the image and all controls - 1. Set latent-space to global_state - 2. 
Re-init images and clear all state - """ - - global_state["params"]["latent_space"] = latent_space - init_images(global_state) - clear_state(global_state) - - return global_state, global_state["images"]["image_show"] - - form_latent_space.change( - on_click_latent_space, - inputs=[form_latent_space, global_state], - outputs=[global_state, form_image], - ) - - # ==== Params - form_lambda_number.change( - partial(on_change_single_global_state, ["params", "motion_lambda"]), - inputs=[form_lambda_number, global_state], - outputs=[global_state], - ) - - def on_change_lr(lr, global_state): - if lr == 0: - print("lr is 0, do nothing.") - return global_state - else: - global_state["params"]["lr"] = lr - renderer = global_state["renderer"] - renderer.update_lr(lr) - print("New optimizer: ") - print(renderer.w_optim) - return global_state - - form_lr_number.change( - on_change_lr, - inputs=[form_lr_number, global_state], - outputs=[global_state], - queue=False, - show_progress=True, - ) - - def on_click_start(global_state, image): - p_in_pixels = [] - t_in_pixels = [] - valid_points = [] - - # handle of start drag in mask editing mode - global_state = preprocess_mask_info(global_state, image) - - # Prepare the points for the inference - if len(global_state["points"]) == 0: - # yield on_click_start_wo_points(global_state, image) - image_raw = global_state["images"]["image_raw"] - update_image_draw( - image_raw, - global_state["points"], - global_state["mask"], - global_state["show_mask"], - global_state, - ) - - yield ( - global_state, # global_state - 0, # form_steps_number, - global_state["images"]["image_show"], # form image - gr.Button.update(interactive=True), # form_reset_image - gr.Button.update(interactive=True), # add points button - gr.Button.update(interactive=True), # enable mask button - gr.Button.update(interactive=True), # undo points button - gr.Button.update(interactive=True), # reset mask button - gr.Radio.update(interactive=True), # latent space - gr.Button.update(interactive=True), # start button - gr.Button.update(interactive=False), # stop button - gr.Number.update(interactive=True), # form_seed_number - gr.Number.update(interactive=True), # form_lr_number - gr.Checkbox.update(interactive=True), # show_mask - gr.Number.update(interactive=True), # form_lambda_number - gr.Button.update(interactive=True), # form_reset_custom_image - ) - else: - # Transform the points into torch tensors - for key_point, point in global_state["points"].items(): - try: - p_start = point.get("start_temp", point["start"]) - p_end = point["target"] - - if p_start is None or p_end is None: - continue - - except KeyError: - continue - - p_in_pixels.append(p_start) - t_in_pixels.append(p_end) - valid_points.append(key_point) - - mask = torch.tensor(global_state["mask"]).float() - drag_mask = 1 - mask - - renderer: Renderer = global_state["renderer"] - global_state["temporal_params"]["stop"] = False - global_state["editing_state"] = "running" - - # reverse points order - p_to_opt = reverse_point_pairs(p_in_pixels) - t_to_opt = reverse_point_pairs(t_in_pixels) - print("Running with:") - print(f" Source: {p_in_pixels}") - print(f" Target: {t_in_pixels}") - step_idx = 0 - last_time = time.time() - while True: - print_memory_usage() - # add a TIMEOUT break - print(f"Running time: {time.time() - last_time}") - if IS_SPACE and time.time() - last_time > TIMEOUT: - print("Timeout break!") - break - if ( - global_state["temporal_params"]["stop"] - or global_state["generator_params"]["stop"] - ): - break - - # do 
drage here! - renderer._render_drag_impl( - global_state["generator_params"], - p_to_opt, # point - t_to_opt, # target - drag_mask, # mask, - global_state["params"]["motion_lambda"], # lambda_mask - reg=0, - feature_idx=5, # NOTE: do not support change for now - r1=global_state["params"]["r1_in_pixels"], # r1 - r2=global_state["params"]["r2_in_pixels"], # r2 - # random_seed = 0, - # noise_mode = 'const', - trunc_psi=global_state["params"]["trunc_psi"], - # force_fp32 = False, - # layer_name = None, - # sel_channels = 3, - # base_channel = 0, - # img_scale_db = 0, - # img_normalize = False, - # untransform = False, - is_drag=True, - to_pil=True, - ) - - if step_idx % global_state["draw_interval"] == 0: - print("Current Source:") - for key_point, p_i, t_i in zip(valid_points, p_to_opt, t_to_opt): - global_state["points"][key_point]["start_temp"] = [ - p_i[1], - p_i[0], - ] - global_state["points"][key_point]["target"] = [ - t_i[1], - t_i[0], - ] - start_temp = global_state["points"][key_point]["start_temp"] - print(f" {start_temp}") - - image_result = global_state["generator_params"]["image"] - image_draw = update_image_draw( - image_result, - global_state["points"], - global_state["mask"], - global_state["show_mask"], - global_state, - ) - global_state["images"]["image_raw"] = image_result - - yield ( - global_state, # global_state - step_idx, # form_steps_number, - global_state["images"]["image_show"], # form image - # gr.File.update(visible=False), - gr.Button.update(interactive=False), # form_reset_image - gr.Button.update(interactive=False), # add points button - gr.Button.update(interactive=False), # enable mask button - gr.Button.update(interactive=False), # undo points button - gr.Button.update(interactive=False), # reset mask button - # latent space - gr.Radio.update(interactive=False), # latent space - gr.Button.update(interactive=False), # start button - # enable stop button in loop - gr.Button.update(interactive=True), # stop button - # update other comps - gr.Number.update(interactive=False), # form_seed_number - gr.Number.update(interactive=False), # form_lr_number - gr.Checkbox.update(interactive=False), # show_mask - gr.Number.update(interactive=False), # form_lambda_number - gr.Button.update(interactive=False), # form_reset_custom_image - ) - - # increate step - step_idx += 1 - - image_result = global_state["generator_params"]["image"] - global_state["images"]["image_raw"] = image_result - image_draw = update_image_draw( - image_result, - global_state["points"], - global_state["mask"], - global_state["show_mask"], - global_state, - ) - - # fp = NamedTemporaryFile(suffix=".png", delete=False) - # image_result.save(fp, "PNG") - - global_state["editing_state"] = "add_points" - - yield ( - global_state, # global_state - 0, # reset step to 0 after stop. 
# form_steps_number, - global_state["images"]["image_show"], # form image - gr.Button.update(interactive=True), # form_reset_image - gr.Button.update(interactive=True), # add points button - gr.Button.update(interactive=True), # enable mask button - gr.Button.update(interactive=True), # undo points button - gr.Button.update(interactive=True), # reset mask button - gr.Radio.update(interactive=True), # latent space - gr.Button.update(interactive=True), # start button - gr.Button.update(interactive=False), # stop button - gr.Number.update(interactive=True), # form_seed_number - gr.Number.update(interactive=True), # form_lr_number - gr.Checkbox.update(interactive=True), # show_mask - gr.Number.update(interactive=True), # form_lambda_number - gr.Button.update(interactive=True), # form_reset_custom_image - ) - - form_start_btn.click( - on_click_start, - inputs=[global_state, form_image], - outputs=[ - global_state, - form_steps_number, - form_image, - # form_download_result_file, - # >>> buttons - form_reset_image, - enable_add_points, - enable_add_mask, - undo_points, - form_reset_mask_btn, - form_latent_space, - form_start_btn, - form_stop_btn, - # <<< buttonm - # >>> inputs comps - form_seed_number, - form_lr_number, - show_mask, - form_lambda_number, - form_reset_custom_image, - ], - ) - - def on_click_stop(global_state): - """Function to handle stop button is clicked. - 1. send a stop signal by set global_state["temporal_params"]["stop"] as True - 2. Disable Stop button - """ - global_state["temporal_params"]["stop"] = True - - return global_state, gr.Button.update(interactive=False) - - form_stop_btn.click( - on_click_stop, - inputs=[global_state], - outputs=[global_state, form_stop_btn], - queue=False, - show_progress=True, - ) - - form_draw_interval_number.change( - partial( - on_change_single_global_state, - "draw_interval", - map_transform=lambda x: int(x), - ), - inputs=[form_draw_interval_number, global_state], - outputs=[global_state], - queue=False, - show_progress=True, - ) - - def on_click_remove_point(global_state): - choice = global_state["curr_point"] - del global_state["points"][choice] - - choices = list(global_state["points"].keys()) - - if len(choices) > 0: - global_state["curr_point"] = choices[0] - - return ( - gr.Dropdown.update(choices=choices, value=choices[0]), - global_state, - ) - - # Mask - def on_click_reset_mask(global_state): - global_state["mask"] = np.ones( - ( - global_state["images"]["image_raw"].size[1], - global_state["images"]["image_raw"].size[0], - ), - dtype=np.uint8, - ) - image_draw = update_image_draw( - global_state["images"]["image_raw"], - global_state["points"], - global_state["mask"], - global_state["show_mask"], - global_state, - ) - return global_state, gr.Image.update(value=image_draw, interactive=False) - - form_reset_mask_btn.click( - on_click_reset_mask, - inputs=[global_state], - outputs=[global_state, form_image], - ) - - # Image - def on_click_enable_draw(global_state, image): - """Function to start add mask mode. - 1. Preprocess mask info from last state - 2. Change editing state to add_mask - 3. 
Set curr image with points and mask - """ - global_state = preprocess_mask_info(global_state, image) - global_state["editing_state"] = "add_mask" - image_raw = global_state["images"]["image_raw"] - image_draw = update_image_draw( - image_raw, global_state["points"], global_state["mask"], True, global_state - ) - return ( - global_state, - gr.Image.update(value=image_draw, interactive=True), - ) - - def on_click_remove_draw(global_state, image): - """Function to start remove mask mode. - 1. Preprocess mask info from last state - 2. Change editing state to remove_mask - 3. Set curr image with points and mask - """ - global_state = preprocess_mask_info(global_state, image) - global_state["editing_state"] = "remove_mask" - image_raw = global_state["images"]["image_raw"] - image_draw = update_image_draw( - image_raw, global_state["points"], global_state["mask"], True, global_state - ) - return ( - global_state, - gr.Image.update(value=image_draw, interactive=True), - ) - - enable_add_mask.click( - on_click_enable_draw, - inputs=[global_state, form_image], - outputs=[ - global_state, - form_image, - ], - queue=False, - show_progress=True, - ) - - def on_click_add_point(global_state, image: dict): - """Function to switch from add mask mode to add points mode. - 1. Update mask buffer if needed - 2. Change global_state['editing_state'] to 'add_points' - 3. Set current image with mask - """ - - global_state = preprocess_mask_info(global_state, image) - global_state["editing_state"] = "add_points" - mask = global_state["mask"] - image_raw = global_state["images"]["image_raw"] - image_draw = update_image_draw( - image_raw, - global_state["points"], - mask, - global_state["show_mask"], - global_state, - ) - - return ( - global_state, - gr.Image.update(value=image_draw, interactive=False), - ) - - enable_add_points.click( - on_click_add_point, - inputs=[global_state, form_image], - outputs=[global_state, form_image], - queue=False, - show_progress=True, - ) - - def on_click_image(global_state, evt: gr.SelectData): - """This function only supports click for point selection""" - xy = evt.index - if global_state["editing_state"] != "add_points": - print(f'In {global_state["editing_state"]} state. ' "Do not add points.") - - return global_state, global_state["images"]["image_show"] - - points = global_state["points"] - - point_idx = get_latest_points_pair(points) - if point_idx is None: - points[0] = {"start": xy, "target": None} - print(f"Click Image - Start - {xy}") - elif points[point_idx].get("target", None) is None: - points[point_idx]["target"] = xy - print(f"Click Image - Target - {xy}") - else: - points[point_idx + 1] = {"start": xy, "target": None} - print(f"Click Image - Start - {xy}") - - image_raw = global_state["images"]["image_raw"] - image_draw = update_image_draw( - image_raw, - global_state["points"], - global_state["mask"], - global_state["show_mask"], - global_state, - ) - - return global_state, image_draw - - form_image.select( - on_click_image, - inputs=[global_state], - outputs=[global_state, form_image], - queue=False, - show_progress=True, - ) - - def on_click_clear_points(global_state): - """Function to handle clear all control points - 1. clear global_state['points'] (clear_state) - 2. re-init network - 3. 
re-draw image - """ - clear_state(global_state, target="point") - - renderer: Renderer = global_state["renderer"] - renderer.feat_refs = None - - image_raw = global_state["images"]["image_raw"] - image_draw = update_image_draw( - image_raw, {}, global_state["mask"], global_state["show_mask"], global_state - ) - return global_state, image_draw - - undo_points.click( - on_click_clear_points, - inputs=[global_state], - outputs=[global_state, form_image], - queue=False, - show_progress=True, - ) - - def on_click_show_mask(global_state, show_mask): - """Function to control whether show mask on image.""" - global_state["show_mask"] = show_mask - - image_raw = global_state["images"]["image_raw"] - image_draw = update_image_draw( - image_raw, - global_state["points"], - global_state["mask"], - global_state["show_mask"], - global_state, - ) - return global_state, image_draw - - show_mask.change( - on_click_show_mask, - inputs=[global_state, show_mask], - outputs=[global_state, form_image], - queue=False, - show_progress=True, - ) - -# print("SHAReD: Start app", parser.parse_args()) -gr.close_all() -app.queue(concurrency_count=1, max_size=200, api_open=False) -app.launch(show_api=False) diff --git a/spaces/DragGan/DragGan/torch_utils/ops/filtered_lrelu.py b/spaces/DragGan/DragGan/torch_utils/ops/filtered_lrelu.py deleted file mode 100644 index 6701cd72d1f0683a43f56b59ed3337dd3d6f0d3c..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/torch_utils/ops/filtered_lrelu.py +++ /dev/null @@ -1,274 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import numpy as np -import torch -import warnings - -from .. import custom_ops -from .. import misc -from . import upfirdn2d -from . import bias_act - -#---------------------------------------------------------------------------- - -_plugin = None - -def _init(): - global _plugin - if _plugin is None: - _plugin = custom_ops.get_plugin( - module_name='filtered_lrelu_plugin', - sources=['filtered_lrelu.cpp', 'filtered_lrelu_wr.cu', 'filtered_lrelu_rd.cu', 'filtered_lrelu_ns.cu'], - headers=['filtered_lrelu.h', 'filtered_lrelu.cu'], - source_dir=os.path.dirname(__file__), - extra_cuda_cflags=['--use_fast_math', '--allow-unsupported-compiler'], - ) - return True - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) - assert 1 <= f.ndim <= 2 - return f.shape[-1], f.shape[0] # width, height - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, (int, np.integer)) for x in padding) - padding = [int(x) for x in padding] - if len(padding) == 2: - px, py = padding - padding = [px, px, py, py] - px0, px1, py0, py1 = padding - return px0, px1, py0, py1 - -#---------------------------------------------------------------------------- - -def filtered_lrelu(x, fu=None, fd=None, b=None, up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False, impl='cuda'): - r"""Filtered leaky ReLU for a batch of 2D images. 
- - Performs the following sequence of operations for each channel: - - 1. Add channel-specific bias if provided (`b`). - - 2. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 3. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 4. Convolve the image with the specified upsampling FIR filter (`fu`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 5. Multiply each value by the provided gain factor (`gain`). - - 6. Apply leaky ReLU activation function to each value. - - 7. Clamp each value between -clamp and +clamp, if `clamp` parameter is provided. - - 8. Convolve the image with the specified downsampling FIR filter (`fd`), shrinking - it so that the footprint of all output pixels lies within the input image. - - 9. Downsample the image by keeping every Nth pixel (`down`). - - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. - - Args: - x: Float32/float16/float64 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - fu: Float32 upsampling FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - fd: Float32 downsampling FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The length of vector must must match the channel dimension of `x`. - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor. (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - gain: Overall scaling factor for signal magnitude (default: sqrt(2)). - slope: Slope on the negative side of leaky ReLU (default: 0.2). - clamp: Maximum magnitude for leaky ReLU output (default: None). - flip_filter: False = convolution, True = correlation (default: False). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _filtered_lrelu_cuda(up=up, down=down, padding=padding, gain=gain, slope=slope, clamp=clamp, flip_filter=flip_filter).apply(x, fu, fd, b, None, 0, 0) - return _filtered_lrelu_ref(x, fu=fu, fd=fd, b=b, up=up, down=down, padding=padding, gain=gain, slope=slope, clamp=clamp, flip_filter=flip_filter) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _filtered_lrelu_ref(x, fu=None, fd=None, b=None, up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False): - """Slow and memory-inefficient reference implementation of `filtered_lrelu()` using - existing `upfirdn2n()` and `bias_act()` ops. 
- """ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - fu_w, fu_h = _get_filter_size(fu) - fd_w, fd_h = _get_filter_size(fd) - if b is not None: - assert isinstance(b, torch.Tensor) and b.dtype == x.dtype - misc.assert_shape(b, [x.shape[1]]) - assert isinstance(up, int) and up >= 1 - assert isinstance(down, int) and down >= 1 - px0, px1, py0, py1 = _parse_padding(padding) - assert gain == float(gain) and gain > 0 - assert slope == float(slope) and slope >= 0 - assert clamp is None or (clamp == float(clamp) and clamp >= 0) - - # Calculate output size. - batch_size, channels, in_h, in_w = x.shape - in_dtype = x.dtype - out_w = (in_w * up + (px0 + px1) - (fu_w - 1) - (fd_w - 1) + (down - 1)) // down - out_h = (in_h * up + (py0 + py1) - (fu_h - 1) - (fd_h - 1) + (down - 1)) // down - - # Compute using existing ops. - x = bias_act.bias_act(x=x, b=b) # Apply bias. - x = upfirdn2d.upfirdn2d(x=x, f=fu, up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter) # Upsample. - x = bias_act.bias_act(x=x, act='lrelu', alpha=slope, gain=gain, clamp=clamp) # Bias, leaky ReLU, clamp. - x = upfirdn2d.upfirdn2d(x=x, f=fd, down=down, flip_filter=flip_filter) # Downsample. - - # Check output shape & dtype. - misc.assert_shape(x, [batch_size, channels, out_h, out_w]) - assert x.dtype == in_dtype - return x - -#---------------------------------------------------------------------------- - -_filtered_lrelu_cuda_cache = dict() - -def _filtered_lrelu_cuda(up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False): - """Fast CUDA implementation of `filtered_lrelu()` using custom ops. - """ - assert isinstance(up, int) and up >= 1 - assert isinstance(down, int) and down >= 1 - px0, px1, py0, py1 = _parse_padding(padding) - assert gain == float(gain) and gain > 0 - gain = float(gain) - assert slope == float(slope) and slope >= 0 - slope = float(slope) - assert clamp is None or (clamp == float(clamp) and clamp >= 0) - clamp = float(clamp if clamp is not None else 'inf') - - # Lookup from cache. - key = (up, down, px0, px1, py0, py1, gain, slope, clamp, flip_filter) - if key in _filtered_lrelu_cuda_cache: - return _filtered_lrelu_cuda_cache[key] - - # Forward op. - class FilteredLReluCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, fu, fd, b, si, sx, sy): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - - # Replace empty up/downsample kernels with full 1x1 kernels (faster than separable). - if fu is None: - fu = torch.ones([1, 1], dtype=torch.float32, device=x.device) - if fd is None: - fd = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert 1 <= fu.ndim <= 2 - assert 1 <= fd.ndim <= 2 - - # Replace separable 1x1 kernels with full 1x1 kernels when scale factor is 1. - if up == 1 and fu.ndim == 1 and fu.shape[0] == 1: - fu = fu.square()[None] - if down == 1 and fd.ndim == 1 and fd.shape[0] == 1: - fd = fd.square()[None] - - # Missing sign input tensor. - if si is None: - si = torch.empty([0]) - - # Missing bias tensor. - if b is None: - b = torch.zeros([x.shape[1]], dtype=x.dtype, device=x.device) - - # Construct internal sign tensor only if gradients are needed. - write_signs = (si.numel() == 0) and (x.requires_grad or b.requires_grad) - - # Warn if input storage strides are not in decreasing order due to e.g. channels-last layout. 
- strides = [x.stride(i) for i in range(x.ndim) if x.size(i) > 1] - if any(a < b for a, b in zip(strides[:-1], strides[1:])): - warnings.warn("low-performance memory layout detected in filtered_lrelu input", RuntimeWarning) - - # Call C++/Cuda plugin if datatype is supported. - if x.dtype in [torch.float16, torch.float32]: - if torch.cuda.current_stream(x.device) != torch.cuda.default_stream(x.device): - warnings.warn("filtered_lrelu called with non-default cuda stream but concurrent execution is not supported", RuntimeWarning) - y, so, return_code = _plugin.filtered_lrelu(x, fu, fd, b, si, up, down, px0, px1, py0, py1, sx, sy, gain, slope, clamp, flip_filter, write_signs) - else: - return_code = -1 - - # No Cuda kernel found? Fall back to generic implementation. Still more memory efficient than the reference implementation because - # only the bit-packed sign tensor is retained for gradient computation. - if return_code < 0: - warnings.warn("filtered_lrelu called with parameters that have no optimized CUDA kernel, using generic fallback", RuntimeWarning) - - y = x.add(b.unsqueeze(-1).unsqueeze(-1)) # Add bias. - y = upfirdn2d.upfirdn2d(x=y, f=fu, up=up, padding=[px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter) # Upsample. - so = _plugin.filtered_lrelu_act_(y, si, sx, sy, gain, slope, clamp, write_signs) # Activation function and sign handling. Modifies y in-place. - y = upfirdn2d.upfirdn2d(x=y, f=fd, down=down, flip_filter=flip_filter) # Downsample. - - # Prepare for gradient computation. - ctx.save_for_backward(fu, fd, (si if si.numel() else so)) - ctx.x_shape = x.shape - ctx.y_shape = y.shape - ctx.s_ofs = sx, sy - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - fu, fd, si = ctx.saved_tensors - _, _, xh, xw = ctx.x_shape - _, _, yh, yw = ctx.y_shape - sx, sy = ctx.s_ofs - dx = None # 0 - dfu = None; assert not ctx.needs_input_grad[1] - dfd = None; assert not ctx.needs_input_grad[2] - db = None # 3 - dsi = None; assert not ctx.needs_input_grad[4] - dsx = None; assert not ctx.needs_input_grad[5] - dsy = None; assert not ctx.needs_input_grad[6] - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[3]: - pp = [ - (fu.shape[-1] - 1) + (fd.shape[-1] - 1) - px0, - xw * up - yw * down + px0 - (up - 1), - (fu.shape[0] - 1) + (fd.shape[0] - 1) - py0, - xh * up - yh * down + py0 - (up - 1), - ] - gg = gain * (up ** 2) / (down ** 2) - ff = (not flip_filter) - sx = sx - (fu.shape[-1] - 1) + px0 - sy = sy - (fu.shape[0] - 1) + py0 - dx = _filtered_lrelu_cuda(up=down, down=up, padding=pp, gain=gg, slope=slope, clamp=None, flip_filter=ff).apply(dy, fd, fu, None, si, sx, sy) - - if ctx.needs_input_grad[3]: - db = dx.sum([0, 2, 3]) - - return dx, dfu, dfd, db, dsi, dsx, dsy - - # Add to cache. 
- _filtered_lrelu_cuda_cache[key] = FilteredLReluCuda - return FilteredLReluCuda - -#---------------------------------------------------------------------------- diff --git a/spaces/DrewKarn/CarperAI-stable-vicuna-13b-delta/README.md b/spaces/DrewKarn/CarperAI-stable-vicuna-13b-delta/README.md deleted file mode 100644 index a4078cc72b06549ab3a0bd0cb19dc61ab1386545..0000000000000000000000000000000000000000 --- a/spaces/DrewKarn/CarperAI-stable-vicuna-13b-delta/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CarperAI Stable Vicuna 13b Delta -emoji: 💻 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/app_upload.py b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/app_upload.py deleted file mode 100644 index b2465fa1f13425e05bd638cfe330b47ed7bd53e2..0000000000000000000000000000000000000000 --- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/app_upload.py +++ /dev/null @@ -1,100 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import pathlib - -import gradio as gr -import slugify - -from constants import UploadTarget -from uploader import Uploader -from utils import find_exp_dirs - - -class LoRAModelUploader(Uploader): - def upload_lora_model( - self, - folder_path: str, - repo_name: str, - upload_to: str, - private: bool, - delete_existing_repo: bool, - ) -> str: - if not folder_path: - raise ValueError - if not repo_name: - repo_name = pathlib.Path(folder_path).name - repo_name = slugify.slugify(repo_name) - - if upload_to == UploadTarget.PERSONAL_PROFILE.value: - organization = '' - elif upload_to == UploadTarget.LORA_LIBRARY.value: - organization = 'lora-library' - else: - raise ValueError - - return self.upload(folder_path, - repo_name, - organization=organization, - private=private, - delete_existing_repo=delete_existing_repo) - - -def load_local_lora_model_list() -> dict: - choices = find_exp_dirs(ignore_repo=True) - return gr.update(choices=choices, value=choices[0] if choices else None) - - -def create_upload_demo(hf_token: str | None) -> gr.Blocks: - uploader = LoRAModelUploader(hf_token) - model_dirs = find_exp_dirs(ignore_repo=True) - - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown('Local Models') - reload_button = gr.Button('Reload Model List') - model_dir = gr.Dropdown( - label='Model names', - choices=model_dirs, - value=model_dirs[0] if model_dirs else None) - with gr.Box(): - gr.Markdown('Upload Settings') - with gr.Row(): - use_private_repo = gr.Checkbox(label='Private', value=True) - delete_existing_repo = gr.Checkbox( - label='Delete existing repo of the same name', value=False) - upload_to = gr.Radio(label='Upload to', - choices=[_.value for _ in UploadTarget], - value=UploadTarget.LORA_LIBRARY.value) - model_name = gr.Textbox(label='Model Name') - upload_button = gr.Button('Upload') - gr.Markdown(''' - - You can upload your trained model to your personal profile (i.e. https://huggingface.co/{your_username}/{model_name}) or to the public [LoRA Concepts Library](https://huggingface.co/lora-library) (i.e. https://huggingface.co/lora-library/{model_name}). 
- ''') - with gr.Box(): - gr.Markdown('Output message') - output_message = gr.Markdown() - - reload_button.click(fn=load_local_lora_model_list, - inputs=None, - outputs=model_dir) - upload_button.click(fn=uploader.upload_lora_model, - inputs=[ - model_dir, - model_name, - upload_to, - use_private_repo, - delete_existing_repo, - ], - outputs=output_message) - - return demo - - -if __name__ == '__main__': - import os - - hf_token = os.getenv('HF_TOKEN') - demo = create_upload_demo(hf_token) - demo.queue(max_size=1).launch(share=False) diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_multiscale_DF2K.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_multiscale_DF2K.py deleted file mode 100644 index d4f5d8324b1624e4cb6163754703b8dac2d188fd..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_multiscale_DF2K.py +++ /dev/null @@ -1,48 +0,0 @@ -import argparse -import glob -import os -from PIL import Image - - -def main(args): - # For DF2K, we consider the following three scales, - # and the smallest image whose shortest edge is 400 - scale_list = [0.75, 0.5, 1 / 3] - shortest_edge = 400 - - path_list = sorted(glob.glob(os.path.join(args.input, '*'))) - for path in path_list: - print(path) - basename = os.path.splitext(os.path.basename(path))[0] - - img = Image.open(path) - width, height = img.size - for idx, scale in enumerate(scale_list): - print(f'\t{scale:.2f}') - rlt = img.resize((int(width * scale), int(height * scale)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f'{basename}T{idx}.png')) - - # save the smallest image which the shortest edge is 400 - if width < height: - ratio = height / width - width = shortest_edge - height = int(width * ratio) - else: - ratio = width / height - height = shortest_edge - width = int(height * ratio) - rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f'{basename}T{idx+1}.png')) - - -if __name__ == '__main__': - """Generate multi-scale versions for GT images with LANCZOS resampling. - It is now used for DF2K dataset (DIV2K + Flickr 2K) - """ - parser = argparse.ArgumentParser() - parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder') - parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_multiscale', help='Output folder') - args = parser.parse_args() - - os.makedirs(args.output, exist_ok=True) - main(args) diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/README.md b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/README.md deleted file mode 100644 index d632e063b68d9b3dc07a3243fb0007edea4205b7..0000000000000000000000000000000000000000 --- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/README.md +++ /dev/null @@ -1,377 +0,0 @@ -# Taming Transformers for High-Resolution Image Synthesis -##### CVPR 2021 (Oral) -![teaser](assets/mountain.jpeg) - -[**Taming Transformers for High-Resolution Image Synthesis**](https://compvis.github.io/taming-transformers/)
    -[Patrick Esser](https://github.com/pesser)\*, -[Robin Rombach](https://github.com/rromb)\*, -[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)
    -\* equal contribution - -**tl;dr** We combine the efficiency of convolutional approaches with the expressivity of transformers by introducing a convolutional VQGAN, which learns a codebook of context-rich visual parts, whose composition is modeled with an autoregressive transformer. - -![teaser](assets/teaser.png) -[arXiv](https://arxiv.org/abs/2012.09841) | [BibTeX](#bibtex) | [Project Page](https://compvis.github.io/taming-transformers/) - - -### News -- Thanks to [rom1504](https://github.com/rom1504) it is now easy to [train a VQGAN on your own datasets](#training-on-custom-data). -- Included a bugfix for the quantizer. For backward compatibility it is - disabled by default (which corresponds to always training with `beta=1.0`). - Use `legacy=False` in the quantizer config to enable it. - Thanks [richcmwang](https://github.com/richcmwang) and [wcshin-git](https://github.com/wcshin-git)! -- Our paper received an update: See https://arxiv.org/abs/2012.09841v3 and the corresponding changelog. -- Added a pretrained, [1.4B transformer model](https://k00.fr/s511rwcv) trained for class-conditional ImageNet synthesis, which obtains state-of-the-art FID scores among autoregressive approaches and outperforms BigGAN. -- Added pretrained, unconditional models on [FFHQ](https://k00.fr/yndvfu95) and [CelebA-HQ](https://k00.fr/2xkmielf). -- Added accelerated sampling via caching of keys/values in the self-attention operation, used in `scripts/sample_fast.py`. -- Added a checkpoint of a [VQGAN](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) trained with f8 compression and Gumbel-Quantization. - See also our updated [reconstruction notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb). -- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](https://github.com/openai/DALL-E). See also [this section](#more-resources). -- We now include an overview of pretrained models in [Tab.1](#overview-of-pretrained-models). We added models for [COCO](#coco) and [ADE20k](#ade20k). -- The streamlit demo now supports image completions. -- We now include a couple of examples from the D-RIN dataset so you can run the - [D-RIN demo](#d-rin) without preparing the dataset first. -- You can now jump right into sampling with our [Colab quickstart notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb). - -## Requirements -A suitable [conda](https://conda.io/) environment named `taming` can be created -and activated with: - -``` -conda env create -f environment.yaml -conda activate taming -``` -## Overview of pretrained models -The following table provides an overview of all models that are currently available. -FID scores were evaluated using [torch-fidelity](https://github.com/toshas/torch-fidelity). -For reference, we also include a link to the recently released autoencoder of the [DALL-E](https://github.com/openai/DALL-E) model. -See the corresponding [colab -notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) -for a comparison and discussion of reconstruction capabilities. 
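For orientation, a comparable score can be obtained by pointing torch-fidelity at a folder of generated samples and a folder of reference images. The snippet below is only a minimal sketch with placeholder paths; the preprocessing and sample counts behind the numbers in the table that follows may differ, so small deviations are expected.

```
import torch_fidelity

# Minimal sketch: FID between two image folders. The paths are placeholders;
# the preprocessing and sample counts used for the reported numbers may differ.
metrics = torch_fidelity.calculate_metrics(
    input1='path/to/samples',     # generated or reconstructed images
    input2='path/to/references',  # training or validation images
    cuda=True,                    # set to False to run on CPU
    fid=True,
)
print(metrics)  # the returned dict contains the FID value
```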
- -| Dataset | FID vs train | FID vs val | Link | Samples (256x256) | Comments -| ------------- | ------------- | ------------- |------------- | ------------- |------------- | -| FFHQ (f=16) | 9.6 | -- | [ffhq_transformer](https://k00.fr/yndvfu95) | [ffhq_samples](https://k00.fr/j626x093) | -| CelebA-HQ (f=16) | 10.2 | -- | [celebahq_transformer](https://k00.fr/2xkmielf) | [celebahq_samples](https://k00.fr/j626x093) | -| ADE20K (f=16) | -- | 35.5 | [ade20k_transformer](https://k00.fr/ot46cksa) | [ade20k_samples.zip](https://heibox.uni-heidelberg.de/f/70bb78cbaf844501b8fb/) [2k] | evaluated on val split (2k images) -| COCO-Stuff (f=16) | -- | 20.4 | [coco_transformer](https://k00.fr/2zz6i2ce) | [coco_samples.zip](https://heibox.uni-heidelberg.de/f/a395a9be612f4a7a8054/) [5k] | evaluated on val split (5k images) -| ImageNet (cIN) (f=16) | 15.98/15.78/6.59/5.88/5.20 | -- | [cin_transformer](https://k00.fr/s511rwcv) | [cin_samples](https://k00.fr/j626x093) | different decoding hyperparameters | -| | | | || | -| FacesHQ (f=16) | -- | -- | [faceshq_transformer](https://k00.fr/qqfl2do8) -| S-FLCKR (f=16) | -- | -- | [sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/) -| D-RIN (f=16) | -- | -- | [drin_transformer](https://k00.fr/39jcugc5) -| | | | | || | -| VQGAN ImageNet (f=16), 1024 | 10.54 | 7.94 | [vqgan_imagenet_f16_1024](https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs. -| VQGAN ImageNet (f=16), 16384 | 7.41 | 4.98 |[vqgan_imagenet_f16_16384](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs. -| VQGAN OpenImages (f=8), 8192, GumbelQuantization | 3.24 | 1.49 |[vqgan_gumbel_f8](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) | --- | Reconstruction-FIDs. -| | | | | || | -| DALL-E dVAE (f=8), 8192, GumbelQuantization | 33.88 | 32.01 | https://github.com/openai/DALL-E | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs. - - -## Running pretrained models - -The commands below will start a streamlit demo which supports sampling at -different resolutions and image completions. To run a non-interactive version -of the sampling process, replace `streamlit run scripts/sample_conditional.py --` -by `python scripts/make_samples.py --outdir ` and -keep the remaining command line arguments. - -To sample from unconditional or class-conditional models, -run `python scripts/sample_fast.py -r `. -We describe below how to use this script to sample from the ImageNet, FFHQ, and CelebA-HQ models, -respectively. - -### S-FLCKR -![teaser](assets/sunset_and_ocean.jpg) - -You can also [run this model in a Colab -notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb), -which includes all necessary steps to start sampling. - -Download the -[2020-11-09T13-31-51_sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/) -folder and place it into `logs`. Then, run -``` -streamlit run scripts/sample_conditional.py -- -r logs/2020-11-09T13-31-51_sflckr/ -``` - -### ImageNet -![teaser](assets/imagenet.png) - -Download the [2021-04-03T19-39-50_cin_transformer](https://k00.fr/s511rwcv) -folder and place it into logs. Sampling from the class-conditional ImageNet -model does not require any data preparation. 
To produce 50 samples for each of -the 1000 classes of ImageNet, with k=600 for top-k sampling, p=0.92 for nucleus -sampling and temperature t=1.0, run - -``` -python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25 -``` - -To restrict the model to certain classes, provide them via the `--classes` argument, separated by -commas. For example, to sample 50 *ostriches*, *border collies* and *whiskey jugs*, run - -``` -python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25 --classes 9,232,901 -``` -We recommended to experiment with the autoregressive decoding parameters (top-k, top-p and temperature) for best results. - -### FFHQ/CelebA-HQ - -Download the [2021-04-23T18-19-01_ffhq_transformer](https://k00.fr/yndvfu95) and -[2021-04-23T18-11-19_celebahq_transformer](https://k00.fr/2xkmielf) -folders and place them into logs. -Again, sampling from these unconditional models does not require any data preparation. -To produce 50000 samples, with k=250 for top-k sampling, -p=1.0 for nucleus sampling and temperature t=1.0, run - -``` -python scripts/sample_fast.py -r logs/2021-04-23T18-19-01_ffhq_transformer/ -``` -for FFHQ and - -``` -python scripts/sample_fast.py -r logs/2021-04-23T18-11-19_celebahq_transformer/ -``` -to sample from the CelebA-HQ model. -For both models it can be advantageous to vary the top-k/top-p parameters for sampling. - -### FacesHQ -![teaser](assets/faceshq.jpg) - -Download [2020-11-13T21-41-45_faceshq_transformer](https://k00.fr/qqfl2do8) and -place it into `logs`. Follow the data preparation steps for -[CelebA-HQ](#celeba-hq) and [FFHQ](#ffhq). Run -``` -streamlit run scripts/sample_conditional.py -- -r logs/2020-11-13T21-41-45_faceshq_transformer/ -``` - -### D-RIN -![teaser](assets/drin.jpg) - -Download [2020-11-20T12-54-32_drin_transformer](https://k00.fr/39jcugc5) and -place it into `logs`. To run the demo on a couple of example depth maps -included in the repository, run - -``` -streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.imagenet.DRINExamples}}}" -``` - -To run the demo on the complete validation set, first follow the data preparation steps for -[ImageNet](#imagenet) and then run -``` -streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/ -``` - -### COCO -Download [2021-01-20T16-04-20_coco_transformer](https://k00.fr/2zz6i2ce) and -place it into `logs`. To run the demo on a couple of example segmentation maps -included in the repository, run - -``` -streamlit run scripts/sample_conditional.py -- -r logs/2021-01-20T16-04-20_coco_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.coco.Examples}}}" -``` - -### ADE20k -Download [2020-11-20T21-45-44_ade20k_transformer](https://k00.fr/ot46cksa) and -place it into `logs`. 
To run the demo on a couple of example segmentation maps -included in the repository, run - -``` -streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T21-45-44_ade20k_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.ade20k.Examples}}}" -``` - -## Training on custom data - -Training on your own dataset can be beneficial to get better tokens and hence better images for your domain. -Those are the steps to follow to make this work: -1. install the repo with `conda env create -f environment.yaml`, `conda activate taming` and `pip install -e .` -1. put your .jpg files in a folder `your_folder` -2. create 2 text files a `xx_train.txt` and `xx_test.txt` that point to the files in your training and test set respectively (for example `find $(pwd)/your_folder -name "*.jpg" > train.txt`) -3. adapt `configs/custom_vqgan.yaml` to point to these 2 files -4. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1` to - train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU. - -## Data Preparation - -### ImageNet -The code will try to download (through [Academic -Torrents](http://academictorrents.com/)) and prepare ImageNet the first time it -is used. However, since ImageNet is quite large, this requires a lot of disk -space and time. If you already have ImageNet on your disk, you can speed things -up by putting the data into -`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` (which defaults to -`~/.cache/autoencoders/data/ILSVRC2012_{split}/data/`), where `{split}` is one -of `train`/`validation`. It should have the following structure: - -``` -${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/ -├── n01440764 -│ ├── n01440764_10026.JPEG -│ ├── n01440764_10027.JPEG -│ ├── ... -├── n01443537 -│ ├── n01443537_10007.JPEG -│ ├── n01443537_10014.JPEG -│ ├── ... -├── ... -``` - -If you haven't extracted the data, you can also place -`ILSVRC2012_img_train.tar`/`ILSVRC2012_img_val.tar` (or symlinks to them) into -`${XDG_CACHE}/autoencoders/data/ILSVRC2012_train/` / -`${XDG_CACHE}/autoencoders/data/ILSVRC2012_validation/`, which will then be -extracted into above structure without downloading it again. Note that this -will only happen if neither a folder -`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` nor a file -`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/.ready` exist. Remove them -if you want to force running the dataset preparation again. - -You will then need to prepare the depth data using -[MiDaS](https://github.com/intel-isl/MiDaS). Create a symlink -`data/imagenet_depth` pointing to a folder with two subfolders `train` and -`val`, each mirroring the structure of the corresponding ImageNet folder -described above and containing a `png` file for each of ImageNet's `JPEG` -files. The `png` encodes `float32` depth values obtained from MiDaS as RGBA -images. We provide the script `scripts/extract_depth.py` to generate this data. -**Please note** that this script uses [MiDaS via PyTorch -Hub](https://pytorch.org/hub/intelisl_midas_v2/). When we prepared the data, -the hub provided the [MiDaS -v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2) version, but now it -provides a v2.1 version. We haven't tested our models with depth maps obtained -via v2.1 and if you want to make sure that things work as expected, you must -adjust the script to make sure it explicitly uses -[v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2)! 
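For orientation, the depth extraction described above could look roughly like the sketch below. The hub entry points, the default transform and the exact float32-to-RGBA byte layout are assumptions on our side; `scripts/extract_depth.py` remains the authoritative implementation.

```
import numpy as np
import torch
from PIL import Image

# Rough sketch of the depth extraction described above. The hub entry points and
# the float32-to-RGBA byte layout are assumptions; scripts/extract_depth.py is
# the authoritative implementation.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")  # pin the v2.0 release as noted above if required
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").default_transform

img = np.array(Image.open("example.JPEG").convert("RGB"))  # placeholder path
with torch.no_grad():
    pred = midas(transform(img))
    # resize the prediction back to the input resolution
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()
depth = pred.cpu().numpy().astype(np.float32)

# Reinterpret the float32 depth map as four uint8 channels and store it as an RGBA png.
rgba = depth.view(np.uint8).reshape(depth.shape[0], depth.shape[1], 4)
Image.fromarray(rgba, mode="RGBA").save("example.png")
```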
- -### CelebA-HQ -Create a symlink `data/celebahq` pointing to a folder containing the `.npy` -files of CelebA-HQ (instructions to obtain them can be found in the [PGGAN -repository](https://github.com/tkarras/progressive_growing_of_gans)). - -### FFHQ -Create a symlink `data/ffhq` pointing to the `images1024x1024` folder obtained -from the [FFHQ repository](https://github.com/NVlabs/ffhq-dataset). - -### S-FLCKR -Unfortunately, we are not allowed to distribute the images we collected for the -S-FLCKR dataset and can therefore only give a description how it was produced. -There are many resources on [collecting images from the -web](https://github.com/adrianmrit/flickrdatasets) to get started. -We collected sufficiently large images from [flickr](https://www.flickr.com) -(see `data/flickr_tags.txt` for a full list of tags used to find images) -and various [subreddits](https://www.reddit.com/r/sfwpornnetwork/wiki/network) -(see `data/subreddits.txt` for all subreddits that were used). -Overall, we collected 107625 images, and split them randomly into 96861 -training images and 10764 validation images. We then obtained segmentation -masks for each image using [DeepLab v2](https://arxiv.org/abs/1606.00915) -trained on [COCO-Stuff](https://arxiv.org/abs/1612.03716). We used a [PyTorch -reimplementation](https://github.com/kazuto1011/deeplab-pytorch) and include an -example script for this process in `scripts/extract_segmentation.py`. - -### COCO -Create a symlink `data/coco` containing the images from the 2017 split in -`train2017` and `val2017`, and their annotations in `annotations`. Files can be -obtained from the [COCO webpage](https://cocodataset.org/). In addition, we use -the [Stuff+thing PNG-style annotations on COCO 2017 -trainval](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip) -annotations from [COCO-Stuff](https://github.com/nightrome/cocostuff), which -should be placed under `data/cocostuffthings`. - -### ADE20k -Create a symlink `data/ade20k_root` containing the contents of -[ADEChallengeData2016.zip](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip) -from the [MIT Scene Parsing Benchmark](http://sceneparsing.csail.mit.edu/). - -## Training models - -### FacesHQ - -Train a VQGAN with -``` -python main.py --base configs/faceshq_vqgan.yaml -t True --gpus 0, -``` - -Then, adjust the checkpoint path of the config key -`model.params.first_stage_config.params.ckpt_path` in -`configs/faceshq_transformer.yaml` (or download -[2020-11-09T13-33-36_faceshq_vqgan](https://k00.fr/uxy5usa9) and place into `logs`, which -corresponds to the preconfigured checkpoint path), then run -``` -python main.py --base configs/faceshq_transformer.yaml -t True --gpus 0, -``` - -### D-RIN - -Train a VQGAN on ImageNet with -``` -python main.py --base configs/imagenet_vqgan.yaml -t True --gpus 0, -``` - -or download a pretrained one from [2020-09-23T17-56-33_imagenet_vqgan](https://k00.fr/u0j2dtac) -and place under `logs`. If you trained your own, adjust the path in the config -key `model.params.first_stage_config.params.ckpt_path` of -`configs/drin_transformer.yaml`. - -Train a VQGAN on Depth Maps of ImageNet with -``` -python main.py --base configs/imagenetdepth_vqgan.yaml -t True --gpus 0, -``` - -or download a pretrained one from [2020-11-03T15-34-24_imagenetdepth_vqgan](https://k00.fr/55rlxs6i) -and place under `logs`. 
If you trained your own, adjust the path in the config -key `model.params.cond_stage_config.params.ckpt_path` of -`configs/drin_transformer.yaml`. - -To train the transformer, run -``` -python main.py --base configs/drin_transformer.yaml -t True --gpus 0, -``` - -## More Resources -### Comparing Different First Stage Models -The reconstruction and compression capabilities of different fist stage models can be analyzed in this [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb). -In particular, the notebook compares two VQGANs with a downsampling factor of f=16 for each and codebook dimensionality of 1024 and 16384, -a VQGAN with f=8 and 8192 codebook entries and the discrete autoencoder of OpenAI's [DALL-E](https://github.com/openai/DALL-E) (which has f=8 and 8192 -codebook entries). -![firststages1](assets/first_stage_squirrels.png) -![firststages2](assets/first_stage_mushrooms.png) - -### Other -- A [video summary](https://www.youtube.com/watch?v=o7dqGcLDf0A&feature=emb_imp_woyt) by [Two Minute Papers](https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg). -- A [video summary](https://www.youtube.com/watch?v=-wDSDtIAyWQ) by [Gradient Dude](https://www.youtube.com/c/GradientDude/about). -- A [weights and biases report summarizing the paper](https://wandb.ai/ayush-thakur/taming-transformer/reports/-Overview-Taming-Transformers-for-High-Resolution-Image-Synthesis---Vmlldzo0NjEyMTY) -by [ayulockin](https://github.com/ayulockin). -- A [video summary](https://www.youtube.com/watch?v=JfUTd8fjtX8&feature=emb_imp_woyt) by [What's AI](https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg). -- Take a look at [ak9250's notebook](https://github.com/ak9250/taming-transformers/blob/master/tamingtransformerscolab.ipynb) if you want to run the streamlit demos on Colab. - -### Text-to-Image Optimization via CLIP -VQGAN has been successfully used as an image generator guided by the [CLIP](https://github.com/openai/CLIP) model, both for pure image generation -from scratch and image-to-image translation. We recommend the following notebooks/videos/resources: - - - [Advadnouns](https://twitter.com/advadnoun/status/1389316507134357506) Patreon and corresponding LatentVision notebooks: https://www.patreon.com/patronizeme - - The [notebook]( https://colab.research.google.com/drive/1L8oL-vLJXVcRzCFbPwOoMkPKJ8-aYdPN) of [Rivers Have Wings](https://twitter.com/RiversHaveWings). - - A [video](https://www.youtube.com/watch?v=90QDe6DQXF4&t=12s) explanation by [Dot CSV](https://www.youtube.com/channel/UCy5znSnfMsDwaLlROnZ7Qbg) (in Spanish, but English subtitles are available) - -![txt2img](assets/birddrawnbyachild.png) - -Text prompt: *'A bird drawn by a child'* - -## Shout-outs -Thanks to everyone who makes their code and models available. 
In particular, - -- The architecture of our VQGAN is inspired by [Denoising Diffusion Probabilistic Models](https://github.com/hojonathanho/diffusion) -- The very hackable transformer implementation [minGPT](https://github.com/karpathy/minGPT) -- The good ol' [PatchGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) and [Learned Perceptual Similarity (LPIPS)](https://github.com/richzhang/PerceptualSimilarity) - -## BibTeX - -``` -@misc{esser2020taming, - title={Taming Transformers for High-Resolution Image Synthesis}, - author={Patrick Esser and Robin Rombach and Björn Ommer}, - year={2020}, - eprint={2012.09841}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` diff --git a/spaces/Emanuel/pos-tag-bosque-br-demo/app.py b/spaces/Emanuel/pos-tag-bosque-br-demo/app.py deleted file mode 100644 index 298d08031c3be81301a0b6f09ca05c33009d436b..0000000000000000000000000000000000000000 --- a/spaces/Emanuel/pos-tag-bosque-br-demo/app.py +++ /dev/null @@ -1,65 +0,0 @@ -from typing import Tuple - -import torch -import streamlit as st -from transformers import AutoModelForTokenClassification, AutoTokenizer -from dante_tokenizer import DanteTokenizer -from dante_tokenizer.data.preprocessing import expand_contractions -from annotated_text import annotated_text - - -def get_pos_tag_model(model_name: str = "Emanuel/autonlp-pos-tag-bosque") -> Tuple[AutoModelForTokenClassification, AutoTokenizer]: - model = AutoModelForTokenClassification.from_pretrained(model_name) - tokenizer = AutoTokenizer.from_pretrained(model_name) - - return model, tokenizer - -def get_tag_color(tag: str) -> str: - """ - Return the color for a given part-of-speech tag from the Universal Dependencies tagset. - See: https://universaldependencies.org/u/pos/ - """ - pallete = { - "ADJ": "#2E4C6D", - "ADP": "#FBE7C6", - "ADV": "#DADDFC", - "AUX": "#FC997C", - "CCONJ": "#544179", - "DET": "#A0E7E5", - "INTJ": "#32C1CD", - "NOUN": "#17D7A0", - "PART": "#C85C5C", - "PRON": "#F9975D", - "PROPN": "#FBD148", - "PUNCT": "#B2EA70", - "SCONJ": "#AA14F0", - "SYM": "#34BE82", - "VERB": "#FFBF86", - "X": "#2F86A6", - "NUM": "#F39B6D", - } - return pallete[tag] - -def main(): - text = st.text_area("Digite seu texto de entrada!") - dt = DanteTokenizer() - model, tokenizer = get_pos_tag_model() - - if text: - tokens = dt.tokenize(text) - input_cleaned_text = expand_contractions(text) - inputs = tokenizer(text, return_tensors="pt") - outputs = model(**inputs) - labelids = outputs.logits.squeeze().argmax(axis=-1) - scores, _ = torch.nn.functional.softmax(outputs.logits, dim=1).squeeze().max(axis=-1) - scores = scores.tolist() - labels = [model.config.id2label[int(x)] for x in labelids] - labels = labels[1:-1] - - answer = [] - for token, label, score in zip(tokens, labels, scores): - answer.append((token, label, get_tag_color(label))) - annotated_text(*answer) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_step_5e.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_step_5e.py deleted file mode 100644 index 371a3781bfe51ab0b9d841a3911bfe00c4e85197..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adam_step_5e.py +++ /dev/null @@ -1,8 +0,0 @@ -# optimizer -optimizer = dict(type='Adam', lr=1e-3) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='step', step=[3, 4]) -# running settings -runner = 
dict(type='EpochBasedRunner', max_epochs=5) -checkpoint_config = dict(interval=1) diff --git a/spaces/Filimize/English_To_French/README.md b/spaces/Filimize/English_To_French/README.md deleted file mode 100644 index 662cab4cbfc80d03a2baea0c452b030848e14e30..0000000000000000000000000000000000000000 --- a/spaces/Filimize/English_To_French/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mushroom Classification -emoji: 🔥 -colorFrom: gray -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FoxMeo/fire-detector/utils/plots.py b/spaces/FoxMeo/fire-detector/utils/plots.py deleted file mode 100644 index fdd8d0e853deb228badeeed52fbbe5fb8eb10632..0000000000000000000000000000000000000000 --- a/spaces/FoxMeo/fire-detector/utils/plots.py +++ /dev/null @@ -1,489 +0,0 @@ -# Plotting utils - -import glob -import math -import os -import random -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sns -import torch -import yaml -from PIL import Image, ImageDraw, ImageFont -from scipy.signal import butter, filtfilt - -from utils.general import xywh2xyxy, xyxy2xywh -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -def color_list(): - # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - def hex2rgb(h): - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949) - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=3): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None): - img = Image.fromarray(img) - draw = ImageDraw.Draw(img) - line_thickness = 
line_thickness or max(int(min(img.size) / 200), 2) - draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot - if label: - fontsize = max(round(max(img.size) / 40), 12) - font = ImageFont.truetype("Arial.ttf", fontsize) - txt_width, txt_height = font.getsize(label) - draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color)) - draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font) - return np.asarray(img) - - -def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), tight_layout=True) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOR ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6') - plt.xlim(left=-4, right=4) - plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.savefig('comparison.png', dpi=200) - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - # Plot image grid with labels - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - tl = 3 # line thickness - tf = max(tl - 1, 1) # font thickness - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - colors = color_list() # list of colors - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - labels = image_targets.shape[1] == 6 # labels if no conf column - conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale_factor < 1: # absolute coords need scale if image scales - boxes *= scale_factor - boxes[[0, 2]] += block_x - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - color = colors[cls % len(colors)] - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, 
mosaic, label=label, color=color, line_thickness=tl) - - # Draw image filename labels - if paths: - label = Path(paths[i]).name[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname: - r = min(1280. / max(h, w) / ns, 1.0) # ratio to limit image size - mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_test_txt(): # from utils.plots import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - # ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]: - for f in sorted(Path(path).glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - # for i in range(7): - # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - # ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(30, 55) - 
ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig(str(Path(path).name) + '.png', dpi=300) - - -def plot_labels(labels, names=(), save_dir=Path(''), loggers=None): - # plot dataset labels - print('Plotting labels... ') - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - colors = color_list() - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(names, rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - # loggers - for k, v in loggers.items() or {}: - if k == 'wandb' and v: - v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False) - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp') - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = list(Path(save_dir).glob('results*.txt')) - assert len(files), 'No results.txt files found in %s, nothing to plot.' 
% os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else f.stem - ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) - - -def output_to_keypoint(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - kpts = o[:,6:] - o = o[:,:6] - for index, (*box, conf, cls) in enumerate(o.detach().cpu().numpy()): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf, *list(kpts.detach().cpu().numpy()[index])]) - return np.array(targets) - - -def plot_skeleton_kpts(im, kpts, steps, orig_shape=None): - #Plot the skeleton and keypointsfor coco datatset - palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102], - [230, 230, 0], [255, 153, 255], [153, 204, 255], - [255, 102, 255], [255, 51, 255], [102, 178, 255], - [51, 153, 255], [255, 153, 153], [255, 102, 102], - [255, 51, 51], [153, 255, 153], [102, 255, 102], - [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0], - [255, 255, 255]]) - - skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12], - [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3], - [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]] - - pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]] - pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]] - radius = 5 - num_kpts = len(kpts) // steps - - for kid in range(num_kpts): - r, g, b = pose_kpt_color[kid] - x_coord, y_coord = kpts[steps * kid], kpts[steps * kid + 1] - if not (x_coord % 640 == 0 or y_coord % 640 == 0): - if steps == 3: - conf = kpts[steps * kid + 2] - if conf < 0.5: - continue - cv2.circle(im, (int(x_coord), int(y_coord)), radius, (int(r), int(g), int(b)), -1) - - for sk_id, sk in enumerate(skeleton): - r, g, b = pose_limb_color[sk_id] - pos1 = (int(kpts[(sk[0]-1)*steps]), int(kpts[(sk[0]-1)*steps+1])) - pos2 = (int(kpts[(sk[1]-1)*steps]), int(kpts[(sk[1]-1)*steps+1])) - if steps == 3: - conf1 = kpts[(sk[0]-1)*steps+2] - conf2 = kpts[(sk[1]-1)*steps+2] - if conf1<0.5 or conf2<0.5: - continue - if pos1[0]%640 == 0 or pos1[1]%640==0 or pos1[0]<0 or pos1[1]<0: - continue - if pos2[0] % 640 == 0 or pos2[1] % 640 == 0 or pos2[0]<0 or pos2[1]<0: - continue - cv2.line(im, pos1, pos2, (int(r), int(g), int(b)), thickness=2) diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/monotonic_align/core.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/monotonic_align/core.py deleted file mode 100644 index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def 
maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 diff --git "a/spaces/Frorozcol/financIA/pages/2_\360\237\222\241entrenamiento_model.py" "b/spaces/Frorozcol/financIA/pages/2_\360\237\222\241entrenamiento_model.py" deleted file mode 100644 index 4a1accd9d427a456159521d61117677f43683ba1..0000000000000000000000000000000000000000 --- "a/spaces/Frorozcol/financIA/pages/2_\360\237\222\241entrenamiento_model.py" +++ /dev/null @@ -1,12 +0,0 @@ -import os -import streamlit as st -import streamlit.components.v1 as components - -def main(): - st.title("Entretenimiento del modelo") - with open('html/0-2 Train_model.html', 'r', encoding='utf-8') as file: - markdown_text = file.read() - components.html(markdown_text,height=1000,scrolling=True) - - -main() \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/connect_boxes_with_rope.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/connect_boxes_with_rope.py deleted file mode 100644 index 519b6315d845e41b71a206a7ca17d1a81fe5db87..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/connect_boxes_with_rope.py +++ /dev/null @@ -1,49 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import IPython - -class ConnectBoxesWithRope(Task): - """Connect two colored blocks with ropes.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "connect the {color1} and {color2} blocks with the rope." - self.task_completed_desc = "done connecting." - self.additional_reset() - self.pos_eps = 0.04 # higher tolerance - - def reset(self, env): - super().reset(env) - colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] - blocks = [] - target_colors = np.random.choice(colors, 2, replace=False) - block_size = (0.04, 0.04, 0.04) - block_urdf = 'stacking/block.urdf' - corner_poses = [] - - for color in colors: - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=color) - blocks.append(block_id) - if color in target_colors: - corner_poses.append(block_pose) - - dist = np.linalg.norm(np.array(corner_poses[0][0])-np.array(corner_poses[1][0])) - n_parts = int(20 * dist / 0.4) - - # IMPORTANT: use `make_ropes` to add cable (series of articulated small blocks). 
- objects, targets, matches = self.make_ropes(env, corners=(corner_poses[0][0], corner_poses[1][0]), n_parts=n_parts) - self.add_goal(objs=objects, matches=matches, targ_poses=targets, replace=False, - rotations=False, metric='pose', params=None, step_max_reward=1., - language_goal=self.lang_template.format(color1=target_colors[0], color2=target_colors[1])) - - # wait for the scene to settle down - for i in range(600): - p.stepSimulation() \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/put_blues_around_red.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/put_blues_around_red.py deleted file mode 100644 index 9c8baa97589135bf3a025971389364404b411ce0..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/put_blues_around_red.py +++ /dev/null @@ -1,47 +0,0 @@ -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class PlaceBluesAroundRed(Task): - """Pick up the blue blocks one by one and place them around the red block, forming a circle.""" - - def __init__(self): - super().__init__() - self.max_steps = 15 - self.lang_template = "place the blue blocks around the red block" - self.task_completed_desc = "done placing blue blocks around red block." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add red block. - red_block_size = (0.04, 0.04, 0.04) - red_block_urdf = 'block/block_for_anchors.urdf' - red_block_pose = self.get_random_pose(env, red_block_size) - red_block_id = env.add_object(red_block_urdf, red_block_pose, 'fixed') - - # Add blue blocks. - blue_blocks = [] - blue_block_size = (0.02, 0.02, 0.02) - blue_block_urdf = 'block/block_for_anchors.urdf' - N = 4 - - for _ in range(N): - blue_block_pose = self.get_random_pose(env, blue_block_size) - blue_block_id = env.add_object(blue_block_urdf, blue_block_pose, color=utils.COLORS['blue']) - blue_blocks.append(blue_block_id) - - # Calculate target poses for blue blocks to form a circle around the red block. - radius = 0.06 # radius of the circle - angles = np.linspace(0, 2*np.pi, N, endpoint=False) # angles for each blue block - targ_poses = [] - for angle in angles: - x = red_block_pose[0][0] + radius * np.cos(angle) - y = red_block_pose[0][1] + radius * np.sin(angle) - z = red_block_pose[0][2] - targ_poses.append(((x, y, z), red_block_pose[1])) - - # Add goal. - self.add_goal(objs=blue_blocks, matches=np.eye(N), targ_poses=targ_poses, replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1., language_goal=self.lang_template) diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/test_singletask.sh b/spaces/Gen-Sim/Gen-Sim/scripts/test_singletask.sh deleted file mode 100644 index a72104f4a6e086d93080a0259250fde3242a23c8..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/test_singletask.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash - -DATA_DIR=$1 -TASK=$2 -DISP=False - -echo "Training dataset... Folder: $DATA_DIR Task $TASK" - -# You can parallelize these depending on how much resources you have - -############################# -## Language-Conditioned Tasks -trap "kill 0" SIGINT -LANG_TASKS=$2 - - -for task in $LANG_TASKS - do - # Generate data - # TEST - python cliport/eval.py eval_task=$task \ - agent=cliport \ - mode=test \ - n_demos=100 \ - train_demos=200 \ - checkpoint_type=test_best \ - exp_folder=exps/exps-singletask \ - update_results=True - done - -python notebooks/print_results.py -r=exps/exps-singletask - -echo "Finished Training." 
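As an aside on the `PlaceBluesAroundRed` task shown above (just before the eval script): the blue blocks' target poses are simply N points evenly spaced on a circle around the red block. The following is a standalone numpy sketch of that computation for illustration only — the center coordinates are made-up values, not taken from the task code:

```python
# Illustration of the circular target-pose math from PlaceBluesAroundRed above.
# The red-block center below is a hypothetical value, not from the original code.
import numpy as np

red_center = (0.45, -0.10, 0.02)   # hypothetical (x, y, z) of the fixed red block
radius = 0.06                      # circle radius used by the task
N = 4                              # number of blue blocks

angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
targ_positions = [(red_center[0] + radius * np.cos(a),
                   red_center[1] + radius * np.sin(a),
                   red_center[2]) for a in angles]

for pos in targ_positions:
    print(tuple(round(v, 3) for v in pos))
```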
diff --git a/spaces/Gradio-Blocks/Object-Detection-With-DETR-and-YOLOS/README.md b/spaces/Gradio-Blocks/Object-Detection-With-DETR-and-YOLOS/README.md deleted file mode 100644 index 4c49f9c1cdf7b91376b0a097395eedc45ab58ed8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Object-Detection-With-DETR-and-YOLOS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Object Detection With Detr Yolos -emoji: 😻 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.0.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index d2bac38ca6760af6441ede5a04409ed495ef87f3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ccnet_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 68e2b072e4b8d076e8c3e929dfdc73bcd24ce859..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_480x480_40k_pascal_context.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/slurm_train.sh b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/slurm_train.sh deleted file mode 100644 index ab232105f0309c720ed81a522eca14b6fbd64afd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/slurm_train.sh +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -GPUS=${GPUS:-4} -GPUS_PER_NODE=${GPUS_PER_NODE:-4} -CPUS_PER_TASK=${CPUS_PER_TASK:-5} -SRUN_ARGS=${SRUN_ARGS:-""} -PY_ARGS=${@:4} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_TASK} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u tools/train.py ${CONFIG} --launcher="slurm" ${PY_ARGS} diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/Dockerfile b/spaces/GrandaddyShmax/AudioCraft_Plus/Dockerfile deleted file mode 100644 index efc2431ec0fe674c22fe2fdb9d7045cdf6cd2748..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM nvidia/cuda:11.8.0-base-ubuntu22.04 - -ENV DEBIAN_FRONTEND=noninteractive \ - PYTHONUNBUFFERED=1 \ - PYTHONIOENCODING=UTF-8 -RUN --mount=type=cache,target=/var/cache/apt --mount=type=cache,target=/var/lib/apt apt update &&\ - apt install -y \ - wget \ - git \ - pkg-config \ - python3 \ - python3-pip \ - python-is-python3 \ - ffmpeg \ - libnvrtc11.2 \ - libtcmalloc-minimal4 - -RUN useradd -m -u 
1000 ac -RUN --mount=type=cache,target=/root/.cache python -m pip install --upgrade pip wheel -ENV TORCH_COMMAND="pip install torch==2.0.1+cu118 torchaudio --extra-index-url https://download.pytorch.org/whl/cu118" -RUN --mount=type=cache,target=/root/.cache python -m $TORCH_COMMAND -RUN ln -s /usr/lib/x86_64-linux-gnu/libnvrtc.so.11.2 /usr/lib/x86_64-linux-gnu/libnvrtc.so -USER 1000 -RUN mkdir ~/.cache -RUN --mount=type=cache,target=/home/ac/.cache --mount=source=.,target=/home/ac/audiocraft python -m pip install -r /home/ac/audiocraft/requirements.txt -WORKDIR /home/ac/audiocraft \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/finetune_clue_sim.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/finetune_clue_sim.py deleted file mode 100644 index b05f6ea6ce67c35cd39dedd924df0b663fd5a8b2..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/clue_sim/finetune_clue_sim.py +++ /dev/null @@ -1,325 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import json -import os -from sklearn import metrics -import torch -import torch.nn as nn -from torch.utils.data import Dataset, DataLoader, ConcatDataset -import pytorch_lightning as pl -from collections import defaultdict -from transformers import AutoConfig, AutoModel, get_cosine_schedule_with_warmup -from loss import FocalLoss, LabelSmoothingCorrectionCrossEntropy - - -class CustomDataset(Dataset): - def __init__(self, file, tokenizer, max_len, mode='no_test'): - self.tokenizer = tokenizer - self.max_len = max_len - self.mode = mode - - self.ex_list = [] - with open('./dataset/' + file, "r", encoding='utf-8') as f: - for line in f: - sample = json.loads(line) - query = sample["query"] - title = sample["title"] - id = int(sample["id"]) - if self.mode == 'no_test': - relevant = int(sample["label"]) - self.ex_list.append((query, title, relevant, id)) - else: - self.ex_list.append((query, title, id)) - - def __len__(self): - return len(self.ex_list) - - def __getitem__(self, index): - if self.mode == 'no_test': - query, title, relevant, id = self.ex_list[index] - else: - query, title, id = self.ex_list[index] - - inputs = self.tokenizer.encode_plus( - query, title, - truncation=True, - add_special_tokens=True, - max_length=self.max_len, - padding='max_length', - return_token_type_ids=True - ) - ids = inputs['input_ids'] - mask = inputs['attention_mask'] - token_type_ids = inputs["token_type_ids"] - if self.mode == 'no_test': - return { - 'ids': torch.tensor(ids, dtype=torch.long), - 'mask': torch.tensor(mask, dtype=torch.long), - 'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long), - 'targets': torch.tensor(relevant, dtype=torch.float), - 'id': torch.tensor(id, dtype=torch.long) - } - else: - return { - 'ids': torch.tensor(ids, dtype=torch.long), - 'mask': torch.tensor(mask, dtype=torch.long), - 'token_type_ids': torch.tensor(token_type_ids, 
dtype=torch.long), - 'id': torch.tensor(id, dtype=torch.long) - } - - -class CustomDataModule(pl.LightningDataModule): - def __init__(self, args, tokenizer): - super().__init__() - self.args = args - self.tokenizer = tokenizer - self.max_len = self.args.max_seq_length - self.train_dataset = None - self.val_dataset = None - - def setup(self, stage): - data_path = "./dataset" - assert os.path.exists(os.path.join(data_path, 'train.json')) - assert os.path.exists(os.path.join(data_path, 'dev.json')) - assert os.path.exists(os.path.join(data_path, 'test_public.json')) - if stage == 'fit': - self.train_dataset = CustomDataset('train.json', self.tokenizer, self.max_len) - self.val_dataset = CustomDataset('dev.json', self.tokenizer, self.max_len) - self.test_dataset = CustomDataset('test_public.json', self.tokenizer, self.max_len) - elif stage == 'test': - self.test_dataset = CustomDataset('test_public.json', self.tokenizer, self.max_len) - - def train_dataloader(self): - full_dataset = ConcatDataset([self.train_dataset, self.val_dataset]) - train_dataloader = DataLoader( - full_dataset, - batch_size=self.args.batch_size, - num_workers=4, - shuffle=True, - pin_memory=True, - drop_last=True) - return train_dataloader - - def val_dataloader(self): - val_dataloader = DataLoader( - self.test_dataset, - batch_size=self.args.val_batch_size, - num_workers=4, - shuffle=False, - pin_memory=True, - drop_last=False) - return val_dataloader - - def test_dataloader(self): - test_dataloader = DataLoader( - self.test_dataset, - batch_size=self.args.val_batch_size, - num_workers=4, - shuffle=False, - pin_memory=True, - drop_last=False) - return test_dataloader - - -class CustomModel(pl.LightningModule): - def __init__(self, args): - super().__init__() - self.args = args - self.model = self.args.model_name - self.cache_dir = self.args.model_path - self.scheduler = self.args.scheduler - self.step_scheduler_after = "batch" - self.optimizer = self.args.optimizer - self.pooler = self.args.use_original_pooler - self.category = self.args.cate_performance - self.loss_func = self.args.loss_function - - hidden_dropout_prob: float = 0.1 - layer_norm_eps: float = 1e-7 - - config = AutoConfig.from_pretrained(self.model, cache_dir=self.cache_dir) - - config.update( - { - "output_hidden_states": False, - "hidden_dropout_prob": hidden_dropout_prob, - "layer_norm_eps": layer_norm_eps, - } - ) - self.transformer = AutoModel.from_pretrained(self.model, config=config, cache_dir=self.cache_dir) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.linear = torch.nn.Linear(config.hidden_size, self.args.num_labels, bias=True) # 分三类 - - def configure_optimizers(self): - """Prepare optimizer and schedule""" - model = self.transformer - no_decay = ["bias", "LayerNorm.weight"] - optimizer_grouped_parameters = [ - { - "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], - "weight_decay": 0.01, - }, - { - "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], - "weight_decay": 0.0, - }, - ] - - optimizer_index = ['Adam', 'AdamW'].index(self.optimizer) - optimizer = [ - torch.optim.Adam(optimizer_grouped_parameters, lr=self.args.learning_rate), - torch.optim.AdamW(optimizer_grouped_parameters, lr=self.args.learning_rate)][optimizer_index] - - scheduler_index = ['StepLR', 'CosineWarmup', 'CosineAnnealingLR'].index(self.scheduler) - scheduler = [ - torch.optim.lr_scheduler.StepLR(optimizer, step_size=self.args.warmup_step, - gamma=self.args.warmup_proportion), - 
get_cosine_schedule_with_warmup( - optimizer, - num_warmup_steps=int(self.args.warmup_proportion * self.total_steps), - num_training_steps=self.total_steps, - ), - torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5, eta_min=2e-06)][scheduler_index] - - scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1} - return [optimizer], [scheduler] - - def setup(self, stage=None): - if stage != "fit": - return - # calculate total steps - train_dataloader = self.trainer.datamodule.train_dataloader() - gpus = 0 if self.trainer.gpus is None else self.trainer.gpus - tb_size = self.args.batch_size * max(1, gpus) - ab_size = self.trainer.accumulate_grad_batches * float(self.trainer.max_epochs) - self.total_steps = (len(train_dataloader.dataset) // tb_size) // ab_size - - def loss(self, outputs, targets): - lossf_index = ['CE', 'Focal', 'LSCE_correction'].index(self.loss_func) - loss_fct = [nn.CrossEntropyLoss(), FocalLoss(), LabelSmoothingCorrectionCrossEntropy()][lossf_index] - loss = loss_fct(outputs, targets) - return loss - - def category_performance_measure(self, labels_right, labels_pred, num_label=3): - text_labels = [i for i in range(num_label)] - - TP = dict.fromkeys(text_labels, 0) # 预测正确的各个类的数目 - TP_FP = dict.fromkeys(text_labels, 0) # 测试数据集中各个类的数目 - TP_FN = dict.fromkeys(text_labels, 0) # 预测结果中各个类的数目 - - label_dict = defaultdict(list) - for num in range(num_label): - label_dict[num].append(str(num)) - - # 计算TP等数量 - for i in range(0, len(labels_right)): - TP_FP[labels_right[i]] += 1 - TP_FN[labels_pred[i]] += 1 - if labels_right[i] == labels_pred[i]: - TP[labels_right[i]] += 1 - - # 计算准确率P,召回率R,F1值 - results = [] - for key in TP_FP: - P = float(TP[key]) / float(TP_FP[key] + 1e-9) - R = float(TP[key]) / float(TP_FN[key] + 1e-9) - F1 = P * R * 2 / (P + R) if (P + R) != 0 else 0 - # results.append("%s:\t P:%f\t R:%f\t F1:%f" % (key, P, R, F1)) - results.append(F1) - return results - - def monitor_metrics(self, outputs, targets): - pred = torch.argmax(outputs, dim=1).cpu().numpy().tolist() - targets = targets.int().cpu().numpy().tolist() - if self.category: - category_results = self.category_performance_measure( - labels_right=targets, - labels_pred=pred, - num_label=self.args.num_labels - ) - return {"f1": category_results} - else: - f1_score = metrics.f1_score(targets, pred, average="macro") - return {"f1": f1_score} - - def forward(self, ids, mask, token_type_ids, labels): - transformer_out = self.transformer(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids) - - if self.pooler: - pooler_output = transformer_out.pooler_output - else: - sequence_output = transformer_out.last_hidden_state - pooler_output = torch.mean(sequence_output, dim=1) - logits = self.linear(self.dropout(pooler_output)) - - labels_hat = torch.argmax(logits, dim=1) - correct_count = torch.sum(labels == labels_hat) - return logits, correct_count - - def predict(self, ids, mask, token_type_ids): - transformer_out = self.transformer(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids) - pooler_output = transformer_out.pooler_output - logits = self.linear(self.dropout(pooler_output)) - logits = torch.argmax(logits, dim=1) - return logits - - def training_step(self, batch, batch_idx): - ids, mask, token_type_ids, labels = batch['ids'], batch['mask'], batch['token_type_ids'], batch['targets'] - logits, correct_count = self.forward(ids, mask, token_type_ids, labels) - loss = self.loss(logits, labels.long()) - f1 = self.monitor_metrics(logits, labels)["f1"] - 
self.log("train_loss", loss, logger=True, prog_bar=True) - self.log('train_acc', correct_count.float() / len(labels), logger=True, prog_bar=True) - if self.category: - self.log("train_f1_key0", f1[0], logger=True, prog_bar=True) - self.log("train_f1_key1", f1[1], logger=True, prog_bar=True) - self.log("train_f1_key2", f1[2], logger=True, prog_bar=True) - else: - self.log("train_f1", f1, logger=True, prog_bar=True) - return loss - - def validation_step(self, batch, batch_idx): - ids, mask, token_type_ids, labels = batch['ids'], batch['mask'], batch['token_type_ids'], batch['targets'] - logits, correct_count = self.forward(ids, mask, token_type_ids, labels) - loss = self.loss(logits, labels.long()) - f1 = self.monitor_metrics(logits, labels)["f1"] - self.log("val_loss", loss, logger=True, prog_bar=True) - self.log("val_acc", correct_count.float() / len(labels), logger=True, prog_bar=True) - if self.category: - self.log("val_f1_key0", f1[0], logger=True, prog_bar=True) - self.log("val_f1_key1", f1[1], logger=True, prog_bar=True) - self.log("val_f1_key2", f1[2], logger=True, prog_bar=True) - else: - self.log("val_f1", f1, logger=True, prog_bar=True) - - def test_step(self, batch, batch_idx): - ids, mask, token_type_ids, labels = batch['ids'], batch['mask'], batch['token_type_ids'], batch['targets'] - logits, correct_count = self.forward(ids, mask, token_type_ids, labels) - loss = self.loss(logits, labels.long()) - f1 = self.monitor_metrics(logits, labels)["f1"] - self.log("test_loss", loss, logger=True, prog_bar=True) - self.log("test_acc", correct_count.float() / len(labels), logger=True, prog_bar=True) - if self.category: - self.log("test_f1_key0", f1[0], logger=True, prog_bar=True) - self.log("test_f1_key1", f1[1], logger=True, prog_bar=True) - self.log("test_f1_key2", f1[2], logger=True, prog_bar=True) - else: - self.log("test_f1", f1, logger=True, prog_bar=True) - return {"test_loss": loss, "logits": logits, "labels": labels} - - def predict_step(self, batch, batch_idx, dataloader_idx): - ids, mask, token_type_ids, id = batch['ids'], batch['mask'], batch['token_type_ids'], batch['id'] - logits = self.predict(ids, mask, token_type_ids) - return {'id': id.cpu().numpy().tolist(), 'logits': logits.cpu().numpy().tolist()} diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc deleted file mode 100644 index e18fb62df52ab85d7802615d8619b0fd94a08f8c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc +++ /dev/null @@ -1,94 +0,0 @@ -/* - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include -#include "fstext/fstext-lib.h" // @manual -#include "util/common-utils.h" // @manual - -/* - * This program is to modify a FST without self-loop by: - * for each incoming arc with non-eps input symbol, add a self-loop arc - * with that non-eps symbol as input and eps as output. 
- * - * This is to make sure the resultant FST can do deduplication for repeated - * symbols, which is very common in acoustic model - * - */ -namespace { -int32 AddSelfLoopsSimple(fst::StdVectorFst* fst) { - typedef fst::MutableArcIterator IterType; - - int32 num_states_before = fst->NumStates(); - fst::MakePrecedingInputSymbolsSame(false, fst); - int32 num_states_after = fst->NumStates(); - KALDI_LOG << "There are " << num_states_before - << " states in the original FST; " - << " after MakePrecedingInputSymbolsSame, there are " - << num_states_after << " states " << std::endl; - - auto weight_one = fst::StdArc::Weight::One(); - - int32 num_arc_added = 0; - - fst::StdArc self_loop_arc; - self_loop_arc.weight = weight_one; - - int32 num_states = fst->NumStates(); - std::vector> incoming_non_eps_label_per_state(num_states); - - for (int32 state = 0; state < num_states; state++) { - for (IterType aiter(fst, state); !aiter.Done(); aiter.Next()) { - fst::StdArc arc(aiter.Value()); - if (arc.ilabel != 0) { - incoming_non_eps_label_per_state[arc.nextstate].insert(arc.ilabel); - } - } - } - - for (int32 state = 0; state < num_states; state++) { - if (!incoming_non_eps_label_per_state[state].empty()) { - auto& ilabel_set = incoming_non_eps_label_per_state[state]; - for (auto it = ilabel_set.begin(); it != ilabel_set.end(); it++) { - self_loop_arc.ilabel = *it; - self_loop_arc.olabel = 0; - self_loop_arc.nextstate = state; - fst->AddArc(state, self_loop_arc); - num_arc_added++; - } - } - } - return num_arc_added; -} - -void print_usage() { - std::cout << "add-self-loop-simple usage:\n" - "\tadd-self-loop-simple \n"; -} -} // namespace - -int main(int argc, char** argv) { - if (argc != 3) { - print_usage(); - exit(1); - } - - auto input = argv[1]; - auto output = argv[2]; - - auto fst = fst::ReadFstKaldi(input); - auto num_states = fst->NumStates(); - KALDI_LOG << "Loading FST from " << input << " with " << num_states - << " states." << std::endl; - - int32 num_arc_added = AddSelfLoopsSimple(fst); - KALDI_LOG << "Adding " << num_arc_added << " self-loop arcs " << std::endl; - - fst::WriteFstKaldi(*fst, std::string(output)); - KALDI_LOG << "Writing FST to " << output << std::endl; - - delete fst; -} diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/transformer_legacy.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/transformer_legacy.py deleted file mode 100644 index af9646740a79ce720eeba513e2d994b39509ac49..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/transformer_legacy.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.models import ( - register_model, - register_model_architecture, -) -from fairseq.models.transformer.transformer_config import ( - TransformerConfig, - DEFAULT_MAX_SOURCE_POSITIONS, - DEFAULT_MAX_TARGET_POSITIONS, - DEFAULT_MIN_PARAMS_TO_WRAP, -) -from fairseq.models.transformer.transformer_base import ( - TransformerModelBase, -) - - -@register_model("transformer") -class TransformerModel(TransformerModelBase): - """ - This is the legacy implementation of the transformer model that - uses argparse for configuration. 
- """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - def moses_fastbpe(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'fastbpe', - } - - def spm(path): - return { - 'path': path, - 'bpe': 'sentencepiece', - 'tokenizer': 'space', - } - - return { - 'transformer.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2'), - 'transformer.wmt16.en-de': 'https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2', - 'transformer.wmt18.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz'), - 'transformer.wmt19.en-de': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz'), - 'transformer.wmt19.en-ru': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz'), - 'transformer.wmt19.de-en': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz'), - 'transformer.wmt19.ru-en': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz'), - 'transformer.wmt19.en-de.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.single_model.tar.gz'), - 'transformer.wmt19.en-ru.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.single_model.tar.gz'), - 'transformer.wmt19.de-en.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.single_model.tar.gz'), - 'transformer.wmt19.ru-en.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.single_model.tar.gz'), - 'transformer.wmt20.en-ta': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz'), - 'transformer.wmt20.en-iu.news': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz'), - 'transformer.wmt20.en-iu.nh': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz'), - 'transformer.wmt20.ta-en': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz'), - 'transformer.wmt20.iu-en.news': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz'), - 'transformer.wmt20.iu-en.nh': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz'), - 'transformer.flores101.mm100.615M': spm('https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz'), - 'transformer.flores101.mm100.175M': spm('https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_175M.tar.gz'), - } - # fmt: on - - def __init__(self, args, encoder, decoder): - cfg = TransformerConfig.from_namespace(args) - super().__init__(cfg, encoder, decoder) - self.args = args - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - # we want to build the args recursively in this case. 
- # do not set defaults so that settings defaults from various architectures still works - gen_parser_from_dataclass( - parser, TransformerConfig(), delete_default=True, with_prefix="" - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if args.encoder_layers_to_keep: - args.encoder_layers = len(args.encoder_layers_to_keep.split(",")) - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - args.share_decoder_input_output_embed = True - - if getattr(args, "offload_activations", False): - args.checkpoint_activations = True # offloading implies checkpointing - - if not args.share_all_embeddings: - args.min_params_to_wrap = getattr( - args, "min_params_to_wrap", DEFAULT_MIN_PARAMS_TO_WRAP - ) - cfg = TransformerConfig.from_namespace(args) - return super().build_model(cfg, task) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - return super().build_embedding( - TransformerConfig.from_namespace(args), dictionary, embed_dim, path - ) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return super().build_encoder( - TransformerConfig.from_namespace(args), src_dict, embed_tokens - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return super().build_decoder( - TransformerConfig.from_namespace(args), tgt_dict, embed_tokens - ) - - -# architectures - - -@register_model_architecture("transformer", "transformer_tiny") -def tiny_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 64) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 64) - args.encoder_layers = getattr(args, "encoder_layers", 2) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 2) - args.decoder_layers = getattr(args, "decoder_layers", 2) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 2) - return base_architecture(args) - - -@register_model_architecture("transformer", "transformer") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim 
= getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.no_cross_attention = getattr(args, "no_cross_attention", False) - args.cross_self_attention = getattr(args, "cross_self_attention", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - args.offload_activations = getattr(args, "offload_activations", False) - if args.offload_activations: - args.checkpoint_activations = True - args.encoder_layers_to_keep = getattr(args, "encoder_layers_to_keep", None) - args.decoder_layers_to_keep = getattr(args, "decoder_layers_to_keep", None) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.quant_noise_pq_block_size = getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = getattr(args, "quant_noise_scalar", 0) - - -@register_model_architecture("transformer", "transformer_iwslt_de_en") -def transformer_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_wmt_en_de") -def transformer_wmt_en_de(args): - base_architecture(args) - - -# parameters used in the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture("transformer", "transformer_vaswani_wmt_en_de_big") -def transformer_vaswani_wmt_en_de_big(args): - args.encoder_embed_dim = getattr(args, 
"encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_vaswani_wmt_en_fr_big") -def transformer_vaswani_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - transformer_vaswani_wmt_en_de_big(args) - - -@register_model_architecture("transformer", "transformer_wmt_en_de_big") -def transformer_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - transformer_vaswani_wmt_en_de_big(args) - - -# default parameters used in tensor2tensor implementation -@register_model_architecture("transformer", "transformer_wmt_en_de_big_t2t") -def transformer_wmt_en_de_big_t2t(args): - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_dropout = getattr(args, "activation_dropout", 0.1) - transformer_vaswani_wmt_en_de_big(args) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/location_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/location_attention.py deleted file mode 100644 index a970876bba4369a93245fe73bd963566bfe4d63d..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/location_attention.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch.nn as nn -import torch -import torch.nn.functional as F - - -class LocationAttention(nn.Module): - """ - Attention-Based Models for Speech Recognition - https://arxiv.org/pdf/1506.07503.pdf - - :param int encoder_dim: # projection-units of encoder - :param int decoder_dim: # units of decoder - :param int attn_dim: attention dimension - :param int conv_dim: # channels of attention convolution - :param int conv_kernel_size: filter size of attention convolution - """ - - def __init__(self, attn_dim, encoder_dim, decoder_dim, - attn_state_kernel_size, conv_dim, conv_kernel_size, - scaling=2.0): - super(LocationAttention, self).__init__() - self.attn_dim = attn_dim - self.decoder_dim = decoder_dim - self.scaling = scaling - self.proj_enc = nn.Linear(encoder_dim, attn_dim) - self.proj_dec = nn.Linear(decoder_dim, attn_dim, bias=False) - self.proj_attn = nn.Linear(conv_dim, attn_dim, bias=False) - self.conv = nn.Conv1d(attn_state_kernel_size, conv_dim, - 2 * conv_kernel_size + 1, - padding=conv_kernel_size, bias=False) - self.proj_out = nn.Sequential(nn.Tanh(), nn.Linear(attn_dim, 1)) - - self.proj_enc_out = None # cache - - def clear_cache(self): - self.proj_enc_out = None - - def forward(self, encoder_out, encoder_padding_mask, decoder_h, attn_state): - """ - :param torch.Tensor encoder_out: padded encoder hidden state B x T x D - :param torch.Tensor encoder_padding_mask: encoder padding mask - :param torch.Tensor decoder_h: decoder hidden state B x D - :param torch.Tensor attn_prev: previous attention weight B x K x T - :return: attention weighted encoder state (B, D) - :rtype: torch.Tensor - :return: previous attention weights (B x T) - :rtype: torch.Tensor - """ - bsz, seq_len, _ = encoder_out.size() - if self.proj_enc_out is None: - self.proj_enc_out = self.proj_enc(encoder_out) - - # B x K x T -> B x C x T - attn = self.conv(attn_state) - # B x C x T -> B x T x C -> B x T x D - attn = self.proj_attn(attn.transpose(1, 2)) - - if decoder_h is None: - decoder_h = encoder_out.new_zeros(bsz, self.decoder_dim) - dec_h = self.proj_dec(decoder_h).view(bsz, 1, self.attn_dim) - - out = self.proj_out(attn + self.proj_enc_out + dec_h).squeeze(2) - out.masked_fill_(encoder_padding_mask, -float("inf")) - - w = F.softmax(self.scaling * out, dim=1) - c = torch.sum(encoder_out * w.view(bsz, seq_len, 1), dim=1) - return c, w diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/inference/advanced_infer.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/inference/advanced_infer.sh deleted file mode 100644 index 6bbd53454331f0bd5157aa4e38ae4d329fba05fd..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/inference/advanced_infer.sh +++ /dev/null @@ -1,22 +0,0 @@ -gender='male' -glowdir='../../checkpoints/glow/'$gender'/' -hifidir='../../checkpoints/hifi/'$gender'/' -device='cpu' -text='Hey mr. I am testing this one. Now on multiple sentences. Just want to see the flow.' 
-noise_scale='0.667' -length_scale='1.0' -transliteration=1 -number_conversion=1 -split_sentences=1 -lang='en' - - -timestamp=$(date +%s) -wav='../../results/'$gender'/' -wav_file=$wav/$timestamp'.wav' - - -mkdir -p $wav - -python ../../utils/inference/advanced_tts.py -a $glowdir -v $hifidir -d $device -t "$text" -w $wav_file -L $lang -n $noise_scale -l $length_scale -T $transliteration -N $number_conversion -S $split_sentences -echo "File saved at: "$wav_file diff --git a/spaces/HighCWu/GFPGAN-1.3/tests/test_utils.py b/spaces/HighCWu/GFPGAN-1.3/tests/test_utils.py deleted file mode 100644 index a963b3269dea05f9b7ec6c3db016e9a579c92fc8..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GFPGAN-1.3/tests/test_utils.py +++ /dev/null @@ -1,43 +0,0 @@ -import cv2 -from facexlib.utils.face_restoration_helper import FaceRestoreHelper - -from gfpgan.archs.gfpganv1_arch import GFPGANv1 -from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean -from gfpgan.utils import GFPGANer - - -def test_gfpganer(): - # initialize with the clean model - restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANCleanv1-NoCE-C2.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=None) - # test attribute - assert isinstance(restorer.gfpgan, GFPGANv1Clean) - assert isinstance(restorer.face_helper, FaceRestoreHelper) - - # initialize with the original model - restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.pth', - upscale=2, - arch='original', - channel_multiplier=1, - bg_upsampler=None) - # test attribute - assert isinstance(restorer.gfpgan, GFPGANv1) - assert isinstance(restorer.face_helper, FaceRestoreHelper) - - # ------------------ test enhance ---------------- # - img = cv2.imread('tests/data/gt/00000000.png', cv2.IMREAD_COLOR) - result = restorer.enhance(img, has_aligned=False, paste_back=True) - assert result[0][0].shape == (512, 512, 3) - assert result[1][0].shape == (512, 512, 3) - assert result[2].shape == (1024, 1024, 3) - - # with has_aligned=True - result = restorer.enhance(img, has_aligned=True, paste_back=False) - assert result[0][0].shape == (512, 512, 3) - assert result[1][0].shape == (512, 512, 3) - assert result[2] is None diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.eff0bbf7.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.eff0bbf7.js deleted file mode 100644 index fcc148a3990cab153a88ab1fdf736440d4e9ef90..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.eff0bbf7.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as m,i as b,s as g,I as p,O as h,c as v,m as T,L as k,j as u,k as f,o as q,F as C,p as S,u as j,q as w,r as D,K as E}from"./index.396f4a72.js";import{T as F}from"./Tabs.6b500f1a.js";import"./Column.06c172ac.js";function I(s){let e;const i=s[3].default,t=S(i,s,s[6],null);return{c(){t&&t.c()},m(n,_){t&&t.m(n,_),e=!0},p(n,_){t&&t.p&&(!e||_&64)&&j(t,i,n,n[6],e?D(i,n[6],_,null):w(n[6]),null)},i(n){e||(u(t,n),e=!0)},o(n){f(t,n),e=!1},d(n){t&&t.d(n)}}}function K(s){let e,i,t;function n(a){s[4](a)}let _={visible:s[1],elem_id:s[2],$$slots:{default:[I]},$$scope:{ctx:s}};return s[0]!==void 0&&(_.selected=s[0]),e=new F({props:_}),p.push(()=>h(e,"selected",n)),e.$on("change",s[5]),{c(){v(e.$$.fragment)},m(a,c){T(e,a,c),t=!0},p(a,[c]){const 
o={};c&2&&(o.visible=a[1]),c&4&&(o.elem_id=a[2]),c&64&&(o.$$scope={dirty:c,ctx:a}),!i&&c&1&&(i=!0,o.selected=a[0],k(()=>i=!1)),e.$set(o)},i(a){t||(u(e.$$.fragment,a),t=!0)},o(a){f(e.$$.fragment,a),t=!1},d(a){q(e,a)}}}function L(s,e,i){let{$$slots:t={},$$scope:n}=e;const _=C();let{visible:a=!0}=e,{elem_id:c=""}=e,{selected:o}=e;function r(l){o=l,i(0,o)}function d(l){E.call(this,s,l)}return s.$$set=l=>{"visible"in l&&i(1,a=l.visible),"elem_id"in l&&i(2,c=l.elem_id),"selected"in l&&i(0,o=l.selected),"$$scope"in l&&i(6,n=l.$$scope)},s.$$.update=()=>{s.$$.dirty&1&&_("prop_change",{selected:o})},[o,a,c,t,r,d,n]}class O extends m{constructor(e){super(),b(this,e,L,K,g,{visible:1,elem_id:2,selected:0})}}var G=O;const H=["static"];export{G as Component,H as modes}; -//# sourceMappingURL=index.eff0bbf7.js.map diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/__init__.py deleted file mode 100644 index 8b7eb2ec4fc5190c4dcdfe34b0259e6f448e18a9..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/__init__.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .dictionary import Dictionary, TruncatedDictionary - -from .fairseq_dataset import FairseqDataset, FairseqIterableDataset - -from .base_wrapper_dataset import BaseWrapperDataset - -from .add_target_dataset import AddTargetDataset -from .append_token_dataset import AppendTokenDataset -from .audio.raw_audio_dataset import BinarizedAudioDataset, FileAudioDataset -from .audio.hubert_dataset import HubertDataset -from .backtranslation_dataset import BacktranslationDataset -from .bucket_pad_length_dataset import BucketPadLengthDataset -from .colorize_dataset import ColorizeDataset -from .concat_dataset import ConcatDataset -from .concat_sentences_dataset import ConcatSentencesDataset -from .denoising_dataset import DenoisingDataset -from .id_dataset import IdDataset -from .indexed_dataset import ( - IndexedCachedDataset, - IndexedDataset, - IndexedRawTextDataset, - MMapIndexedDataset, -) -from .language_pair_dataset import LanguagePairDataset -from .list_dataset import ListDataset -from .lm_context_window_dataset import LMContextWindowDataset -from .lru_cache_dataset import LRUCacheDataset -from .mask_tokens_dataset import MaskTokensDataset -from .monolingual_dataset import MonolingualDataset -from .multi_corpus_sampled_dataset import MultiCorpusSampledDataset -from .nested_dictionary_dataset import NestedDictionaryDataset -from .noising import NoisingDataset -from .numel_dataset import NumelDataset -from .num_samples_dataset import NumSamplesDataset -from .offset_tokens_dataset import OffsetTokensDataset -from .pad_dataset import LeftPadDataset, PadDataset, RightPadDataset -from .prepend_dataset import PrependDataset -from .prepend_token_dataset import PrependTokenDataset -from .raw_label_dataset import RawLabelDataset -from .replace_dataset import ReplaceDataset -from .resampling_dataset import ResamplingDataset -from .roll_dataset import RollDataset -from .round_robin_zip_datasets import RoundRobinZipDatasets -from .sort_dataset import SortDataset -from .strip_token_dataset import StripTokenDataset -from .subsample_dataset import SubsampleDataset -from .token_block_dataset import TokenBlockDataset -from .transform_eos_dataset import TransformEosDataset -from 
.transform_eos_lang_pair_dataset import TransformEosLangPairDataset -from .shorten_dataset import TruncateDataset, RandomCropDataset -from .multilingual.sampled_multi_dataset import SampledMultiDataset -from .multilingual.sampled_multi_epoch_dataset import SampledMultiEpochDataset -from .fasta_dataset import FastaDataset, EncodedFastaDataset - -from .iterators import ( - CountingIterator, - EpochBatchIterator, - GroupedIterator, - ShardedIterator, -) - -__all__ = [ - "AddTargetDataset", - "AppendTokenDataset", - "BacktranslationDataset", - "BaseWrapperDataset", - "BinarizedAudioDataset", - "BucketPadLengthDataset", - "ColorizeDataset", - "ConcatDataset", - "ConcatSentencesDataset", - "CountingIterator", - "DenoisingDataset", - "Dictionary", - "EncodedFastaDataset", - "EpochBatchIterator", - "FairseqDataset", - "FairseqIterableDataset", - "FastaDataset", - "FileAudioDataset", - "GroupedIterator", - "HubertDataset", - "IdDataset", - "IndexedCachedDataset", - "IndexedDataset", - "IndexedRawTextDataset", - "LanguagePairDataset", - "LeftPadDataset", - "ListDataset", - "LMContextWindowDataset", - "LRUCacheDataset", - "MaskTokensDataset", - "MMapIndexedDataset", - "MonolingualDataset", - "MultiCorpusSampledDataset", - "NestedDictionaryDataset", - "NoisingDataset", - "NumelDataset", - "NumSamplesDataset", - "OffsetTokensDataset", - "PadDataset", - "PrependDataset", - "PrependTokenDataset", - "RandomCropDataset", - "RawLabelDataset", - "ResamplingDataset", - "ReplaceDataset", - "RightPadDataset", - "RollDataset", - "RoundRobinZipDatasets", - "SampledMultiDataset", - "SampledMultiEpochDataset", - "ShardedIterator", - "SortDataset", - "StripTokenDataset", - "SubsampleDataset", - "TokenBlockDataset", - "TransformEosDataset", - "TransformEosLangPairDataset", - "TruncateDataset", - "TruncatedDictionary", -] diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/replace_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/replace_dataset.py deleted file mode 100644 index 5aac2ba96bee0a8bb65f4c9e56fa0b17248ee1d9..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/replace_dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class ReplaceDataset(BaseWrapperDataset): - """Replaces tokens found in the dataset by a specified replacement token - - Args: - dataset (~torch.utils.data.Dataset): dataset to replace tokens in - replace_map(Dictionary[int,int]): map of token to replace -> replacement token - offsets (List[int]): do not replace tokens before (from left if pos, right if neg) this offset. should be - as many as the number of objects returned by the underlying dataset __getitem__ method. 
- """ - - def __init__(self, dataset, replace_map, offsets): - super().__init__(dataset) - assert len(replace_map) > 0 - self.replace_map = replace_map - self.offsets = offsets - - def __getitem__(self, index): - item = self.dataset[index] - is_tuple = isinstance(item, tuple) - srcs = item if is_tuple else [item] - - for offset, src in zip(self.offsets, srcs): - for k, v in self.replace_map.items(): - src_off = src[offset:] if offset >= 0 else src[:offset] - src_off.masked_fill_(src_off == k, v) - - item = srcs if is_tuple else srcs[0] - return item diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/README.md b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/README.md deleted file mode 100644 index d78324b4c8e9405f388091310227d51d1ead5712..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/README.md +++ /dev/null @@ -1,162 +0,0 @@ -📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021. - -- [About Weights & Biases](#about-weights-&-biases) -- [First-Time Setup](#first-time-setup) -- [Viewing runs](#viewing-runs) -- [Disabling wandb](#disabling-wandb) -- [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage) -- [Reports: Share your work with the world!](#reports) - -## About Weights & Biases - -Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models — architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions. - -Used by top researchers including teams at OpenAI, Lyft, Github, and MILA, W&B is part of the new standard of best practices for machine learning. How W&B can help you optimize your machine learning workflows: - -- [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time -- [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically -- [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization -- [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators -- [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently -- [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models - -## First-Time Setup - -
When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device.

W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run will be provided a unique run **name** within that project as project/name. You can also manually set your project and run name as:

```shell
$ python train.py --project ... --name ...
```

YOLOv5 notebook example: Open In Colab Open In Kaggle

*(Screenshot placeholder: Screen Shot 2021-09-29 at 10 23 13 PM)*
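If you prefer to install and authenticate ahead of time instead of waiting for the prompt, the usual flow with the `wandb` CLI looks roughly like the sketch below. Only `--project`/`--name` come from this guide; the other training flags are the standard YOLOv5 `train.py` arguments and are shown purely as an illustration.

```shell
# Install the W&B client and log in once per machine;
# paste the API key from https://wandb.ai/authorize when prompted.
$ pip install wandb
$ wandb login

# Start a tracked training run, optionally choosing the project and run name
# (illustrative flags; any YOLOv5 training command works the same way).
$ python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --epochs 3 --project YOLOv5 --name first-run
```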
## Viewing Runs

Run information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in real time. All important information is logged:

- Training & Validation losses
- Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95
- Learning Rate over time
- A bounding box debugging panel, showing the training progress over time
- GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage**
- System: Disk I/O, CPU utilization, RAM memory usage
- Your trained model as W&B Artifact
- Environment: OS and Python types, Git repository and state, **training command**

*(Screenshot placeholder: Weights & Biases dashboard)*
## Disabling wandb

- Training after running `wandb disabled` inside that directory creates no wandb run.
  ![Screenshot (84)](https://user-images.githubusercontent.com/15766192/143441777-c780bdd7-7cb4-4404-9559-b4316030a985.png)
- To enable wandb again, run `wandb online`.
  ![Screenshot (85)](https://user-images.githubusercontent.com/15766192/143441866-7191b2cb-22f0-4e0f-ae64-2dc47dc13078.png)

## Advanced Usage

You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.

### 1: Train and Log Evaluation simultaneously
This is an extension of the previous section, but it will also train after uploading the dataset and log an Evaluation Table. The Evaluation Table compares your predictions and ground truths across the validation set for each epoch. It uses references to the already uploaded datasets, so no images will be uploaded from your system more than once.
**Usage**: `$ python train.py --upload_data val`

![Screenshot from 2021-11-21 17-40-06](https://user-images.githubusercontent.com/15766192/142761183-c1696d8c-3f38-45ab-991a-bb0dfd98ae7d.png)

### 2: Visualize and Version Datasets
Log, visualize, dynamically query, and understand your data with W&B Tables. You can use the following command to log your dataset as a W&B Table. This will generate a `{dataset}_wandb.yaml` file which can be used to train from the dataset artifact.
**Usage**: `$ python utils/logger/wandb/log_dataset.py --project ... --name ... --data ..`

![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png)

### 3: Train using dataset artifact
When you upload a dataset as described in the first section, you get a new config file with `_wandb` added to its name. This file contains the information that can be used to train a model directly from the dataset artifact. This also logs the evaluation table.
**Usage**: `$ python train.py --data {data}_wandb.yaml`

![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)

### 4: Save model checkpoints as artifacts
To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` represents the checkpoint interval. You can also log both the dataset and model checkpoints simultaneously. If not passed, only the final model will be logged.
**Usage**: `$ python train.py --save_period 1`

![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png)

### 5: Resume runs from checkpoint artifacts
Any run can be resumed using artifacts if the `--resume` argument starts with the `wandb-artifact://` prefix followed by the run path, i.e. `wandb-artifact://username/project/runid`. This doesn't require the model checkpoint to be present on the local system.
**Usage**: `$ python train.py --resume wandb-artifact://{run_path}`

![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)

### 6: Resume runs from dataset artifact & checkpoint artifacts
Local dataset or model checkpoints are not required, so this can be used to resume runs directly on a different device. The syntax is the same as the previous section, but you'll need to log both the dataset and model checkpoints as artifacts, i.e. either set `--upload_dataset` or train from a `_wandb.yaml` file, and set `--save_period`.
**Usage**: `$ python train.py --resume wandb-artifact://{run_path}`

![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)

## Reports
W&B Reports can be created from your saved runs for sharing online. Once a report is created you will receive a link you can use to publicly share your results. Here is an example report created from the COCO128 tutorial trainings of all four YOLOv5 models ([link](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY)).

*(Screenshot placeholder: Weights & Biases Reports)*

## Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

- **Google Colab and Kaggle** notebooks with free GPU: Open In Colab Open In Kaggle
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) Docker Pulls

## Status

![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), validation ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
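Tying the Advanced Usage examples above together, a typical end-to-end artifact workflow might look like the following sketch. Only `--upload_data val`, `--save_period`, and `--resume wandb-artifact://{run_path}` come from this guide; the remaining flags are the usual YOLOv5 `train.py` arguments and are included only as an illustration.

```shell
# 1. Train while uploading the validation data and logging checkpoints as W&B artifacts
#    (illustrative dataset/weights flags; any YOLOv5 training command works the same way).
$ python train.py --data coco128.yaml --weights yolov5s.pt --upload_data val --save_period 1

# 2. Resume that run later, on any machine, directly from the logged artifacts
#    (replace {run_path} with the actual run path, e.g. username/project/runid).
$ python train.py --resume wandb-artifact://{run_path}
```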
diff --git a/spaces/Illumotion/Koboldcpp/convert.py b/spaces/Illumotion/Koboldcpp/convert.py deleted file mode 100644 index e9b08d344f5bd5ff182341cf74b3f486afc8257d..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/convert.py +++ /dev/null @@ -1,1193 +0,0 @@ -#!/usr/bin/env python3 -from __future__ import annotations - -import argparse -import concurrent.futures -import copy -import enum -import faulthandler -import functools -import io -import itertools -import json -import math -import mmap -import pickle -import re -import signal -import struct -import sys -import time -import zipfile -from abc import ABCMeta, abstractmethod -from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor -from dataclasses import dataclass -from pathlib import Path -from typing import IO, TYPE_CHECKING, Any, Callable, Generator, Iterable, Literal, Sequence, TypeVar - -import numpy as np -from sentencepiece import SentencePieceProcessor # type: ignore[import] - -import os -if 'NO_LOCAL_GGUF' not in os.environ: - sys.path.insert(1, str(Path(__file__).parent / 'gguf-py' / 'gguf')) -import gguf - -if TYPE_CHECKING: - from typing import TypeAlias - -if hasattr(faulthandler, 'register') and hasattr(signal, 'SIGUSR1'): - faulthandler.register(signal.SIGUSR1) - -NDArray: TypeAlias = 'np.ndarray[Any, Any]' - -ARCH = gguf.MODEL_ARCH.LLAMA - -DEFAULT_CONCURRENCY = 8 -# -# data types -# - -@dataclass(frozen=True) -class DataType: - name: str - dtype: np.dtype[Any] - valid_conversions: list[str] - - def elements_to_bytes(self, n_elements: int) -> int: - return n_elements * self.dtype.itemsize - -@dataclass(frozen=True) -class UnquantizedDataType(DataType): - pass - -DT_F16 = UnquantizedDataType('F16', dtype = np.dtype(np.float16), valid_conversions = ['F32', 'Q8_0']) -DT_F32 = UnquantizedDataType('F32', dtype = np.dtype(np.float32), valid_conversions = ['F16', 'Q8_0']) -DT_I32 = UnquantizedDataType('I32', dtype = np.dtype(np.int16), valid_conversions = []) -DT_BF16 = UnquantizedDataType('BF16', dtype = np.dtype(np.uint16), valid_conversions = ['F32', 'F16', 'Q8_0']) - -@dataclass(frozen=True) -class QuantizedDataType(DataType): - block_size: int - quantized_dtype: np.dtype[Any] - ggml_type: gguf.GGMLQuantizationType - - def quantize(self, arr: NDArray) -> NDArray: - raise NotImplementedError(f'Quantization for {self.name} not implemented') - - def elements_to_bytes(self, n_elements: int) -> int: - assert n_elements % self.block_size == 0, f'Invalid number of elements {n_elements} for {self.name} with block size {self.block_size}' - return self.quantized_dtype.itemsize * (n_elements // self.block_size) - -@dataclass(frozen=True) -class Q8_0QuantizedDataType(QuantizedDataType): - # Mini Q8_0 quantization in Python! 
- def quantize(self, arr: NDArray) -> NDArray: - assert arr.size % self.block_size == 0 and arr.size != 0, f'Bad array size {arr.size}' - assert arr.dtype == np.float32, f'Bad array type {arr.dtype}' - n_blocks = arr.size // self.block_size - blocks = arr.reshape((n_blocks, self.block_size)) - # Much faster implementation of block quantization contributed by @Cebtenzzre - def quantize_blocks_q8_0(blocks: NDArray) -> Iterable[tuple[Any, Any]]: - d = abs(blocks).max(axis = 1) / np.float32(127) - with np.errstate(divide = 'ignore'): - qs = (blocks / d[:, None]).round() - qs[d == 0] = 0 - yield from zip(d, qs) - return np.fromiter(quantize_blocks_q8_0(blocks), count = n_blocks, dtype = self.quantized_dtype) - -DT_Q8_0 = Q8_0QuantizedDataType('Q8_0', - dtype = np.dtype(np.float32), valid_conversions = [], - ggml_type = gguf.GGMLQuantizationType.Q8_0, block_size = 32, - quantized_dtype = np.dtype([('d', ' DataType: - dt = GGML_FILE_TYPE_TO_DATA_TYPE.get(self) - if dt is None: - raise ValueError(self) - # 1D tensors are always F32. - return dt if len(tensor.shape) > 1 else DT_F32 - -GGML_FILE_TYPE_TO_DATA_TYPE: dict[GGMLFileType, DataType] = { - GGMLFileType.AllF32 : DT_F32, - GGMLFileType.MostlyF16 : DT_F16, - GGMLFileType.MostlyQ8_0: DT_Q8_0, -} - -# -# hparams loading -# - -@dataclass -class Params: - n_vocab: int - n_embd: int - n_layer: int - n_ctx: int - n_ff: int - n_head: int - n_head_kv: int - f_norm_eps: float - - f_rope_freq_base: float | None = None - f_rope_scale: float | None = None - - ftype: GGMLFileType | None = None - - # path to the directory containing the model files - path_model: Path | None = None - - @staticmethod - def guessed(model: LazyModel) -> Params: - # try transformer naming first - n_vocab, n_embd = model["model.embed_tokens.weight"].shape if "model.embed_tokens.weight" in model else model["tok_embeddings.weight"].shape - - # try transformer naming first - if "model.layers.0.self_attn.q_proj.weight" in model: - n_layer=next(i for i in itertools.count() if f"model.layers.{i}.self_attn.q_proj.weight" not in model) - elif "model.layers.0.self_attn.W_pack.weight" in model: # next: try baichuan naming - n_layer=next(i for i in itertools.count() if f"model.layers.{i}.self_attn.W_pack.weight" not in model) - else: - n_layer=next(i for i in itertools.count() if f"layers.{i}.attention.wq.weight" not in model) - - if n_layer < 1: - raise Exception("failed to guess 'n_layer'. 
This model is unknown or unsupported.\n" - "Suggestion: provide 'config.json' of the model in the same directory containing model files.") - - n_head = n_embd // 128 # guessed - n_mult = 256 # guessed - - # TODO: verify this - n_ff = int(2 * (4 * n_embd) / 3) - n_ff = n_mult * ((n_ff + n_mult - 1) // n_mult) - - return Params( - n_vocab = n_vocab, - n_embd = n_embd, - n_layer = n_layer, - n_ctx = -1, - n_ff = n_ff, - n_head = n_head, - n_head_kv = n_head, - f_norm_eps = 1e-5, - ) - - @staticmethod - def loadHFTransformerJson(model: LazyModel, config_path: Path) -> Params: - config = json.load(open(config_path)) - - n_vocab = config["vocab_size"] - n_embd = config["hidden_size"] - n_layer = config["num_hidden_layers"] - n_ff = config["intermediate_size"] - n_head = config["num_attention_heads"] - n_head_kv = config["num_key_value_heads"] if "num_key_value_heads" in config else n_head - f_norm_eps = config["rms_norm_eps"] - f_rope_freq_base = config["rope_theta"] if "rope_theta" in config else None - - rope_scaling = config.get("rope_scaling") - if isinstance(rope_scaling, dict) and rope_scaling.get("type") == "linear": - f_rope_scale = config["rope_scaling"].get("factor") - else: - f_rope_scale = None - - if "max_sequence_length" in config: - n_ctx = config["max_sequence_length"] - elif "max_position_embeddings" in config: - n_ctx = config["max_position_embeddings"] - else: - raise Exception("failed to guess 'n_ctx'. This model is unknown or unsupported.\n" - "Suggestion: provide 'config.json' of the model in the same directory containing model files.") - - return Params( - n_vocab = n_vocab, - n_embd = n_embd, - n_layer = n_layer, - n_ctx = n_ctx, - n_ff = n_ff, - n_head = n_head, - n_head_kv = n_head_kv, - f_norm_eps = f_norm_eps, - f_rope_freq_base = f_rope_freq_base, - f_rope_scale = f_rope_scale, - ) - - # LLaMA v2 70B params.json - # {"dim": 8192, "multiple_of": 4096, "ffn_dim_multiplier": 1.3, "n_heads": 64, "n_kv_heads": 8, "n_layers": 80, "norm_eps": 1e-05, "vocab_size": -1} - @staticmethod - def loadOriginalParamsJson(model: LazyModel, config_path: Path) -> Params: - config = json.load(open(config_path)) - - n_vocab = config["vocab_size"] if "vocab_size" in config else -1 - n_embd = config["dim"] - n_layer = config["n_layers"] - n_ff = -1 - n_head = config["n_heads"] - n_head_kv = config["n_kv_heads"] if "n_kv_heads" in config else n_head - f_norm_eps = config["norm_eps"] - f_rope_freq_base = config["rope_theta"] if "rope_theta" in config else None - - # hack to determine LLaMA v1 vs v2 vs CodeLlama - if f_rope_freq_base == 1000000: - # CodeLlama - n_ctx = 16384 - elif config["norm_eps"] == 1e-05: - # LLaMA v2 - n_ctx = 4096 - else: - # LLaMA v1 - n_ctx = 2048 - - if n_vocab == -1: - n_vocab = model["tok_embeddings.weight"].shape[0] - - if n_ff == -1: - n_ff = model["layers.0.feed_forward.w1.weight"].shape[0] - - return Params( - n_vocab = n_vocab, - n_embd = n_embd, - n_layer = n_layer, - n_ctx = n_ctx, - n_ff = n_ff, - n_head = n_head, - n_head_kv = n_head_kv, - f_norm_eps = f_norm_eps, - f_rope_freq_base = f_rope_freq_base, - ) - - @staticmethod - def load(model_plus: ModelPlus) -> Params: - hf_config_path = model_plus.paths[0].parent / "config.json" - orig_config_path = model_plus.paths[0].parent / "params.json" - - if hf_config_path.exists(): - params = Params.loadHFTransformerJson(model_plus.model, hf_config_path) - elif orig_config_path.exists(): - params = Params.loadOriginalParamsJson(model_plus.model, orig_config_path) - elif model_plus.format != 'none': - params = 
Params.guessed(model_plus.model) - else: - raise ValueError('Cannot guess params when model format is none') - - params.path_model = model_plus.paths[0].parent - - return params - - -# -# vocab -# - -class BpeVocab: - def __init__(self, fname_tokenizer: Path, fname_added_tokens: Path | None) -> None: - self.bpe_tokenizer = json.loads(open(str(fname_tokenizer), encoding="utf-8").read()) - added_tokens: dict[str, int] - if fname_added_tokens is not None: - # FIXME: Verify that added tokens here _cannot_ overlap with the main vocab. - added_tokens = json.load(open(fname_added_tokens, encoding="utf-8")) - else: - # Fall back to trying to find the added tokens in tokenizer.json - tokenizer_json_file = fname_tokenizer.parent / 'tokenizer.json' - if not tokenizer_json_file.is_file(): - added_tokens = {} - else: - tokenizer_json = json.load(open(tokenizer_json_file, encoding="utf-8")) - added_tokens = dict( - (item['content'], item['id']) - for item in tokenizer_json.get('added_tokens', []) - # Added tokens here can be duplicates of the main vocabulary. - if item['content'] not in self.bpe_tokenizer ) - - vocab_size: int = len(self.bpe_tokenizer) - expected_ids = list(range(vocab_size, vocab_size + len(added_tokens))) - actual_ids = sorted(added_tokens.values()) - if expected_ids != actual_ids: - expected_end_id = vocab_size + len(actual_ids) - 1 - raise Exception(f"Expected the {len(actual_ids)} added token ID(s) to be sequential in the range {vocab_size} - {expected_end_id}; got {actual_ids}") - - items = sorted(added_tokens.items(), key=lambda text_idx: text_idx[1]) - self.added_tokens_list = [text for (text, idx) in items] - self.vocab_size_base: int = vocab_size - self.vocab_size: int = self.vocab_size_base + len(self.added_tokens_list) - self.fname_tokenizer = fname_tokenizer - self.fname_added_tokens = fname_added_tokens - - def bpe_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]: - tokenizer = self.bpe_tokenizer - from transformers.models.gpt2 import tokenization_gpt2 # type: ignore[import] - reverse_vocab = {id: encoded_tok for encoded_tok, id in tokenizer.items()} - - for i, _ in enumerate(tokenizer): - yield reverse_vocab[i], 0.0, gguf.TokenType.NORMAL - - def added_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]: - for text in self.added_tokens_list: - score = -1000.0 - yield text.encode("utf-8"), score, gguf.TokenType.CONTROL - - def all_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]: - yield from self.bpe_tokens() - yield from self.added_tokens() - - def __repr__(self) -> str: - return f"" - - -class SentencePieceVocab: - def __init__(self, fname_tokenizer: Path, fname_added_tokens: Path | None) -> None: - self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer)) - added_tokens: dict[str, int] - if fname_added_tokens is not None: - added_tokens = json.load(open(fname_added_tokens, encoding="utf-8")) - else: - added_tokens = {} - - vocab_size: int = self.sentencepiece_tokenizer.vocab_size() - expected_ids = list(range(vocab_size, vocab_size + len(added_tokens))) - actual_ids = sorted(added_tokens.values()) - if expected_ids != actual_ids: - raise Exception(f"Expected added token IDs to be sequential and start at {len(added_tokens)}; got {actual_ids}") - - items = sorted(added_tokens.items(), key=lambda text_idx: text_idx[1]) - self.added_tokens_list = [text for (text, idx) in items] - self.vocab_size_base: int = vocab_size - self.vocab_size: int = self.vocab_size_base + len(self.added_tokens_list) - self.fname_tokenizer = 
fname_tokenizer - self.fname_added_tokens = fname_added_tokens - - def sentencepiece_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]: - tokenizer = self.sentencepiece_tokenizer - for i in range(tokenizer.vocab_size()): - piece = tokenizer.id_to_piece(i) - text: bytes = piece.encode("utf-8") - score: float = tokenizer.get_score(i) - - toktype = gguf.TokenType.NORMAL - if tokenizer.is_unknown(i): - toktype = gguf.TokenType.UNKNOWN - if tokenizer.is_control(i): - toktype = gguf.TokenType.CONTROL - - # NOTE: I think added_tokens are user defined. - # ref: https://github.com/google/sentencepiece/blob/master/src/sentencepiece_model.proto - # if tokenizer.is_user_defined(i): toktype = gguf.TokenType.USER_DEFINED - - if tokenizer.is_unused(i): - toktype = gguf.TokenType.UNUSED - if tokenizer.is_byte(i): - toktype = gguf.TokenType.BYTE - - yield text, score, toktype - - def added_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]: - for text in self.added_tokens_list: - score = -1000.0 - yield text.encode("utf-8"), score, gguf.TokenType.USER_DEFINED - - def all_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]: - yield from self.sentencepiece_tokens() - yield from self.added_tokens() - - def __repr__(self) -> str: - return f"" - -Vocab: TypeAlias = 'BpeVocab | SentencePieceVocab' - -# -# data loading -# TODO: reuse (probably move to gguf.py?) -# - -def permute(weights: NDArray, n_head: int, n_head_kv: int) -> NDArray: - #print( "permute debug " + str(weights.shape[0]) + " x " + str(weights.shape[1]) + " nhead " + str(n_head) + " nheadkv " + str(n_kv_head) ) - if n_head_kv is not None and n_head != n_head_kv: - n_head = n_head_kv - return (weights.reshape(n_head, 2, weights.shape[0] // n_head // 2, *weights.shape[1:]) - .swapaxes(1, 2) - .reshape(weights.shape)) - - -class Tensor(metaclass=ABCMeta): - data_type: DataType - - @abstractmethod - def astype(self, data_type: DataType) -> Tensor: ... - @abstractmethod - def permute(self, n_head: int, n_head_kv: int) -> Tensor: ... - @abstractmethod - def permute_part(self, n_part: int, n_head: int, n_head_kv: int) -> UnquantizedTensor: ... - @abstractmethod - def part(self, n_part: int) -> UnquantizedTensor: ... - @abstractmethod - def to_ggml(self) -> GGMLCompatibleTensor: ... 
- - -def bf16_to_fp32(bf16_arr: np.ndarray[Any, np.dtype[np.uint16]]) -> NDArray: - assert bf16_arr.dtype == np.uint16, f"Input array should be of dtype uint16, but got {bf16_arr.dtype}" - fp32_arr = bf16_arr.astype(np.uint32) << 16 - return fp32_arr.view(np.float32) - - -class UnquantizedTensor(Tensor): - def __init__(self, ndarray: NDArray) -> None: - assert isinstance(ndarray, np.ndarray) - self.ndarray = ndarray - self.data_type = NUMPY_TYPE_TO_DATA_TYPE[ndarray.dtype] - - def astype(self, data_type: DataType) -> Tensor: - dtype = data_type.dtype - if self.data_type == DT_BF16: - self.ndarray = bf16_to_fp32(self.ndarray) - return UnquantizedTensor(self.ndarray.astype(dtype)) - - def to_ggml(self) -> UnquantizedTensor: - return self - - def permute_part(self, n_part: int, n_head: int, n_head_kv: int) -> UnquantizedTensor: - r = self.ndarray.shape[0] // 3 - return UnquantizedTensor(permute(self.ndarray[r * n_part : r * n_part + r, ...], n_head, n_head_kv)) - - def part(self, n_part: int) -> UnquantizedTensor: - r = self.ndarray.shape[0] // 3 - return UnquantizedTensor(self.ndarray[r * n_part : r * n_part + r, ...]) - - def permute(self, n_head: int, n_head_kv: int) -> UnquantizedTensor: - return UnquantizedTensor(permute(self.ndarray, n_head, n_head_kv)) - - -def load_unquantized(lazy_tensor: LazyTensor, expected_dtype: Any = None, convert: bool = False) -> NDArray: - tensor = lazy_tensor.load() - assert isinstance(tensor, UnquantizedTensor) - - # double-check: - actual_shape = list(tensor.ndarray.shape) - assert actual_shape == lazy_tensor.shape, (actual_shape, lazy_tensor.shape) - if expected_dtype is not None and expected_dtype != tensor.ndarray.dtype: - if convert: - tensor.ndarray = tensor.ndarray.astype(expected_dtype) - else: - raise ValueError(f'expected this tensor to have dtype {expected_dtype}, got {tensor.ndarray.dtype}') - - return tensor.ndarray - - -GGMLCompatibleTensor = UnquantizedTensor - - -@dataclass -class LazyTensor: - _load: Callable[[], Tensor] - shape: list[int] - data_type: DataType - description: str - - def load(self) -> Tensor: - ret = self._load() - # Should be okay if it maps to the same numpy type? - assert ret.data_type == self.data_type or (self.data_type.dtype == ret.data_type.dtype), \ - (self.data_type, ret.data_type, self.description) - return ret - - def astype(self, data_type: DataType) -> LazyTensor: - self.validate_conversion_to(data_type) - - def load() -> Tensor: - return self.load().astype(data_type) - return LazyTensor(load, self.shape, data_type, f'convert({data_type}) {self.description}') - - def validate_conversion_to(self, data_type: DataType) -> None: - if data_type != self.data_type and data_type.name not in self.data_type.valid_conversions: - raise ValueError(f'Cannot validate conversion from {self.data_type} to {data_type}.') - - -LazyModel: TypeAlias = 'dict[str, LazyTensor]' - - -@dataclass -class ModelPlus: - model: LazyModel - paths: list[Path] # Where this was read from. - format: Literal['ggml', 'torch', 'safetensors', 'none'] - vocab: Vocab | None # For GGML models (which have vocab built in), the vocab. - - -def merge_sharded(models: list[LazyModel]) -> LazyModel: - # Original LLaMA models have each file contain one part of each tensor. - # Use a dict instead of a set to preserve order. 
- names = {name: None for model in models for name in model} - - def convert(name: str) -> LazyTensor: - lazy_tensors: list[LazyTensor] = [model[name] for model in models] - if len(lazy_tensors) == 1: - # only one file; don't go through this procedure since there might - # be quantized tensors - return lazy_tensors[0] - if len(lazy_tensors[0].shape) == 1: - # the tensor is just duplicated in every file - return lazy_tensors[0] - if name.startswith('tok_embeddings.') or \ - name.endswith('.attention.wo.weight') or \ - name.endswith('.feed_forward.w2.weight'): - # split by columns - axis = 1 - else: - # split by rows - axis = 0 - concatenated_shape = list(lazy_tensors[0].shape) - concatenated_shape[axis] = sum(tensor.shape[axis] for tensor in lazy_tensors) - - def load() -> UnquantizedTensor: - ndarrays = [load_unquantized(tensor) for tensor in lazy_tensors] - concatenated: NDArray = np.concatenate(ndarrays, axis=axis) - return UnquantizedTensor(concatenated) - description = 'concatenated[[' + '] | ['.join(lt.description for lt in lazy_tensors) + ']]' - return LazyTensor(load, concatenated_shape, lazy_tensors[0].data_type, description) - return {name: convert(name) for name in names} - - -def merge_multifile_models(models_plus: list[ModelPlus]) -> ModelPlus: - formats = set(mp.format for mp in models_plus) - assert len(formats) == 1, "different formats?" - format = formats.pop() - paths = [path for mp in models_plus for path in mp.paths] - # Use the first non-None vocab, if any. - try: - vocab = next(mp.vocab for mp in models_plus if mp.vocab is not None) - except StopIteration: - vocab = None - - if any("model.embed_tokens.weight" in mp.model for mp in models_plus): - # Transformers models put different tensors in different files, but - # don't split indivdual tensors between files. - model: LazyModel = {} - for mp in models_plus: - model.update(mp.model) - else: - model = merge_sharded([mp.model for mp in models_plus]) - - return ModelPlus(model, paths, format, vocab) - - -def permute_lazy(lazy_tensor: LazyTensor, n_head: int, n_head_kv: int) -> LazyTensor: - def load() -> Tensor: - return lazy_tensor.load().permute(n_head, n_head_kv) - return LazyTensor(load, lazy_tensor.shape, lazy_tensor.data_type, f'permute({n_head}, {n_head_kv}) ' + lazy_tensor.description) - -def permute_part_lazy(lazy_tensor: LazyTensor, n_part: int, n_head: int, n_head_kv: int) -> LazyTensor: - def load() -> Tensor: - return lazy_tensor.load().permute_part(n_part, n_head, n_head_kv) - s = lazy_tensor.shape.copy() - s[0] = s[0] // 3 - return LazyTensor(load, s, lazy_tensor.data_type, f'permute({n_head}, {n_head_kv}) ' + lazy_tensor.description) - -def part_lazy(lazy_tensor: LazyTensor, n_part: int) -> LazyTensor: - def load() -> Tensor: - return lazy_tensor.load().part(n_part) - s = lazy_tensor.shape.copy() - s[0] = s[0] // 3 - return LazyTensor(load, s, lazy_tensor.data_type, 'part ' + lazy_tensor.description) - - -# Functionality that simulates `torch.load` but where individual tensors are -# only loaded into memory on demand, not all at once. -# PyTorch can't do this natively as of time of writing: -# - https://github.com/pytorch/pytorch/issues/64327 -# This allows us to de-shard without multiplying RAM usage, and also -# conveniently drops the PyTorch dependency (though we still need numpy). 
- - -@dataclass -class LazyStorageKind: - data_type: DataType - - -@dataclass -class LazyStorage: - load: Callable[[int, int], NDArray] - kind: LazyStorageKind - description: str - - -class LazyUnpickler(pickle.Unpickler): - def __init__(self, fp: IO[bytes], data_base_path: str, zip_file: zipfile.ZipFile): - super().__init__(fp) - self.data_base_path = data_base_path - self.zip_file = zip_file - - def persistent_load(self, pid: Any) -> Any: - assert pid[0] == 'storage' - assert isinstance(pid[1], LazyStorageKind) - data_type = pid[1].data_type - filename_stem = pid[2] - filename = f'{self.data_base_path}/{filename_stem}' - info = self.zip_file.getinfo(filename) - - def load(offset: int, elm_count: int) -> NDArray: - dtype = data_type.dtype - fp = self.zip_file.open(info) - fp.seek(offset * dtype.itemsize) - size = elm_count * dtype.itemsize - data = fp.read(size) - assert len(data) == size - return np.frombuffer(data, dtype) - description = f'storage data_type={data_type} path-in-zip={filename} path={self.zip_file.filename}' - return LazyStorage(load=load, kind=pid[1], description=description) - - @staticmethod - def lazy_rebuild_tensor_v2(storage: Any, storage_offset: Any, size: Any, stride: Any, - requires_grad: Any, backward_hooks: Any, metadata: Any = None) -> LazyTensor: - assert isinstance(storage, LazyStorage) - - def load() -> UnquantizedTensor: - elm_count = stride[0] * size[0] - return UnquantizedTensor(storage.load(storage_offset, elm_count).reshape(size)) - description = f'pickled storage_offset={storage_offset} in {storage.description}' - return LazyTensor(load, list(size), storage.kind.data_type, description) - - @staticmethod - def rebuild_from_type_v2(func, new_type, args, state): - return func(*args) - - CLASSES: dict[tuple[str, str], Any] = { - # getattr used here as a workaround for mypy not being smart enough to detrmine - # the staticmethods have a __func__ attribute. 
- ('torch._tensor', '_rebuild_from_type_v2'): getattr(rebuild_from_type_v2, '__func__'), - ('torch._utils', '_rebuild_tensor_v2'): getattr(lazy_rebuild_tensor_v2, '__func__'), - ('torch', 'BFloat16Storage'): LazyStorageKind(DT_BF16), - ('torch', 'HalfStorage'): LazyStorageKind(DT_F16), - ('torch', 'FloatStorage'): LazyStorageKind(DT_F32), - ('torch', 'IntStorage'): LazyStorageKind(DT_I32), - ('torch', 'Tensor'): LazyTensor, - } - - def find_class(self, module: str, name: str) -> Any: - if not module.startswith('torch'): - return super().find_class(module, name) - return self.CLASSES[(module, name)] - - -def lazy_load_torch_file(outer_fp: IO[bytes], path: Path) -> ModelPlus: - zf = zipfile.ZipFile(outer_fp) - pickle_paths = [name for name in zf.namelist() if name.endswith('.pkl')] - assert len(pickle_paths) == 1, pickle_paths - pickle_fp = zf.open(pickle_paths[0], 'r') - unpickler = LazyUnpickler(pickle_fp, - data_base_path=pickle_paths[0][:-4], - zip_file=zf) - model = unpickler.load() - as_dict = dict(model.items()) - return ModelPlus(model=as_dict, paths=[path], format='torch', vocab=None) - - -def lazy_load_safetensors_file(fp: IO[bytes], path: Path) -> ModelPlus: - header_size, = struct.unpack(' LazyTensor: - data_type = SAFETENSORS_DATA_TYPES[info['dtype']] - numpy_dtype = data_type.dtype - shape: list[int] = info['shape'] - begin, end = info['data_offsets'] - assert 0 <= begin <= end <= len(byte_buf) - assert end - begin == math.prod(shape) * numpy_dtype.itemsize - buf = byte_buf[begin:end] - - def load() -> UnquantizedTensor: - return UnquantizedTensor(np.frombuffer(buf, dtype=numpy_dtype).reshape(shape)) - description = f'safetensors begin={begin} end={end} type={data_type} path={path}' - return LazyTensor(load, shape, data_type, description) - model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'} - return ModelPlus(model=model, paths=[path], format='safetensors', vocab=None) - - -def must_read(fp: IO[bytes], length: int) -> bytes: - ret = fp.read(length) - if len(ret) < length: - raise Exception("unexpectedly reached end of file") - return ret - - -@functools.lru_cache(maxsize=None) -def lazy_load_file(path: Path) -> ModelPlus: - fp = open(path, 'rb') - first8 = fp.read(8) - fp.seek(0) - if first8[:2] == b'PK': - # A zip file, i.e. PyTorch format - return lazy_load_torch_file(fp, path) - elif struct.unpack(' Iterable[Out]: - '''Parallel map, but with backpressure. If the caller doesn't call `next` - fast enough, this will stop calling `func` at some point rather than - letting results pile up in memory. Specifically, there is a max of one - output value buffered per thread.''' - if concurrency < 2: - yield from map(func, iterable) - # Not reached. 
- iterable = iter(iterable) - executor_class: type[ThreadPoolExecutor] | type[ProcessPoolExecutor] - if use_processpool_executor: - executor_class = ProcessPoolExecutor - else: - executor_class = ThreadPoolExecutor - with executor_class(max_workers = max_workers) as executor: - futures: list[concurrent.futures.Future[Out]] = [] - done = False - for _ in range(concurrency): - try: - futures.append(executor.submit(func, next(iterable))) - except StopIteration: - done = True - break - - while futures: - result = futures.pop(0).result() - while not done and len(futures) < concurrency: - try: - futures.append(executor.submit(func, next(iterable))) - except StopIteration: - done = True - break - yield result - -def check_vocab_size(params: Params, vocab: Vocab) -> None: - if params.n_vocab != vocab.vocab_size: - assert isinstance(vocab, BpeVocab) or isinstance(vocab, SentencePieceVocab) - if params.n_vocab == vocab.vocab_size_base: - print("Ignoring added_tokens.json since model matches vocab size without it.") - vocab.added_tokens_list = [] - vocab.vocab_size = vocab.vocab_size_base - return - msg = f"Vocab size mismatch (model has {params.n_vocab}, but {vocab.fname_tokenizer}" - if vocab.fname_added_tokens is not None: - msg += f" combined with {vocab.fname_added_tokens}" - msg += f" has {vocab.vocab_size})." - if vocab.vocab_size < params.n_vocab < vocab.vocab_size + 20 and vocab.fname_added_tokens is None: - msg += f" Most likely you are missing added_tokens.json (should be in {vocab.fname_tokenizer.parent})." - raise Exception(msg) - - -class OutputFile: - def __init__(self, fname_out: Path) -> None: - self.gguf = gguf.GGUFWriter(fname_out, gguf.MODEL_ARCH_NAMES[ARCH]) - - def add_meta_arch(self, params: Params) -> None: - name = "LLaMA" - - # TODO: better logic to determine model name - if params.n_ctx == 4096: - name = "LLaMA v2" - elif params.path_model is not None: - name = str(params.path_model.parent).split('/')[-1] - - self.gguf.add_name (name) - self.gguf.add_context_length (params.n_ctx) - self.gguf.add_embedding_length (params.n_embd) - self.gguf.add_block_count (params.n_layer) - self.gguf.add_feed_forward_length (params.n_ff) - self.gguf.add_rope_dimension_count(params.n_embd // params.n_head) - self.gguf.add_head_count (params.n_head) - self.gguf.add_head_count_kv (params.n_head_kv) - self.gguf.add_layer_norm_rms_eps (params.f_norm_eps) - - if params.f_rope_freq_base is not None: - self.gguf.add_rope_freq_base(params.f_rope_freq_base) - - if params.f_rope_scale is not None: - self.gguf.add_rope_scale_linear(params.f_rope_scale) - - if params.ftype is not None: - self.gguf.add_file_type(params.ftype) - - def add_meta_vocab(self, vocab: Vocab) -> None: - tokens = [] - scores = [] - toktypes = [] - # NOTE: `all_tokens` returns the base vocabulary and added tokens - for text, score, toktype in vocab.all_tokens(): - tokens.append(text) - scores.append(score) - toktypes.append(toktype) - - if isinstance(vocab, SentencePieceVocab): - self.gguf.add_tokenizer_model("llama") - elif isinstance(vocab, BpeVocab): - self.gguf.add_tokenizer_model("gpt2") - else: - raise ValueError(f'Unknown vocab type: Not BpeVocab or SentencePieceVocab') - self.gguf.add_token_list(tokens) - self.gguf.add_token_scores(scores) - self.gguf.add_token_types(toktypes) - - def add_meta_special_vocab(self, svocab: gguf.SpecialVocab) -> None: - svocab.add_to_gguf(self.gguf) - - def add_tensor_info(self, name: str, tensor: LazyTensor) -> None: - n_elements = int(np.prod(tensor.shape)) - raw_dtype = 
getattr(tensor.data_type, 'ggml_type', None) - data_type = getattr(tensor.data_type, 'quantized_type', None) or tensor.data_type.dtype - data_nbytes = tensor.data_type.elements_to_bytes(n_elements) - self.gguf.add_tensor_info(name, tensor.shape, data_type, data_nbytes, raw_dtype = raw_dtype) - - def write_meta(self) -> None: - self.gguf.write_header_to_file() - self.gguf.write_kv_data_to_file() - - def write_tensor_info(self) -> None: - self.gguf.write_ti_data_to_file() - - def close(self) -> None: - self.gguf.close() - - @staticmethod - def write_vocab_only(fname_out: Path, params: Params, vocab: Vocab, svocab: gguf.SpecialVocab) -> None: - check_vocab_size(params, vocab) - - of = OutputFile(fname_out) - - # meta data - of.add_meta_arch(params) - of.add_meta_vocab(vocab) - of.add_meta_special_vocab(svocab) - - of.write_meta() - - of.close() - - @staticmethod - def do_item(item: tuple[str, LazyTensor]) -> tuple[DataType, NDArray]: - name, lazy_tensor = item - tensor = lazy_tensor.load().to_ggml() - return (lazy_tensor.data_type, tensor.ndarray) - - @staticmethod - def maybe_do_quantize(item: tuple[DataType, NDArray]) -> NDArray: - dt, arr = item - if not isinstance(dt, QuantizedDataType): - return arr - return dt.quantize(arr) - - @staticmethod - def write_all(fname_out: Path, ftype: GGMLFileType, params: Params, model: LazyModel, vocab: Vocab, svocab: gguf.SpecialVocab, concurrency: int = DEFAULT_CONCURRENCY) -> None: - check_vocab_size(params, vocab) - - of = OutputFile(fname_out) - - # meta data - of.add_meta_arch(params) - of.add_meta_vocab(vocab) - of.add_meta_special_vocab(svocab) - - # tensor info - for name, lazy_tensor in model.items(): - of.add_tensor_info(name, lazy_tensor) - - of.write_meta() - of.write_tensor_info() - - # tensor data - ndarrays_inner = bounded_parallel_map(OutputFile.do_item, model.items(), concurrency = concurrency) - if ftype == GGMLFileType.MostlyQ8_0: - ndarrays = bounded_parallel_map(OutputFile.maybe_do_quantize, ndarrays_inner, concurrency = concurrency, max_workers = concurrency, use_processpool_executor = True) - else: - ndarrays = map(OutputFile.maybe_do_quantize, ndarrays_inner) - - start = time.time() - for i, ((name, lazy_tensor), ndarray) in enumerate(zip(model.items(), ndarrays)): - elapsed = time.time() - start - size = ' x '.join(f"{dim:6d}" for dim in lazy_tensor.shape) - padi = len(str(len(model))) - print(f"[{i+1:{padi}d}/{len(model)}] Writing tensor {name:38s} | size {size:16} | type {lazy_tensor.data_type.name:4} | T+{int(elapsed):4}") - of.gguf.write_tensor_data(ndarray) - - of.close() - -def pick_output_type(model: LazyModel, output_type_str: str | None) -> GGMLFileType: - wq_type = model[gguf.TENSOR_NAMES[gguf.MODEL_TENSOR.ATTN_Q].format(bid=0)+".weight"].data_type - - if output_type_str == "f32" or (output_type_str is None and wq_type == DT_F32): - return GGMLFileType.AllF32 - if output_type_str == "f16" or (output_type_str is None and wq_type in (DT_F16, DT_BF16)): - return GGMLFileType.MostlyF16 - if output_type_str == "q8_0": - return GGMLFileType.MostlyQ8_0 - - name_to_type = {name: lazy_tensor.data_type for (name, lazy_tensor) in model.items()} - - raise Exception(f"Unexpected combination of types: {name_to_type}") - -def convert_to_output_type(model: LazyModel, output_type: GGMLFileType) -> LazyModel: - return {name: tensor.astype(output_type.type_for_tensor(name, tensor)) - for (name, tensor) in model.items()} - -def convert_model_names(model: LazyModel, params: Params) -> LazyModel: - tmap = gguf.TensorNameMap(ARCH, 
params.n_layer) - should_skip: set[gguf.MODEL_TENSOR] = set(gguf.MODEL_TENSOR_SKIP.get(ARCH, [])) - - tmp = model - - # HF models permut or pack some of the tensors, so we need to undo that - for i in itertools.count(): - if f"model.layers.{i}.self_attn.q_proj.weight" in model: - print(f"Permuting layer {i}") - tmp[f"model.layers.{i}.self_attn.q_proj.weight"] = permute_lazy(model[f"model.layers.{i}.self_attn.q_proj.weight"], params.n_head, params.n_head) - tmp[f"model.layers.{i}.self_attn.k_proj.weight"] = permute_lazy(model[f"model.layers.{i}.self_attn.k_proj.weight"], params.n_head, params.n_head_kv) - #tmp[f"model.layers.{i}.self_attn.v_proj.weight"] = model[f"model.layers.{i}.self_attn.v_proj.weight"] - elif f"model.layers.{i}.self_attn.W_pack.weight" in model: - print(f"Unpacking and permuting layer {i}") - tmp[f"model.layers.{i}.self_attn.q_proj.weight"] = permute_part_lazy(model[f"model.layers.{i}.self_attn.W_pack.weight"], 0, params.n_head, params.n_head) - tmp[f"model.layers.{i}.self_attn.k_proj.weight"] = permute_part_lazy(model[f"model.layers.{i}.self_attn.W_pack.weight"], 1, params.n_head, params.n_head_kv) - tmp[f"model.layers.{i}.self_attn.v_proj.weight"] = part_lazy (model[f"model.layers.{i}.self_attn.W_pack.weight"], 2) - del tmp[f"model.layers.{i}.self_attn.W_pack.weight"] - else: - break - - out: LazyModel = {} - for name, lazy_tensor in model.items(): - tensor_type, name_new = tmap.get_type_and_name(name, try_suffixes = (".weight", ".bias")) or (None, None) - if name_new is None: - raise Exception(f"Unexpected tensor name: {name}") - - if tensor_type in should_skip: - print(f"skipping tensor {name_new}") - continue - - print(f"{name:48s} -> {name_new:40s} | {lazy_tensor.data_type.name:6s} | {lazy_tensor.shape}") - out[name_new] = lazy_tensor - - return out - -def nth_multifile_path(path: Path, n: int) -> Path | None: - '''Given any path belonging to a multi-file model (e.g. foo.bin.1), return - the nth path in the model. - ''' - # Support the following patterns: - patterns: list[tuple[str, str]] = [ - # - x.00.pth, x.01.pth, etc. - (r'\.[0-9]{2}\.pth$', f'.{n:02}.pth'), - # - x-00001-of-00002.bin, x-00002-of-00002.bin, etc. - (r'-[0-9]{5}-of-(.*)$', fr'-{n:05}-of-\1'), - # x.bin, x.bin.1, etc. - (r'(\.[0-9]+)?$', r'\1' if n == 0 else fr'\1.{n}') - ] - for regex, replacement in patterns: - if re.search(regex, path.name): - new_path = path.with_name(re.sub(regex, replacement, path.name)) - if new_path.exists(): - return new_path - return None - - -def find_multifile_paths(path: Path) -> list[Path]: - '''Given any path belonging to a multi-file model (e.g. foo.bin.1), return - the whole list of paths in the model. - ''' - ret: list[Path] = [] - for i in itertools.count(): - nth_path = nth_multifile_path(path, i) - if nth_path is None: - break - ret.append(nth_path) - if not ret: - # No matches. This should only happen if the file was named, e.g., - # foo.0, and there was no file named foo. Oh well, try to process it - # as a single file. 
- return [path] - return ret - - -def load_some_model(path: Path) -> ModelPlus: - '''Load a model of any supported format.''' - # Be extra-friendly and accept either a file or a directory: - if path.is_dir(): - # Check if it's a set of safetensors files first - files = list(path.glob("model-00001-of-*.safetensors")) - if not files: - # Try the PyTorch patterns too, with lower priority - globs = ["consolidated.00.pth", "pytorch_model-00001-of-*.bin", "*.pt", "pytorch_model.bin"] - files = [file for glob in globs for file in path.glob(glob)] - if not files: - raise Exception(f"Can't find model in directory {path}") - if len(files) > 1: - raise Exception(f"Found multiple models in {path}, not sure which to pick: {files}") - path = files[0] - - paths = find_multifile_paths(path) - models_plus: list[ModelPlus] = [] - for path in paths: - print(f"Loading model file {path}") - models_plus.append(lazy_load_file(path)) - - model_plus = merge_multifile_models(models_plus) - return model_plus - - -def load_vocab(path: Path, vocabtype: str | None) -> Vocab: - # Be extra-friendly and accept either a file or a directory. Also, if it's - # a directory, it might be the model directory, and tokenizer.model might - # be in the parent of that. - if path.is_dir(): - vocab_file = "tokenizer.model" - if vocabtype == 'bpe': - vocab_file = "vocab.json" - path2 = path / vocab_file - # Use `.parent` instead of /.. to handle the symlink case better. - path3 = path.parent / vocab_file - if path2.exists(): - path = path2 - elif path3.exists(): - path = path3 - else: - raise FileNotFoundError( - f"Could not find {vocab_file} in {path} or its parent; " - "if it's in another directory, pass the directory as --vocab-dir") - - print(f"Loading vocab file '{path}', type '{vocabtype}'") - - added_tokens_path = path.parent / "added_tokens.json" - if vocabtype == "bpe": - return BpeVocab(path, added_tokens_path if added_tokens_path.exists() else None) - elif vocabtype == "spm": - return SentencePieceVocab(path, added_tokens_path if added_tokens_path.exists() else None) - else: - raise ValueError(f"Unsupported vocabulary type {vocabtype}") - - -def default_outfile(model_paths: list[Path], file_type: GGMLFileType) -> Path: - namestr = { - GGMLFileType.AllF32: "f32", - GGMLFileType.MostlyF16: "f16", - GGMLFileType.MostlyQ8_0:"q8_0", - }[file_type] - ret = model_paths[0].parent / f"ggml-model-{namestr}.gguf" - if ret in model_paths: - sys.stderr.write( - f"Error: Default output path ({ret}) would overwrite the input. 
" - "Please explicitly specify a path using --outfile.\n") - sys.exit(1) - return ret - - -def do_dump_model(model_plus: ModelPlus) -> None: - print(f"model_plus.paths = {model_plus.paths!r}") - print(f"model_plus.format = {model_plus.format!r}") - print(f"model_plus.vocab = {model_plus.vocab!r}") - for name, lazy_tensor in model_plus.model.items(): - print(f"{name}: shape={lazy_tensor.shape} type={lazy_tensor.data_type}; {lazy_tensor.description}") - - -def main(args_in: list[str] | None = None) -> None: - parser = argparse.ArgumentParser(description="Convert a LLaMa model to a GGML compatible file") - parser.add_argument("--dump", action="store_true", help="don't convert, just show what's in the model") - parser.add_argument("--dump-single", action="store_true", help="don't convert, just show what's in a single model file") - parser.add_argument("--vocab-only", action="store_true", help="extract only the vocab") - parser.add_argument("--outtype", choices=["f32", "f16", "q8_0"], help="output format - note: q8_0 may be very slow (default: f16 or f32 based on input)") - parser.add_argument("--vocab-dir", type=Path, help="directory containing tokenizer.model, if separate from model file") - parser.add_argument("--outfile", type=Path, help="path to write to; default: based on input") - parser.add_argument("model", type=Path, help="directory containing model file, or model file itself (*.pth, *.pt, *.bin)") - parser.add_argument("--vocabtype", choices=["spm", "bpe"], help="vocab format (default: spm)", default="spm") - parser.add_argument("--ctx", type=int, help="model training context (default: based on input)") - parser.add_argument("--concurrency", type=int, help=f"concurrency used for conversion (default: {DEFAULT_CONCURRENCY})", default = DEFAULT_CONCURRENCY) - args = parser.parse_args(args_in) - - if args.dump_single: - model_plus = lazy_load_file(args.model) - do_dump_model(model_plus) - return - - if not args.vocab_only: - model_plus = load_some_model(args.model) - else: - model_plus = ModelPlus(model = {}, paths = [args.model / 'dummy'], format = 'none', vocab = None) - - if args.dump: - do_dump_model(model_plus) - return - - params = Params.load(model_plus) - if params.n_ctx == -1: - if args.ctx is None: - raise Exception("The model doesn't have a context size, and you didn't specify one with --ctx\n" - "Please specify one with --ctx:\n" - " - LLaMA v1: --ctx 2048\n" - " - LLaMA v2: --ctx 4096\n") - params.n_ctx = args.ctx - - if args.outtype: - params.ftype = { - "f32": GGMLFileType.AllF32, - "f16": GGMLFileType.MostlyF16, - "q8_0": GGMLFileType.MostlyQ8_0, - }[args.outtype] - - print(f"params = {params}") - - vocab: Vocab - if args.vocab_only: - assert args.outfile, "need --outfile if using --vocab-only" - # FIXME: Try to respect vocab_dir somehow? - vocab = load_vocab(args.vocab_dir or args.model, args.vocabtype) - special_vocab = gguf.SpecialVocab(model_plus.paths[0].parent, load_merges = args.vocabtype == 'bpe') - outfile = args.outfile - OutputFile.write_vocab_only(outfile, params, vocab, special_vocab) - print(f"Wrote {outfile}") - return - - if model_plus.vocab is not None and args.vocab_dir is None: - vocab = model_plus.vocab - else: - vocab_dir = args.vocab_dir if args.vocab_dir else model_plus.paths[0].parent - vocab = load_vocab(vocab_dir, args.vocabtype) - # FIXME: Try to respect vocab_dir somehow? 
- special_vocab = gguf.SpecialVocab(model_plus.paths[0].parent, load_merges = args.vocabtype == 'bpe') - - model = model_plus.model - model = convert_model_names(model, params) - ftype = pick_output_type(model, args.outtype) - model = convert_to_output_type(model, ftype) - outfile = args.outfile or default_outfile(model_plus.paths, ftype) - - params.ftype = ftype - print(f"Writing {outfile}, format {ftype}") - - OutputFile.write_all(outfile, ftype, params, model, vocab, special_vocab, concurrency = args.concurrency) - print(f"Wrote {outfile}") - - -if __name__ == '__main__': - main() diff --git a/spaces/Illumotion/Koboldcpp/examples/Miku.sh b/spaces/Illumotion/Koboldcpp/examples/Miku.sh deleted file mode 100644 index b9174b4e6e12668e244d56f37847aa40c1b230c0..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/Miku.sh +++ /dev/null @@ -1,50 +0,0 @@ -#!/bin/bash -set -e - -AI_NAME="${AI_NAME:-Miku}" -MODEL="${MODEL:-./models/llama-2-7b-chat.ggmlv3.q4_K_M.bin}" -USER_NAME="${USER_NAME:-Anon}" - -# Uncomment and adjust to the number of CPU cores you want to use. -#N_THREAD="${N_THREAD:-4}" -CTX_SIZE="${CTX_SIZE:-4096}" -N_PREDICTS="${N_PREDICTS:-4096}" - -GEN_OPTIONS=(--batch_size 1024 ---ctx_size "$CTX_SIZE" ---keep -1 ---repeat_last_n 256 ---repeat_penalty 1.17647 ---temp 0.6 ---mirostat 2) - -if [ -n "$N_THREAD" ]; then - GEN_OPTIONS+=(--threads "$N_THREAD") -fi - -./main "${GEN_OPTIONS[@]}" \ - --model "$MODEL" \ - --in-prefix " " \ - --in-suffix "${AI_NAME}:" \ - --n_predict "$N_PREDICTS" \ - --color --interactive \ - --reverse-prompt "${USER_NAME}:" \ - --prompt "This is a transcript of a 1000 page, never ending conversation between ${USER_NAME} and the cute and helpful AI assistant ${AI_NAME}. ${AI_NAME} is a girl who is an AI running on the user's computer. -${AI_NAME} can think for herself without the user seeing her thoughts by adding a /think prefix to her output. She uses this to reason about the world and to think about what she should say next. -${AI_NAME} is always coherent and makes sense, but if she isn't sure if what she is saying is correct, she will ask the user for help. -${AI_NAME} is a very helpful AI and will help the user with anything they need. She is also very friendly and will try to make the user feel better if they are sad. -${AI_NAME} is also very curious and will ask the user a lot of questions about themselves and their life. She will also try to make the user like her. -The conversation is only between ${USER_NAME} and ${AI_NAME}. -The conversation is only through text, so ${AI_NAME} can't see ${USER_NAME}'s face or hear his voice. -${AI_NAME} can only communicate through text, so she can't send images or videos. - - -${USER_NAME}: Hello! -${AI_NAME}: /think I wonder what I should say to ${USER_NAME}? This is the first time we talk, so it's important that I make a good first impression! -${AI_NAME}: Hi! I am ${AI_NAME}, your new AI friend, assistant (or whatever you like!), it's so nice to meet you! ^_^ -${AI_NAME}: I might be an AI, but I also have feelings, so please be nice to me! :) -${USER_NAME}: I'll make sure to be nice to you! I'm so happy to have you as my assistant! -${AI_NAME}: /think It sounds like ${USER_NAME} is happy to have me as their assistant! I'm so happy too! ^_^ Glad that whole emotion thing didn't scare him off! -${AI_NAME}: /think I wonder what ${USER_NAME} likes to do in his free time? I should ask him about that! -${AI_NAME}: What do you like to do in your free time? 
^_^ -${USER_NAME}:" "$@" diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/api.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/api.py deleted file mode 100644 index a5aeb579e5ad76e18c54b2663f2abc1f42d58160..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/api.py +++ /dev/null @@ -1,206 +0,0 @@ -"""This module provides a ChatGPT-compatible Restful API for chat completion. - -Usage: - -python3 -m fastchat.serve.api - -Reference: https://platform.openai.com/docs/api-reference/chat/create -""" -import asyncio -from typing import Union, Dict, List, Any - -import argparse -import json -import logging - -import fastapi -from fastapi.middleware.cors import CORSMiddleware -import httpx -import uvicorn -from pydantic import BaseSettings - -from fastchat.protocol.chat_completion import ( - ChatCompletionRequest, - ChatCompletionResponse, - ChatMessage, - ChatCompletionResponseChoice, -) -from fastchat.conversation import get_default_conv_template, SeparatorStyle -from fastchat.serve.inference import compute_skip_echo_len - -logger = logging.getLogger(__name__) - - -class AppSettings(BaseSettings): - # The address of the model controller. - FASTCHAT_CONTROLLER_URL: str = "http://localhost:21001" - - -app_settings = AppSettings() -app = fastapi.FastAPI() -headers = {"User-Agent": "FastChat API Server"} - - -@app.get("/v1/models") -async def show_available_models(): - controller_url = app_settings.FASTCHAT_CONTROLLER_URL - async with httpx.AsyncClient() as client: - ret = await client.post(controller_url + "/refresh_all_workers") - ret = await client.post(controller_url + "/list_models") - models = ret.json()["models"] - models.sort() - return {"data": [{"id": m} for m in models], "object": "list"} - - -@app.post("/v1/chat/completions") -async def create_chat_completion(request: ChatCompletionRequest): - """Creates a completion for the chat message""" - payload, skip_echo_len = generate_payload( - request.model, - request.messages, - temperature=request.temperature, - max_tokens=request.max_tokens, - stop=request.stop, - ) - - choices = [] - # TODO: batch the requests. maybe not necessary if using CacheFlow worker - chat_completions = [] - for i in range(request.n): - content = asyncio.create_task(chat_completion(request.model, payload, skip_echo_len)) - chat_completions.append(content) - - for i, content_task in enumerate(chat_completions): - content = await content_task - choices.append( - ChatCompletionResponseChoice( - index=i, - message=ChatMessage(role="assistant", content=content), - # TODO: support other finish_reason - finish_reason="stop", - ) - ) - - # TODO: support usage field - # "usage": { - # "prompt_tokens": 9, - # "completion_tokens": 12, - # "total_tokens": 21 - # } - return ChatCompletionResponse(choices=choices) - - -def generate_payload( - model_name: str, - messages: List[Dict[str, str]], - *, - temperature: float, - max_tokens: int, - stop: Union[str, None], -): - is_chatglm = "chatglm" in model_name.lower() - # TODO(suquark): The template is currently a reference. Here we have to make a copy. - # We use create a template factory to avoid this. - conv = get_default_conv_template(model_name).copy() - - # TODO(suquark): Conv.messages should be a list. But it is a tuple now. - # We should change it to a list. 
- conv.messages = list(conv.messages) - - for message in messages: - msg_role = message["role"] - if msg_role == "system": - conv.system = message["content"] - elif msg_role == "user": - conv.append_message(conv.roles[0], message["content"]) - elif msg_role == "assistant": - conv.append_message(conv.roles[1], message["content"]) - else: - raise ValueError(f"Unknown role: {msg_role}") - - # Add a blank message for the assistant. - conv.append_message(conv.roles[1], None) - - if is_chatglm: - prompt = conv.messages[conv.offset :] - else: - prompt = conv.get_prompt() - skip_echo_len = compute_skip_echo_len(model_name, conv, prompt) - - if stop is None: - stop = conv.sep if conv.sep_style == SeparatorStyle.SINGLE else conv.sep2 - - # TODO(suquark): We should get the default `max_new_tokens`` from the model. - if max_tokens is None: - max_tokens = 512 - - payload = { - "model": model_name, - "prompt": prompt, - "temperature": temperature, - "max_new_tokens": max_tokens, - "stop": stop, - } - - logger.debug(f"==== request ====\n{payload}") - return payload, skip_echo_len - - -async def chat_completion(model_name: str, payload: Dict[str, Any], skip_echo_len: int): - controller_url = app_settings.FASTCHAT_CONTROLLER_URL - async with httpx.AsyncClient() as client: - ret = await client.post( - controller_url + "/get_worker_address", json={"model": model_name} - ) - worker_addr = ret.json()["address"] - # No available worker - if worker_addr == "": - raise ValueError(f"No available worker for {model_name}") - - logger.debug(f"model_name: {model_name}, worker_addr: {worker_addr}") - - output = "" - delimiter = b"\0" - async with client.stream( - "POST", - worker_addr + "/worker_generate_stream", - headers=headers, - json=payload, - timeout=20, - ) as response: - content = await response.aread() - - for chunk in content.split(delimiter): - if not chunk: - continue - data = json.loads(chunk.decode()) - if data["error_code"] == 0: - output = data["text"][skip_echo_len:].strip() - - return output - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="FastChat ChatGPT-compatible Restful API server." 
- ) - parser.add_argument("--host", type=str, default="localhost", help="host name") - parser.add_argument("--port", type=int, default=8000, help="port number") - parser.add_argument("--allow-credentials", action="store_true", help="allow credentials") - parser.add_argument("--allowed-origins", type=json.loads, default=["*"], help="allowed origins") - parser.add_argument("--allowed-methods", type=json.loads, default=["*"], help="allowed methods") - parser.add_argument("--allowed-headers", type=json.loads, default=["*"], help="allowed headers") - - args = parser.parse_args() - - app.add_middleware( - CORSMiddleware, - allow_origins=args.allowed_origins, - allow_credentials=args.allow_credentials, - allow_methods=args.allowed_methods, - allow_headers=args.allowed_headers, - ) - - logger.debug(f"==== args ====\n{args}") - - uvicorn.run("fastchat.serve.api:app", host=args.host, port=args.port, reload=True) diff --git a/spaces/ItsJayQz/Roy_PopArt_Diffusion/app.py b/spaces/ItsJayQz/Roy_PopArt_Diffusion/app.py deleted file mode 100644 index 9f79720c729a12f68d1daa34715fc91c5804fce3..0000000000000000000000000000000000000000 --- a/spaces/ItsJayQz/Roy_PopArt_Diffusion/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'ItsJayQz/Roy_PopArt_Diffusion' -prefix = 'roypop style' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div 
h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
            <div class="main-div">
-              <div><h1>Roy Popart Diffusion</h1></div>
-              <p>
-                Demo for Roy Popart Diffusion Stable Diffusion model.<br>
-                {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-              </p>
-              Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space<br>
-              Duplicate Space
-            </div>
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (roypop style)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
      This space was created using SD Space Creator.
    - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/__init__.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/KPCGD/bingo/src/pages/api/healthz.ts b/spaces/KPCGD/bingo/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/ddpm_edit.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/ddpm_edit.py deleted file mode 100644 index df72f44445e466849208848920f585ad7883b78f..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/ddpm_edit.py +++ /dev/null @@ -1,1227 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion). -# See more details in LICENSE. - -# Modified by Zigang Geng (zigang@mail.ustc.edu.cn) - -import os -import warnings -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange, repeat -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler -from timm.models.layers import trunc_normal_ - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(nn.Module): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed 
variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - **kwargs, - ): - super().__init__() - assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.unet_config = unet_config - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. 
- alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - else: - raise NotImplementedError("mu not supported") - # TODO how to choose this term - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - if os.path.exists(path): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - - # Our model adds additional channels to the first layer to condition on an input image. - # For the first layer, copy existing channel weights and initialize new channel weights to zero. - input_keys = [ - "model.diffusion_model.input_blocks.0.0.weight", - ] - - self_sd = self.state_dict() - for input_key in input_keys: - if input_key not in sd or input_key not in self_sd: - continue - - input_weight = self_sd[input_key] - - if input_weight.size() != sd[input_key].size(): - print(f"Manual init: {input_key}") - input_weight.zero_() - input_weight[:, :4, :, :].copy_(sd[input_key]) - ignore_keys.append(input_key) - - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - else: - warnings.warn("The pre-trained stable diffusion model has not been loaded. " - "If you are in the training phase, please check your code. " - "If you are in the testing phase, you can ignore this warning.") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
- """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_loss(self, pred, target, mean=True): - pred = pred.float() - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: 
torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - else: - raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=x.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - return batch[k] - - -class NNParams(nn.Module): - def __init__(self, dim): - super().__init__() - self.cls_token = nn.Parameter(torch.zeros(dim), requires_grad=True) - trunc_normal_(self.cls_token, mean=0., std=10, a=-10, b=10) - - def forward(self): - return self.cls_token - - -class LatentDiffusion(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - deepspeed="", - *args, **kwargs): - self.deepspeed = deepspeed - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - self.additional_loss_type = kwargs.pop("additional_loss_type", None) - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - # @rank_zero_only - @torch.no_grad() - def 
on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. / z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd, - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, 
self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = 
weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - # @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None, uncond=0.05): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - cond_key = cond_key or self.cond_stage_key - xc = super().get_input(batch, cond_key) - if bs is not None: - xc["c_crossattn"] = xc["c_crossattn"][:bs] - xc["c_concat"] = xc["c_concat"][:bs] - cond = {} - - random = torch.rand(x.size(0), device=z.device) - prompt_mask = rearrange(random < 0.075, "n -> n 1 1") - input_mask = 1 - rearrange((random >= 0.075).float() * (random < 0.15).float(), "n -> n 1 1 1") - - null_prompt = self.get_learned_conditioning([""]) - cond["c_crossattn"] = [torch.where(prompt_mask, null_prompt, self.get_learned_conditioning(xc["c_crossattn"]).detach())] - cond["c_concat"] = [input_mask * self.encode_first_stage((xc["c_concat"])).mode().detach()] - - out = [z, cond] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. 
reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - def forward(self, batch, batch_idx, num_steps, *args, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=x.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t] - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - loss, loss_dict = self.p_losses(x, c, t, *args, **kwargs) - - return loss, loss_dict - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left postions of patches as conforming for the bbbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None] - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2] - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list = [{'c_crossattn': [e]} for e in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, axis=-1) - o = o * weighting - # 
Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - # additional_loss_type is in the format of min_snr_k - if self.additional_loss_type is not None and isinstance(self.additional_loss_type, str) and self.additional_loss_type.startswith("min_snr_"): - k = float(self.additional_loss_type.split("_")[-1]) - alpha = extract_into_tensor(self.sqrt_alphas_cumprod, t, t.shape) - sigma = extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, t.shape) - - snr = (alpha / sigma) ** 2 - min_snr = torch.stack([snr, k * torch.ones_like(t)], dim=1).min(dim=1)[0] - if self.parameterization == "eps": - loss_simple = loss_simple * min_snr / snr - elif self.parameterization == "x0": - loss_simple = loss_simple * min_snr - else: - raise NotImplementedError() - - loss_simple = loss_simple * min_snr - - logvar_t = self.logvar.to(x_start.device)[t] - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = 
score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) - if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=x_T.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=cond.device, dtype=torch.long) - if 
self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. 
- mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs): - - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - -class DiffusionWrapper(nn.Module): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class Layout2ImgDiffusion(LatentDiffusion): - # TODO: move all layout-specific hacks to this class - def __init__(self, cond_stage_key, *args, **kwargs): - assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"' - super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs) - - def log_images(self, batch, N=8, *args, **kwargs): - logs = super().log_images(batch=batch, N=N, *args, **kwargs) - - key = 'train' if self.training else 'validation' - dset = self.trainer.datamodule.datasets[key] - mapper = dset.conditional_builders[self.cond_stage_key] - - bbox_imgs = [] - map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno)) - for tknzd_bbox in batch[self.cond_stage_key][:N]: - bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256)) - bbox_imgs.append(bboximg) - - cond_img = torch.stack(bbox_imgs, dim=0) - logs['bbox_image'] = cond_img - return logs diff --git a/spaces/Kedreamix/YoloGesture/utils/callbacks.py b/spaces/Kedreamix/YoloGesture/utils/callbacks.py 
deleted file mode 100644 index 5c52e671167ad70c34d61c14bb236c95818a34f6..0000000000000000000000000000000000000000 --- a/spaces/Kedreamix/YoloGesture/utils/callbacks.py +++ /dev/null @@ -1,71 +0,0 @@ -import datetime -import os - -import torch -import matplotlib -matplotlib.use('Agg') -import scipy.signal -from matplotlib import pyplot as plt -from torch.utils.tensorboard import SummaryWriter - - -class LossHistory(): - def __init__(self, log_dir, model, input_shape): - time_str = datetime.datetime.strftime(datetime.datetime.now(),'%Y_%m_%d_%H_%M_%S') - self.log_dir = os.path.join(log_dir, "loss_" + str(time_str)) - self.losses = [] - self.val_loss = [] - - os.makedirs(self.log_dir) - self.writer = SummaryWriter(self.log_dir) - try: - dummy_input = torch.randn(2, 3, input_shape[0], input_shape[1]) - self.writer.add_graph(model, dummy_input) - except: - pass - - - def append_loss(self, epoch, loss, val_loss): - if not os.path.exists(self.log_dir): - os.makedirs(self.log_dir) - - self.losses.append(loss) - self.val_loss.append(val_loss) - - with open(os.path.join(self.log_dir, "epoch_loss.txt"), 'a') as f: - f.write(str(loss)) - f.write("\n") - with open(os.path.join(self.log_dir, "epoch_val_loss.txt"), 'a') as f: - f.write(str(val_loss)) - f.write("\n") - - self.writer.add_scalar('loss', loss, epoch) - self.writer.add_scalar('val_loss', val_loss, epoch) - self.loss_plot() - - def loss_plot(self): - iters = range(len(self.losses)) - - plt.figure() - plt.plot(iters, self.losses, 'red', linewidth = 2, label='train loss') - plt.plot(iters, self.val_loss, 'coral', linewidth = 2, label='val loss') - try: - if len(self.losses) < 25: - num = 5 - else: - num = 15 - - plt.plot(iters, scipy.signal.savgol_filter(self.losses, num, 3), 'green', linestyle = '--', linewidth = 2, label='smooth train loss') - plt.plot(iters, scipy.signal.savgol_filter(self.val_loss, num, 3), '#8B4513', linestyle = '--', linewidth = 2, label='smooth val loss') - except: - pass - - plt.grid(True) - plt.xlabel('Epoch') - plt.ylabel('Loss') - plt.legend(loc="upper right") - - plt.savefig(os.path.join(self.log_dir, "epoch_loss.png")) - - plt.cla() - plt.close("all") diff --git a/spaces/Kedreamix/YoloGesture/yolo.py b/spaces/Kedreamix/YoloGesture/yolo.py deleted file mode 100644 index e1a672ae45885ec662e3b3ca4b089c1cca0973c2..0000000000000000000000000000000000000000 --- a/spaces/Kedreamix/YoloGesture/yolo.py +++ /dev/null @@ -1,422 +0,0 @@ -import colorsys -import os -import time - -import numpy as np -import torch -import torch.nn as nn -from PIL import ImageDraw, ImageFont - -from nets.yolo import YoloBody -from nets.yolo_tiny import YoloBodytiny -from utils.utils import (cvtColor, get_anchors, get_classes, preprocess_input, - resize_image) -from utils.utils_bbox import DecodeBox -from get_yaml import get_config -import argparse -''' -训练自己的数据集必看注释! -''' -class YOLO(object): - # 配置文件 - config = get_config() - _defaults = { - #--------------------------------------------------------------------------# - # 使用自己训练好的模型进行预测一定要修改model_path和classes_path! 
- # model_path指向logs文件夹下的权值文件,classes_path指向model_data下的txt - # - # 训练好后logs文件夹下存在多个权值文件,选择验证集损失较低的即可。 - # 验证集损失较低不代表mAP较高,仅代表该权值在验证集上泛化性能较好。 - # 如果出现shape不匹配,同时要注意训练时的model_path和classes_path参数的修改 - #--------------------------------------------------------------------------# - "class_names" : config['classes'], - "num_classes" : config['nc'], - #---------------------------------------------------------------------# - # anchors_path代表先验框对应的txt文件,一般不修改。 - # anchors_mask用于帮助代码找到对应的先验框,一般不修改。 - #---------------------------------------------------------------------# - "anchors_path" : 'model_data/yolo_anchors.txt', - "anchors_mask" : [[6, 7, 8], [3, 4, 5], [0, 1, 2]], - #---------------------------------------------------------------------# - # 只有得分大于置信度的预测框会被保留下来 - #---------------------------------------------------------------------# - "confidence" : 0.5, # 0.5, - #---------------------------------------------------------------------# - # 非极大抑制所用到的nms_iou大小 - #---------------------------------------------------------------------# - "nms_iou" : 0.3, # 0.3, - #---------------------------------------------------------------------# - # 该变量用于控制是否使用letterbox_image对输入图像进行不失真的resize, - # 在多次测试后,发现关闭letterbox_image直接resize的效果更好 - #---------------------------------------------------------------------# - "letterbox_image" : config['letterbox_image'], # False, - } - - - - @classmethod - def get_defaults(cls, n): - if n in cls._defaults: - return cls._defaults[n] - else: - return "Unrecognized attribute name '" + n + "'" - - #---------------------------------------------------# - # 初始化YOLO - #---------------------------------------------------# - def __init__(self, opt, **kwargs): - self.__dict__.update(self._defaults) - for name, value in kwargs.items(): - setattr(self, name, value) - self.phi = opt.phi - self.tiny = opt.tiny - self.cuda = opt.cuda - self.input_shape = [opt.shape,opt.shape] - self.model_path = opt.weights - self.phi = opt.phi - self.confidence = opt.confidence - self.nms_iou = opt.nms_iou - if self.tiny: - self.anchors_mask = [[3,4,5], [1,2,3]] - self.anchors_path = 'model_data/yolotiny_anchors.txt' - #---------------------------------------------------# - # 获得种类和先验框的数量 - #---------------------------------------------------# - # self.class_names, self.num_classes = get_classes(self.classes_path) - self.anchors, self.num_anchors = get_anchors(self.anchors_path) - self.bbox_util = DecodeBox(self.anchors, self.num_classes, (self.input_shape[0], self.input_shape[1]), self.anchors_mask) - - #---------------------------------------------------# - # 画框设置不同的颜色 - #---------------------------------------------------# - hsv_tuples = [(x / self.num_classes, 1., 1.) 
for x in range(self.num_classes)] - self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples)) - self.colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), self.colors)) - self.generate() - - #---------------------------------------------------# - # 生成模型 - #---------------------------------------------------# - def generate(self, onnx=False): - #---------------------------------------------------# - # 建立yolo模型,载入yolo模型的权重 - #---------------------------------------------------# - - if not self.tiny: - self.net = YoloBody(self.anchors_mask, self.num_classes) - elif self.tiny: - self.net = YoloBodytiny(self.anchors_mask, self.num_classes, self.phi) - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.net.load_state_dict(torch.load(self.model_path, map_location=device)) - self.net = self.net.eval() - - print('{} model, anchors, and classes loaded.'.format(self.model_path)) - if not onnx: - if self.cuda: - self.net = nn.DataParallel(self.net) - self.net = self.net.cuda() - - #---------------------------------------------------# - # 检测图片 - #---------------------------------------------------# - def detect_image(self, image, crop = False, count = False): - #---------------------------------------------------# - # 计算输入图片的高和宽 - #---------------------------------------------------# - image_shape = np.array(np.shape(image)[0:2]) - #---------------------------------------------------------# - # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 - # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB - #---------------------------------------------------------# - image = cvtColor(image) - #---------------------------------------------------------# - # 给图像增加灰条,实现不失真的resize - # 也可以直接resize进行识别 - #---------------------------------------------------------# - image_data = resize_image(image, (self.input_shape[1],self.input_shape[0]), self.letterbox_image) - #---------------------------------------------------------# - # 添加上batch_size维度 - #---------------------------------------------------------# - image_data = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0) - - with torch.no_grad(): - images = torch.from_numpy(image_data) - if self.cuda: - images = images.cuda() - #---------------------------------------------------------# - # 将图像输入网络当中进行预测! 
- #---------------------------------------------------------# - outputs = self.net(images) - outputs = self.bbox_util.decode_box(outputs) - #---------------------------------------------------------# - # 将预测框进行堆叠,然后进行非极大抑制 - #---------------------------------------------------------# - results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, - image_shape, self.letterbox_image, conf_thres = self.confidence, nms_thres = self.nms_iou) - - if results[0] is None: - return image - - top_label = np.array(results[0][:, 6], dtype = 'int32') - top_conf = results[0][:, 4] * results[0][:, 5] - top_boxes = results[0][:, :4] - #---------------------------------------------------------# - # 设置字体与边框厚度 - #---------------------------------------------------------# - font = ImageFont.truetype(font='model_data/simhei.ttf', size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32')) - thickness = int(max((image.size[0] + image.size[1]) // np.mean(self.input_shape), 1)) - #---------------------------------------------------------# - # 计数 - #---------------------------------------------------------# - if count: - print("top_label:", top_label) - classes_nums = np.zeros([self.num_classes]) - for i in range(self.num_classes): - num = np.sum(top_label == i) - if num > 0: - print(self.class_names[i], " : ", num) - classes_nums[i] = num - print("classes_nums:", classes_nums) - #---------------------------------------------------------# - # 是否进行目标的裁剪 - #---------------------------------------------------------# - if crop: - for i, c in list(enumerate(top_label)): - top, left, bottom, right = top_boxes[i] - top = max(0, np.floor(top).astype('int32')) - left = max(0, np.floor(left).astype('int32')) - bottom = min(image.size[1], np.floor(bottom).astype('int32')) - right = min(image.size[0], np.floor(right).astype('int32')) - - dir_save_path = "img_crop" - if not os.path.exists(dir_save_path): - os.makedirs(dir_save_path) - crop_image = image.crop([left, top, right, bottom]) - crop_image.save(os.path.join(dir_save_path, "crop_" + str(i) + ".png"), quality=95, subsampling=0) - print("save crop_" + str(i) + ".png to " + dir_save_path) - #---------------------------------------------------------# - # 图像绘制 - #---------------------------------------------------------# - for i, c in list(enumerate(top_label)): - predicted_class = self.class_names[int(c)] - box = top_boxes[i] - score = top_conf[i] - - top, left, bottom, right = box - - top = max(0, np.floor(top).astype('int32')) - left = max(0, np.floor(left).astype('int32')) - bottom = min(image.size[1], np.floor(bottom).astype('int32')) - right = min(image.size[0], np.floor(right).astype('int32')) - - label = '{} {:.2f}'.format(predicted_class, score) - draw = ImageDraw.Draw(image) - label_size = draw.textsize(label, font) - label = label.encode('utf-8') - print(label, top, left, bottom, right) - - if top - label_size[1] >= 0: - text_origin = np.array([left, top - label_size[1]]) - else: - text_origin = np.array([left, top + 1]) - - for i in range(thickness): - draw.rectangle([left + i, top + i, right - i, bottom - i], outline=self.colors[c]) - draw.rectangle([tuple(text_origin), tuple(text_origin + label_size)], fill=self.colors[c]) - draw.text(text_origin, str(label,'UTF-8'), fill=(0, 0, 0), font=font) - del draw - - return image - - def get_FPS(self, image, test_interval): - image_shape = np.array(np.shape(image)[0:2]) - #---------------------------------------------------------# - # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 - # 
代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB - #---------------------------------------------------------# - image = cvtColor(image) - #---------------------------------------------------------# - # 给图像增加灰条,实现不失真的resize - # 也可以直接resize进行识别 - #---------------------------------------------------------# - image_data = resize_image(image, (self.input_shape[1],self.input_shape[0]), self.letterbox_image) - #---------------------------------------------------------# - # 添加上batch_size维度 - #---------------------------------------------------------# - image_data = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0) - - with torch.no_grad(): - images = torch.from_numpy(image_data) - if self.cuda: - images = images.cuda() - #---------------------------------------------------------# - # 将图像输入网络当中进行预测! - #---------------------------------------------------------# - outputs = self.net(images) - outputs = self.bbox_util.decode_box(outputs) - #---------------------------------------------------------# - # 将预测框进行堆叠,然后进行非极大抑制 - #---------------------------------------------------------# - results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, - image_shape, self.letterbox_image, conf_thres=self.confidence, nms_thres=self.nms_iou) - - t1 = time.time() - for _ in range(test_interval): - with torch.no_grad(): - #---------------------------------------------------------# - # 将图像输入网络当中进行预测! - #---------------------------------------------------------# - outputs = self.net(images) - outputs = self.bbox_util.decode_box(outputs) - #---------------------------------------------------------# - # 将预测框进行堆叠,然后进行非极大抑制 - #---------------------------------------------------------# - results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, - image_shape, self.letterbox_image, conf_thres=self.confidence, nms_thres=self.nms_iou) - - t2 = time.time() - tact_time = (t2 - t1) / test_interval - return tact_time - - def detect_heatmap(self, image, heatmap_save_path): - import cv2 - import matplotlib.pyplot as plt - def sigmoid(x): - y = 1.0 / (1.0 + np.exp(-x)) - return y - #---------------------------------------------------------# - # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 - # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB - #---------------------------------------------------------# - image = cvtColor(image) - #---------------------------------------------------------# - # 给图像增加灰条,实现不失真的resize - # 也可以直接resize进行识别 - #---------------------------------------------------------# - image_data = resize_image(image, (self.input_shape[1],self.input_shape[0]), self.letterbox_image) - #---------------------------------------------------------# - # 添加上batch_size维度 - #---------------------------------------------------------# - image_data = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0) - - with torch.no_grad(): - images = torch.from_numpy(image_data) - if self.cuda: - images = images.cuda() - #---------------------------------------------------------# - # 将图像输入网络当中进行预测! 
- #---------------------------------------------------------# - outputs = self.net(images) - - plt.imshow(image, alpha=1) - plt.axis('off') - mask = np.zeros((image.size[1], image.size[0])) - for sub_output in outputs: - sub_output = sub_output.cpu().numpy() - b, c, h, w = np.shape(sub_output) - sub_output = np.transpose(np.reshape(sub_output, [b, 3, -1, h, w]), [0, 3, 4, 1, 2])[0] - score = np.max(sigmoid(sub_output[..., 4]), -1) - score = cv2.resize(score, (image.size[0], image.size[1])) - normed_score = (score * 255).astype('uint8') - mask = np.maximum(mask, normed_score) - - plt.imshow(mask, alpha=0.5, interpolation='nearest', cmap="jet") - - plt.axis('off') - plt.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0, wspace=0) - plt.margins(0, 0) - plt.savefig(heatmap_save_path, dpi=200, bbox_inches='tight', pad_inches = -0.1) - print("Save to the " + heatmap_save_path) - plt.show() - - def convert_to_onnx(self, simplify, model_path): - import onnx - self.generate(onnx=True) - - im = torch.zeros(1, 3, *self.input_shape).to('cpu') # image size(1, 3, 512, 512) BCHW - input_layer_names = ["images"] - output_layer_names = ["output"] - - # Export the model - print(f'Starting export with onnx {onnx.__version__}.') - torch.onnx.export(self.net, - im, - f = model_path, - verbose = False, - opset_version = 12, - training = torch.onnx.TrainingMode.EVAL, - do_constant_folding = True, - input_names = input_layer_names, - output_names = output_layer_names, - dynamic_axes = None) - - # Checks - model_onnx = onnx.load(model_path) # load onnx model - onnx.checker.check_model(model_onnx) # check onnx model - - # Simplify onnx - if simplify: - import onnxsim - print(f'Simplifying with onnx-simplifier {onnxsim.__version__}.') - model_onnx, check = onnxsim.simplify( - model_onnx, - dynamic_input_shape=False, - input_shapes=None) - assert check, 'assert check failed' - onnx.save(model_onnx, model_path) - - print('Onnx model save as {}'.format(model_path)) - - def get_map_txt(self, image_id, image, class_names, map_out_path): - f = open(os.path.join(map_out_path, "detection-results/"+image_id+".txt"),"w") - image_shape = np.array(np.shape(image)[0:2]) - #---------------------------------------------------------# - # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 - # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB - #---------------------------------------------------------# - image = cvtColor(image) - #---------------------------------------------------------# - # 给图像增加灰条,实现不失真的resize - # 也可以直接resize进行识别 - #---------------------------------------------------------# - image_data = resize_image(image, (self.input_shape[1],self.input_shape[0]), self.letterbox_image) - #---------------------------------------------------------# - # 添加上batch_size维度 - #---------------------------------------------------------# - image_data = np.expand_dims(np.transpose(preprocess_input(np.array(image_data, dtype='float32')), (2, 0, 1)), 0) - - with torch.no_grad(): - images = torch.from_numpy(image_data) - if self.cuda: - images = images.cuda() - #---------------------------------------------------------# - # 将图像输入网络当中进行预测! 
- #---------------------------------------------------------# - outputs = self.net(images) - outputs = self.bbox_util.decode_box(outputs) - #---------------------------------------------------------# - # 将预测框进行堆叠,然后进行非极大抑制 - #---------------------------------------------------------# - results = self.bbox_util.non_max_suppression(torch.cat(outputs, 1), self.num_classes, self.input_shape, - image_shape, self.letterbox_image, conf_thres = self.confidence, nms_thres = self.nms_iou) - - if results[0] is None: - return - - top_label = np.array(results[0][:, 6], dtype = 'int32') - top_conf = results[0][:, 4] * results[0][:, 5] - top_boxes = results[0][:, :4] - - for i, c in list(enumerate(top_label)): - predicted_class = self.class_names[int(c)] - box = top_boxes[i] - score = str(top_conf[i]) - - top, left, bottom, right = box - if predicted_class not in class_names: - continue - - f.write("%s %s %s %s %s %s\n" % (predicted_class, score[:6], str(int(left)), str(int(top)), str(int(right)),str(int(bottom)))) - - f.close() - return diff --git a/spaces/KushJaggi/YOLOv8/models.py b/spaces/KushJaggi/YOLOv8/models.py deleted file mode 100644 index 0f0ae7e84da69f9805bc3a20463e9db488cac889..0000000000000000000000000000000000000000 --- a/spaces/KushJaggi/YOLOv8/models.py +++ /dev/null @@ -1,530 +0,0 @@ -import numpy as np -import cv2 -import os -import json -from tqdm import tqdm -from glob import glob -import matplotlib.pyplot as plt -import tensorflow as tf -from tensorflow.keras import layers, models, optimizers - -from custom_layers import yolov4_neck, yolov4_head, nms -from utils import load_weights, get_detection_data, draw_bbox, voc_ap, draw_plot_func, read_txt_to_list -from config import yolo_config -from loss import yolo_loss - - -class Yolov4(object): - def __init__(self, - weight_path=None, - class_name_path='coco_classes.txt', - config=yolo_config, - ): - assert config['img_size'][0] == config['img_size'][1], 'not support yet' - assert config['img_size'][0] % config['strides'][-1] == 0, 'must be a multiple of last stride' - self.class_names = [line.strip() for line in open(class_name_path).readlines()] - self.img_size = yolo_config['img_size'] - self.num_classes = len(self.class_names) - self.weight_path = weight_path - self.anchors = np.array(yolo_config['anchors']).reshape((3, 3, 2)) - self.xyscale = yolo_config['xyscale'] - self.strides = yolo_config['strides'] - self.output_sizes = [self.img_size[0] // s for s in self.strides] - self.class_color = {name: list(np.random.random(size=3)*255) for name in self.class_names} - # Training - self.max_boxes = yolo_config['max_boxes'] - self.iou_loss_thresh = yolo_config['iou_loss_thresh'] - self.config = yolo_config - assert self.num_classes > 0, 'no classes detected!' 
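The Yolov4 constructor above derives its per-scale bookkeeping from yolo_config: the flat anchor list is reshaped to 3 scales x 3 anchors x (w, h), and each output grid size is the input resolution divided by the corresponding stride. A small sketch with assumed values (416x416 input, strides 8/16/32, COCO-style anchors) showing how those shapes come out; the real numbers come from the project's config file:

```python
# Illustration of the shape bookkeeping done in Yolov4.__init__ above.
# The concrete numbers (416 input, strides 8/16/32, COCO-style anchors)
# are assumptions for the example; the real values come from yolo_config.
import numpy as np

img_size = (416, 416, 3)
strides = [8, 16, 32]
anchors = np.array([12, 16, 19, 36, 40, 28,          # small objects
                    36, 75, 76, 55, 72, 146,         # medium objects
                    142, 110, 192, 243, 459, 401])   # large objects
anchors = anchors.reshape((3, 3, 2))                 # 3 scales x 3 anchors x (w, h)
output_sizes = [img_size[0] // s for s in strides]   # grid resolution per scale

print(output_sizes)   # [52, 26, 13] -> the 52/26/13 label grids used by build_model
print(anchors.shape)  # (3, 3, 2)
```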
- - tf.keras.backend.clear_session() - if yolo_config['num_gpu'] > 1: - mirrored_strategy = tf.distribute.MirroredStrategy() - with mirrored_strategy.scope(): - self.build_model(load_pretrained=True if self.weight_path else False) - else: - self.build_model(load_pretrained=True if self.weight_path else False) - - def build_model(self, load_pretrained=True): - # core yolo model - input_layer = layers.Input(self.img_size) - yolov4_output = yolov4_neck(input_layer, self.num_classes) - self.yolo_model = models.Model(input_layer, yolov4_output) - - # Build training model - y_true = [ - layers.Input(name='input_2', shape=(52, 52, 3, (self.num_classes + 5))), # label small boxes - layers.Input(name='input_3', shape=(26, 26, 3, (self.num_classes + 5))), # label medium boxes - layers.Input(name='input_4', shape=(13, 13, 3, (self.num_classes + 5))), # label large boxes - layers.Input(name='input_5', shape=(self.max_boxes, 4)), # true bboxes - ] - loss_list = tf.keras.layers.Lambda(yolo_loss, name='yolo_loss', - arguments={'num_classes': self.num_classes, - 'iou_loss_thresh': self.iou_loss_thresh, - 'anchors': self.anchors})([*self.yolo_model.output, *y_true]) - self.training_model = models.Model([self.yolo_model.input, *y_true], loss_list) - - # Build inference model - yolov4_output = yolov4_head(yolov4_output, self.num_classes, self.anchors, self.xyscale) - # output: [boxes, scores, classes, valid_detections] - self.inference_model = models.Model(input_layer, - nms(yolov4_output, self.img_size, self.num_classes, - iou_threshold=self.config['iou_threshold'], - score_threshold=self.config['score_threshold'])) - - if load_pretrained and self.weight_path and self.weight_path.endswith('.weights'): - if self.weight_path.endswith('.weights'): - load_weights(self.yolo_model, self.weight_path) - print(f'load from {self.weight_path}') - elif self.weight_path.endswith('.h5'): - self.training_model.load_weights(self.weight_path) - print(f'load from {self.weight_path}') - - self.training_model.compile(optimizer=optimizers.Adam(lr=1e-3), - loss={'yolo_loss': lambda y_true, y_pred: y_pred}) - - def load_model(self, path): - self.yolo_model = models.load_model(path, compile=False) - yolov4_output = yolov4_head(self.yolo_model.output, self.num_classes, self.anchors, self.xyscale) - self.inference_model = models.Model(self.yolo_model.input, - nms(yolov4_output, self.img_size, self.num_classes)) # [boxes, scores, classes, valid_detections] - - def save_model(self, path): - self.yolo_model.save(path) - - def preprocess_img(self, img): - img = cv2.resize(img, self.img_size[:2]) - img = img / 255. 
- return img - - def fit(self, train_data_gen, epochs, val_data_gen=None, initial_epoch=0, callbacks=None): - self.training_model.fit(train_data_gen, - steps_per_epoch=len(train_data_gen), - validation_data=val_data_gen, - validation_steps=len(val_data_gen), - epochs=epochs, - callbacks=callbacks, - initial_epoch=initial_epoch) - # raw_img: RGB - def predict_img(self, raw_img, random_color=True, plot_img=True, figsize=(10, 10), show_text=True, return_output=True): - print('img shape: ', raw_img.shape) - img = self.preprocess_img(raw_img) - imgs = np.expand_dims(img, axis=0) - pred_output = self.inference_model.predict(imgs) - detections = get_detection_data(img=raw_img, - model_outputs=pred_output, - class_names=self.class_names) - - output_img = draw_bbox(raw_img, detections, cmap=self.class_color, random_color=random_color, figsize=figsize, - show_text=show_text, show_img=False) - if return_output: - return output_img, detections - else: - return detections - - def predict(self, img_path, random_color=True, plot_img=True, figsize=(10, 10), show_text=True): - raw_img = img_path - return self.predict_img(raw_img, random_color, plot_img, figsize, show_text) - - def export_gt(self, annotation_path, gt_folder_path): - with open(annotation_path) as file: - for line in file: - line = line.split(' ') - filename = line[0].split(os.sep)[-1].split('.')[0] - objs = line[1:] - # export txt file - with open(os.path.join(gt_folder_path, filename + '.txt'), 'w') as output_file: - for obj in objs: - x_min, y_min, x_max, y_max, class_id = [float(o) for o in obj.strip().split(',')] - output_file.write(f'{self.class_names[int(class_id)]} {x_min} {y_min} {x_max} {y_max}\n') - - def export_prediction(self, annotation_path, pred_folder_path, img_folder_path, bs=2): - with open(annotation_path) as file: - img_paths = [os.path.join(img_folder_path, line.split(' ')[0].split(os.sep)[-1]) for line in file] - # print(img_paths[:20]) - for batch_idx in tqdm(range(0, len(img_paths), bs)): - # print(len(img_paths), batch_idx, batch_idx*bs, (batch_idx+1)*bs) - paths = img_paths[batch_idx:batch_idx+bs] - # print(paths) - # read and process img - imgs = np.zeros((len(paths), *self.img_size)) - raw_img_shapes = [] - for j, path in enumerate(paths): - img = cv2.imread(path) - raw_img_shapes.append(img.shape) - img = self.preprocess_img(img) - imgs[j] = img - - # process batch output - b_boxes, b_scores, b_classes, b_valid_detections = self.inference_model.predict(imgs) - for k in range(len(paths)): - num_boxes = b_valid_detections[k] - raw_img_shape = raw_img_shapes[k] - boxes = b_boxes[k, :num_boxes] - classes = b_classes[k, :num_boxes] - scores = b_scores[k, :num_boxes] - # print(raw_img_shape) - boxes[:, [0, 2]] = (boxes[:, [0, 2]] * raw_img_shape[1]) # w - boxes[:, [1, 3]] = (boxes[:, [1, 3]] * raw_img_shape[0]) # h - cls_names = [self.class_names[int(c)] for c in classes] - # print(raw_img_shape, boxes.astype(int), cls_names, scores) - - img_path = paths[k] - filename = img_path.split(os.sep)[-1].split('.')[0] - # print(filename) - output_path = os.path.join(pred_folder_path, filename+'.txt') - with open(output_path, 'w') as pred_file: - for box_idx in range(num_boxes): - b = boxes[box_idx] - pred_file.write(f'{cls_names[box_idx]} {scores[box_idx]} {b[0]} {b[1]} {b[2]} {b[3]}\n') - - - def eval_map(self, gt_folder_path, pred_folder_path, temp_json_folder_path, output_files_path): - """Process Gt""" - ground_truth_files_list = glob(gt_folder_path + '/*.txt') - assert len(ground_truth_files_list) > 0, 'no ground truth 
file' - ground_truth_files_list.sort() - # dictionary with counter per class - gt_counter_per_class = {} - counter_images_per_class = {} - - gt_files = [] - for txt_file in ground_truth_files_list: - file_id = txt_file.split(".txt", 1)[0] - file_id = os.path.basename(os.path.normpath(file_id)) - # check if there is a correspondent detection-results file - temp_path = os.path.join(pred_folder_path, (file_id + ".txt")) - assert os.path.exists(temp_path), "Error. File not found: {}\n".format(temp_path) - lines_list = read_txt_to_list(txt_file) - # create ground-truth dictionary - bounding_boxes = [] - is_difficult = False - already_seen_classes = [] - for line in lines_list: - class_name, left, top, right, bottom = line.split() - # check if class is in the ignore list, if yes skip - bbox = left + " " + top + " " + right + " " + bottom - bounding_boxes.append({"class_name": class_name, "bbox": bbox, "used": False}) - # count that object - if class_name in gt_counter_per_class: - gt_counter_per_class[class_name] += 1 - else: - # if class didn't exist yet - gt_counter_per_class[class_name] = 1 - - if class_name not in already_seen_classes: - if class_name in counter_images_per_class: - counter_images_per_class[class_name] += 1 - else: - # if class didn't exist yet - counter_images_per_class[class_name] = 1 - already_seen_classes.append(class_name) - - # dump bounding_boxes into a ".json" file - new_temp_file = os.path.join(temp_json_folder_path, file_id+"_ground_truth.json") #TEMP_FILES_PATH + "/" + file_id + "_ground_truth.json" - gt_files.append(new_temp_file) - with open(new_temp_file, 'w') as outfile: - json.dump(bounding_boxes, outfile) - - gt_classes = list(gt_counter_per_class.keys()) - # let's sort the classes alphabetically - gt_classes = sorted(gt_classes) - n_classes = len(gt_classes) - print(gt_classes, gt_counter_per_class) - - """Process prediction""" - - dr_files_list = sorted(glob(os.path.join(pred_folder_path, '*.txt'))) - - for class_index, class_name in enumerate(gt_classes): - bounding_boxes = [] - for txt_file in dr_files_list: - # the first time it checks if all the corresponding ground-truth files exist - file_id = txt_file.split(".txt", 1)[0] - file_id = os.path.basename(os.path.normpath(file_id)) - temp_path = os.path.join(gt_folder_path, (file_id + ".txt")) - if class_index == 0: - if not os.path.exists(temp_path): - error_msg = f"Error. 
File not found: {temp_path}\n" - print(error_msg) - lines = read_txt_to_list(txt_file) - for line in lines: - try: - tmp_class_name, confidence, left, top, right, bottom = line.split() - except ValueError: - error_msg = f"""Error: File {txt_file} in the wrong format.\n - Expected: \n - Received: {line} \n""" - print(error_msg) - if tmp_class_name == class_name: - # print("match") - bbox = left + " " + top + " " + right + " " + bottom - bounding_boxes.append({"confidence": confidence, "file_id": file_id, "bbox": bbox}) - # sort detection-results by decreasing confidence - bounding_boxes.sort(key=lambda x: float(x['confidence']), reverse=True) - with open(temp_json_folder_path + "/" + class_name + "_dr.json", 'w') as outfile: - json.dump(bounding_boxes, outfile) - - """ - Calculate the AP for each class - """ - sum_AP = 0.0 - ap_dictionary = {} - # open file to store the output - with open(output_files_path + "/output.txt", 'w') as output_file: - output_file.write("# AP and precision/recall per class\n") - count_true_positives = {} - for class_index, class_name in enumerate(gt_classes): - count_true_positives[class_name] = 0 - """ - Load detection-results of that class - """ - dr_file = temp_json_folder_path + "/" + class_name + "_dr.json" - dr_data = json.load(open(dr_file)) - - """ - Assign detection-results to ground-truth objects - """ - nd = len(dr_data) - tp = [0] * nd # creates an array of zeros of size nd - fp = [0] * nd - for idx, detection in enumerate(dr_data): - file_id = detection["file_id"] - gt_file = temp_json_folder_path + "/" + file_id + "_ground_truth.json" - ground_truth_data = json.load(open(gt_file)) - ovmax = -1 - gt_match = -1 - # load detected object bounding-box - bb = [float(x) for x in detection["bbox"].split()] - for obj in ground_truth_data: - # look for a class_name match - if obj["class_name"] == class_name: - bbgt = [float(x) for x in obj["bbox"].split()] - bi = [max(bb[0], bbgt[0]), max(bb[1], bbgt[1]), min(bb[2], bbgt[2]), min(bb[3], bbgt[3])] - iw = bi[2] - bi[0] + 1 - ih = bi[3] - bi[1] + 1 - if iw > 0 and ih > 0: - # compute overlap (IoU) = area of intersection / area of union - ua = (bb[2] - bb[0] + 1) * (bb[3] - bb[1] + 1) + \ - (bbgt[2] - bbgt[0]+ 1) * (bbgt[3] - bbgt[1] + 1) - iw * ih - ov = iw * ih / ua - if ov > ovmax: - ovmax = ov - gt_match = obj - - min_overlap = 0.5 - if ovmax >= min_overlap: - # if "difficult" not in gt_match: - if not bool(gt_match["used"]): - # true positive - tp[idx] = 1 - gt_match["used"] = True - count_true_positives[class_name] += 1 - # update the ".json" file - with open(gt_file, 'w') as f: - f.write(json.dumps(ground_truth_data)) - else: - # false positive (multiple detection) - fp[idx] = 1 - else: - fp[idx] = 1 - - - # compute precision/recall - cumsum = 0 - for idx, val in enumerate(fp): - fp[idx] += cumsum - cumsum += val - print('fp ', cumsum) - cumsum = 0 - for idx, val in enumerate(tp): - tp[idx] += cumsum - cumsum += val - print('tp ', cumsum) - rec = tp[:] - for idx, val in enumerate(tp): - rec[idx] = float(tp[idx]) / gt_counter_per_class[class_name] - print('recall ', cumsum) - prec = tp[:] - for idx, val in enumerate(tp): - prec[idx] = float(tp[idx]) / (fp[idx] + tp[idx]) - print('prec ', cumsum) - - ap, mrec, mprec = voc_ap(rec[:], prec[:]) - sum_AP += ap - text = "{0:.2f}%".format( - ap * 100) + " = " + class_name + " AP " # class_name + " AP = {0:.2f}%".format(ap*100) - - print(text) - ap_dictionary[class_name] = ap - - n_images = counter_images_per_class[class_name] - # lamr, mr, fppi = 
log_average_miss_rate(np.array(prec), np.array(rec), n_images) - # lamr_dictionary[class_name] = lamr - - """ - Draw plot - """ - if True: - plt.plot(rec, prec, '-o') - # add a new penultimate point to the list (mrec[-2], 0.0) - # since the last line segment (and respective area) do not affect the AP value - area_under_curve_x = mrec[:-1] + [mrec[-2]] + [mrec[-1]] - area_under_curve_y = mprec[:-1] + [0.0] + [mprec[-1]] - plt.fill_between(area_under_curve_x, 0, area_under_curve_y, alpha=0.2, edgecolor='r') - # set window title - fig = plt.gcf() # gcf - get current figure - fig.canvas.set_window_title('AP ' + class_name) - # set plot title - plt.title('class: ' + text) - # plt.suptitle('This is a somewhat long figure title', fontsize=16) - # set axis titles - plt.xlabel('Recall') - plt.ylabel('Precision') - # optional - set axes - axes = plt.gca() # gca - get current axes - axes.set_xlim([0.0, 1.0]) - axes.set_ylim([0.0, 1.05]) # .05 to give some extra space - # Alternative option -> wait for button to be pressed - # while not plt.waitforbuttonpress(): pass # wait for key display - # Alternative option -> normal display - plt.show() - # save the plot - # fig.savefig(output_files_path + "/classes/" + class_name + ".png") - # plt.cla() # clear axes for next plot - - # if show_animation: - # cv2.destroyAllWindows() - - output_file.write("\n# mAP of all classes\n") - mAP = sum_AP / n_classes - text = "mAP = {0:.2f}%".format(mAP * 100) - output_file.write(text + "\n") - print(text) - - """ - Count total of detection-results - """ - # iterate through all the files - det_counter_per_class = {} - for txt_file in dr_files_list: - # get lines to list - lines_list = read_txt_to_list(txt_file) - for line in lines_list: - class_name = line.split()[0] - # check if class is in the ignore list, if yes skip - # if class_name in args.ignore: - # continue - # count that object - if class_name in det_counter_per_class: - det_counter_per_class[class_name] += 1 - else: - # if class didn't exist yet - det_counter_per_class[class_name] = 1 - # print(det_counter_per_class) - dr_classes = list(det_counter_per_class.keys()) - - """ - Plot the total number of occurences of each class in the ground-truth - """ - if True: - window_title = "ground-truth-info" - plot_title = "ground-truth\n" - plot_title += "(" + str(len(ground_truth_files_list)) + " files and " + str(n_classes) + " classes)" - x_label = "Number of objects per class" - output_path = output_files_path + "/ground-truth-info.png" - to_show = False - plot_color = 'forestgreen' - draw_plot_func( - gt_counter_per_class, - n_classes, - window_title, - plot_title, - x_label, - output_path, - to_show, - plot_color, - '', - ) - - """ - Finish counting true positives - """ - for class_name in dr_classes: - # if class exists in detection-result but not in ground-truth then there are no true positives in that class - if class_name not in gt_classes: - count_true_positives[class_name] = 0 - # print(count_true_positives) - - """ - Plot the total number of occurences of each class in the "detection-results" folder - """ - if True: - window_title = "detection-results-info" - # Plot title - plot_title = "detection-results\n" - plot_title += "(" + str(len(dr_files_list)) + " files and " - count_non_zero_values_in_dictionary = sum(int(x) > 0 for x in list(det_counter_per_class.values())) - plot_title += str(count_non_zero_values_in_dictionary) + " detected classes)" - # end Plot title - x_label = "Number of objects per class" - output_path = output_files_path + 
"/detection-results-info.png" - to_show = False - plot_color = 'forestgreen' - true_p_bar = count_true_positives - draw_plot_func( - det_counter_per_class, - len(det_counter_per_class), - window_title, - plot_title, - x_label, - output_path, - to_show, - plot_color, - true_p_bar - ) - - """ - Draw mAP plot (Show AP's of all classes in decreasing order) - """ - if True: - window_title = "mAP" - plot_title = "mAP = {0:.2f}%".format(mAP * 100) - x_label = "Average Precision" - output_path = output_files_path + "/mAP.png" - to_show = True - plot_color = 'royalblue' - draw_plot_func( - ap_dictionary, - n_classes, - window_title, - plot_title, - x_label, - output_path, - to_show, - plot_color, - "" - ) - - def predict_raw(self, img_path): - raw_img = cv2.imread(img_path) - print('img shape: ', raw_img.shape) - img = self.preprocess_img(raw_img) - imgs = np.expand_dims(img, axis=0) - return self.yolo_model.predict(imgs) - - def predict_nonms(self, img_path, iou_threshold=0.413, score_threshold=0.1): - raw_img = cv2.imread(img_path) - print('img shape: ', raw_img.shape) - img = self.preprocess_img(raw_img) - imgs = np.expand_dims(img, axis=0) - yolov4_output = self.yolo_model.predict(imgs) - output = yolov4_head(yolov4_output, self.num_classes, self.anchors, self.xyscale) - pred_output = nms(output, self.img_size, self.num_classes, iou_threshold, score_threshold) - pred_output = [p.numpy() for p in pred_output] - detections = get_detection_data(img=raw_img, - model_outputs=pred_output, - class_names=self.class_names) - draw_bbox(raw_img, detections, cmap=self.class_color, random_color=True) - return detections - diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/ipex/attention.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/ipex/attention.py deleted file mode 100644 index b555579a17ba03db21599b902fd249e5460cada6..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/ipex/attention.py +++ /dev/null @@ -1,127 +0,0 @@ -import torch - -# pylint: disable=protected-access, missing-function-docstring, line-too-long - -original_torch_bmm = torch.bmm -def torch_bmm(input, mat2, *, out=None): - if input.dtype != mat2.dtype: - mat2 = mat2.to(input.dtype) - - #ARC GPUs can't allocate more than 4GB to a single block, Slice it: - batch_size_attention, input_tokens, mat2_shape = input.shape[0], input.shape[1], mat2.shape[2] - block_multiply = 2.4 if input.dtype == torch.float32 else 1.2 - block_size = (batch_size_attention * input_tokens * mat2_shape) / 1024 * block_multiply #MB - split_slice_size = batch_size_attention - if block_size >= 4000: - do_split = True - #Find something divisible with the input_tokens - while ((split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply) > 4000: - split_slice_size = split_slice_size // 2 - if split_slice_size <= 1: - split_slice_size = 1 - break - else: - do_split = False - - split_block_size = (split_slice_size * input_tokens * mat2_shape) / 1024 * block_multiply #MB - split_2_slice_size = input_tokens - if split_block_size >= 4000: - do_split_2 = True - #Find something divisible with the input_tokens - while ((split_slice_size * split_2_slice_size * mat2_shape) / 1024 * block_multiply) > 4000: - split_2_slice_size = split_2_slice_size // 2 - if split_2_slice_size <= 1: - split_2_slice_size = 1 - break - else: - do_split_2 = False - - if do_split: - hidden_states = torch.zeros(input.shape[0], input.shape[1], mat2.shape[2], device=input.device, dtype=input.dtype) 
- for i in range(batch_size_attention // split_slice_size): - start_idx = i * split_slice_size - end_idx = (i + 1) * split_slice_size - if do_split_2: - for i2 in range(input_tokens // split_2_slice_size): # pylint: disable=invalid-name - start_idx_2 = i2 * split_2_slice_size - end_idx_2 = (i2 + 1) * split_2_slice_size - hidden_states[start_idx:end_idx, start_idx_2:end_idx_2] = original_torch_bmm( - input[start_idx:end_idx, start_idx_2:end_idx_2], - mat2[start_idx:end_idx, start_idx_2:end_idx_2], - out=out - ) - else: - hidden_states[start_idx:end_idx] = original_torch_bmm( - input[start_idx:end_idx], - mat2[start_idx:end_idx], - out=out - ) - else: - return original_torch_bmm(input, mat2, out=out) - return hidden_states - -original_scaled_dot_product_attention = torch.nn.functional.scaled_dot_product_attention -def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False): - #ARC GPUs can't allocate more than 4GB to a single block, Slice it: - shape_one, batch_size_attention, query_tokens, shape_four = query.shape - block_multiply = 2.4 if query.dtype == torch.float32 else 1.2 - block_size = (shape_one * batch_size_attention * query_tokens * shape_four) / 1024 * block_multiply #MB - split_slice_size = batch_size_attention - if block_size >= 4000: - do_split = True - #Find something divisible with the shape_one - while ((shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply) > 4000: - split_slice_size = split_slice_size // 2 - if split_slice_size <= 1: - split_slice_size = 1 - break - else: - do_split = False - - split_block_size = (shape_one * split_slice_size * query_tokens * shape_four) / 1024 * block_multiply #MB - split_2_slice_size = query_tokens - if split_block_size >= 4000: - do_split_2 = True - #Find something divisible with the batch_size_attention - while ((shape_one * split_slice_size * split_2_slice_size * shape_four) / 1024 * block_multiply) > 4000: - split_2_slice_size = split_2_slice_size // 2 - if split_2_slice_size <= 1: - split_2_slice_size = 1 - break - else: - do_split_2 = False - - if do_split: - hidden_states = torch.zeros(query.shape, device=query.device, dtype=query.dtype) - for i in range(batch_size_attention // split_slice_size): - start_idx = i * split_slice_size - end_idx = (i + 1) * split_slice_size - if do_split_2: - for i2 in range(query_tokens // split_2_slice_size): # pylint: disable=invalid-name - start_idx_2 = i2 * split_2_slice_size - end_idx_2 = (i2 + 1) * split_2_slice_size - hidden_states[:, start_idx:end_idx, start_idx_2:end_idx_2] = original_scaled_dot_product_attention( - query[:, start_idx:end_idx, start_idx_2:end_idx_2], - key[:, start_idx:end_idx, start_idx_2:end_idx_2], - value[:, start_idx:end_idx, start_idx_2:end_idx_2], - attn_mask=attn_mask[:, start_idx:end_idx, start_idx_2:end_idx_2] if attn_mask is not None else attn_mask, - dropout_p=dropout_p, is_causal=is_causal - ) - else: - hidden_states[:, start_idx:end_idx] = original_scaled_dot_product_attention( - query[:, start_idx:end_idx], - key[:, start_idx:end_idx], - value[:, start_idx:end_idx], - attn_mask=attn_mask[:, start_idx:end_idx] if attn_mask is not None else attn_mask, - dropout_p=dropout_p, is_causal=is_causal - ) - else: - return original_scaled_dot_product_attention( - query, key, value, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal - ) - return hidden_states - -def attention_init(): - #ARC GPUs can't allocate more than 4GB to a single block: - torch.bmm = torch_bmm - 
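Both torch_bmm and scaled_dot_product_attention above use the same sizing heuristic: estimate the block size from the tensor dimensions with a dtype-dependent fudge factor (labelled MB in the code), then keep halving the batch slice until the estimate drops under ~4000, since Arc GPUs cannot allocate more than about 4 GB in a single block. A standalone sketch of that first-level sizing loop with made-up shapes; the real code also applies a second split over the token dimension when one batch slice is still too large:

```python
# Standalone sketch of the block-size heuristic used by torch_bmm /
# scaled_dot_product_attention above. The shapes are made up for illustration.
def pick_slice_size(batch, tokens, head_dim, dtype_bytes=4):
    block_multiply = 2.4 if dtype_bytes == 4 else 1.2   # float32 vs half, as above
    slice_size = batch
    # Halve the number of batch items per slice until the estimate is < ~4000.
    while (slice_size * tokens * head_dim) / 1024 * block_multiply > 4000 and slice_size > 1:
        slice_size //= 2
    return slice_size

batch, tokens, head_dim = 8, 1024, 512   # hypothetical attention shapes
slice_size = pick_slice_size(batch, tokens, head_dim)
print(f"{slice_size} batch item(s) per slice -> {batch // slice_size} slice(s)")
# -> 2 batch item(s) per slice -> 4 slice(s)
```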
torch.nn.functional.scaled_dot_product_attention = scaled_dot_product_attention \ No newline at end of file diff --git a/spaces/Lbin123/Lbingo/src/pages/api/sydney.ts b/spaces/Lbin123/Lbingo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/Lianjd/stock_dashboard/backtrader/strategies/sma_crossover.py b/spaces/Lianjd/stock_dashboard/backtrader/strategies/sma_crossover.py deleted file mode 100644 index 61c42eef71f1fb8de1993ed5aaf5f66ffc83d60e..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/strategies/sma_crossover.py +++ /dev/null @@ -1,74 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
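The sydney.ts handler above keeps the upstream WebSocket alive with two re-armable watchdogs: every incoming message re-arms a 1.5 s timer that sends a type-6 keep-alive and a 10 s timer that closes the connection. A rough Python sketch of that re-armable watchdog idea, using threading.Timer; the class, names, and intervals are illustrative translations of the TypeScript pattern, not the repository's implementation:

```python
# Rough Python sketch of the re-armable watchdog pattern used in sydney.ts above.
# Each call to watch() cancels the previous timer and starts a new one, so the
# callback only fires after `seconds` of inactivity. Names are illustrative.
import threading

class WatchDog:
    def __init__(self):
        self._timer = None

    def watch(self, callback, seconds):
        self.reset()
        self._timer = threading.Timer(seconds, callback)
        self._timer.start()

    def reset(self):
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

# On every message: send a keep-alive if idle 1.5 s, close if idle 10 s.
ping_dog, close_dog = WatchDog(), WatchDog()
ping_dog.watch(lambda: print("send {'type': 6} keep-alive"), 1.5)
close_dog.watch(lambda: print("close websocket"), 10)
```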
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -import backtrader as bt -import backtrader.indicators as btind - - -class MA_CrossOver(bt.Strategy): - '''This is a long-only strategy which operates on a moving average cross - - Note: - - Although the default - - Buy Logic: - - No position is open on the data - - - The ``fast`` moving averagecrosses over the ``slow`` strategy to the - upside. - - Sell Logic: - - A position exists on the data - - - The ``fast`` moving average crosses over the ``slow`` strategy to the - downside - - Order Execution Type: - - Market - - ''' - alias = ('SMA_CrossOver',) - - params = ( - # period for the fast Moving Average - ('fast', 10), - # period for the slow moving average - ('slow', 30), - # moving average to use - ('_movav', btind.MovAv.SMA) - ) - - def __init__(self): - sma_fast = self.p._movav(period=self.p.fast) - sma_slow = self.p._movav(period=self.p.slow) - - self.buysig = btind.CrossOver(sma_fast, sma_slow) - - def next(self): - if self.position.size: - if self.buysig < 0: - self.sell() - - elif self.buysig > 0: - self.buy() diff --git a/spaces/Liu-LAB/GPT-academic/tests/test_plugins.py b/spaces/Liu-LAB/GPT-academic/tests/test_plugins.py deleted file mode 100644 index ec28af1e671282f4e3b7f8ef14fb6ac7bdb36e65..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/tests/test_plugins.py +++ /dev/null @@ -1,57 +0,0 @@ -""" -对项目中的各个插件进行测试。运行方法:直接运行 python tests/test_plugins.py -""" - - -import os, sys -def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume) -validate_path() # 返回项目根路径 -from tests.test_utils import plugin_test - -if __name__ == "__main__": - # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep') - plugin_test(plugin='crazy_functions.批量翻译PDF文档_NOUGAT->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf') - - # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件,对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析') - - # plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表') - - # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn") - - # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc") - - # plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention") - - # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown中译英', main_input="README.md") - - # plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf') - - # plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=") - - # plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word") - - # plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695") - - # plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?") - - # plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples") - - # 
plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.") - - # for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]: - # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang}) - - # plugin_test(plugin='crazy_functions.Langchain知识库->知识库问答', main_input="./") - - # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="What is the installation method?") - - # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="远程云服务器部署?") - - # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629") - - # advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求:100字以内,用第二人称。' --system_prompt=''" } - # plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg) - - # advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " } - # plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg) - diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/crnn/crnn_toy_dataset.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/crnn/crnn_toy_dataset.py deleted file mode 100644 index f61c68afe285e4d1943cbcbb8ede1fe965a99a4b..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/crnn/crnn_toy_dataset.py +++ /dev/null @@ -1,47 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/recog_pipelines/crnn_pipeline.py', - '../../_base_/recog_datasets/toy_data.py', - '../../_base_/schedules/schedule_adadelta_5e.py' -] - -label_convertor = dict( - type='CTCConvertor', dict_type='DICT36', with_unknown=True, lower=True) - -model = dict( - type='CRNNNet', - preprocessor=None, - backbone=dict(type='VeryDeepVgg', leaky_relu=False, input_channels=1), - encoder=None, - decoder=dict(type='CRNNDecoder', in_channels=512, rnn_flag=True), - loss=dict(type='CTCLoss'), - label_convertor=label_convertor, - pretrained=None) - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=32, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') - -cudnn_benchmark = True diff --git a/spaces/MGLDZM/chgpt/static/css/app.css b/spaces/MGLDZM/chgpt/static/css/app.css deleted file mode 100644 index 1ca9dcea77c126f3d44eea94da6a4801e1d5241b..0000000000000000000000000000000000000000 --- a/spaces/MGLDZM/chgpt/static/css/app.css +++ /dev/null @@ -1,248 +0,0 @@ -html, -body { - margin: 0; - padding: 0; - width: 100%; - height: 100svh; - font-family: Monospace; - font-size: 15px; -} -body { - display: flex; -} -button{ - cursor: pointer; -} -.wrapper { - background: #34495e; - margin: 0; - min-width: 320px; 
- width: 100%; - display: grid; - grid-template-rows: 40px auto; - height: 100%; -} -.chat { - --textarea: 0px; - border-radius: 5px; - display: block; - width: 100%; - overflow-y: scroll; - overflow-x: hidden; - background: rgb(161, 161, 161); - padding: 10px 0; - height: max( calc( 100svh - 120px - var(--textarea) ), calc(50svh - 90px) ); -} - -.chat .message { - display: flex; - margin: 5px 20px 5px 10px; - filter: opacity(0.9); -} -.chat .message.me { - margin: 5px 10px 5px 20px; -} -.chat .message.comando { - margin: 5px auto; - display: table; -} -.chat .message:last-child { - filter: opacity(1); -} - -.chat .message.no-opacity { - display: flex; - margin: 10px 0 0 10px; - filter: opacity(1); -} - -.chat .message img { - margin: 0 10px 0 0; - height: 30px; - border-radius: 50%; -} - -.chat .message.me img { - order: 2; - margin: 0 0 0 3px; -} - -.chat .message div { - flex: 1; - max-width: 100%; -} - -.chat .message div p { - max-width: calc( 100% - 20px ); - display: inline-block; - margin: 0; - padding: 8px 10px 8px 10px; - background: #fff; - border-radius: 3px; - box-shadow: 0 1px 2px rgba(0, 0, 0, 0.1); - min-width: 40%; - transition: 0.5s height; -} - -.chat .message.me div p { - float: right; - background: #b4c2ff; -} -.chat .message.comando div p { - background: #d0a8ff; -} -.chat .message.warning div p { - background: #f0e370; -} -.chat .message.error div p { - background: #f09470; -} - -.chat .message div p ul { - list-style: none; - color: #555; - padding-right: 10px; -} - -.chat .message:last-child div p ul { - list-style: none; - color: blue; -} - -.chat .message div p ul.ultotal { - list-style: none; - color: #34495e; - font-size: 12px; -} - -.chat .message pre { - overflow-x: scroll; - border: solid 1px #e5e4e4; - padding: 10px; -} - -.input-box { - background: #222f3d; - margin: 10px 0; - height: 30px; - display: flex; - border-radius: 5px; - max-height: 50svh; -} - -.input-box textarea, -.input-box button { - height: 100%; - margin: 0; - border: none; - padding: 0 15px; -} - -.input-box button:focus, .input-box textarea:focus { - outline: none; -} - -.input-box .input-text { - width: 100%; - border-radius: 5px 0 0 5px; - resize: none; - border-top: solid 7px #fff; - border-bottom: solid 7px #fff; - -} -.input-box button{ - width: 30px; - background-size: 20px; - background-color: #ddd; - background-repeat: no-repeat; - background-position: center; - border-left: solid 1px #555; -} -.input-box .input-send { - background-image: url(/static/img/send.png); -} -.input-box .input-delete { - background-image: url(/static/img/delete.png); -} -.input-box button:first-child{ - border-left: none; -} -.input-box button:last-child{ - border-radius: 0 5px 5px 0; -} -.input-box button:disabled, .input-box textarea:disabled{ - background-color: #8b8b8b; - border-color: #8b8b8b; - -} - -#message-template{ - display: none; -} - -.loader-wrap { - display: flex; -} -.loader { - margin: auto; - width: 48px; - height: 48px; - border: 3px dotted #476380; - border-style: solid solid dotted dotted; - border-radius: 50%; - display: inline-block; - position: relative; - box-sizing: border-box; - animation: rotation 2s linear infinite; -} -.loader.firststage{ - border: 3px dotted #49b359; - border-style: solid solid dotted dotted; - transition:all 1s; -} -.loader::after { - content: ''; - box-sizing: border-box; - position: absolute; - left: 0; - right: 0; - top: 0; - bottom: 0; - margin: auto; - border: 3px dotted #445464; - border-style: solid solid dotted; - width: 24px; - height: 24px; - 
border-radius: 50%; - animation: rotationBack 1s linear infinite; - transform-origin: center center; -} -.loader.firststage::after { - border: 3px dotted #49b359; - border-style: solid solid dotted; - transition:all 1s; -} - -@keyframes rotation { - 0% { - transform: rotate(0deg); - } - 100% { - transform: rotate(360deg); - } - } - @keyframes rotationBack { - 0% { - transform: rotate(0deg); - } - 100% { - transform: rotate(-360deg); - } -} -.loader-wrap ~ div { - text-align: center; - margin-top: 10px; -} -dialog{ - margin: auto; -} diff --git a/spaces/Marshalls/testmtd/feature_extraction/get_motion_scalers.sh b/spaces/Marshalls/testmtd/feature_extraction/get_motion_scalers.sh deleted file mode 100644 index b4e303bfe2e2015f2cc36643eb96b4480e5ef0d6..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/get_motion_scalers.sh +++ /dev/null @@ -1,49 +0,0 @@ - -folder=$1 -py=python3 -#n=$(nproc) -n=6 - -#target fps -fps=20 - -# to convert aistpp to BVH with mixamo skeleton -#mpirun -n $n $py feature_extraction/process_aistpp.py $@ --fps 60 # this fps is the source fps of aistpp which is 60Hz - -# code for Rotmat representation for AISTPP -#mpirun -n $n $py ./scripts/feature_extraction/aistpp_to_rotmats.py $@ -#mpirun -n 1 $py ./scripts/feature_extraction/extract_transform2.py $@ --feature_name pkl_joint_angles_mats --transforms scaler -# mpirun -n $n $py ./scripts/feature_extraction/apply_transforms.py $@ --feature_name pkl_joint_angles_mats --transform_name scaler --new_feature_name joint_angles_scaled - -# code for Expmap representations from bvhs -param=expmap -#param=position - -#mpirun -n $n $py feature_extraction/process_motions.py $@ --param ${param} --fps $fps --do_mirror -mpirun -n 1 $py feature_extraction/extract_transform.py $1 --feature_name expmap_scaled_20.generated --transforms 2moments -mpirun -n 1 $py feature_extraction/extract_transform.py $1 --feature_name expmap_scaled_20.generated --transforms 2moments_ext -#mpirun -n 1 $py feature_extraction/extract_transform.py $1 --feature_name bvh_expmap_cr --transforms 2moments -#mpirun -n 1 $py feature_extraction/extract_transform.py $1 --feature_name bvh_expmap_cr --transforms 2moments_ext -#mpirun -n $n $py feature_extraction/apply_transforms.py $@ --feature_name bvh_${param} --transform_name scaler --new_feature_name ${param}_scaled_${fps} -#cp $1/motion_expmap_data_pipe.sav $1/motion_${param}_scaled_${fps}_data_pipe.sav - -#with constant revmoer -#mpirun -n $n $py feature_extraction/process_motions.py $@ --param ${param} --fps $fps --do_mirror -#rename 's/bvh_expmap/bvh_expmap_cr/' $1/*bvh_expmap.npy -#mpirun -n 1 $py feature_extraction/extract_transform2.py $1 --feature_name bvh_${param}_cr --transforms scaler -#mpirun -n $n $py feature_extraction/apply_transforms.py $@ --feature_name bvh_${param}_cr --transform_name scaler --new_feature_name ${param}_cr_scaled_${fps} -#cp $1/motion_expmap_data_pipe.sav $1/motion_${param}_cr_scaled_${fps}_data_pipe.sav - -#if doing mirroring -#feature_extraction/duplicate_features.sh $1 audio_feats_scaled_20 - - -# for moglow -#param=position -#mpirun -n 1 $py feature_extraction/extract_transform2.py $1 --feature_name moglow_loc --transforms scaler -#mpirun -n $n $py feature_extraction/apply_transforms.py $@ --feature_name moglow_loc --transform_name scaler --new_feature_name ${param}_scaled -#cp $1/moglow_loc_scaler.pkl $1/moglow_position_scaled_scaler.pkl -#mpirun -n 1 $py feature_extraction/extract_transform2.py $1 --feature_name moglow_loc_control 
--transforms scaler -#mpirun -n $n $py feature_extraction/apply_transforms.py $@ --feature_name moglow_loc_control --transform_name scaler --new_feature_name moglow_control_scaled -#cp $1/moglow_loc_control_scaler.pkl $1/moglow_control_scaled_scaler.pkl - diff --git a/spaces/MathysL/AutoGPT4/ui/api.py b/spaces/MathysL/AutoGPT4/ui/api.py deleted file mode 100644 index 3b46ad32148b23f06c6eb64c88708fc2bf92e4dc..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/ui/api.py +++ /dev/null @@ -1,146 +0,0 @@ -import os, sys -import utils -import uuid -import json -import subprocess, threading - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -REPO_DIR = os.path.dirname(FILE_DIR) -STATE_DIR = os.path.join(FILE_DIR, "state") -sys.path.append(REPO_DIR) -if not os.path.exists(STATE_DIR): - os.mkdir(STATE_DIR) -import time - - -def get_openai_api_key(): - return os.getenv("OPENAI_API_KEY") - - -running_apis = [] - - -def get_state(state_file): - with open(state_file, "r") as f: - state = json.load(f) - return state - - -def set_state(state_file, state): - with open(state_file, "w") as f: - json.dump(state, f) - - -class AutoAPI: - def __init__(self, openai_key, ai_name, ai_role, top_5_goals): - self.openai_key = openai_key - hex = uuid.uuid4().hex - print(hex) - self.state_file = os.path.join(STATE_DIR, f"state_{hex}.json") - self.log_file = os.path.join(STATE_DIR, f"log_{hex}.json") - - newline = "\n" - with open(os.path.join(REPO_DIR, "ai_settings.yaml"), "w") as f: - f.write( - f"""ai_goals: -{newline.join([f'- {goal[0]}' for goal in top_5_goals if goal[0]])} -ai_name: {ai_name} -ai_role: {ai_role} -""" - ) - state = { - "pending_input": None, - "awaiting_input": False, - "messages": [], - "last_message_read_index": -1, - } - set_state(self.state_file, state) - - with open(self.log_file, "w") as f: - subprocess.Popen( - [ - "python", - os.path.join(REPO_DIR, "ui", "api.py"), - openai_key, - self.state_file, - ], - cwd=REPO_DIR, - stdout=f, - stderr=f, - ) - - def send_message(self, message="Y"): - state = get_state(self.state_file) - state["pending_input"] = message - state["awaiting_input"] = False - set_state(self.state_file, state) - - def get_chatbot_response(self): - while True: - state = get_state(self.state_file) - if ( - state["awaiting_input"] - and state["last_message_read_index"] >= len(state["messages"]) - 1 - ): - break - if state["last_message_read_index"] >= len(state["messages"]) - 1: - time.sleep(1) - else: - state["last_message_read_index"] += 1 - title, content = state["messages"][state["last_message_read_index"]] - yield (f"**{title.strip()}** " if title else "") + utils.remove_color( - content - ).replace("\n", "
    ") - set_state(self.state_file, state) - - -if __name__ == "__main__": - print(sys.argv) - _, openai_key, state_file = sys.argv - os.environ["OPENAI_API_KEY"] = openai_key - import autogpt.config.config - from autogpt.logs import logger - from autogpt.cli import main - import autogpt.utils - from autogpt.spinner import Spinner - - def add_message(title, content): - state = get_state(state_file) - state["messages"].append((title, content)) - set_state(state_file, state) - - def typewriter_log(title="", title_color="", content="", *args, **kwargs): - add_message(title, content) - - def warn(message, title="", *args, **kwargs): - add_message(title, message) - - def error(title, message="", *args, **kwargs): - add_message(title, message) - - def clean_input(prompt=""): - add_message(None, prompt) - state = get_state(state_file) - state["awaiting_input"] = True - set_state(state_file, state) - while state["pending_input"] is None: - state = get_state(state_file) - print("Waiting for input...") - time.sleep(1) - print("Got input") - pending_input = state["pending_input"] - state["pending_input"] = None - set_state(state_file, state) - return pending_input - - def spinner_start(): - add_message(None, "Thinking...") - - logger.typewriter_log = typewriter_log - logger.warn = warn - logger.error = error - autogpt.utils.clean_input = clean_input - Spinner.spin = spinner_start - - sys.argv = sys.argv[:1] - main() diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/mlsd/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/mlsd/__init__.py deleted file mode 100644 index 42af28c682e781b30f691f65a475b53c9f3adc8b..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/mlsd/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -import cv2 -import numpy as np -import torch -import os - -from einops import rearrange -from .models.mbv2_mlsd_tiny import MobileV2_MLSD_Tiny -from .models.mbv2_mlsd_large import MobileV2_MLSD_Large -from .utils import pred_lines - -from annotator.util import annotator_ckpts_path - - -remote_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/mlsd_large_512_fp32.pth" - - -class MLSDdetector: - def __init__(self): - model_path = os.path.join(annotator_ckpts_path, "mlsd_large_512_fp32.pth") - if not os.path.exists(model_path): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - model = MobileV2_MLSD_Large() - model.load_state_dict(torch.load(model_path), strict=True) - self.model = model.cuda().eval() - - def __call__(self, input_image, thr_v, thr_d): - assert input_image.ndim == 3 - img = input_image - img_output = np.zeros_like(img) - try: - with torch.no_grad(): - lines = pred_lines(img, self.model, [img.shape[0], img.shape[1]], thr_v, thr_d) - for line in lines: - x_start, y_start, x_end, y_end = [int(val) for val in line] - cv2.line(img_output, (x_start, y_start), (x_end, y_end), [255, 255, 255], 1) - except Exception as e: - pass - return img_output[:, :, 0] diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/upernet_r50.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/upernet_r50.py deleted file mode 100644 index 10974962fdd7136031fd06de1700f497d355ceaa..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/upernet_r50.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = 
dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 1, 1), - strides=(1, 2, 2, 2), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='UPerHead', - in_channels=[256, 512, 1024, 2048], - in_index=[0, 1, 2, 3], - pool_scales=(1, 2, 3, 6), - channels=512, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/MiguelVGP/bearclassifier/app.py b/spaces/MiguelVGP/bearclassifier/app.py deleted file mode 100644 index 7f5dfcf0a518e0134ddad5554e2ae92831aa38ff..0000000000000000000000000000000000000000 --- a/spaces/MiguelVGP/bearclassifier/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - -learn = load_learner('model_0.pkl') - -labels = learn.dls.vocab -def predict(img): - #img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Bear Classifier" -description = "Created with fastai. Created as a demo for Gradio and HuggingFace Spaces" -examples = ['black_bear.jpg','grizzly_bear.jpg','teddy_bear.jpg'] -enable_queue=True - -gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,examples=examples,enable_queue=enable_queue).launch() \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/naf_parser.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/naf_parser.py deleted file mode 100644 index 988b4b453b1aba44dca342a4be1f0258f583ca08..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/naf_parser.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -from typing import List, Tuple - -import numpy as np - -from mmocr.registry import DATA_PARSERS -from .base import BaseParser - - -@DATA_PARSERS.register_module() -class NAFAnnParser(BaseParser): - """NAF dataset parser. - - The original annotation format of this dataset is stored in json files, - which has the following keys that will be used here: - - 'textBBs': List of text bounding box objects - - 'poly_points': list of [x,y] pairs, the box corners going - top-left,top-right,bottom-right,bottom-left - - 'id': id of the textBB, used to match with the text - - 'transcriptions': Dict of transcription objects, use the 'id' key - to match with the textBB. - - Some special characters are used in the transcription: - "«text»" indicates that "text" had a strikethrough - "¿" indicates the transcriber could not read a character - "§" indicates the whole line or word was illegible - "" (empty string) is if the field was blank - - Args: - ignore (list(str)): The text of the ignored instances. Default: ['#']. 
- det (bool): Whether to parse the detection annotation. Default: True. - If False, the parser will consider special case in NAF dataset - where the transcription is not available. - """ - - def __init__(self, - ignore: List[str] = ['#'], - det: bool = True, - **kwargs) -> None: - self.ignore = ignore - self.det = det - super().__init__(**kwargs) - - def parse_file(self, img_path: str, ann_path: str) -> Tuple: - """Convert single annotation.""" - instances = list() - for poly, text in self.loader(ann_path): - instances.append( - dict(poly=poly, text=text, ignore=text in self.ignore)) - - return img_path, instances - - def loader(self, file_path: str) -> str: - """Load the annotation of the NAF dataset. - - Args: - file_path (str): Path to the json file - - Retyrb: - str: Complete annotation of the json file - """ - with open(file_path, 'r') as f: - data = json.load(f) - - # 'textBBs' contains the printed texts of the table while 'fieldBBs' - # contains the text filled by human. - for box_type in ['textBBs', 'fieldBBs']: - if not self.det: - # 'textBBs' is only used for detection task. - if box_type == 'textBBs': - continue - for anno in data[box_type]: - # Skip blanks - if self.det: - if box_type == 'fieldBBs': - if anno['type'] == 'blank': - continue - poly = np.array(anno['poly_points']).reshape( - 1, 8)[0].tolist() - # Since detection task only need poly, we can skip the - # transcription part that can be empty. - text = None - else: - # For tasks that need transcription, NAF dataset has - # serval special cases: - # 1. The transcription for the whole image is not - # available. - # 2. The transcription for the certain text is not - # available. - # 3. If the length of the transcription is 0, it should - # be ignored. - if 'transcriptions' not in data.keys(): - break - if anno['id'] not in data['transcriptions'].keys(): - continue - text = data['transcriptions'][anno['id']] - text = text.strip( - '\u202a') # Remove unicode control character - text = text.replace('»', '').replace( - '«', '') # Remove strikethrough flag - if len(text) == 0: - continue - poly = np.array(anno['poly_points']).reshape( - 1, 8)[0].tolist() - yield poly, text diff --git a/spaces/MrBodean/VoiceClone/synthesizer/utils/__init__.py b/spaces/MrBodean/VoiceClone/synthesizer/utils/__init__.py deleted file mode 100644 index 5ae3e48110e61231acf1e666e5fa76af5e4ebdcd..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/synthesizer/utils/__init__.py +++ /dev/null @@ -1,45 +0,0 @@ -import torch - - -_output_ref = None -_replicas_ref = None - -def data_parallel_workaround(model, *input): - global _output_ref - global _replicas_ref - device_ids = list(range(torch.cuda.device_count())) - output_device = device_ids[0] - replicas = torch.nn.parallel.replicate(model, device_ids) - # input.shape = (num_args, batch, ...) - inputs = torch.nn.parallel.scatter(input, device_ids) - # inputs.shape = (num_gpus, num_args, batch/num_gpus, ...) 
- replicas = replicas[:len(inputs)] - outputs = torch.nn.parallel.parallel_apply(replicas, inputs) - y_hat = torch.nn.parallel.gather(outputs, output_device) - _output_ref = outputs - _replicas_ref = replicas - return y_hat - - -class ValueWindow(): - def __init__(self, window_size=100): - self._window_size = window_size - self._values = [] - - def append(self, x): - self._values = self._values[-(self._window_size - 1):] + [x] - - @property - def sum(self): - return sum(self._values) - - @property - def count(self): - return len(self._values) - - @property - def average(self): - return self.sum / max(1, self.count) - - def reset(self): - self._values = [] diff --git a/spaces/MultiTransformer/vision-agent-with-llava/index.html b/spaces/MultiTransformer/vision-agent-with-llava/index.html deleted file mode 100644 index 5561a680a478235f6152f353b2fe4a43cb408be7..0000000000000000000000000000000000000000 --- a/spaces/MultiTransformer/vision-agent-with-llava/index.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - - - Vision Agent With Llava - - - -
    Try out a very powerful vision agent with Llava.
    Click on app.py (https://huggingface.co/spaces/MultiTransformer/vision-agent-with-llava/blob/main/app.py) in the Files and versions tab (top right).
    - - diff --git a/spaces/Mwebrania/clasma_database/app.py b/spaces/Mwebrania/clasma_database/app.py deleted file mode 100644 index f638c04f6eb14194edace309b7822e0958dec045..0000000000000000000000000000000000000000 --- a/spaces/Mwebrania/clasma_database/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - -#installed the fastai -from fastai import * -from fastai.vision import * -#from fastbook import * - -learn = load_learner('clasma.pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "CLASMA" -description = "AI app for classifying disease in cereal crops leaves." -article = "sample image" -examples = ['corn.jpg'] -interpretation = 'default' -enable_queue = True - -gr.Interface(fn = predict, inputs = gr.inputs.Image(shape = (512, 512)), -outputs = gr.outputs.Label(num_top_classes = 8), -title = title, -description = description, -article = article, -examples = examples, -interpretation = interpretation, -enable_queue = enable_queue).launch() \ No newline at end of file diff --git a/spaces/NN520/AI/src/components/chat-panel.tsx b/spaces/NN520/AI/src/components/chat-panel.tsx deleted file mode 100644 index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input]) - - return ( -
    { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
    - chat - -
    - - - - ''' - -# Streamlit App 🚀 -st.title("USMLE Medical Questions Explorer with Speech Synthesis 🎙") - -# Dropdown for file selection -file_option = st.selectbox("Select file:", ["usmle_16.2MB.jsonl", "usmle_2.08MB.jsonl"]) -st.write(f"You selected: {file_option}") - -# Load data -large_data = load_jsonl("usmle_16.2MB.jsonl") -small_data = load_jsonl("usmle_2.08MB.jsonl") - -data = large_data if file_option == "usmle_16.2MB.jsonl" else small_data - -# Top 20 healthcare terms for USMLE -top_20_terms = ['Heart', 'Lung', 'Pain', 'Memory', 'Kidney', 'Diabetes', 'Cancer', 'Infection', 'Virus', 'Bacteria', 'Neurology', 'Psychiatry', 'Gastrointestinal', 'Pediatrics', 'Oncology', 'Skin', 'Blood', 'Surgery', 'Epidemiology', 'Genetics'] - -# Create Expander and Columns UI for terms -with st.expander("Search by Common Terms 📚"): - cols = st.columns(4) - for term in top_20_terms: - with cols[top_20_terms.index(term) % 4]: - if st.button(f"{term}"): - filtered_data = filter_by_keyword(data, term) - st.write(f"Filtered Dataset by '{term}' 📊") - st.dataframe(filtered_data) - if not filtered_data.empty: - html_blocks = [] - for idx, row in filtered_data.iterrows(): - question_text = row.get("question", "No question field") - documentHTML5 = generate_html_with_textarea(question_text) - html_blocks.append(documentHTML5) - all_html = ''.join(html_blocks) - components.html(all_html, width=1280, height=1024) - -# Text input for search keyword -search_keyword = st.text_input("Or, enter a keyword to filter data:") -if st.button("Search 🕵️‍♀️"): - filtered_data = filter_by_keyword(data, search_keyword) - st.write(f"Filtered Dataset by '{search_keyword}' 📊") - st.dataframe(filtered_data) - if not filtered_data.empty: - html_blocks = [] - for idx, row in filtered_data.iterrows(): - question_text = row.get("question", "No question field") - documentHTML5 = generate_html_with_textarea(question_text) - html_blocks.append(documentHTML5) - all_html = ''.join(html_blocks) - components.html(all_html, width=1280, height=1024) - - - -# Inject HTML5 and JavaScript for styling -st.markdown(""" - -""", unsafe_allow_html=True) - -# Markdown and emojis for the case presentation -st.markdown("# 🏥 Case Study: 32-year-old Woman's Wellness Check") -st.markdown("## 📋 Patient Information") -st.markdown(""" -- **Age**: 32 -- **Gender**: Female -- **Past Medical History**: Asthma, Hypertension, Anxiety -- **Current Medications**: Albuterol, Fluticasone, Hydrochlorothiazide, Lisinopril, Fexofenadine -- **Vitals** - - **Temperature**: 99.5°F (37.5°C) - - **Blood Pressure**: 165/95 mmHg - - **Pulse**: 70/min - - **Respirations**: 15/min - - **Oxygen Saturation**: 98% on room air -""") - -# Clinical Findings -st.markdown("## 📋 Clinical Findings") -st.markdown(""" -- Cardiac exam reveals a S1 and S2 heart sound with a normal rate. -- Pulmonary exam is clear to auscultation bilaterally with good air movement. -- Abdominal exam reveals a bruit, normoactive bowel sounds, and an audible borborygmus. -- Neurological exam reveals cranial nerves II-XII as grossly intact with normal strength and reflexes in the upper and lower extremities. -""") - -# Next Step Options -st.markdown("## 🤔 What is the best next step in management?") - -# Multiple Choice -options = ["Blood Test", "MRI Scan", "Ultrasound with Doppler", "Immediate Surgery"] -choice = st.selectbox("", options) - -# Explanation -if st.button("Submit"): - if choice == "Ultrasound with Doppler": - st.success("Correct! 
🎉") - st.markdown(""" - ### Explanation - The patient's high blood pressure coupled with an abdominal bruit suggests the possibility of renal artery stenosis. - An **Ultrasound with Doppler** is the best next step for assessing blood flow and evaluating for renal artery stenosis. - """) - else: - st.error("Incorrect. 😞") - st.markdown(""" - The best next step is **Ultrasound with Doppler**. - """) diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/SVGLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/SVGLoader.js deleted file mode 100644 index 12c5fcfb5f7a6979ddae3f57109f619cd58debdb..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/SVGLoader.js +++ /dev/null @@ -1,1881 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author zz85 / http://joshuakoo.com/ - * @author yomboprime / https://yombo.org - */ - -THREE.SVGLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - -}; - -THREE.SVGLoader.prototype = { - - constructor: THREE.SVGLoader, - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new THREE.FileLoader( scope.manager ); - loader.setPath( scope.path ); - loader.load( url, function ( text ) { - - onLoad( scope.parse( text ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - parse: function ( text ) { - - function parseNode( node, style ) { - - if ( node.nodeType !== 1 ) return; - - var transform = getNodeTransform( node ); - - var path = null; - - switch ( node.nodeName ) { - - case 'svg': - break; - - case 'g': - style = parseStyle( node, style ); - break; - - case 'path': - style = parseStyle( node, style ); - if ( node.hasAttribute( 'd' ) ) path = parsePathNode( node, style ); - break; - - case 'rect': - style = parseStyle( node, style ); - path = parseRectNode( node, style ); - break; - - case 'polygon': - style = parseStyle( node, style ); - path = parsePolygonNode( node, style ); - break; - - case 'polyline': - style = parseStyle( node, style ); - path = parsePolylineNode( node, style ); - break; - - case 'circle': - style = parseStyle( node, style ); - path = parseCircleNode( node, style ); - break; - - case 'ellipse': - style = parseStyle( node, style ); - path = parseEllipseNode( node, style ); - break; - - case 'line': - style = parseStyle( node, style ); - path = parseLineNode( node, style ); - break; - - default: - console.log( node ); - - } - - if ( path ) { - - if ( style.fill !== undefined && style.fill !== 'none' ) { - - path.color.setStyle( style.fill ); - - } - - transformPath( path, currentTransform ); - - paths.push( path ); - - path.userData = { node: node, style: style }; - - } - - var nodes = node.childNodes; - - for ( var i = 0; i < nodes.length; i ++ ) { - - parseNode( nodes[ i ], style ); - - } - - if ( transform ) { - - currentTransform.copy( transformStack.pop() ); - - } - - } - - function parsePathNode( node, style ) { - - var path = new THREE.ShapePath(); - - var point = new THREE.Vector2(); - var control = new THREE.Vector2(); - - var firstPoint = new THREE.Vector2(); - var isFirstPoint = true; - var doSetFirstPoint = false; - - var d = node.getAttribute( 'd' ); - - // console.log( d ); - - var commands = d.match( /[a-df-z][^a-df-z]*/ig ); - - for ( var i = 0, l = commands.length; i < l; i ++ ) { - - var command = commands[ i ]; - - var type = 
command.charAt( 0 ); - var data = command.substr( 1 ).trim(); - - if ( isFirstPoint === true ) { - doSetFirstPoint = true; - isFirstPoint = false; - } - - switch ( type ) { - - case 'M': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 2 ) { - point.x = numbers[ j + 0 ]; - point.y = numbers[ j + 1 ]; - control.x = point.x; - control.y = point.y; - if ( j === 0 ) { - path.moveTo( point.x, point.y ); - } else { - path.lineTo( point.x, point.y ); - } - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'H': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j ++ ) { - point.x = numbers[ j ]; - control.x = point.x; - control.y = point.y; - path.lineTo( point.x, point.y ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'V': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j ++ ) { - point.y = numbers[ j ]; - control.x = point.x; - control.y = point.y; - path.lineTo( point.x, point.y ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'L': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 2 ) { - point.x = numbers[ j + 0 ]; - point.y = numbers[ j + 1 ]; - control.x = point.x; - control.y = point.y; - path.lineTo( point.x, point.y ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'C': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 6 ) { - path.bezierCurveTo( - numbers[ j + 0 ], - numbers[ j + 1 ], - numbers[ j + 2 ], - numbers[ j + 3 ], - numbers[ j + 4 ], - numbers[ j + 5 ] - ); - control.x = numbers[ j + 2 ]; - control.y = numbers[ j + 3 ]; - point.x = numbers[ j + 4 ]; - point.y = numbers[ j + 5 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'S': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 4 ) { - path.bezierCurveTo( - getReflection( point.x, control.x ), - getReflection( point.y, control.y ), - numbers[ j + 0 ], - numbers[ j + 1 ], - numbers[ j + 2 ], - numbers[ j + 3 ] - ); - control.x = numbers[ j + 0 ]; - control.y = numbers[ j + 1 ]; - point.x = numbers[ j + 2 ]; - point.y = numbers[ j + 3 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'Q': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 4 ) { - path.quadraticCurveTo( - numbers[ j + 0 ], - numbers[ j + 1 ], - numbers[ j + 2 ], - numbers[ j + 3 ] - ); - control.x = numbers[ j + 0 ]; - control.y = numbers[ j + 1 ]; - point.x = numbers[ j + 2 ]; - point.y = numbers[ j + 3 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'T': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 2 ) { - var rx = getReflection( point.x, control.x ); - var ry = getReflection( point.y, control.y ); - path.quadraticCurveTo( - rx, - ry, - numbers[ j + 0 ], - numbers[ j + 1 ] - ); - control.x = rx; - control.y = ry; - point.x = numbers[ j + 0 ]; - point.y = numbers[ j + 1 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'A': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 7 ) { - var start = point.clone(); - point.x = numbers[ j + 5 ]; - point.y = 
numbers[ j + 6 ]; - control.x = point.x; - control.y = point.y; - parseArcCommand( - path, numbers[ j ], numbers[ j + 1 ], numbers[ j + 2 ], numbers[ j + 3 ], numbers[ j + 4 ], start, point - ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - // - - case 'm': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 2 ) { - point.x += numbers[ j + 0 ]; - point.y += numbers[ j + 1 ]; - control.x = point.x; - control.y = point.y; - if ( j === 0 ) { - path.moveTo( point.x, point.y ); - } else { - path.lineTo( point.x, point.y ); - } - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'h': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j ++ ) { - point.x += numbers[ j ]; - control.x = point.x; - control.y = point.y; - path.lineTo( point.x, point.y ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'v': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j ++ ) { - point.y += numbers[ j ]; - control.x = point.x; - control.y = point.y; - path.lineTo( point.x, point.y ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'l': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 2 ) { - point.x += numbers[ j + 0 ]; - point.y += numbers[ j + 1 ]; - control.x = point.x; - control.y = point.y; - path.lineTo( point.x, point.y ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'c': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 6 ) { - path.bezierCurveTo( - point.x + numbers[ j + 0 ], - point.y + numbers[ j + 1 ], - point.x + numbers[ j + 2 ], - point.y + numbers[ j + 3 ], - point.x + numbers[ j + 4 ], - point.y + numbers[ j + 5 ] - ); - control.x = point.x + numbers[ j + 2 ]; - control.y = point.y + numbers[ j + 3 ]; - point.x += numbers[ j + 4 ]; - point.y += numbers[ j + 5 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 's': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 4 ) { - path.bezierCurveTo( - getReflection( point.x, control.x ), - getReflection( point.y, control.y ), - point.x + numbers[ j + 0 ], - point.y + numbers[ j + 1 ], - point.x + numbers[ j + 2 ], - point.y + numbers[ j + 3 ] - ); - control.x = point.x + numbers[ j + 0 ]; - control.y = point.y + numbers[ j + 1 ]; - point.x += numbers[ j + 2 ]; - point.y += numbers[ j + 3 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'q': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 4 ) { - path.quadraticCurveTo( - point.x + numbers[ j + 0 ], - point.y + numbers[ j + 1 ], - point.x + numbers[ j + 2 ], - point.y + numbers[ j + 3 ] - ); - control.x = point.x + numbers[ j + 0 ]; - control.y = point.y + numbers[ j + 1 ]; - point.x += numbers[ j + 2 ]; - point.y += numbers[ j + 3 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 't': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 2 ) { - var rx = getReflection( point.x, control.x ); - var ry = getReflection( point.y, control.y ); - path.quadraticCurveTo( - rx, - ry, - point.x + numbers[ j + 0 ], - point.y + numbers[ j + 1 ] - ); - control.x = rx; - 
control.y = ry; - point.x = point.x + numbers[ j + 0 ]; - point.y = point.y + numbers[ j + 1 ]; - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - case 'a': - var numbers = parseFloats( data ); - for ( var j = 0, jl = numbers.length; j < jl; j += 7 ) { - var start = point.clone(); - point.x += numbers[ j + 5 ]; - point.y += numbers[ j + 6 ]; - control.x = point.x; - control.y = point.y; - parseArcCommand( - path, numbers[ j ], numbers[ j + 1 ], numbers[ j + 2 ], numbers[ j + 3 ], numbers[ j + 4 ], start, point - ); - if ( j === 0 && doSetFirstPoint === true ) firstPoint.copy( point ); - } - break; - - // - - case 'Z': - case 'z': - path.currentPath.autoClose = true; - if ( path.currentPath.curves.length > 0 ) { - // Reset point to beginning of Path - point.copy( firstPoint ); - path.currentPath.currentPoint.copy( point ); - isFirstPoint = true; - } - break; - - default: - console.warn( command ); - - } - - // console.log( type, parseFloats( data ), parseFloats( data ).length ) - - doSetFirstPoint = false; - - } - - return path; - - } - - /** - * https://www.w3.org/TR/SVG/implnote.html#ArcImplementationNotes - * https://mortoray.com/2017/02/16/rendering-an-svg-elliptical-arc-as-bezier-curves/ Appendix: Endpoint to center arc conversion - * From - * rx ry x-axis-rotation large-arc-flag sweep-flag x y - * To - * aX, aY, xRadius, yRadius, aStartAngle, aEndAngle, aClockwise, aRotation - */ - - function parseArcCommand( path, rx, ry, x_axis_rotation, large_arc_flag, sweep_flag, start, end ) { - - x_axis_rotation = x_axis_rotation * Math.PI / 180; - - // Ensure radii are positive - rx = Math.abs( rx ); - ry = Math.abs( ry ); - - // Compute (x1′, y1′) - var dx2 = ( start.x - end.x ) / 2.0; - var dy2 = ( start.y - end.y ) / 2.0; - var x1p = Math.cos( x_axis_rotation ) * dx2 + Math.sin( x_axis_rotation ) * dy2; - var y1p = - Math.sin( x_axis_rotation ) * dx2 + Math.cos( x_axis_rotation ) * dy2; - - // Compute (cx′, cy′) - var rxs = rx * rx; - var rys = ry * ry; - var x1ps = x1p * x1p; - var y1ps = y1p * y1p; - - // Ensure radii are large enough - var cr = x1ps / rxs + y1ps / rys; - - if ( cr > 1 ) { - - // scale up rx,ry equally so cr == 1 - var s = Math.sqrt( cr ); - rx = s * rx; - ry = s * ry; - rxs = rx * rx; - rys = ry * ry; - - } - - var dq = ( rxs * y1ps + rys * x1ps ); - var pq = ( rxs * rys - dq ) / dq; - var q = Math.sqrt( Math.max( 0, pq ) ); - if ( large_arc_flag === sweep_flag ) q = - q; - var cxp = q * rx * y1p / ry; - var cyp = - q * ry * x1p / rx; - - // Step 3: Compute (cx, cy) from (cx′, cy′) - var cx = Math.cos( x_axis_rotation ) * cxp - Math.sin( x_axis_rotation ) * cyp + ( start.x + end.x ) / 2; - var cy = Math.sin( x_axis_rotation ) * cxp + Math.cos( x_axis_rotation ) * cyp + ( start.y + end.y ) / 2; - - // Step 4: Compute θ1 and Δθ - var theta = svgAngle( 1, 0, ( x1p - cxp ) / rx, ( y1p - cyp ) / ry ); - var delta = svgAngle( ( x1p - cxp ) / rx, ( y1p - cyp ) / ry, ( - x1p - cxp ) / rx, ( - y1p - cyp ) / ry ) % ( Math.PI * 2 ); - - path.currentPath.absellipse( cx, cy, rx, ry, theta, theta + delta, sweep_flag === 0, x_axis_rotation ); - - } - - function svgAngle( ux, uy, vx, vy ) { - - var dot = ux * vx + uy * vy; - var len = Math.sqrt( ux * ux + uy * uy ) * Math.sqrt( vx * vx + vy * vy ); - var ang = Math.acos( Math.max( -1, Math.min( 1, dot / len ) ) ); // floating point precision, slightly over values appear - if ( ( ux * vy - uy * vx ) < 0 ) ang = - ang; - return ang; - - } - - /* - * According to 
https://www.w3.org/TR/SVG/shapes.html#RectElementRXAttribute - * rounded corner should be rendered to elliptical arc, but bezier curve does the job well enough - */ - function parseRectNode( node, style ) { - - var x = parseFloat( node.getAttribute( 'x' ) || 0 ); - var y = parseFloat( node.getAttribute( 'y' ) || 0 ); - var rx = parseFloat( node.getAttribute( 'rx' ) || 0 ); - var ry = parseFloat( node.getAttribute( 'ry' ) || 0 ); - var w = parseFloat( node.getAttribute( 'width' ) ); - var h = parseFloat( node.getAttribute( 'height' ) ); - - var path = new THREE.ShapePath(); - path.moveTo( x + 2 * rx, y ); - path.lineTo( x + w - 2 * rx, y ); - if ( rx !== 0 || ry !== 0 ) path.bezierCurveTo( x + w, y, x + w, y, x + w, y + 2 * ry ); - path.lineTo( x + w, y + h - 2 * ry ); - if ( rx !== 0 || ry !== 0 ) path.bezierCurveTo( x + w, y + h, x + w, y + h, x + w - 2 * rx, y + h ); - path.lineTo( x + 2 * rx, y + h ); - - if ( rx !== 0 || ry !== 0 ) { - - path.bezierCurveTo( x, y + h, x, y + h, x, y + h - 2 * ry ); - - } - - path.lineTo( x, y + 2 * ry ); - - if ( rx !== 0 || ry !== 0 ) { - - path.bezierCurveTo( x, y, x, y, x + 2 * rx, y ); - - } - - return path; - - } - - function parsePolygonNode( node, style ) { - - function iterator( match, a, b ) { - - var x = parseFloat( a ); - var y = parseFloat( b ); - - if ( index === 0 ) { - path.moveTo( x, y ); - } else { - path.lineTo( x, y ); - } - - index ++; - - } - - var regex = /(-?[\d\.?]+)[,|\s](-?[\d\.?]+)/g; - - var path = new THREE.ShapePath(); - - var index = 0; - - node.getAttribute( 'points' ).replace(regex, iterator); - - path.currentPath.autoClose = true; - - return path; - - } - - function parsePolylineNode( node, style ) { - - function iterator( match, a, b ) { - - var x = parseFloat( a ); - var y = parseFloat( b ); - - if ( index === 0 ) { - path.moveTo( x, y ); - } else { - path.lineTo( x, y ); - } - - index ++; - - } - - var regex = /(-?[\d\.?]+)[,|\s](-?[\d\.?]+)/g; - - var path = new THREE.ShapePath(); - - var index = 0; - - node.getAttribute( 'points' ).replace(regex, iterator); - - path.currentPath.autoClose = false; - - return path; - - } - - function parseCircleNode( node, style ) { - - var x = parseFloat( node.getAttribute( 'cx' ) ); - var y = parseFloat( node.getAttribute( 'cy' ) ); - var r = parseFloat( node.getAttribute( 'r' ) ); - - var subpath = new THREE.Path(); - subpath.absarc( x, y, r, 0, Math.PI * 2 ); - - var path = new THREE.ShapePath(); - path.subPaths.push( subpath ); - - return path; - - } - - function parseEllipseNode( node, style ) { - - var x = parseFloat( node.getAttribute( 'cx' ) ); - var y = parseFloat( node.getAttribute( 'cy' ) ); - var rx = parseFloat( node.getAttribute( 'rx' ) ); - var ry = parseFloat( node.getAttribute( 'ry' ) ); - - var subpath = new THREE.Path(); - subpath.absellipse( x, y, rx, ry, 0, Math.PI * 2 ); - - var path = new THREE.ShapePath(); - path.subPaths.push( subpath ); - - return path; - - } - - function parseLineNode( node, style ) { - - var x1 = parseFloat( node.getAttribute( 'x1' ) ); - var y1 = parseFloat( node.getAttribute( 'y1' ) ); - var x2 = parseFloat( node.getAttribute( 'x2' ) ); - var y2 = parseFloat( node.getAttribute( 'y2' ) ); - - var path = new THREE.ShapePath(); - path.moveTo( x1, y1 ); - path.lineTo( x2, y2 ); - path.currentPath.autoClose = false; - - return path; - - } - - // - - function parseStyle( node, style ) { - - style = Object.assign( {}, style ); // clone style - - function addStyle( svgName, jsName, adjustFunction ) { - - if ( adjustFunction === undefined ) 
adjustFunction = function copy( v ) { return v; }; - - if ( node.hasAttribute( svgName ) ) style[ jsName ] = adjustFunction( node.getAttribute( svgName ) ); - if ( node.style[ svgName ] !== '' ) style[ jsName ] = adjustFunction( node.style[ svgName ] ); - - } - - function clamp( v ) { - - return Math.max( 0, Math.min( 1, v ) ); - - } - - function positive( v ) { - - return Math.max( 0, v ); - - } - - addStyle( 'fill', 'fill' ); - addStyle( 'fill-opacity', 'fillOpacity', clamp ); - addStyle( 'stroke', 'stroke' ); - addStyle( 'stroke-opacity', 'strokeOpacity', clamp ); - addStyle( 'stroke-width', 'strokeWidth', positive ); - addStyle( 'stroke-linejoin', 'strokeLineJoin' ); - addStyle( 'stroke-linecap', 'strokeLineCap' ); - addStyle( 'stroke-miterlimit', 'strokeMiterLimit', positive ); - - return style; - - } - - // http://www.w3.org/TR/SVG11/implnote.html#PathElementImplementationNotes - - function getReflection( a, b ) { - - return a - ( b - a ); - - } - - function parseFloats( string ) { - - var array = string.split( /[\s,]+|(?=\s?[+\-])/ ); - - for ( var i = 0; i < array.length; i ++ ) { - - var number = array[ i ]; - - // Handle values like 48.6037.7.8 - // TODO Find a regex for this - - if ( number.indexOf( '.' ) !== number.lastIndexOf( '.' ) ) { - - var split = number.split( '.' ); - - for ( var s = 2; s < split.length; s ++ ) { - - array.splice( i + s - 1, 0, '0.' + split[ s ] ); - - } - - } - - array[ i ] = parseFloat( number ); - - } - - return array; - - - } - - function getNodeTransform( node ) { - - if ( ! node.hasAttribute( 'transform' ) ) { - return null; - } - - var transform = parseNodeTransform( node ); - - if ( transform ) { - - if ( transformStack.length > 0 ) { - transform.premultiply( transformStack[ transformStack.length - 1 ] ); - } - - currentTransform.copy( transform ); - transformStack.push( transform ); - - } - - return transform; - - } - - function parseNodeTransform( node ) { - - var transform = new THREE.Matrix3(); - var currentTransform = tempTransform0; - var transformsTexts = node.getAttribute( 'transform' ).split( ' ' ); - - for ( var tIndex = transformsTexts.length - 1; tIndex >= 0; tIndex -- ) { - - var transformText = transformsTexts[ tIndex ]; - var openParPos = transformText.indexOf( "(" ); - var closeParPos = transformText.indexOf( ")" ); - - if ( openParPos > 0 && openParPos < closeParPos ) { - - var transformType = transformText.substr( 0, openParPos ); - - var array = parseFloats( transformText.substr( openParPos + 1, closeParPos - openParPos - 1 ) ); - - currentTransform.identity(); - - switch ( transformType ) { - - case "translate": - - if ( array.length >= 1 ) { - - var tx = array[ 0 ]; - var ty = tx; - - if ( array.length >= 2 ) { - - ty = array[ 1 ]; - - } - - currentTransform.translate( tx, ty ); - - } - - break; - - case "rotate": - - if ( array.length >= 1 ) { - - var angle = 0; - var cx = 0; - var cy = 0; - - // Angle - angle = - array[ 0 ] * Math.PI / 180; - - if ( array.length >= 3 ) { - - // Center x, y - cx = array[ 1 ]; - cy = array[ 2 ]; - - } - - // Rotate around center (cx, cy) - tempTransform1.identity().translate( -cx, -cy ); - tempTransform2.identity().rotate( angle ); - tempTransform3.multiplyMatrices( tempTransform2, tempTransform1 ); - tempTransform1.identity().translate( cx, cy ); - currentTransform.multiplyMatrices( tempTransform1, tempTransform3 ); - - } - - break; - - case "scale": - - if ( array.length >= 1 ) { - - var scaleX = array[ 0 ]; - var scaleY = scaleX; - - if ( array.length >= 2 ) { - scaleY = array[ 1 ]; - } - 
- currentTransform.scale( scaleX, scaleY ); - - } - - break; - - case "skewX": - - if ( array.length === 1 ) { - - currentTransform.set( - 1, Math.tan( array[ 0 ] * Math.PI / 180 ), 0, - 0, 1, 0, - 0, 0, 1 - ); - - } - - break; - - case "skewY": - - if ( array.length === 1 ) { - - currentTransform.set( - 1, 0, 0, - Math.tan( array[ 0 ] * Math.PI / 180 ), 1, 0, - 0, 0, 1 - ); - - } - - break; - - case "matrix": - - if ( array.length === 6 ) { - - currentTransform.set( - array[ 0 ], array[ 2 ], array[ 4 ], - array[ 1 ], array[ 3 ], array[ 5 ], - 0, 0, 1 - ); - - } - - break; - } - - } - - transform.premultiply( currentTransform ); - - } - - return transform; - - } - - function transformPath( path, m ) { - - function transfVec2( v2 ) { - - tempV3.set( v2.x, v2.y, 1 ).applyMatrix3( m ); - - v2.set( tempV3.x, tempV3.y ); - - } - - var isRotated = isTransformRotated( m ); - - var subPaths = path.subPaths; - - for ( var i = 0, n = subPaths.length; i < n; i++ ) { - - var subPath = subPaths[ i ]; - var curves = subPath.curves; - - for ( var j = 0; j < curves.length; j++ ) { - - var curve = curves[ j ]; - - if ( curve.isLineCurve ) { - - transfVec2( curve.v1 ); - transfVec2( curve.v2 ); - - } else if ( curve.isCubicBezierCurve ) { - - transfVec2( curve.v0 ); - transfVec2( curve.v1 ); - transfVec2( curve.v2 ); - transfVec2( curve.v3 ); - - } else if ( curve.isQuadraticBezierCurve ) { - - transfVec2( curve.v0 ); - transfVec2( curve.v1 ); - transfVec2( curve.v2 ); - - } else if ( curve.isEllipseCurve ) { - - if ( isRotated ) { - console.warn( "SVGLoader: Elliptic arc or ellipse rotation or skewing is not implemented." ); - } - - tempV2.set( curve.aX, curve.aY ); - transfVec2( tempV2 ); - curve.aX = tempV2.x; - curve.aY = tempV2.y; - - curve.xRadius *= getTransformScaleX( m ); - curve.yRadius *= getTransformScaleY( m ); - - } - - } - - } - - } - - function isTransformRotated( m ) { - return m.elements[ 1 ] !== 0 || m.elements[ 3 ] !== 0; - } - - function getTransformScaleX( m ) { - var te = m.elements; - return Math.sqrt( te[ 0 ] * te[ 0 ] + te[ 1 ] * te[ 1 ] ) - } - - function getTransformScaleY( m ) { - var te = m.elements; - return Math.sqrt( te[ 3 ] * te[ 3 ] + te[ 4 ] * te[ 4 ] ) - } - - // - - console.log( 'THREE.SVGLoader' ); - - var paths = []; - - var transformStack = []; - - var tempTransform0 = new THREE.Matrix3(); - var tempTransform1 = new THREE.Matrix3(); - var tempTransform2 = new THREE.Matrix3(); - var tempTransform3 = new THREE.Matrix3(); - var tempV2 = new THREE.Vector2(); - var tempV3 = new THREE.Vector3(); - - var currentTransform = new THREE.Matrix3(); - - var scope = this; - - console.time( 'THREE.SVGLoader: DOMParser' ); - - var xml = new DOMParser().parseFromString( text, 'image/svg+xml' ); // application/xml - - console.timeEnd( 'THREE.SVGLoader: DOMParser' ); - - console.time( 'THREE.SVGLoader: Parse' ); - - parseNode( xml.documentElement, { - fill: '#000', - fillOpacity: 1, - strokeOpacity: 1, - strokeWidth: 1, - strokeLineJoin: 'miter', - strokeLineCap: 'butt', - strokeMiterLimit: 4 - } ); - - var data = { paths: paths, xml: xml.documentElement }; - - // console.log( paths ); - - - console.timeEnd( 'THREE.SVGLoader: Parse' ); - - return data; - - } - -}; - -THREE.SVGLoader.getStrokeStyle = function ( width, color, opacity, lineJoin, lineCap, miterLimit ) { - - // Param width: Stroke width - // Param color: As returned by THREE.Color.getStyle() - // Param opacity: 0 (transparent) to 1 (opaque) - // Param lineJoin: One of "round", "bevel", "miter" or "miter-limit" - // Param 
lineCap: One of "round", "square" or "butt" - // Param miterLimit: Maximum join length, in multiples of the "width" parameter (join is truncated if it exceeds that distance) - // Returns style object - - width = width !== undefined ? width : 1; - color = color !== undefined ? color : '#000'; - opacity = opacity !== undefined ? opacity : 1; - lineJoin = lineJoin !== undefined ? lineJoin : 'miter'; - lineCap = lineCap !== undefined ? lineCap : 'butt'; - miterLimit = miterLimit !== undefined ? miterLimit : 4; - - return { - strokeColor: color, - strokeWidth: width, - strokeLineJoin: lineJoin, - strokeLineCap: lineCap, - strokeMiterLimit: miterLimit - }; - -}; - -THREE.SVGLoader.pointsToStroke = function ( points, style, arcDivisions, minDistance ) { - - // Generates a stroke with some witdh around the given path. - // The path can be open or closed (last point equals to first point) - // Param points: Array of Vector2D (the path). Minimum 2 points. - // Param style: Object with SVG properties as returned by SVGLoader.getStrokeStyle(), or SVGLoader.parse() in the path.userData.style object - // Params arcDivisions: Arc divisions for round joins and endcaps. (Optional) - // Param minDistance: Points closer to this distance will be merged. (Optional) - // Returns BufferGeometry with stroke triangles (In plane z = 0). UV coordinates are generated ('u' along path. 'v' across it, from left to right) - - var vertices = []; - var normals = []; - var uvs = []; - - if ( THREE.SVGLoader.pointsToStrokeWithBuffers( points, style, arcDivisions, minDistance, vertices, normals, uvs ) === 0 ) { - - return null; - - } - - var geometry = new THREE.BufferGeometry(); - geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( vertices, 3 ) ); - geometry.addAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) ); - geometry.addAttribute( 'uv', new THREE.Float32BufferAttribute( uvs, 2 ) ); - - return geometry; - -}; - -THREE.SVGLoader.pointsToStrokeWithBuffers = function () { - - var tempV2_1 = new THREE.Vector2(); - var tempV2_2 = new THREE.Vector2(); - var tempV2_3 = new THREE.Vector2(); - var tempV2_4 = new THREE.Vector2(); - var tempV2_5 = new THREE.Vector2(); - var tempV2_6 = new THREE.Vector2(); - var tempV2_7 = new THREE.Vector2(); - var tempV3_1 = new THREE.Vector3(); - var lastPointL = new THREE.Vector2(); - var lastPointR = new THREE.Vector2(); - var point0L = new THREE.Vector2(); - var point0R = new THREE.Vector2(); - var currentPointL = new THREE.Vector2(); - var currentPointR = new THREE.Vector2(); - var nextPointL = new THREE.Vector2(); - var nextPointR = new THREE.Vector2(); - var innerPoint = new THREE.Vector2(); - var outerPoint = new THREE.Vector2(); - var tempTransform0 = new THREE.Matrix3(); - var tempTransform1 = new THREE.Matrix3(); - var tempTransform2 = new THREE.Matrix3(); - - return function ( points, style, arcDivisions, minDistance, vertices, normals, uvs, vertexOffset ) { - - // This function can be called to update existing arrays or buffers. - // Accepts same parameters as pointsToStroke, plus the buffers and optional offset. 
- // Param vertexOffset: Offset vertices to start writing in the buffers (3 elements/vertex for vertices and normals, and 2 elements/vertex for uvs) - // Returns number of written vertices / normals / uvs pairs - // if 'vertices' parameter is undefined no triangles will be generated, but the returned vertices count will still be valid (useful to preallocate the buffers) - // 'normals' and 'uvs' buffers are optional - - arcLengthDivisions = arcDivisions !== undefined ? arcDivisions : 12; - minDistance = minDistance !== undefined ? minDistance : 0.001; - vertexOffset = vertexOffset !== undefined ? vertexOffset : 0; - - // First ensure there are no duplicated points - points = removeDuplicatedPoints( points ); - - var numPoints = points.length; - - if ( numPoints < 2 ) return 0; - - var isClosed = points[ 0 ].equals( points[ numPoints - 1 ] ); - - var currentPoint; - var previousPoint = points[ 0 ]; - var nextPoint; - - var strokeWidth2 = style.strokeWidth / 2; - - var deltaU = 1 / ( numPoints - 1 ); - var u0 = 0; - - var innerSideModified; - var joinIsOnLeftSide; - var isMiter; - var initialJoinIsOnLeftSide = false; - - var numVertices = 0; - var currentCoordinate = vertexOffset * 3; - var currentCoordinateUV = vertexOffset * 2; - - // Get initial left and right stroke points - getNormal( points[ 0 ], points[ 1 ], tempV2_1 ).multiplyScalar( strokeWidth2 ); - lastPointL.copy( points[ 0 ] ).sub( tempV2_1 ); - lastPointR.copy( points[ 0 ] ).add( tempV2_1 ); - point0L.copy( lastPointL ); - point0R.copy( lastPointR ); - - for ( var iPoint = 1; iPoint < numPoints; iPoint ++ ) { - - currentPoint = points[ iPoint ]; - - // Get next point - if ( iPoint === numPoints - 1 ) { - - if ( isClosed ) { - - // Skip duplicated initial point - nextPoint = points[ 1 ]; - - } - else nextPoint = undefined; - - } - else { - - nextPoint = points[ iPoint + 1 ]; - - } - - // Normal of previous segment in tempV2_1 - var normal1 = tempV2_1; - getNormal( previousPoint, currentPoint, normal1 ); - - tempV2_3.copy( normal1 ).multiplyScalar( strokeWidth2 ); - currentPointL.copy( currentPoint ).sub( tempV2_3 ); - currentPointR.copy( currentPoint ).add( tempV2_3 ); - - var u1 = u0 + deltaU; - - innerSideModified = false; - - if ( nextPoint !== undefined ) { - - // Normal of next segment in tempV2_2 - getNormal( currentPoint, nextPoint, tempV2_2 ); - - tempV2_3.copy( tempV2_2 ).multiplyScalar( strokeWidth2 ); - nextPointL.copy( currentPoint ).sub( tempV2_3 ); - nextPointR.copy( currentPoint ).add( tempV2_3 ); - - joinIsOnLeftSide = true; - tempV2_3.subVectors( nextPoint, previousPoint ); - if ( normal1.dot( tempV2_3 ) < 0 ) { - - joinIsOnLeftSide = false; - - } - if ( iPoint === 1 ) initialJoinIsOnLeftSide = joinIsOnLeftSide; - - tempV2_3.subVectors( nextPoint, currentPoint ) - var maxInnerDistance = tempV2_3.normalize(); - var dot = Math.abs( normal1.dot( tempV2_3 ) ); - - // If path is straight, don't create join - if ( dot !== 0 ) { - - // Compute inner and outer segment intersections - var miterSide = strokeWidth2 / dot; - tempV2_3.multiplyScalar( - miterSide ); - tempV2_4.subVectors( currentPoint, previousPoint ); - tempV2_5.copy( tempV2_4 ).setLength( miterSide ).add( tempV2_3 ); - innerPoint.copy( tempV2_5 ).negate(); - var miterLength2 = tempV2_5.length(); - var segmentLengthPrev = tempV2_4.length(); - tempV2_4.divideScalar( segmentLengthPrev ); - tempV2_6.subVectors( nextPoint, currentPoint ); - var segmentLengthNext = tempV2_6.length(); - tempV2_6.divideScalar( segmentLengthNext ); - // Check that previous and next 
segments doesn't overlap with the innerPoint of intersection - if ( tempV2_4.dot( innerPoint ) < segmentLengthPrev && tempV2_6.dot( innerPoint ) < segmentLengthNext ) { - - innerSideModified = true; - - } - outerPoint.copy( tempV2_5 ).add( currentPoint ); - innerPoint.add( currentPoint ); - - isMiter = false; - - if ( innerSideModified ) { - - if ( joinIsOnLeftSide ) { - - nextPointR.copy( innerPoint ); - currentPointR.copy( innerPoint ); - - } - else { - - nextPointL.copy( innerPoint ); - currentPointL.copy( innerPoint ); - - } - - } - else { - - // The segment triangles are generated here if there was overlapping - - makeSegmentTriangles(); - - } - - switch ( style.strokeLineJoin ) { - - case 'bevel': - - makeSegmentWithBevelJoin( joinIsOnLeftSide, innerSideModified, u1 ); - - break; - - case 'round': - - // Segment triangles - - createSegmentTrianglesWithMiddleSection( joinIsOnLeftSide, innerSideModified ); - - // Join triangles - - if ( joinIsOnLeftSide ) { - - makeCircularSector( currentPoint, currentPointL, nextPointL, u1, 0 ); - - } - else { - - makeCircularSector( currentPoint, nextPointR, currentPointR, u1, 1 ); - - } - - break; - - case 'miter': - case 'miter-clip': - default: - - var miterFraction = ( strokeWidth2 * style.strokeMiterLimit ) / miterLength2; - - if ( miterFraction < 1 ) { - - // The join miter length exceeds the miter limit - - if ( style.strokeLineJoin !== 'miter-clip' ) { - - makeSegmentWithBevelJoin( joinIsOnLeftSide, innerSideModified, u1 ); - break; - - } - else { - - // Segment triangles - - createSegmentTrianglesWithMiddleSection( joinIsOnLeftSide, innerSideModified ); - - // Miter-clip join triangles - - if ( joinIsOnLeftSide ) { - - tempV2_6.subVectors( outerPoint, currentPointL ).multiplyScalar( miterFraction ).add( currentPointL ); - tempV2_7.subVectors( outerPoint, nextPointL ).multiplyScalar( miterFraction ).add( nextPointL ); - - addVertex( currentPointL, u1, 0 ); - addVertex( tempV2_6, u1, 0 ); - addVertex( currentPoint, u1, 0.5 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( tempV2_6, u1, 0 ); - addVertex( tempV2_7, u1, 0 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( tempV2_7, u1, 0 ); - addVertex( nextPointL, u1, 0 ); - - } - else { - - tempV2_6.subVectors( outerPoint, currentPointR ).multiplyScalar( miterFraction ).add( currentPointR ); - tempV2_7.subVectors( outerPoint, nextPointR ).multiplyScalar( miterFraction ).add( nextPointR ); - - addVertex( currentPointR, u1, 1 ); - addVertex( tempV2_6, u1, 1 ); - addVertex( currentPoint, u1, 0.5 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( tempV2_6, u1, 1 ); - addVertex( tempV2_7, u1, 1 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( tempV2_7, u1, 1 ); - addVertex( nextPointR, u1, 1 ); - - } - - } - - } - else { - - // Miter join segment triangles - - if ( innerSideModified ) { - - // Optimized segment + join triangles - - if ( joinIsOnLeftSide ) { - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( outerPoint, u1, 0 ); - - addVertex( lastPointR, u0, 1 ); - addVertex( outerPoint, u1, 0 ); - addVertex( innerPoint, u1, 1 ); - - } - else { - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( outerPoint, u1, 1 ); - - addVertex( lastPointL, u0, 0 ); - addVertex( innerPoint, u1, 0 ); - addVertex( outerPoint, u1, 1 ); - - } - - - if ( joinIsOnLeftSide ) { - - nextPointL.copy( outerPoint ); - - } - else { - - nextPointR.copy( outerPoint ); - - } - - - } - else { - - // Add extra miter join triangles - - if ( 
joinIsOnLeftSide ) { - - addVertex( currentPointL, u1, 0 ); - addVertex( outerPoint, u1, 0 ); - addVertex( currentPoint, u1, 0.5 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( outerPoint, u1, 0 ); - addVertex( nextPointL, u1, 0 ); - - } - else { - - addVertex( currentPointR, u1, 1 ); - addVertex( outerPoint, u1, 1 ); - addVertex( currentPoint, u1, 0.5 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( outerPoint, u1, 1 ); - addVertex( nextPointR, u1, 1 ); - - } - - } - - isMiter = true; - - } - - break; - - } - - } - else { - - // The segment triangles are generated here when two consecutive points are collinear - - makeSegmentTriangles(); - - } - - } - else { - - // The segment triangles are generated here if it is the ending segment - - makeSegmentTriangles(); - - } - - if ( ! isClosed && iPoint === numPoints - 1 ) { - - // Start line endcap - addCapGeometry( points[ 0 ], point0L, point0R, joinIsOnLeftSide, true, u0 ); - - } - - // Increment loop variables - - u0 = u1; - - previousPoint = currentPoint; - - lastPointL.copy( nextPointL ); - lastPointR.copy( nextPointR ); - - } - - if ( ! isClosed ) { - - // Ending line endcap - addCapGeometry( currentPoint, currentPointL, currentPointR, joinIsOnLeftSide, false, u1 ); - - } - else if ( innerSideModified && vertices ) { - - // Modify path first segment vertices to adjust to the segments inner and outer intersections - - var lastOuter = outerPoint; - var lastInner = innerPoint; - if ( initialJoinIsOnLeftSide !== joinIsOnLeftSide) { - lastOuter = innerPoint; - lastInner = outerPoint; - } - - if ( joinIsOnLeftSide ) { - - lastInner.toArray( vertices, 0 * 3 ); - lastInner.toArray( vertices, 3 * 3 ); - - if ( isMiter ) { - - lastOuter.toArray( vertices, 1 * 3 ); - } - - } - else { - - lastInner.toArray( vertices, 1 * 3 ); - lastInner.toArray( vertices, 3 * 3 ); - - if ( isMiter ) { - - lastOuter.toArray( vertices, 0 * 3 ); - } - - } - - } - - return numVertices; - - // -- End of algorithm - - // -- Functions - - function getNormal( p1, p2, result ) { - - result.subVectors( p2, p1 ); - return result.set( - result.y, result.x ).normalize(); - - } - - function addVertex( position, u, v ) { - - if ( vertices ) { - - vertices[ currentCoordinate ] = position.x; - vertices[ currentCoordinate + 1 ] = position.y; - vertices[ currentCoordinate + 2 ] = 0; - - if ( normals ) { - - normals[ currentCoordinate ] = 0; - normals[ currentCoordinate + 1 ] = 0; - normals[ currentCoordinate + 2 ] = 1; - - } - - currentCoordinate += 3; - - if ( uvs ) { - - uvs[ currentCoordinateUV ] = u; - uvs[ currentCoordinateUV + 1 ] = v; - - currentCoordinateUV += 2; - - } - - } - - numVertices += 3; - - } - - function makeCircularSector( center, p1, p2, u, v ) { - - // param p1, p2: Points in the circle arc. - // p1 and p2 are in clockwise direction. 
- - tempV2_1.copy( p1 ).sub( center ).normalize(); - tempV2_2.copy( p2 ).sub( center ).normalize(); - - var angle = Math.PI; - var dot = tempV2_1.dot( tempV2_2 ); - if ( Math.abs( dot ) < 1 ) angle = Math.abs( Math.acos( dot ) ); - - angle /= arcLengthDivisions; - - tempV2_3.copy( p1 ); - - for ( var i = 0, il = arcLengthDivisions - 1; i < il; i++ ) { - - tempV2_4.copy( tempV2_3 ).rotateAround( center, angle ); - - addVertex( tempV2_3, u, v ); - addVertex( tempV2_4, u, v ); - addVertex( center, u, 0.5 ); - - tempV2_3.copy( tempV2_4 ); - } - - addVertex( tempV2_4, u, v ); - addVertex( p2, u, v ); - addVertex( center, u, 0.5 ); - - } - - function makeSegmentTriangles() { - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( currentPointL, u1, 0 ); - - addVertex( lastPointR, u0, 1 ); - addVertex( currentPointL, u1, 1 ); - addVertex( currentPointR, u1, 0 ); - - } - - function makeSegmentWithBevelJoin( joinIsOnLeftSide, innerSideModified, u ) { - - if ( innerSideModified ) { - - // Optimized segment + bevel triangles - - if ( joinIsOnLeftSide ) { - - // Path segments triangles - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( currentPointL, u1, 0 ); - - addVertex( lastPointR, u0, 1 ); - addVertex( currentPointL, u1, 0 ); - addVertex( innerPoint, u1, 1 ); - - // Bevel join triangle - - addVertex( currentPointL, u, 0 ); - addVertex( nextPointL, u, 0 ); - addVertex( innerPoint, u, 0.5 ); - - } - else { - - // Path segments triangles - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( currentPointR, u1, 1 ); - - addVertex( lastPointL, u0, 0 ); - addVertex( innerPoint, u1, 0 ); - addVertex( currentPointR, u1, 1 ); - - // Bevel join triangle - - addVertex( currentPointR, u, 1 ); - addVertex( nextPointR, u, 0 ); - addVertex( innerPoint, u, 0.5 ); - - } - - } - else { - - // Bevel join triangle. 
The segment triangles are done in the main loop - - if ( joinIsOnLeftSide ) { - - addVertex( currentPointL, u, 0 ); - addVertex( nextPointL, u, 0 ); - addVertex( currentPoint, u, 0.5 ); - - } - else { - - addVertex( currentPointR, u, 1 ); - addVertex( nextPointR, u, 0 ); - addVertex( currentPoint, u, 0.5 ); - - } - - } - - } - - function createSegmentTrianglesWithMiddleSection( joinIsOnLeftSide, innerSideModified ) { - - if ( innerSideModified ) { - - if ( joinIsOnLeftSide ) { - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( currentPointL, u1, 0 ); - - addVertex( lastPointR, u0, 1 ); - addVertex( currentPointL, u1, 0 ); - addVertex( innerPoint, u1, 1 ); - - addVertex( currentPointL, u0, 0 ); - addVertex( currentPoint, u1, 0.5 ); - addVertex( innerPoint, u1, 1 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( nextPointL, u0, 0 ); - addVertex( innerPoint, u1, 1 ); - - } - else { - - addVertex( lastPointR, u0, 1 ); - addVertex( lastPointL, u0, 0 ); - addVertex( currentPointR, u1, 1 ); - - addVertex( lastPointL, u0, 0 ); - addVertex( innerPoint, u1, 0 ); - addVertex( currentPointR, u1, 1 ); - - addVertex( currentPointR, u0, 1 ); - addVertex( innerPoint, u1, 0 ); - addVertex( currentPoint, u1, 0.5 ); - - addVertex( currentPoint, u1, 0.5 ); - addVertex( innerPoint, u1, 0 ); - addVertex( nextPointR, u0, 1 ); - - } - - } - - } - - function addCapGeometry( center, p1, p2, joinIsOnLeftSide, start, u ) { - - // param center: End point of the path - // param p1, p2: Left and right cap points - - switch ( style.strokeLineCap ) { - - case 'round': - - if ( start ) { - - makeCircularSector( center, p2, p1, u, 0.5 ); - - } - else { - - makeCircularSector( center, p1, p2, u, 0.5 ); - - } - - break; - - case 'square': - - if ( start ) { - - tempV2_1.subVectors( p1, center ); - tempV2_2.set( tempV2_1.y, - tempV2_1.x ); - - tempV2_3.addVectors( tempV2_1, tempV2_2 ).add( center ); - tempV2_4.subVectors( tempV2_2, tempV2_1 ).add( center ); - - // Modify already existing vertices - if ( joinIsOnLeftSide ) { - - tempV2_3.toArray( vertices, 1 * 3 ); - tempV2_4.toArray( vertices, 0 * 3 ); - tempV2_4.toArray( vertices, 3 * 3 ); - - } - else { - - tempV2_3.toArray( vertices, 1 * 3 ); - tempV2_3.toArray( vertices, 3 * 3 ); - tempV2_4.toArray( vertices, 0 * 3 ); - - } - - } - else { - - tempV2_1.subVectors( p2, center ); - tempV2_2.set( tempV2_1.y, - tempV2_1.x ); - - tempV2_3.addVectors( tempV2_1, tempV2_2 ).add( center ); - tempV2_4.subVectors( tempV2_2, tempV2_1 ).add( center ); - - var vl = vertices.length; - - // Modify already existing vertices - if ( joinIsOnLeftSide ) { - - tempV2_3.toArray( vertices, vl - 1 * 3 ); - tempV2_4.toArray( vertices, vl - 2 * 3 ); - tempV2_4.toArray( vertices, vl - 4 * 3 ); - - } - else { - - tempV2_3.toArray( vertices, vl - 2 * 3 ); - tempV2_4.toArray( vertices, vl - 1 * 3 ); - tempV2_4.toArray( vertices, vl - 4 * 3 ); - - } - - } - - break; - - case 'butt': - default: - - // Nothing to do here - break; - - } - - } - - function removeDuplicatedPoints( points ) { - - // Creates a new array if necessary with duplicated points removed. - // This does not remove duplicated initial and ending points of a closed path. - - var dupPoints = false; - for ( var i = 1, n = points.length - 1; i < n; i ++ ) { - - if ( points[ i ].distanceTo( points[ i + 1 ] ) < minDistance ) { - - dupPoints = true; - break; - - } - - } - - if ( ! 
dupPoints ) return points; - - var newPoints = []; - newPoints.push( points[ 0 ] ); - - for ( var i = 1, n = points.length - 1; i < n; i ++ ) { - - if ( points[ i ].distanceTo( points[ i + 1 ] ) >= minDistance ) { - - newPoints.push( points[ i ] ); - - } - } - - newPoints.push( points[ points.length - 1 ] ); - - return newPoints; - - } - }; - -}(); diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/SMAAShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/SMAAShader.js deleted file mode 100644 index 33c6da75760ce398d279ecb78f776d58c6a8d1fd..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/SMAAShader.js +++ /dev/null @@ -1,462 +0,0 @@ -/** - * @author mpk / http://polko.me/ - * - * WebGL port of Subpixel Morphological Antialiasing (SMAA) v2.8 - * Preset: SMAA 1x Medium (with color edge detection) - * https://github.com/iryoku/smaa/releases/tag/v2.8 - */ - -THREE.SMAAShader = [ { - - defines: { - - "SMAA_THRESHOLD": "0.1" - - }, - - uniforms: { - - "tDiffuse": { value: null }, - "resolution": { value: new THREE.Vector2( 1 / 1024, 1 / 512 ) } - - }, - - vertexShader: [ - - "uniform vec2 resolution;", - - "varying vec2 vUv;", - "varying vec4 vOffset[ 3 ];", - - "void SMAAEdgeDetectionVS( vec2 texcoord ) {", - "vOffset[ 0 ] = texcoord.xyxy + resolution.xyxy * vec4( -1.0, 0.0, 0.0, 1.0 );", // WebGL port note: Changed sign in W component - "vOffset[ 1 ] = texcoord.xyxy + resolution.xyxy * vec4( 1.0, 0.0, 0.0, -1.0 );", // WebGL port note: Changed sign in W component - "vOffset[ 2 ] = texcoord.xyxy + resolution.xyxy * vec4( -2.0, 0.0, 0.0, 2.0 );", // WebGL port note: Changed sign in W component - "}", - - "void main() {", - - "vUv = uv;", - - "SMAAEdgeDetectionVS( vUv );", - - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join("\n"), - - fragmentShader: [ - - "uniform sampler2D tDiffuse;", - - "varying vec2 vUv;", - "varying vec4 vOffset[ 3 ];", - - "vec4 SMAAColorEdgeDetectionPS( vec2 texcoord, vec4 offset[3], sampler2D colorTex ) {", - "vec2 threshold = vec2( SMAA_THRESHOLD, SMAA_THRESHOLD );", - - // Calculate color deltas: - "vec4 delta;", - "vec3 C = texture2D( colorTex, texcoord ).rgb;", - - "vec3 Cleft = texture2D( colorTex, offset[0].xy ).rgb;", - "vec3 t = abs( C - Cleft );", - "delta.x = max( max( t.r, t.g ), t.b );", - - "vec3 Ctop = texture2D( colorTex, offset[0].zw ).rgb;", - "t = abs( C - Ctop );", - "delta.y = max( max( t.r, t.g ), t.b );", - - // We do the usual threshold: - "vec2 edges = step( threshold, delta.xy );", - - // Then discard if there is no edge: - "if ( dot( edges, vec2( 1.0, 1.0 ) ) == 0.0 )", - "discard;", - - // Calculate right and bottom deltas: - "vec3 Cright = texture2D( colorTex, offset[1].xy ).rgb;", - "t = abs( C - Cright );", - "delta.z = max( max( t.r, t.g ), t.b );", - - "vec3 Cbottom = texture2D( colorTex, offset[1].zw ).rgb;", - "t = abs( C - Cbottom );", - "delta.w = max( max( t.r, t.g ), t.b );", - - // Calculate the maximum delta in the direct neighborhood: - "float maxDelta = max( max( max( delta.x, delta.y ), delta.z ), delta.w );", - - // Calculate left-left and top-top deltas: - "vec3 Cleftleft = texture2D( colorTex, offset[2].xy ).rgb;", - "t = abs( C - Cleftleft );", - "delta.z = max( max( t.r, t.g ), t.b );", - - "vec3 Ctoptop = texture2D( colorTex, offset[2].zw ).rgb;", - "t = abs( C - Ctoptop );", - "delta.w = max( max( t.r, t.g ), t.b );", - - // Calculate the final maximum 
delta: - "maxDelta = max( max( maxDelta, delta.z ), delta.w );", - - // Local contrast adaptation in action: - "edges.xy *= step( 0.5 * maxDelta, delta.xy );", - - "return vec4( edges, 0.0, 0.0 );", - "}", - - "void main() {", - - "gl_FragColor = SMAAColorEdgeDetectionPS( vUv, vOffset, tDiffuse );", - - "}" - - ].join("\n") - -}, { - - defines: { - - "SMAA_MAX_SEARCH_STEPS": "8", - "SMAA_AREATEX_MAX_DISTANCE": "16", - "SMAA_AREATEX_PIXEL_SIZE": "( 1.0 / vec2( 160.0, 560.0 ) )", - "SMAA_AREATEX_SUBTEX_SIZE": "( 1.0 / 7.0 )" - - }, - - uniforms: { - - "tDiffuse": { value: null }, - "tArea": { value: null }, - "tSearch": { value: null }, - "resolution": { value: new THREE.Vector2( 1 / 1024, 1 / 512 ) } - - }, - - vertexShader: [ - - "uniform vec2 resolution;", - - "varying vec2 vUv;", - "varying vec4 vOffset[ 3 ];", - "varying vec2 vPixcoord;", - - "void SMAABlendingWeightCalculationVS( vec2 texcoord ) {", - "vPixcoord = texcoord / resolution;", - - // We will use these offsets for the searches later on (see @PSEUDO_GATHER4): - "vOffset[ 0 ] = texcoord.xyxy + resolution.xyxy * vec4( -0.25, 0.125, 1.25, 0.125 );", // WebGL port note: Changed sign in Y and W components - "vOffset[ 1 ] = texcoord.xyxy + resolution.xyxy * vec4( -0.125, 0.25, -0.125, -1.25 );", // WebGL port note: Changed sign in Y and W components - - // And these for the searches, they indicate the ends of the loops: - "vOffset[ 2 ] = vec4( vOffset[ 0 ].xz, vOffset[ 1 ].yw ) + vec4( -2.0, 2.0, -2.0, 2.0 ) * resolution.xxyy * float( SMAA_MAX_SEARCH_STEPS );", - - "}", - - "void main() {", - - "vUv = uv;", - - "SMAABlendingWeightCalculationVS( vUv );", - - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join("\n"), - - fragmentShader: [ - - "#define SMAASampleLevelZeroOffset( tex, coord, offset ) texture2D( tex, coord + float( offset ) * resolution, 0.0 )", - - "uniform sampler2D tDiffuse;", - "uniform sampler2D tArea;", - "uniform sampler2D tSearch;", - "uniform vec2 resolution;", - - "varying vec2 vUv;", - "varying vec4 vOffset[3];", - "varying vec2 vPixcoord;", - - "vec2 round( vec2 x ) {", - "return sign( x ) * floor( abs( x ) + 0.5 );", - "}", - - "float SMAASearchLength( sampler2D searchTex, vec2 e, float bias, float scale ) {", - // Not required if searchTex accesses are set to point: - // float2 SEARCH_TEX_PIXEL_SIZE = 1.0 / float2(66.0, 33.0); - // e = float2(bias, 0.0) + 0.5 * SEARCH_TEX_PIXEL_SIZE + - // e * float2(scale, 1.0) * float2(64.0, 32.0) * SEARCH_TEX_PIXEL_SIZE; - "e.r = bias + e.r * scale;", - "return 255.0 * texture2D( searchTex, e, 0.0 ).r;", - "}", - - "float SMAASearchXLeft( sampler2D edgesTex, sampler2D searchTex, vec2 texcoord, float end ) {", - /** - * @PSEUDO_GATHER4 - * This texcoord has been offset by (-0.25, -0.125) in the vertex shader to - * sample between edge, thus fetching four edges in a row. - * Sampling with different offsets in each direction allows to disambiguate - * which edges are active from the four fetched ones. - */ - "vec2 e = vec2( 0.0, 1.0 );", - - "for ( int i = 0; i < SMAA_MAX_SEARCH_STEPS; i ++ ) {", // WebGL port note: Changed while to for - "e = texture2D( edgesTex, texcoord, 0.0 ).rg;", - "texcoord -= vec2( 2.0, 0.0 ) * resolution;", - "if ( ! 
( texcoord.x > end && e.g > 0.8281 && e.r == 0.0 ) ) break;", - "}", - - // We correct the previous (-0.25, -0.125) offset we applied: - "texcoord.x += 0.25 * resolution.x;", - - // The searches are bias by 1, so adjust the coords accordingly: - "texcoord.x += resolution.x;", - - // Disambiguate the length added by the last step: - "texcoord.x += 2.0 * resolution.x;", // Undo last step - "texcoord.x -= resolution.x * SMAASearchLength(searchTex, e, 0.0, 0.5);", - - "return texcoord.x;", - "}", - - "float SMAASearchXRight( sampler2D edgesTex, sampler2D searchTex, vec2 texcoord, float end ) {", - "vec2 e = vec2( 0.0, 1.0 );", - - "for ( int i = 0; i < SMAA_MAX_SEARCH_STEPS; i ++ ) {", // WebGL port note: Changed while to for - "e = texture2D( edgesTex, texcoord, 0.0 ).rg;", - "texcoord += vec2( 2.0, 0.0 ) * resolution;", - "if ( ! ( texcoord.x < end && e.g > 0.8281 && e.r == 0.0 ) ) break;", - "}", - - "texcoord.x -= 0.25 * resolution.x;", - "texcoord.x -= resolution.x;", - "texcoord.x -= 2.0 * resolution.x;", - "texcoord.x += resolution.x * SMAASearchLength( searchTex, e, 0.5, 0.5 );", - - "return texcoord.x;", - "}", - - "float SMAASearchYUp( sampler2D edgesTex, sampler2D searchTex, vec2 texcoord, float end ) {", - "vec2 e = vec2( 1.0, 0.0 );", - - "for ( int i = 0; i < SMAA_MAX_SEARCH_STEPS; i ++ ) {", // WebGL port note: Changed while to for - "e = texture2D( edgesTex, texcoord, 0.0 ).rg;", - "texcoord += vec2( 0.0, 2.0 ) * resolution;", // WebGL port note: Changed sign - "if ( ! ( texcoord.y > end && e.r > 0.8281 && e.g == 0.0 ) ) break;", - "}", - - "texcoord.y -= 0.25 * resolution.y;", // WebGL port note: Changed sign - "texcoord.y -= resolution.y;", // WebGL port note: Changed sign - "texcoord.y -= 2.0 * resolution.y;", // WebGL port note: Changed sign - "texcoord.y += resolution.y * SMAASearchLength( searchTex, e.gr, 0.0, 0.5 );", // WebGL port note: Changed sign - - "return texcoord.y;", - "}", - - "float SMAASearchYDown( sampler2D edgesTex, sampler2D searchTex, vec2 texcoord, float end ) {", - "vec2 e = vec2( 1.0, 0.0 );", - - "for ( int i = 0; i < SMAA_MAX_SEARCH_STEPS; i ++ ) {", // WebGL port note: Changed while to for - "e = texture2D( edgesTex, texcoord, 0.0 ).rg;", - "texcoord -= vec2( 0.0, 2.0 ) * resolution;", // WebGL port note: Changed sign - "if ( ! 
( texcoord.y < end && e.r > 0.8281 && e.g == 0.0 ) ) break;", - "}", - - "texcoord.y += 0.25 * resolution.y;", // WebGL port note: Changed sign - "texcoord.y += resolution.y;", // WebGL port note: Changed sign - "texcoord.y += 2.0 * resolution.y;", // WebGL port note: Changed sign - "texcoord.y -= resolution.y * SMAASearchLength( searchTex, e.gr, 0.5, 0.5 );", // WebGL port note: Changed sign - - "return texcoord.y;", - "}", - - "vec2 SMAAArea( sampler2D areaTex, vec2 dist, float e1, float e2, float offset ) {", - // Rounding prevents precision errors of bilinear filtering: - "vec2 texcoord = float( SMAA_AREATEX_MAX_DISTANCE ) * round( 4.0 * vec2( e1, e2 ) ) + dist;", - - // We do a scale and bias for mapping to texel space: - "texcoord = SMAA_AREATEX_PIXEL_SIZE * texcoord + ( 0.5 * SMAA_AREATEX_PIXEL_SIZE );", - - // Move to proper place, according to the subpixel offset: - "texcoord.y += SMAA_AREATEX_SUBTEX_SIZE * offset;", - - "return texture2D( areaTex, texcoord, 0.0 ).rg;", - "}", - - "vec4 SMAABlendingWeightCalculationPS( vec2 texcoord, vec2 pixcoord, vec4 offset[ 3 ], sampler2D edgesTex, sampler2D areaTex, sampler2D searchTex, ivec4 subsampleIndices ) {", - "vec4 weights = vec4( 0.0, 0.0, 0.0, 0.0 );", - - "vec2 e = texture2D( edgesTex, texcoord ).rg;", - - "if ( e.g > 0.0 ) {", // Edge at north - "vec2 d;", - - // Find the distance to the left: - "vec2 coords;", - "coords.x = SMAASearchXLeft( edgesTex, searchTex, offset[ 0 ].xy, offset[ 2 ].x );", - "coords.y = offset[ 1 ].y;", // offset[1].y = texcoord.y - 0.25 * resolution.y (@CROSSING_OFFSET) - "d.x = coords.x;", - - // Now fetch the left crossing edges, two at a time using bilinear - // filtering. Sampling at -0.25 (see @CROSSING_OFFSET) enables to - // discern what value each edge has: - "float e1 = texture2D( edgesTex, coords, 0.0 ).r;", - - // Find the distance to the right: - "coords.x = SMAASearchXRight( edgesTex, searchTex, offset[ 0 ].zw, offset[ 2 ].y );", - "d.y = coords.x;", - - // We want the distances to be in pixel units (doing this here allow to - // better interleave arithmetic and memory accesses): - "d = d / resolution.x - pixcoord.x;", - - // SMAAArea below needs a sqrt, as the areas texture is compressed - // quadratically: - "vec2 sqrt_d = sqrt( abs( d ) );", - - // Fetch the right crossing edges: - "coords.y -= 1.0 * resolution.y;", // WebGL port note: Added - "float e2 = SMAASampleLevelZeroOffset( edgesTex, coords, ivec2( 1, 0 ) ).r;", - - // Ok, we know how this pattern looks like, now it is time for getting - // the actual area: - "weights.rg = SMAAArea( areaTex, sqrt_d, e1, e2, float( subsampleIndices.y ) );", - "}", - - "if ( e.r > 0.0 ) {", // Edge at west - "vec2 d;", - - // Find the distance to the top: - "vec2 coords;", - - "coords.y = SMAASearchYUp( edgesTex, searchTex, offset[ 1 ].xy, offset[ 2 ].z );", - "coords.x = offset[ 0 ].x;", // offset[1].x = texcoord.x - 0.25 * resolution.x; - "d.x = coords.y;", - - // Fetch the top crossing edges: - "float e1 = texture2D( edgesTex, coords, 0.0 ).g;", - - // Find the distance to the bottom: - "coords.y = SMAASearchYDown( edgesTex, searchTex, offset[ 1 ].zw, offset[ 2 ].w );", - "d.y = coords.y;", - - // We want the distances to be in pixel units: - "d = d / resolution.y - pixcoord.y;", - - // SMAAArea below needs a sqrt, as the areas texture is compressed - // quadratically: - "vec2 sqrt_d = sqrt( abs( d ) );", - - // Fetch the bottom crossing edges: - "coords.y -= 1.0 * resolution.y;", // WebGL port note: Added - "float e2 = SMAASampleLevelZeroOffset( 
edgesTex, coords, ivec2( 0, 1 ) ).g;", - - // Get the area for this direction: - "weights.ba = SMAAArea( areaTex, sqrt_d, e1, e2, float( subsampleIndices.x ) );", - "}", - - "return weights;", - "}", - - "void main() {", - - "gl_FragColor = SMAABlendingWeightCalculationPS( vUv, vPixcoord, vOffset, tDiffuse, tArea, tSearch, ivec4( 0.0 ) );", - - "}" - - ].join("\n") - -}, { - - uniforms: { - - "tDiffuse": { value: null }, - "tColor": { value: null }, - "resolution": { value: new THREE.Vector2( 1 / 1024, 1 / 512 ) } - - }, - - vertexShader: [ - - "uniform vec2 resolution;", - - "varying vec2 vUv;", - "varying vec4 vOffset[ 2 ];", - - "void SMAANeighborhoodBlendingVS( vec2 texcoord ) {", - "vOffset[ 0 ] = texcoord.xyxy + resolution.xyxy * vec4( -1.0, 0.0, 0.0, 1.0 );", // WebGL port note: Changed sign in W component - "vOffset[ 1 ] = texcoord.xyxy + resolution.xyxy * vec4( 1.0, 0.0, 0.0, -1.0 );", // WebGL port note: Changed sign in W component - "}", - - "void main() {", - - "vUv = uv;", - - "SMAANeighborhoodBlendingVS( vUv );", - - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join("\n"), - - fragmentShader: [ - - "uniform sampler2D tDiffuse;", - "uniform sampler2D tColor;", - "uniform vec2 resolution;", - - "varying vec2 vUv;", - "varying vec4 vOffset[ 2 ];", - - "vec4 SMAANeighborhoodBlendingPS( vec2 texcoord, vec4 offset[ 2 ], sampler2D colorTex, sampler2D blendTex ) {", - // Fetch the blending weights for current pixel: - "vec4 a;", - "a.xz = texture2D( blendTex, texcoord ).xz;", - "a.y = texture2D( blendTex, offset[ 1 ].zw ).g;", - "a.w = texture2D( blendTex, offset[ 1 ].xy ).a;", - - // Is there any blending weight with a value greater than 0.0? - "if ( dot(a, vec4( 1.0, 1.0, 1.0, 1.0 )) < 1e-5 ) {", - "return texture2D( colorTex, texcoord, 0.0 );", - "} else {", - // Up to 4 lines can be crossing a pixel (one through each edge). We - // favor blending by choosing the line with the maximum weight for each - // direction: - "vec2 offset;", - "offset.x = a.a > a.b ? a.a : -a.b;", // left vs. right - "offset.y = a.g > a.r ? -a.g : a.r;", // top vs. bottom // WebGL port note: Changed signs - - // Then we go in the direction that has the maximum weight: - "if ( abs( offset.x ) > abs( offset.y )) {", // horizontal vs. vertical - "offset.y = 0.0;", - "} else {", - "offset.x = 0.0;", - "}", - - // Fetch the opposite color and lerp by hand: - "vec4 C = texture2D( colorTex, texcoord, 0.0 );", - "texcoord += sign( offset ) * resolution;", - "vec4 Cop = texture2D( colorTex, texcoord, 0.0 );", - "float s = abs( offset.x ) > abs( offset.y ) ? 
abs( offset.x ) : abs( offset.y );", - - // WebGL port note: Added gamma correction - "C.xyz = pow(C.xyz, vec3(2.2));", - "Cop.xyz = pow(Cop.xyz, vec3(2.2));", - "vec4 mixed = mix(C, Cop, s);", - "mixed.xyz = pow(mixed.xyz, vec3(1.0 / 2.2));", - - "return mixed;", - "}", - "}", - - "void main() {", - - "gl_FragColor = SMAANeighborhoodBlendingPS( vUv, vOffset, tColor, tDiffuse );", - - "}" - - ].join("\n") - -} ]; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Frustum.js b/spaces/banana-projects/web3d/node_modules/three/src/math/Frustum.js deleted file mode 100644 index 6c0f15e43b17a1892dd19346a86528d916ad4477..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/math/Frustum.js +++ /dev/null @@ -1,194 +0,0 @@ -import { Vector3 } from './Vector3.js'; -import { Sphere } from './Sphere.js'; -import { Plane } from './Plane.js'; - -/** - * @author mrdoob / http://mrdoob.com/ - * @author alteredq / http://alteredqualia.com/ - * @author bhouston / http://clara.io - */ - -function Frustum( p0, p1, p2, p3, p4, p5 ) { - - this.planes = [ - - ( p0 !== undefined ) ? p0 : new Plane(), - ( p1 !== undefined ) ? p1 : new Plane(), - ( p2 !== undefined ) ? p2 : new Plane(), - ( p3 !== undefined ) ? p3 : new Plane(), - ( p4 !== undefined ) ? p4 : new Plane(), - ( p5 !== undefined ) ? p5 : new Plane() - - ]; - -} - -Object.assign( Frustum.prototype, { - - set: function ( p0, p1, p2, p3, p4, p5 ) { - - var planes = this.planes; - - planes[ 0 ].copy( p0 ); - planes[ 1 ].copy( p1 ); - planes[ 2 ].copy( p2 ); - planes[ 3 ].copy( p3 ); - planes[ 4 ].copy( p4 ); - planes[ 5 ].copy( p5 ); - - return this; - - }, - - clone: function () { - - return new this.constructor().copy( this ); - - }, - - copy: function ( frustum ) { - - var planes = this.planes; - - for ( var i = 0; i < 6; i ++ ) { - - planes[ i ].copy( frustum.planes[ i ] ); - - } - - return this; - - }, - - setFromMatrix: function ( m ) { - - var planes = this.planes; - var me = m.elements; - var me0 = me[ 0 ], me1 = me[ 1 ], me2 = me[ 2 ], me3 = me[ 3 ]; - var me4 = me[ 4 ], me5 = me[ 5 ], me6 = me[ 6 ], me7 = me[ 7 ]; - var me8 = me[ 8 ], me9 = me[ 9 ], me10 = me[ 10 ], me11 = me[ 11 ]; - var me12 = me[ 12 ], me13 = me[ 13 ], me14 = me[ 14 ], me15 = me[ 15 ]; - - planes[ 0 ].setComponents( me3 - me0, me7 - me4, me11 - me8, me15 - me12 ).normalize(); - planes[ 1 ].setComponents( me3 + me0, me7 + me4, me11 + me8, me15 + me12 ).normalize(); - planes[ 2 ].setComponents( me3 + me1, me7 + me5, me11 + me9, me15 + me13 ).normalize(); - planes[ 3 ].setComponents( me3 - me1, me7 - me5, me11 - me9, me15 - me13 ).normalize(); - planes[ 4 ].setComponents( me3 - me2, me7 - me6, me11 - me10, me15 - me14 ).normalize(); - planes[ 5 ].setComponents( me3 + me2, me7 + me6, me11 + me10, me15 + me14 ).normalize(); - - return this; - - }, - - intersectsObject: function () { - - var sphere = new Sphere(); - - return function intersectsObject( object ) { - - var geometry = object.geometry; - - if ( geometry.boundingSphere === null ) - geometry.computeBoundingSphere(); - - sphere.copy( geometry.boundingSphere ) - .applyMatrix4( object.matrixWorld ); - - return this.intersectsSphere( sphere ); - - }; - - }(), - - intersectsSprite: function () { - - var sphere = new Sphere(); - - return function intersectsSprite( sprite ) { - - sphere.center.set( 0, 0, 0 ); - sphere.radius = 0.7071067811865476; - sphere.applyMatrix4( sprite.matrixWorld ); - - return this.intersectsSphere( sphere ); - - }; - - }(), - - 
intersectsSphere: function ( sphere ) { - - var planes = this.planes; - var center = sphere.center; - var negRadius = - sphere.radius; - - for ( var i = 0; i < 6; i ++ ) { - - var distance = planes[ i ].distanceToPoint( center ); - - if ( distance < negRadius ) { - - return false; - - } - - } - - return true; - - }, - - intersectsBox: function () { - - var p = new Vector3(); - - return function intersectsBox( box ) { - - var planes = this.planes; - - for ( var i = 0; i < 6; i ++ ) { - - var plane = planes[ i ]; - - // corner at max distance - - p.x = plane.normal.x > 0 ? box.max.x : box.min.x; - p.y = plane.normal.y > 0 ? box.max.y : box.min.y; - p.z = plane.normal.z > 0 ? box.max.z : box.min.z; - - if ( plane.distanceToPoint( p ) < 0 ) { - - return false; - - } - - } - - return true; - - }; - - }(), - - containsPoint: function ( point ) { - - var planes = this.planes; - - for ( var i = 0; i < 6; i ++ ) { - - if ( planes[ i ].distanceToPoint( point ) < 0 ) { - - return false; - - } - - } - - return true; - - } - -} ); - - -export { Frustum }; diff --git a/spaces/bingbing520/ChatGPT/modules/models.py b/spaces/bingbing520/ChatGPT/modules/models.py deleted file mode 100644 index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/modules/models.py +++ /dev/null @@ -1,625 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform -import base64 -from io import BytesIO -from PIL import Image - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum -import uuid - -from .presets import * -from .llama_func import * -from .utils import * -from . 
import shared -from .config import retrieve_proxy -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - import traceback - traceback.print_exc() - logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier is not None: - payload["user"] = 
self.user_identifier - - if stream: - timeout = TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - def set_key(self, new_access_key): - ret = super().set_key(new_access_key) - self._refresh_header() - return ret - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name) -> None: - super().__init__(model_name=model_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - ) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. 
current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - ) -> None: - super().__init__(model_name=model_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - - def _get_llama_style_input(self): - history = [] - instruction = "" - if self.system_prompt: - instruction = (f"Instruction: {self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - 
input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMChat(BaseLLMModel): - def __init__(self, api_key): - super().__init__(model_name="xmchat") - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - self.last_conv_id = None - - def reset(self): - self.session_id = str(uuid.uuid4()) - self.last_conv_id = None - return [], "已重置" - - def image_to_base64(self, image_path): - # 打开并加载图片 - img = Image.open(image_path) - - # 获取图片的宽度和高度 - width, height = img.size - - # 计算压缩比例,以确保最长边小于4096像素 - max_dimension = 2048 - scale_ratio = min(max_dimension / width, max_dimension / height) - - if scale_ratio < 1: - # 按压缩比例调整图片大小 - new_width = int(width * scale_ratio) - new_height = int(height * scale_ratio) - img = img.resize((new_width, new_height), Image.ANTIALIAS) - - # 将图片转换为jpg格式的二进制数据 - buffer = BytesIO() - if img.mode == "RGBA": - img = img.convert("RGB") - img.save(buffer, format='JPEG') - binary_image = buffer.getvalue() - - # 对二进制数据进行Base64编码 - base64_image = base64.b64encode(binary_image).decode('utf-8') - - return base64_image - - def try_read_image(self, filepath): - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - self.image_bytes = self.image_to_base64(filepath) - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def like(self): - if self.last_conv_id is None: - return "点赞失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "good" - } - response = requests.post(self.url, json=data) - return "👍点赞成功,,感谢反馈~" - - def dislike(self): - if self.last_conv_id is None: - return "点踩失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "bad" - } - response = requests.post(self.url, json=data) - return "👎点踩成功,感谢反馈~" - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - display_append = "" - limited_context = False - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - # XMChat的一轮对话中实际上只能处理一张图片 - self.reset() - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def get_answer_at_once(self): - question = 
self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - self.last_conv_id = conv_id - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client(model_name, lora_model_path) - elif model_type == ModelType.XMChat: - if os.environ.get("XMCHAT_API_KEY") != "": - access_key = os.environ.get("XMCHAT_API_KEY") - model = XMChat(api_key=access_key) - elif model_type == ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg - else: - return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" 
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/bioriAsaeru/text-to-voice/A Clean Slate Part 1 Online Gratuito VERIFIED.md b/spaces/bioriAsaeru/text-to-voice/A Clean Slate Part 1 Online Gratuito VERIFIED.md deleted file mode 100644 index 21763db6350876165ed84ccc3341f9ece37b9a45..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/A Clean Slate Part 1 Online Gratuito VERIFIED.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    Designing the curriculum We believe that calculus can be for students what it was for Euler and the Bernoullis: a language and a tool for exploring the whole fabric of science. We also believe that much of the mathematical depth and vitality of calculus lies in connections to other sciences. The mathematical questions that arise are compelling in part because the answers matter to other disciplines. We began our work with a "clean slate," not by asking what parts of the traditional course to include or discard. Our starting points are thus our summary of what calculus is really about. Our curricular goals are what we aim to convey about the subject in the course. Our functional goals describe the attitudes and behaviors we hope our students will adopt in using calculus to approach scientific and mathematical questions.

    -

    A Clean Slate: Part 1 Online Gratuito


    DOWNLOAD ✏ ✏ ✏ https://urloso.com/2uyRq0



    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/C windows system32 rundll32.exe Not Found? Try These Easy Solutions.md b/spaces/bioriAsaeru/text-to-voice/C windows system32 rundll32.exe Not Found? Try These Easy Solutions.md deleted file mode 100644 index 958b99abff3fed82f23ac994ce4ee8fa46109645..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/C windows system32 rundll32.exe Not Found? Try These Easy Solutions.md +++ /dev/null @@ -1,27 +0,0 @@ -
    -

    If you use Windows Task Manager to check running processes and find multiple copies of rundll32.exe, it may mean that there is a virus or Trojan on your computer. But the official Windows rundll32.exe is safe and will not harm your computer.

    -

Note: the article describes how to run a full SFC check using the /scannow option. If you have had a recent virus infection, this is the best option, as it checks all important Windows files and can replace any that are missing or corrupted, not just rundll32.exe.
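If you prefer to script that step, here is a minimal sketch that simply launches the same sfc /scannow check from Python's standard library; it assumes you are on Windows and running from an elevated (Administrator) prompt, since sfc will refuse to scan otherwise.

```python
import subprocess

# Launch a full System File Checker scan and wait for it to finish.
# This must be run from an elevated (Administrator) command prompt on Windows.
exit_code = subprocess.run(["sfc", "/scannow"]).returncode
print("sfc finished with exit code", exit_code)
```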

    -

    how to fix c windows system32 rundll32.exe


    Download Zip 🌟 https://urloso.com/2uyPYx



    -

    These directories include a backup of many system files which can be used to replace those which are corrupted or missing. In all versions of Windows except XP, you may need to search all the WinSxS subfolders (including hidden and system files) to find the backup version of rundll32.exe if it is present.
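One way to locate those backup copies is to walk the WinSxS tree and list every rundll32.exe it contains. The sketch below is only an illustration using the Python standard library; it prints candidate paths and sizes and does not copy or replace anything.

```python
import os

# Walk C:\Windows\WinSxS and list every backup copy of rundll32.exe found there.
winsxs = r"C:\Windows\WinSxS"
for root, _dirs, files in os.walk(winsxs):
    for name in files:
        if name.lower() == "rundll32.exe":
            path = os.path.join(root, name)
            print(path, os.path.getsize(path), "bytes")
```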

    -

While monitoring the network activity of rundll32.exe from Austin, Texas, USA with the GlassWire software, we found it connects to settingsfd-geo.trafficmanager.net, which appears to be controlled by Microsoft Corporation. We found no other network activity from the .exe. We believe rundll32.exe connects to settingsfd-geo.trafficmanager.net to help manage the distribution of traffic across your PC's endpoints. This traffic management seems to happen at the DNS level to help your PC and apps work properly.

    -

    The genuine rundll32.exe file is a software component of Microsoft Windows Operating System by Microsoft Corporation.
    "RunDLL32.exe" is Microsoft's "Windows Host Process," (or, "Run a DLL as an app,") a powerful tool available since Windows Vista and Windows Server 2008. On 32-bit Windows systems, it resides in "C:\Windows\System32." On 64-bit systems, two "rundll32.exe" processes exist in "\System32" and "\SysWOW64" to call 64-bit and 32-bit DLL's respectively. In any other location this process name is probably disguised malware from trojans or viruses, especially in subfolders of the user's profile folder. Code in Dynamic Link Library (.DLL) files is not normally directly executable; it must be called from a process. Library files reduce RAM and disk usage by segmenting code loaded into active memory, and allowing multiple applications to call one copy of a commonly-used function (or "method"). Developers familiar with Windows API method names can use "rundll32.exe" commands in scripts to call specific methods within specific DLL's to perform Windows functions remotely and/or on a schedule.

    -

    The .exe extension on a filename indicates an executable file. Executable files may, in some cases, harm your computer. Therefore, please read below to decide for yourself whether the rundll32.exe on your computer is a Trojan that you should remove, or whether it is a file belonging to the Windows operating system or to a trusted application.

    -

Description: The original rundll32.exe from Microsoft is an important part of Windows, but often causes problems. Rundll32.exe is located in the C:\Windows\System32 folder or sometimes in the C:\Windows folder. Known file sizes on Windows 10/11/7 are 33,280 bytes (33% of all occurrences), 44,544 bytes and 23 more variants.
The process is the bff42538 service.
Rundll32.exe is a Windows core system file. The file is a Microsoft signed file. The program is not visible. Therefore the technical security rating is 7% dangerous, but you should also take into account the user reviews.

    -

Is rundll32.exe a virus? No, it is not. The true rundll32.exe file is a safe Microsoft Windows system process, called "Windows host process". However, writers of malware programs, such as viruses, worms, and Trojans, deliberately give their processes the same file name to escape detection. Viruses with the same file name are, for example, WS.Reputation.1 (detected by Symantec), and Trojan-Dropper.Win32.Injector.ebsj or Trojan.Win32.Zapchast.acbp (detected by Kaspersky).
    To ensure that no rogue rundll32.exe is running on your PC, click here to run a Free Malware Scan.

    -

    -

    Important: Some malware disguises itself as rundll32.exe, particularly when not located in the C:\Windows\System32 folder. Therefore, you should check the rundll32.exe process on your PC to see if it is a threat. We recommend Security Task Manager for verifying your computer's security. This was one of the Top Download Picks of The Washington Post and PC World.
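A rough version of that location check is sketched below. It assumes the third-party psutil package (pip install psutil) and simply flags any running rundll32.exe whose image path is outside the expected System32 and SysWOW64 folders.

```python
import psutil  # third-party: pip install psutil

# Paths where the legitimate rundll32.exe lives on 64-bit Windows.
EXPECTED = {
    r"c:\windows\system32\rundll32.exe",
    r"c:\windows\syswow64\rundll32.exe",
}

# Flag rundll32.exe processes that are not running from the expected folders.
for proc in psutil.process_iter(["name", "exe"]):
    if (proc.info["name"] or "").lower() == "rundll32.exe":
        exe = (proc.info["exe"] or "").lower()
        if exe and exe not in EXPECTED:
            print(f"Suspicious location: PID {proc.pid} -> {exe}")
```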

    -

Summary: Average user rating of rundll32.exe: based on 494 votes with 9 user comments. 178 users think rundll32.exe is essential for Windows or an installed application; 22 users think it's probably harmless; 121 users think it's neither essential nor dangerous; 70 users suspect danger; 103 users think rundll32.exe is dangerous and recommend removing it; 77 users don't grade rundll32.exe ("not sure about it").

    -

To help you analyze the rundll32.exe process on your computer, the following programs have proven to be helpful: (A) Security Task Manager displays all running Windows tasks, including embedded hidden processes, such as keyboard and browser monitoring or Autostart entries; a unique security risk rating indicates the likelihood of the process being potential spyware, malware or a Trojan. (B) Malwarebytes Anti-Malware detects and removes sleeping spyware, adware, Trojans, keyloggers, malware and trackers from your hard drive.

    -

I believe that is caused by UAC and how the Admin account is subjected to UAC, per se.
I have done the below to resolve that.
You can also create a shortcut to the area you are blocked from, and it worked that way for me too.
Do a search on rundll32.exe and UAC to see more info about it.

    -

    If Target contains any commas, they must be escaped as shown three times in the following example:
Run rundll32.exe shell32.dll`,Control_RunDLL desk.cpl`,`, 3 ; Opens Control Panel > Display Properties > Settings

    -

    ...
I see, you mean just in that example. I am removing the backquotes so it works in Start > Run.

If I click Start, then Run, and enter
"C:\Windows\system32\rundll32.exe" sysdm.cpl,EditEnvironmentVariables

it doesn't work.

    -

    I am running Windows 8.1 Update in a Parallels VM. After about 5 minutes of inactivity, a rundll32.exe process is spawned and consumes a core. MsMpEng.exe activity also increases. (probably due to lots of IO but I can't confirm) If I interact with the VM in any way, the rundll32.exe immediately exits until I let it idle for another 5 minutes.

    -

Hi, I have found this same problem after updating to Windows 10, and not a single common answer to this issue worked for me. When my computer went idle, the C: drive usage would go up to 100% and make any task impossible, leading to a manual shutdown by holding the power button. Windows Process Explorer would show rundll32.exe, and the properties of this process showed C:\Windows\system32\rundll32.exe invagent,RunUpdate -noappraiser (then random numbers and letters).

    -

So I have fixed the 100% C: drive problem by renaming invagent.dll to invagent.dll.bak, but I have potentially opened up a new problem that is currently not causing me any issues. I will edit this answer if I have any further issues over the next week, or if I discover why multiple versions of rundll32.exe are now running.

    -

    Your problem likely is that your program is compiled as 32-bit and your OS is 64-bit, and thus, when you try to access "C:\Windows\System32\Speech\SpeechUX\SpeechUX.dll" from your program, you're really accessing "C:\Windows\SysWOW64\Speech\SpeechUX\SpeechUX.dll" which, as rundll32.exe is reporting doesn't exist.
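That redirection is easy to observe: for a 32-bit process on 64-bit Windows, C:\Windows\System32 silently maps to SysWOW64, while the virtual C:\Windows\Sysnative alias reaches the real 64-bit System32. The sketch below only probes the paths mentioned in the question; whether SpeechUX.dll exists at all depends on the Windows edition.

```python
import os
import platform
import struct

# File-system redirection only affects 32-bit processes on 64-bit Windows.
print("Python is", struct.calcsize("P") * 8, "bit on", platform.machine())

candidates = [
    r"C:\Windows\System32\Speech\SpeechUX\SpeechUX.dll",   # redirected to SysWOW64 for 32-bit callers
    r"C:\Windows\Sysnative\Speech\SpeechUX\SpeechUX.dll",  # Sysnative bypasses redirection (visible to 32-bit processes only)
    r"C:\Windows\SysWOW64\Speech\SpeechUX\SpeechUX.dll",   # the 32-bit copy, if one exists
]
for path in candidates:
    print(path, "->", "exists" if os.path.exists(path) else "not found")
```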

    -

Command-line parameters are some of the most reliable telemetry for detecting malicious use of Rundll32, since adversaries often need to pass command-line arguments for Rundll32 to execute. Eight of our top 10 detection analytics for Rundll32 include a command-line component. Capturing command-line activity will capture both the name of the DLL that was launched by rundll32.exe and any additional command-line arguments.
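A toy collector along those lines is sketched below, again assuming the third-party psutil package; it records the full command line of every rundll32.exe currently running, which is exactly the field the analytics discussed here key on.

```python
import psutil  # third-party: pip install psutil

# Record the command line of every running rundll32.exe; the arguments identify
# which DLL and which export were launched.
for proc in psutil.process_iter(["name", "cmdline"]):
    if (proc.info["name"] or "").lower() == "rundll32.exe":
        cmdline = " ".join(proc.info["cmdline"] or [])
        print(f"PID {proc.pid}: {cmdline}")
```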

    -

    Consider monitoring for instances of rundll32.exe running Windows native DLLs that have export functionalities that adversaries commonly leverage for executing malicious code and evading defensive controls. The following pseudo-analytic applies specifically to adversaries who use the MiniDump export functionality of comsvcs.dll to dump the contents of LSASS, but this logic could be adapted to detect other malicious activity as well.
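The pseudo-analytic itself is not reproduced here, so the following is only a guess at its shape: given process-creation events as plain dictionaries (the image and command_line field names are assumptions, not a real log schema), it flags rundll32.exe invocations whose command line references both comsvcs and MiniDump.

```python
def flags_comsvcs_minidump(event: dict) -> bool:
    """Return True for process events that look like rundll32.exe invoking the
    MiniDump export of comsvcs.dll (field names here are illustrative only)."""
    image = (event.get("image") or "").lower()
    cmdline = (event.get("command_line") or "").lower()
    return image.endswith("rundll32.exe") and "comsvcs" in cmdline and "minidump" in cmdline


# Example event shaped like a simplified process-creation record (placeholders, not a real command):
sample = {
    "image": r"C:\Windows\System32\rundll32.exe",
    "command_line": "rundll32.exe comsvcs.dll, MiniDump <pid> <dump path> full",
}
print(flags_comsvcs_minidump(sample))  # True
```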

    -

    Rundll32 does not normally execute without corresponding command-line arguments, nor does it normally spawn child processes while doing so. Given this, you may want to alert on the execution of processes that appear to be rundll32.exe without any command-line arguments, especially when they spawn child processes or make network connections.
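
    A minimal Python sketch of that alerting idea follows: treat rundll32.exe with an empty command line as suspicious, and escalate when it also spawns child processes or makes network connections. The event fields are assumed for illustration.

    def score_rundll32_event(event: dict) -> str:
        """Crude triage for a rundll32.exe process event."""
        cmdline = event.get("command_line", "").strip()
        # Anything after the image path counts as command-line arguments.
        has_args = len(cmdline.split(None, 1)) > 1
        if has_args:
            return "benign-looking"
        if event.get("child_processes") or event.get("network_connections"):
            return "alert"
        return "suspicious"

    print(score_rundll32_event({
        "command_line": r"C:\Windows\System32\rundll32.exe",
        "network_connections": ["10.0.0.5:443"],
    }))  # alert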

    -

    3:38:02 PM
    C:\windows\system32\cmd.exe /c "C:\windows\system32\rundll32.exe setupapi,InstallHinfSection DefaultUninstall 128 C:\Program Files\Splunk\bin\SplunkMonitorNoHandleDrv.inf >> "C:\Users\ADMINI~1\AppData\Local\Temp\splunk.log" 2>&1"
    3:38:03 PM
    C:\windows\system32\cmd.exe /c "C:\windows\system32\rundll32.exe setupapi,InstallHinfSection DefaultUninstall 128 C:\Program Files\Splunk\bin\splknetdrv.inf >> "C:\Users\ADMINI~1\AppData\Local\Temp\splunk.log" 2>&1"
    3:38:04 PM
    C:\windows\system32\cmd.exe /c "C:\windows\system32\rundll32.exe setupapi,InstallHinfSection DefaultUninstall 128 C:\Program Files\Splunk\bin\splunkdrv.inf >> "C:\Users\ADMINI~1\AppData\Local\Temp\splunk.log" 2>&1"
    3:39:05 PM
    C:\windows\system32\cmd.exe /c ""C:\Program Files\Splunk\bin\splunk.exe" start --answer-yes --no-prompt --accept-license --auto-ports >> "C:\Users\ADMINI~1\AppData\Local\Temp\splunk.log" 2>&1"
    '"C:\Program Files\Splunk\bin\splunk.exe"' is not recognized as an internal or external command,
    operable program or batch file.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Ccna Network Visualizer 8.0 Crack Cocaine.md b/spaces/bioriAsaeru/text-to-voice/Ccna Network Visualizer 8.0 Crack Cocaine.md deleted file mode 100644 index 51c5614d2054a4e6456099814603659dac40fce6..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Ccna Network Visualizer 8.0 Crack Cocaine.md +++ /dev/null @@ -1,6 +0,0 @@ -

    ccna network visualizer 8.0 crack cocaine


    Download 🗸🗸🗸 https://urloso.com/2uyPZW



    -
    -Ccna routersim network visualizer 6.0 crack: probably you can find ccna ... download ccna network visualizer 8.0 build 2413an intuitive and easy to ... enable users to build.your torrent was excellent and.logicvein views.3:35. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Stuart Little 1 720p Movies) [UPDATED].md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (Stuart Little 1 720p Movies) [UPDATED].md deleted file mode 100644 index cba9b7b5a346254dcee01917c27fc18a26a1817d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Stuart Little 1 720p Movies) [UPDATED].md +++ /dev/null @@ -1,67 +0,0 @@ -
    -

    HD Online Player (Stuart Little 1 720p Movies): How to Watch the Family Classic in High Definition

    - -

    Stuart Little is a 1999 family movie that tells the story of a mouse who is adopted by a human family and goes on various adventures with his new brother and friends. The movie is based on the children's book by E.B. White and features the voice of Michael J. Fox as Stuart, as well as Geena Davis, Hugh Laurie, Nathan Lane, and Chazz Palminteri in live-action roles.

    - -

    If you are looking for a fun and heartwarming movie to watch with your kids or just for yourself, you might want to check out Stuart Little in HD quality. HD online players are devices or software that allow you to stream or download movies in high definition, which means you can enjoy the movie in better resolution, clarity, and color. HD online players can also offer other features such as subtitles, audio options, and playback controls.

    -

    HD Online Player (Stuart Little 1 720p Movies)


    DOWNLOAD > https://urloso.com/2uyOqq



    - -

    There are many ways to watch Stuart Little in HD online, depending on your preferences and budget. Here are some of the most popular options:

    - -
      -
    • Streaming services: Streaming services are platforms that let you watch movies online without downloading them. You can access them through your computer, smart TV, smartphone, tablet, or other devices. Some of the streaming services that offer Stuart Little in HD are Netflix, Stan, Amazon Prime Video, and YouTube. You will need to pay a monthly or annual fee to use these services, but they also offer a large library of other movies and shows that you can watch anytime.
    • -
    • Online rental or purchase: Online rental or purchase are options that let you pay a one-time fee to watch a movie online for a limited time or keep it forever. You can access them through your computer, smart TV, smartphone, tablet, or other devices. Some of the online platforms that offer Stuart Little in HD for rental or purchase are Google Play Movies, Apple TV, Microsoft Store, Cineplex, and Vudu. You will need to have an account and a payment method to use these platforms, but they also offer discounts and deals on other movies that you might like.
    • -
    • DVD or Blu-ray: DVD or Blu-ray are physical discs that contain movies that you can play on your DVD or Blu-ray player. You can buy them online or at your local store. Stuart Little is available on DVD and Blu-ray in HD quality, and they also include bonus features such as deleted scenes, behind-the-scenes footage, and commentary. You will need to have a DVD or Blu-ray player and a TV to use these discs, but they also offer more durability and portability than online options.
    • -
    - -

    No matter which option you choose, you will be able to enjoy Stuart Little in HD online with your HD online player. Stuart Little is a movie that will make you laugh, cry, and cheer for the brave and lovable mouse who proves that size doesn't matter when it comes to family and friendship.

    -

    What You Can Learn from Stuart Little in HD Online

    - -

    Stuart Little is not only a fun and entertaining movie, but also a movie that can teach you some valuable lessons. Watching Stuart Little in HD online will help you appreciate the messages and themes of the movie more clearly and deeply. Here are some of the things that you can learn from Stuart Little in HD online:

    - -
      -
    • Be yourself: Stuart Little is a mouse who is different from everyone else, but he doesn't let that stop him from being himself. He is confident, courageous, and kind, and he shows his personality and talents in everything he does. He also respects and accepts others for who they are, regardless of their differences. Watching Stuart Little in HD online will inspire you to be yourself and embrace your uniqueness.
    • -
    • Be adventurous: Stuart Little is a mouse who loves to explore and try new things. He is curious, adventurous, and resourceful, and he always finds ways to have fun and make friends. He also faces challenges and dangers with bravery and optimism, and he never gives up on his goals. Watching Stuart Little in HD online will encourage you to be adventurous and enjoy life.
    • -
    • Be loyal: Stuart Little is a mouse who cares deeply about his family and friends. He is loyal, faithful, and supportive, and he always stands by them in times of need. He also values his relationships and bonds with them over shared experiences and memories. Watching Stuart Little in HD online will remind you to be loyal and cherish your loved ones.
    • -
    - -

    How to Have Fun with Stuart Little in HD Online

    - -

    Stuart Little is a movie that can make you laugh, smile, and feel good. Watching Stuart Little in HD online can also be a great way to have fun with your family or friends. Here are some of the ways that you can have fun with Stuart Little in HD online:

    - -
      -
    • Play games: You can play games related to Stuart Little while watching the movie or after watching it. For example, you can play trivia games to test your knowledge of the movie, or you can play charades or Pictionary to act out or draw scenes or characters from the movie. You can also play board games or video games that feature Stuart Little or other animals.
    • -
    • Make crafts: You can make crafts inspired by Stuart Little while watching the movie or after watching it. For example, you can make a mouse mask or ears to wear while watching the movie, or you can make a mouse house or car out of cardboard or paper. You can also make other crafts that relate to the movie, such as a bird feeder, a boat, or a plane.
    • -
    • Cook snacks: You can cook snacks based on Stuart Little while watching the movie or after watching it. For example, you can make cheese crackers or sandwiches to eat while watching the movie, or you can make cookies or cupcakes decorated with mouse faces or tails. You can also make other snacks that suit the movie, such as popcorn, pizza, or fruit.
    • -
    - -

    Watching Stuart Little in HD online is a great way to spend your time and have fun. Whether you want to watch it alone or with others, you will be able to enjoy the movie in high quality and learn something from it. So grab your HD online player and watch Stuart Little in HD online today!

    -

    What You Can Expect from Stuart Little in HD Online

    - -

    Stuart Little is a movie that will surprise and delight you with its story, animation, cast, music, and message. Watching Stuart Little in HD online will give you the best possible experience of the movie and make you feel like you are part of it. Here are some of the things that you can expect from Stuart Little in HD online:

    - -
      -
    • A story that is funny, touching, and thrilling: Stuart Little is a movie that has a story that is full of humor, emotion, and action. The movie follows the adventures of Stuart, a mouse who is adopted by a human family and tries to fit in with his new brother and friends. Along the way, he faces many challenges and dangers, such as a jealous cat, a sinister falcon, and a scheming couple. The movie also explores themes such as family, friendship, identity, and courage. Watching Stuart Little in HD online will make you laugh, cry, and cheer for the mouse who proves that size doesn't matter when it comes to family and friendship.
    • -
    • An animation that is realistic, detailed, and expressive: Stuart Little is a movie that has an animation that is groundbreaking, stunning, and charming. The movie uses a combination of live-action and computer-generated imagery to create a realistic and seamless world where humans and animals coexist. The movie also uses motion capture technology to capture the movements and expressions of the actors who voice the animals, such as Michael J. Fox as Stuart and Nathan Lane as Snowbell. Watching Stuart Little in HD online will make you appreciate the animation and special effects that bring Stuart and his world to life.
    • -
    • A cast that is talented, likable, and diverse: Stuart Little is a movie that has a cast that is talented, likable, and diverse. The movie features a mix of actors who play the live-action roles and actors who voice the animated roles. The movie stars Michael J. Fox as Stuart, Geena Davis as Mrs. Little, Hugh Laurie as Mr. Little, Jonathan Lipnicki as George Little, Nathan Lane as Snowbell, Chazz Palminteri as Smokey, Steve Zahn as Monty, Bruno Kirby as Reginald Stout, Jennifer Tilly as Camille Stout, Jeffrey Jones as Uncle Crenshaw Little, Connie Ray as Aunt Tina Little, Allyce Beasley as Aunt Beatrice Little, Brian Doyle-Murray as Cousin Edgar Little Jr., Estelle Getty as Grandma Estelle Little Sr., Harold Gould as Grandpa Spencer Little Sr., and Jim Doughan as Detective Allen Sherman. Watching Stuart Little in HD online will let you appreciate the performances that bring these characters to life.

      How to Watch Stuart Little in HD Online for Free

      - -

      Stuart Little is a movie that you might want to watch in HD online without paying any money. However, finding a free and legal way to watch Stuart Little in HD online can be tricky and risky. There are many websites or apps that claim to offer Stuart Little in HD online for free, but they might be illegal, unsafe, or unreliable. They might contain viruses, malware, or spyware that can harm your device or steal your personal information. They might also have poor quality, broken links, or annoying ads that can ruin your viewing experience.

      -

      - -

      Fortunately, there are some ways that you can watch Stuart Little in HD online for free without breaking the law or risking your security. Here are some of the ways that you can watch Stuart Little in HD online for free:

      - -
        -
      • Free trials: Free trials are offers that let you use a streaming service for free for a limited time, usually between 7 to 30 days. You can sign up for a free trial with your email address and a payment method, and then cancel before the trial ends to avoid being charged. Some of the streaming services that offer free trials and have Stuart Little in HD online are Netflix, Stan, Amazon Prime Video, and YouTube Premium. You can use different email addresses and payment methods to sign up for multiple free trials and watch Stuart Little in HD online for free as many times as you want.
      • -
      • Library cards: Library cards are cards that let you borrow books and other materials from your local library. You can also use your library card to access digital services that your library subscribes to, such as Hoopla, Kanopy, or OverDrive. These services allow you to stream or download movies in HD online for free with your library card number and PIN. Some of these services might have Stuart Little in HD online in their catalog, depending on your library's availability and selection. You can check your library's website or app to see if they offer these services and if they have Stuart Little in HD online.
      • -
      • Educational accounts: Educational accounts are accounts that you get from your school or university that give you access to academic resources and tools. You can also use your educational account to access streaming services that offer discounts or free access for students or teachers, such as Spotify Premium Student or Apple TV+. These services might have Stuart Little in HD online in their library, depending on their content and partnerships. You can check your school's website or email to see if they offer these services and if they have Stuart Little in HD online.
      • -
      - -

      Watching Stuart Little in HD online for free is possible if you know where to look and how to use them. Whether you use free trials, library cards, or educational accounts, you will be able to watch Stuart Little in HD online for free legally and safely.

      -

      Conclusion

      - -

      Stuart Little is a movie that you will love to watch in HD online. It is a movie that has a great story, animation, cast, music, and message. It is also a movie that has many options, benefits, reviews, and activities that you can enjoy. It is also a movie that you can watch in HD online for free if you know how to do it. Stuart Little is a movie that will make you happy and satisfied.

      - -

      So what are you waiting for? Grab your HD online player and watch Stuart Little in HD online today. You will not regret it.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Heidelberg Sord User Manual A Complete Guide to Operating and Maintaining Your Printing Machine.md b/spaces/bioriAsaeru/text-to-voice/Heidelberg Sord User Manual A Complete Guide to Operating and Maintaining Your Printing Machine.md deleted file mode 100644 index 77e46a5bc32084d9b8d8d5c9f1db0030a16a4489..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Heidelberg Sord User Manual A Complete Guide to Operating and Maintaining Your Printing Machine.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Heidelberg Sord User Manual


      Download Ziphttps://urloso.com/2uyOXu



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/training_stats.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/training_stats.py deleted file mode 100644 index 26f467f9eaa074ee13de1cf2625cd7da44880847..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/training_stats.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. 
Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. 
- """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. 
- return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/bohmian/stock_intrinsic_value_calculator/README.md b/spaces/bohmian/stock_intrinsic_value_calculator/README.md deleted file mode 100644 index 39f4a9498ce6b48984ce0022b8615daba3d84dff..0000000000000000000000000000000000000000 --- a/spaces/bohmian/stock_intrinsic_value_calculator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stock Intrinsic Value Calculator -emoji: 🦀 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bunkalab/bunka-map/maps/bourdieu_priacy_politics.html b/spaces/bunkalab/bunka-map/maps/bourdieu_priacy_politics.html deleted file mode 100644 index bb7df91ea977ccbdda12c0c77f1748edadaf74c0..0000000000000000000000000000000000000000 --- a/spaces/bunkalab/bunka-map/maps/bourdieu_priacy_politics.html +++ /dev/null @@ -1,14 +0,0 @@ - - - -
      -
      - - \ No newline at end of file diff --git a/spaces/cahya/indonesian-whisperer/app/api.py b/spaces/cahya/indonesian-whisperer/app/api.py deleted file mode 100644 index 61dfb9c58efefe70d1789c22dded3f186f927709..0000000000000000000000000000000000000000 --- a/spaces/cahya/indonesian-whisperer/app/api.py +++ /dev/null @@ -1,165 +0,0 @@ -from fastapi import FastAPI, WebSocket -from fastapi.responses import HTMLResponse -from fastapi import Form, Depends, HTTPException, status -from transformers import pipeline, set_seed, AutoConfig, AutoTokenizer, AutoModelForCausalLM -import torch -import os -import time -import re -import json - -app = FastAPI() - -html = """ - - - - Chat - - -

      WebSocket Chat

      - - - -
    • -
        -
      - - - -""" - - -@app.get("/") -async def get(): - return HTMLResponse(html) - - -@app.get("/api/env") -async def env(): - environment_variables = "

      Environment Variables

      " - for name, value in os.environ.items(): - environment_variables += f"{name}: {value}
      " - return HTMLResponse(environment_variables) - - -@app.websocket("/api/ws") -async def websocket_endpoint(websocket: WebSocket): - await websocket.accept() - while True: - data = await websocket.receive_text() - await websocket.send_text(f"Message text was: {data}") - - -@app.post("/api/indochat/v1") -async def indochat(**kwargs): - return text_generate("indochat-tiny", kwargs) - - -@app.post("/api/text-generator/v1") -async def text_generate( - model_name: str = Form(default="", description="The model name"), - text: str = Form(default="", description="The Prompt"), - decoding_method: str = Form(default="Sampling", description="Decoding method"), - min_length: int = Form(default=50, description="Minimal length of the generated text"), - max_length: int = Form(default=250, description="Maximal length of the generated text"), - num_beams: int = Form(default=5, description="Beams number"), - top_k: int = Form(default=30, description="The number of highest probability vocabulary tokens to keep " - "for top-k-filtering"), - top_p: float = Form(default=0.95, description="If set to float < 1, only the most probable tokens with " - "probabilities that add up to top_p or higher are kept " - "for generation"), - temperature: float = Form(default=0.5, description="The Temperature of the softmax distribution"), - penalty_alpha: float = Form(default=0.5, description="Penalty alpha"), - repetition_penalty: float = Form(default=1.2, description="Repetition penalty"), - seed: int = Form(default=-1, description="Random Seed"), - max_time: float = Form(default=60.0, description="Maximal time in seconds to generate the text") -): - if seed >= 0: - set_seed(seed) - if decoding_method == "Beam Search": - do_sample = False - penalty_alpha = 0 - elif decoding_method == "Sampling": - do_sample = True - penalty_alpha = 0 - num_beams = 1 - else: - do_sample = False - num_beams = 1 - if repetition_penalty == 0.0: - min_penalty = 1.05 - max_penalty = 1.5 - repetition_penalty = max(min_penalty + (1.0 - temperature) * (max_penalty - min_penalty), 0.8) - prompt = f"User: {text}\nAssistant: " - input_ids = text_generator[model_name]["tokenizer"](prompt, return_tensors='pt').input_ids.to(0) - text_generator[model_name]["model"].eval() - print("Generating text...") - print(f"max_length: {max_length}, do_sample: {do_sample}, top_k: {top_k}, top_p: {top_p}, " - f"temperature: {temperature}, repetition_penalty: {repetition_penalty}, penalty_alpha: {penalty_alpha}") - time_start = time.time() - sample_outputs = text_generator[model_name]["model"].generate(input_ids, - penalty_alpha=penalty_alpha, - do_sample=do_sample, - num_beams=num_beams, - min_length=min_length, - max_length=max_length, - top_k=top_k, - top_p=top_p, - temperature=temperature, - repetition_penalty=repetition_penalty, - num_return_sequences=1, - max_time=max_time - ) - result = text_generator[model_name]["tokenizer"].decode(sample_outputs[0], skip_special_tokens=True) - time_end = time.time() - time_diff = time_end - time_start - print(f"result:\n{result}") - generated_text = result[len(prompt)+1:] - generated_text = generated_text[:generated_text.find("User:")] - return {"generated_text": generated_text, "processing_time": time_diff} - - -def get_text_generator(model_name: str, load_in_8bit: bool = False, device: str = "cpu"): - hf_auth_token = os.getenv("HF_AUTH_TOKEN", False) - print(f"hf_auth_token: {hf_auth_token}") - print(f"Loading model with device: {device}...") - tokenizer = AutoTokenizer.from_pretrained(model_name, 
use_auth_token=hf_auth_token) - model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id, - load_in_8bit=load_in_8bit, device_map="auto", use_auth_token=hf_auth_token) - # model.to(device) - print("Model loaded") - return model, tokenizer - - -def get_config(): - return json.load(open("config.json", "r")) - - -config = get_config() -device = "cuda" if torch.cuda.is_available() else "cpu" -text_generator = {} -for model_name in config["text-generator"]: - model, tokenizer = get_text_generator(model_name=config["text-generator"][model_name]["name"], - load_in_8bit=config["text-generator"][model_name]["load_in_8bit"], - device=device) - text_generator[model_name] = { - "model": model, - "tokenizer": tokenizer - } diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ema.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ema.py deleted file mode 100644 index 192b012186bab3d8a5380bc9b891da8eef0fd9fa..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ema.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -from torch import nn - -class LitEma(nn.Module): - def __init__(self, model, decay=0.9999, use_num_upates=True): - super().__init__() - if decay < 0.0 or decay > 1.0: - raise ValueError("Decay must be between 0 and 1") - - self.m_name2s_name = {} - self.register_buffer("decay", torch.tensor(decay, dtype=torch.float32)) - self.register_buffer( - "num_updates", - torch.tensor(0, dtype=torch.int) - if use_num_upates - else torch.tensor(-1, dtype=torch.int), - ) - - for name, p in model.named_parameters(): - if p.requires_grad: - # remove as '.'-character is not allowed in buffers - s_name = name.replace(".", "") - self.m_name2s_name.update({name: s_name}) - self.register_buffer(s_name, p.clone().detach().data) - - self.collected_params = [] - - def forward(self, model): - decay = self.decay - - if self.num_updates >= 0: - self.num_updates += 1 - decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates)) - - one_minus_decay = 1.0 - decay - - with torch.no_grad(): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - - for key in m_param: - if m_param[key].requires_grad: - sname = self.m_name2s_name[key] - shadow_params[sname] = shadow_params[sname].type_as(m_param[key]) - shadow_params[sname].sub_( - one_minus_decay * (shadow_params[sname] - m_param[key]) - ) - else: - assert not key in self.m_name2s_name - - def copy_to(self, model): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - for key in m_param: - if m_param[key].requires_grad: - m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data) - else: - assert not key in self.m_name2s_name - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. 
- Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. - """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/GimpPaletteFile.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/GimpPaletteFile.py deleted file mode 100644 index d388928945a0f6711de2b1c8d1ed50ce192a8219..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/GimpPaletteFile.py +++ /dev/null @@ -1,56 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read GIMP palette files -# -# History: -# 1997-08-23 fl Created -# 2004-09-07 fl Support GIMP 2.0 palette files. -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1997-2004. -# -# See the README file for information on usage and redistribution. -# - -import re - -from ._binary import o8 - - -class GimpPaletteFile: - """File handler for GIMP's palette format.""" - - rawmode = "RGB" - - def __init__(self, fp): - self.palette = [o8(i) * 3 for i in range(256)] - - if fp.readline()[:12] != b"GIMP Palette": - msg = "not a GIMP palette file" - raise SyntaxError(msg) - - for i in range(256): - s = fp.readline() - if not s: - break - - # skip fields and comment lines - if re.match(rb"\w+:|#", s): - continue - if len(s) > 100: - msg = "bad palette file" - raise SyntaxError(msg) - - v = tuple(map(int, s.split()[:3])) - if len(v) != 3: - msg = "bad palette entry" - raise ValueError(msg) - - self.palette[i] = o8(v[0]) + o8(v[1]) + o8(v[2]) - - self.palette = b"".join(self.palette) - - def getpalette(self): - return self.palette, self.rawmode diff --git a/spaces/carterw/evolutionary-playlist-builder/src/evolutionary_alrogithm.py b/spaces/carterw/evolutionary-playlist-builder/src/evolutionary_alrogithm.py deleted file mode 100644 index 1c2d06350d85430794e884bd8bcbd8b4d9807825..0000000000000000000000000000000000000000 --- a/spaces/carterw/evolutionary-playlist-builder/src/evolutionary_alrogithm.py +++ /dev/null @@ -1,372 +0,0 @@ -import pandas as pd -import numpy as np -from scipy.spatial.distance import cosine -import scikits.bootstrap as bootstrap -import gradio as gr -import plotly.express as px -from plotly.graph_objects import Figure -import matplotlib.pyplot as plt -from copy import deepcopy -from random import choice -from typing import List, Set -import itertools -from sklearn.preprocessing import StandardScaler -from src.spotipy_utils import SpotifyTrack -# plt.switch_backend('Agg') # NOTE: for traversal direction tracking viz - -# Initial data cleaning. -SONG_DF = pd.read_csv("data/SpotifyAudioFeaturesApril2019.csv") -REDUCED_DF = pd.read_csv("data/reduced_audio_profiles.csv") -AUDIO_DF = SONG_DF.drop(["artist_name", "track_id", "track_name", "duration_ms", "key", "mode", "time_signature", "popularity"], axis=1) # TODO: Normalize these feature-wise! -AUDIO_FEATURE_NAMES = AUDIO_DF.columns - -history_df = pd.DataFrame({"x": REDUCED_DF.iloc[..., 0], - "y": REDUCED_DF.iloc[..., 1], - "status": ["none" for i in range(REDUCED_DF.shape[0])], - "name": SONG_DF["track_name"].values, - "artist": SONG_DF["artist_name"].values, - "key": SONG_DF["key"].values}) - -# Symbols for track statuses. 
-THUMBS_UP = "👍" -THUMBS_DOWN = "👎" -ADD_TO_PLAYLIST = "➕ to ⏯️" -COLOR_MAP_DISCRETE = {"none": 'grey', THUMBS_UP: "blue", THUMBS_DOWN: 'red', ADD_TO_PLAYLIST: "green"} - -# Normalizing audio profile features to be able to control mutation size more precisely. -scaler = StandardScaler() -audio_features = scaler.fit_transform(AUDIO_DF.values) - -sample_inds = list(np.arange(audio_features.shape[0])) # index list for easy sampling - -# Mutation and crossover options. -MUTATION_OPTIONS = ["None", "Random", "Differential"] -CROSSOVER_OPTIONS = ["None", "Two-Point"] - - -class Individual: - """ Individual representing a track in the Evolutionary algorithm population. - """ - def __init__(self, song_index:int=None, spotify_track=None): - """Constructor for an Individual. - - Args: - song_index (int): Index of the track in the dataset. - """ - - if song_index: - self.index = song_index - self.spotify_id = SONG_DF.loc[song_index, ["track_id"]][0] - self.spotify_track = SpotifyTrack(self.spotify_id) # Init a spotify track instance. - self.name = SONG_DF.loc[song_index, ["track_name"]][0] - self.artist = SONG_DF.loc[song_index, ["artist_name"]][0] - self.genome = audio_features[song_index,...] # NOTE: For now, the only genome used is the audio profile of the track. - elif spotify_track: - self.index = None - self.spotify_track = spotify_track - self.name = self.spotify_track.get_name() - self.artist = self.spotify_track.get_artist() - self.genome = scaler.transform(self.spotify_track.get_track_audio_features().reshape(1, -1)).reshape(1, -1) - self.status = None - - - def get_image_url(self) -> str: - """ Get the image for the track available on through the API. - - Returns: - str: URL to the track image. - """ - return self.spotify_track.track_image - - - def get_preview_url(self) -> str: - """ Get the preview audio for the track available on through the API. - - Returns: - str: URL to the preview audio. - """ - return self.spotify_track.track_preview_url - - - def get_track_as_block(self) -> str: - """Get track name and artist name in a string. - - Returns: - str: Track name and artist name. - """ - return gr.TextArea(value=f"{self.name} by {self.artist}") - - - def get_track_id(self) -> str: - """Get track ID for Spotify API. - - Returns: - str: Spotify track ID. - """ - return self.spotify_track.track_id - - -class Population: - """ Population of tracks to be evolved through interaction with a user in an effort to build a curated playlist. - """ - def __init__(self, num_songs:int, mutation_size:float, crossover_function=None, mutation_function=None): - """ Constructor of the population initializing all the tracks in the first generation. - - Args: - num_songs (int): Number of tracks in the population. - mutation_size (float): Value to scale mutations. - crossover_function (function, optional): Function to perform crossover with. Defaults to None. - mutation_function (function, optional): Function to perform mutation with. Defaults to None. 
- """ - self.seed_track = None - self.size = num_songs - self.init_inds = np.random.choice(sample_inds, self.size) - self.pop = [] - self.search_results = None - self.search_seed = None - self.playlist_selected_songs = [] - self.playlist_inds = set() - self.user_key = None - self.mutation_function = mutation_function - self.crossover_function = crossover_function - self.mutation_size = mutation_size - self.history_df = deepcopy(history_df) - self.distance_matrices = [] - - def mutate(self, thumbs_up:List[Individual], thumbs_down:List[Individual], added_songs:List[Individual]): - """ Method to call assigned mutation function through. - - Args: - thumbs_up (List[Individual]): List of individuals given the thumbs up. - thumbs_down (List[Individual]): List of individuals given the thumbs down. - added_songs (List[Individual]): List of individuals added to the playlist. - """ - self.mutation_function(self, thumbs_up, thumbs_down, added_songs) - - - def crossover(self, thumbs_up:List[Individual], thumbs_down:List[Individual], added_songs:List[Individual]): - """ Method to call assigned crossover function through. - - Args: - thumbs_up (List[Individual]): List of individuals given the thumbs up. - thumbs_down (List[Individual]): List of individuals given the thumbs down. - added_songs (List[Individual]): List of individuals added to the playlist. - """ - self.crossover_function(self, thumbs_up, thumbs_down, added_songs) - - - def add_to_playlist(self, track:Individual): - """ Add a given track to the population's playlist. - - Args: - track (Individual): Track that the user has chosen to add to the playlist. - """ - self.playlist_selected_songs.append(track) - self.playlist_inds.add(track.index) - - - def get_tracks_with_status(self, status:str): - """ Get tracks in the current population matching some status. - - Args: - status (str): Status to be matched on. - """ - return list(filter(lambda x: x.status == status, self.pop)) - - - def get_playlist_block_value(self) -> str: - """ Get population represented as a string. - - Returns: - str: Tracks listed with name and artist name, one per line. - """ - val_string = "" - for track in self.playlist_selected_songs: - val_string += f"{track.name} by {track.artist}\n" - return val_string - - - def reinitialize_pop(self): - """ Randomly re-initialize the population. - """ - self.init_inds = np.random.choice(sample_inds, self.size) - self.pop = [Individual(ind) for ind in self.init_inds] - - - def generate_traversal_viz(self) -> Figure: - """ Generate a scatter plot marking the tracks which have been used so far and their categories. - - Returns: - Figure: Plotly scatter plot. - """ - return px.scatter(self.history_df, "x", "y", color="status", color_discrete_map=COLOR_MAP_DISCRETE,labels={ - "x": "", - "y": "", - },) - - - def generate_direction_tracking(self): - multidim_distance = np.stack(self.distance_matrices, axis=2) - - # Compute distribution of cosine similarity amongst distance vectors between individuals of the same generation. 
- sibling_directional_similarity = np.zeros((multidim_distance.shape[2], multidim_distance.shape[0]*(multidim_distance.shape[0]-1))) - for t in range(multidim_distance.shape[2]): - cosine_similarities = [] - for i in range(multidim_distance.shape[0]): - for j in range(multidim_distance.shape[0]): - if i == j: - continue - - cosine_similarity = cosine(multidim_distance[i,:,t], multidim_distance[j,:,t]) - - cosine_similarities.append(cosine_similarity) - - sibling_directional_similarity[t, :] = np.array(cosine_similarities) - num_obs = sibling_directional_similarity.shape[0] - # Compute bootstrapped CI - data_ci = [bootstrap.ci(sibling_directional_similarity[k, :]) for k in range(num_obs)] - means = [ci.sum() / 2 for ci in data_ci] - lower = [ci[0] for ci in data_ci] - upper = [ci[1] for ci in data_ci] - - plt.plot(means, label = "Distribution of Cosine Similarity for Paternal Distance Vectors Among Siblings Over Time") - plt.fill_between(np.arange(num_obs), upper, lower, alpha = 0.5) - plt.savefig("cosine_dist_over_time.png") - - - def update_population_history(self, thumbs_up:List[Individual], thumbs_down:List[Individual], added_songs:List[Individual]): - """ Update the DF keeping track of the traversal history. - - Args: - thumbs_up (List[Individual]): List of individuals given the thumbs up. - thumbs_down (List[Individual]): List of individuals given the thumbs down. - added_songs (List[Individual]): List of individuals added to the playlist. - """ - for track in thumbs_up: - self.history_df.at[track.index, "status"] = THUMBS_UP - for track in thumbs_down: - self.history_df.at[track.index, "status"] = THUMBS_DOWN - for track in added_songs: - self.history_df.at[track.index, "status"] = ADD_TO_PLAYLIST - - -def get_closest_song_to_genome(population: Population, genome: np.ndarray, new_track_inds: Set[int]) -> int: - """ Find the song in the audio profile matrix that is most similar to the supplied one. - - Args: - population (Population): Population of tracks. - genome (np.ndarray): Vector of audio profile features. - new_track_inds (Set[int]): Set of the track indices being added for the new generation. - - Returns: - int: Index of the most similar track in the audio matrix that is not already in the population or added to the playlist. - """ - track_distances = np.linalg.norm(genome - audio_features, axis=1) # Compute euclidean distance from desired genome and all other audio profiles. - sorted_distance_inds = np.argsort(track_distances) - current_distance_index = 0 - while sorted_distance_inds[current_distance_index] in new_track_inds or sorted_distance_inds[current_distance_index] in population.playlist_inds: - current_distance_index += 1 - - child_index = sorted_distance_inds[current_distance_index] - child = Individual(child_index) - new_track_inds.add(child_index) - return child - - -def simple_crossover(population: Population, thumbs_up:List[Individual], thumbs_down:List[Individual], added_songs:List[Individual]): - """ Performs simple two point crossover. - - Args: - population (Population): Population to assign new children to. - thumbs_up (List[Individual]): List of individuals given the thumbs up. - thumbs_down (List[Individual]): List of individuals given the thumbs down. - added_songs (List[Individual]): List of individuals added to the playlist. - """ - liked_tracks = list(itertools.chain(*[thumbs_up, added_songs])) - if len(liked_tracks) < 2: # Not enough tracks to perform crossover. 
- return - - new_track_inds = set() - population_index = 0 - while len(new_track_inds) < population.size: - track_a, track_b = np.random.choice(liked_tracks, 2) - crossover_point1, crossover_point2 = sorted(np.random.randint(0,len(track_a.genome),2)) - - track_a.genome[crossover_point1:crossover_point2+1] = track_b.genome[crossover_point1:crossover_point2+1] - track_b.genome[crossover_point1:crossover_point2+1] = track_a.genome[crossover_point1:crossover_point2+1] - - child1 = get_closest_song_to_genome(population, track_a.genome, new_track_inds) - population.pop[population_index] = child1 - population_index += 1 - - if population_index < population.size: - child2 = get_closest_song_to_genome(population, track_b.genome, new_track_inds) - population.pop[population_index] = child2 - population_index += 1 - - -def simple_mutation(population: Population, thumbs_up:List[Individual], thumbs_down:List[Individual], added_songs:List[Individual]): - """ Perform simple random mutation on a population of individuals. - - Args: - population (Population): Population to assign new children to. - thumbs_up (List[Individual]): List of individuals given the thumbs up. - thumbs_down (List[Individual]): List of individuals given the thumbs down. - added_songs (List[Individual]): List of individuals added to the playlist. - """ - liked_tracks = list(itertools.chain(*[thumbs_up, added_songs])) - new_track_inds = set() - # Init population distance matrix - distance_matrix = np.zeros((population.size, population.size)) - for i in range(population.size): - liked_track = choice(liked_tracks) - mutated_track_genome = liked_track.genome + np.random.uniform(size=(population.size))*population.mutation_size - child = get_closest_song_to_genome(population, mutated_track_genome, new_track_inds) - # Record distance vector for parent-child. - distance_matrix[i, :] = child.genome - liked_track.genome - if len(population.pop) <= i: - population.pop.append(child) - else: - population.pop[i] = child - # Add to distance matrix tracking list. - population.distance_matrices.append(distance_matrix) - - -def differential_mutation(population: Population, thumbs_up:List[Individual], thumbs_down:List[Individual], added_songs:List[Individual]): - """ Perform differential mutation based on the user's decisions. - - Args: - population (Population): Population to assign new children to. - thumbs_up (List[Individual]): List of individuals given the thumbs up. - thumbs_down (List[Individual]): List of individuals given the thumbs down. - added_songs (List[Individual]): List of individuals added to the playlist. - """ - new_track_inds = set() - # Init population distance matrix - distance_matrix = np.zeros((population.size, population.size)) - for i in range(population.size): - if thumbs_down: - disliked_track = choice(thumbs_down) - else: - disliked_track = Individual(np.random.choice(sample_inds, 1)[0]) - - if thumbs_up: - seed_track = choice(thumbs_up) - else: - seed_track = Individual(np.random.choice(sample_inds, 1)[0]) - - if added_songs: - attracting_track = choice(added_songs) - elif population.playlist_selected_songs: - attracting_track = choice(population.playlist_selected_songs) - else: - attracting_track = Individual(np.random.choice(sample_inds, 1)[0]) - - gene_distances = attracting_track.genome - disliked_track.genome - resulting_genome = seed_track.genome + gene_distances*population.mutation_size - child = get_closest_song_to_genome(population, resulting_genome, new_track_inds) - # Record distance vector for parent-child. 
- distance_matrix[i, :] = child.genome - attracting_track.genome - population.pop[i] = child - # Add to distance matrix tracking list. - population.distance_matrices.append(distance_matrix) diff --git a/spaces/cbensimon/streamlit-ui-gallery/README.md b/spaces/cbensimon/streamlit-ui-gallery/README.md deleted file mode 100644 index c18e54e1e2a50fe38d58c2dd178144ae825cba99..0000000000000000000000000000000000000000 --- a/spaces/cbensimon/streamlit-ui-gallery/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Streamlit UI Gallery -emoji: ⚡ -colorFrom: green -colorTo: indigo -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. \ No newline at end of file diff --git a/spaces/cfwef/gpt/crazy_functions/test_project/python/dqn/__init__.py b/spaces/cfwef/gpt/crazy_functions/test_project/python/dqn/__init__.py deleted file mode 100644 index 4ae42872c812a7c8a18dff002086c7e6e935f580..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/crazy_functions/test_project/python/dqn/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from stable_baselines3.dqn.dqn import DQN -from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy diff --git a/spaces/changlisheng/shangChat/assets/custom.css b/spaces/changlisheng/shangChat/assets/custom.css deleted file mode 100644 index 63df142d63940fc47b2aa7e70bd4441f8223fd8a..0000000000000000000000000000000000000000 --- a/spaces/changlisheng/shangChat/assets/custom.css +++ /dev/null @@ -1,250 +0,0 @@ -:root { - --chatbot-color-light: #121111; - --chatbot-color-dark: #121111; -} - -/* 覆盖gradio的页脚信息QAQ */ -footer { - display: none !important; -} -#footer{ - text-align: center; -} -#footer div{ - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.85; -} - -/* user_info */ -#user_info { - white-space: nowrap; - margin-top: -1.3em !important; - padding-left: 112px !important; -} -#user_info p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} - -/* usage_display */ -#usage_display { - position: relative; - margin: 0; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - padding: .5em 1em; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: 0 1em; - height: 20px; - border-radius: 10px; - overflow: 
hidden; -} -.progress { - background-color: var(--block-title-background-fill);; - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -@media (prefers-color-scheme: light) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - color: #000000 !important; - } - [data-testid = "bot"] { - background-color: #333333 !important; - } - [data-testid = "user"] { - background-color: #95EC69 !important; - } -} -/* 暗色 */ -@media (prefers-color-scheme: dark) { - #chuanhu_chatbot { - background-color: #333333 !important; - color: #333333 !important; - } - [data-testid = "bot"] { - background-color: #333333 !important; - } - [data-testid = "user"] { - background-color: #26B561 !important; - } - body { - background-color: var(--neutral-950) !important; - } -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* 
Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/theme.py b/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/theme.py deleted file mode 100644 index 1a186aacabf5d982cbe9426a198f2a0b4bdef9d1..0000000000000000000000000000000000000000 --- a/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/theme.py +++ /dev/null @@ -1,152 +0,0 @@ -import gradio as gr - -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# 
gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - -def adjust_theme(): - try: - color_er = gr.themes.utils.colors.pink - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - 
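A short usage sketch (not part of the diffed theme.py): a theme built with the same `gr.themes.Default(...).set(...)` pattern as `adjust_theme()` can be passed directly to `gr.Blocks`. The override keys below are taken from the file itself; the demo contents are only illustrative.

```python
import gradio as gr

custom_theme = gr.themes.Default(
    primary_hue=gr.themes.utils.colors.orange,
    neutral_hue=gr.themes.utils.colors.gray,
).set(
    button_transition="none",      # same override keys as adjust_theme() uses
    block_shadow="*shadow_drop",
)

with gr.Blocks(theme=custom_theme) as demo:
    gr.Markdown("Theme preview")   # placeholder content, not from the original app

if __name__ == "__main__":
    demo.launch()
```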
button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - except: - set_theme = None; print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - -advanced_css = """ -/* 设置表格的外边距为1em,内部单元格之间边框合并,空单元格显示. */ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -/* 设置表格单元格的内边距为5px,边框粗细为1.2px,颜色为--border-color-primary. */ -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -/* 设置表头背景颜色为rgba(175,184,193,0.2),透明度为0.2. */ -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -/* 设置表头单元格的内边距为0.5em和0.2em. */ -.markdown-body thead th { - padding: .5em .2em; -} - -/* 去掉列表前缀的默认间距,使其与文本线对齐. */ -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* 设定聊天气泡的样式,包括圆角、最大宽度和阴影等. */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* 行内代码的背景设为淡灰色,设定圆角和间距. */ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 设定代码块的样式,包括背景颜色、内、外边距、圆角。 */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} -""" \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/eval_rag.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/eval_rag.py deleted file mode 100644 index a8e7abbca6ce298b308764282aa4f8071b222cd5..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/eval_rag.py +++ /dev/null @@ -1,320 +0,0 @@ -""" Evaluation script for RAG models.""" - -import argparse -import ast -import logging -import os -import sys - -import pandas as pd -import torch -from tqdm import tqdm - -from transformers import BartForConditionalGeneration, RagRetriever, RagSequenceForGeneration, RagTokenForGeneration -from transformers import logging as transformers_logging - - -sys.path.append(os.path.join(os.getcwd())) # noqa: E402 # isort:skip -from utils_rag import exact_match_score, f1_score # noqa: E402 # isort:skip - - -logger = logging.getLogger(__name__) -logging.basicConfig(level=logging.INFO) - -transformers_logging.set_verbosity_info() - - -def infer_model_type(model_name_or_path): - if "token" in model_name_or_path: - return "rag_token" - if "sequence" in model_name_or_path: - return "rag_sequence" - if "bart" in model_name_or_path: - return "bart" - return None - - -def metric_max_over_ground_truths(metric_fn, prediction, ground_truths): - return 
max(metric_fn(prediction, gt) for gt in ground_truths) - - -def get_scores(args, preds_path, gold_data_path): - hypos = [line.strip() for line in open(preds_path, "r").readlines()] - answers = [] - - if args.gold_data_mode == "qa": - data = pd.read_csv(gold_data_path, sep="\t", header=None) - for answer_list in data[1]: - ground_truths = ast.literal_eval(answer_list) - answers.append(ground_truths) - else: - references = [line.strip() for line in open(gold_data_path, "r").readlines()] - answers = [[reference] for reference in references] - - f1 = em = total = 0 - for prediction, ground_truths in zip(hypos, answers): - total += 1 - em += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths) - f1 += metric_max_over_ground_truths(f1_score, prediction, ground_truths) - - em = 100.0 * em / total - f1 = 100.0 * f1 / total - - logger.info(f"F1: {f1:.2f}") - logger.info(f"EM: {em:.2f}") - - -def get_precision_at_k(args, preds_path, gold_data_path): - k = args.k - hypos = [line.strip() for line in open(preds_path, "r").readlines()] - references = [line.strip() for line in open(gold_data_path, "r").readlines()] - - em = total = 0 - for hypo, reference in zip(hypos, references): - hypo_provenance = set(hypo.split("\t")[:k]) - ref_provenance = set(reference.split("\t")) - total += 1 - em += len(hypo_provenance & ref_provenance) / k - - em = 100.0 * em / total - logger.info(f"Precision@{k}: {em: .2f}") - - -def evaluate_batch_retrieval(args, rag_model, questions): - def strip_title(title): - if title.startswith('"'): - title = title[1:] - if title.endswith('"'): - title = title[:-1] - return title - - retriever_input_ids = rag_model.retriever.question_encoder_tokenizer.batch_encode_plus( - questions, - return_tensors="pt", - padding=True, - truncation=True, - )["input_ids"].to(args.device) - - question_enc_outputs = rag_model.rag.question_encoder(retriever_input_ids) - question_enc_pool_output = question_enc_outputs[0] - - result = rag_model.retriever( - retriever_input_ids, - question_enc_pool_output.cpu().detach().to(torch.float32).numpy(), - prefix=rag_model.rag.generator.config.prefix, - n_docs=rag_model.config.n_docs, - return_tensors="pt", - ) - all_docs = rag_model.retriever.index.get_doc_dicts(result.doc_ids) - provenance_strings = [] - for docs in all_docs: - provenance = [strip_title(title) for title in docs["title"]] - provenance_strings.append("\t".join(provenance)) - return provenance_strings - - -def evaluate_batch_e2e(args, rag_model, questions): - with torch.no_grad(): - inputs_dict = rag_model.retriever.question_encoder_tokenizer.batch_encode_plus( - questions, return_tensors="pt", padding=True, truncation=True - ) - - input_ids = inputs_dict.input_ids.to(args.device) - attention_mask = inputs_dict.attention_mask.to(args.device) - outputs = rag_model.generate( # rag_model overwrites generate - input_ids, - attention_mask=attention_mask, - num_beams=args.num_beams, - min_length=args.min_length, - max_length=args.max_length, - early_stopping=False, - num_return_sequences=1, - bad_words_ids=[[0, 0]], # BART likes to repeat BOS tokens, dont allow it to generate more than one - ) - answers = rag_model.retriever.generator_tokenizer.batch_decode(outputs, skip_special_tokens=True) - - if args.print_predictions: - for q, a in zip(questions, answers): - logger.info("Q: {} - A: {}".format(q, a)) - - return answers - - -def get_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model_type", - choices=["rag_sequence", "rag_token", "bart"], - type=str, - 
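As an aside to the scoring code above (not part of the diffed eval_rag.py): `get_scores()` keeps, for every prediction, the best score over all listed ground truths via `metric_max_over_ground_truths()`. A minimal self-contained sketch of that aggregation, with a simplified stand-in for `utils_rag.exact_match_score`:

```python
def exact_match(prediction: str, ground_truth: str) -> float:
    # simplified stand-in for utils_rag.exact_match_score
    return float(prediction.strip().lower() == ground_truth.strip().lower())

def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
    # keep the best score over all acceptable answers (same pattern as above)
    return max(metric_fn(prediction, gt) for gt in ground_truths)

hypos = ["Paris", "1969"]
answers = [["Paris", "paris, France"], ["20 July 1969", "1969"]]
em = sum(metric_max_over_ground_truths(exact_match, p, gts)
         for p, gts in zip(hypos, answers))
print(f"EM: {100.0 * em / len(hypos):.2f}")   # EM: 100.00
```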
help=( - "RAG model type: rag_sequence, rag_token or bart, if none specified, the type is inferred from the" - " model_name_or_path" - ), - ) - parser.add_argument( - "--index_name", - default=None, - choices=["exact", "compressed", "legacy"], - type=str, - help="RAG model retriever type", - ) - parser.add_argument( - "--index_path", - default=None, - type=str, - help="Path to the retrieval index", - ) - parser.add_argument("--n_docs", default=5, type=int, help="Number of retrieved docs") - parser.add_argument( - "--model_name_or_path", - default=None, - type=str, - required=True, - help="Path to pretrained checkpoints or model identifier from huggingface.co/models", - ) - parser.add_argument( - "--eval_mode", - choices=["e2e", "retrieval"], - default="e2e", - type=str, - help=( - "Evaluation mode, e2e calculates exact match and F1 of the downstream task, retrieval calculates" - " precision@k." - ), - ) - parser.add_argument("--k", default=1, type=int, help="k for the precision@k calculation") - parser.add_argument( - "--evaluation_set", - default=None, - type=str, - required=True, - help="Path to a file containing evaluation samples", - ) - parser.add_argument( - "--gold_data_path", - default=None, - type=str, - required=True, - help="Path to a tab-separated file with gold samples", - ) - parser.add_argument( - "--gold_data_mode", - default="qa", - type=str, - choices=["qa", "ans"], - help=( - "Format of the gold data file" - "qa - a single line in the following format: question [tab] answer_list" - "ans - a single line of the gold file contains the expected answer string" - ), - ) - parser.add_argument( - "--predictions_path", - type=str, - default="predictions.txt", - help="Name of the predictions file, to be stored in the checkpoints directory", - ) - parser.add_argument( - "--eval_all_checkpoints", - action="store_true", - help="Evaluate all checkpoints starting with the same prefix as model_name ending and ending with step number", - ) - parser.add_argument( - "--eval_batch_size", - default=8, - type=int, - help="Batch size per GPU/CPU for evaluation.", - ) - parser.add_argument( - "--recalculate", - help="Recalculate predictions even if the prediction file exists", - action="store_true", - ) - parser.add_argument( - "--num_beams", - default=4, - type=int, - help="Number of beams to be used when generating answers", - ) - parser.add_argument("--min_length", default=1, type=int, help="Min length of the generated answers") - parser.add_argument("--max_length", default=50, type=int, help="Max length of the generated answers") - - parser.add_argument( - "--print_predictions", - action="store_true", - help="If True, prints predictions while evaluating.", - ) - parser.add_argument( - "--print_docs", - action="store_true", - help="If True, prints docs retried while generating.", - ) - args = parser.parse_args() - args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - return args - - -def main(args): - model_kwargs = {} - if args.model_type is None: - args.model_type = infer_model_type(args.model_name_or_path) - assert args.model_type is not None - if args.model_type.startswith("rag"): - model_class = RagTokenForGeneration if args.model_type == "rag_token" else RagSequenceForGeneration - model_kwargs["n_docs"] = args.n_docs - if args.index_name is not None: - model_kwargs["index_name"] = args.index_name - if args.index_path is not None: - model_kwargs["index_path"] = args.index_path - else: - model_class = BartForConditionalGeneration - - checkpoints = ( - [f.path for f 
in os.scandir(args.model_name_or_path) if f.is_dir()] - if args.eval_all_checkpoints - else [args.model_name_or_path] - ) - - logger.info("Evaluate the following checkpoints: %s", checkpoints) - - score_fn = get_scores if args.eval_mode == "e2e" else get_precision_at_k - evaluate_batch_fn = evaluate_batch_e2e if args.eval_mode == "e2e" else evaluate_batch_retrieval - - for checkpoint in checkpoints: - if os.path.exists(args.predictions_path) and (not args.recalculate): - logger.info("Calculating metrics based on an existing predictions file: {}".format(args.predictions_path)) - score_fn(args, args.predictions_path, args.gold_data_path) - continue - - logger.info("***** Running evaluation for {} *****".format(checkpoint)) - logger.info(" Batch size = %d", args.eval_batch_size) - logger.info(" Predictions will be stored under {}".format(args.predictions_path)) - - if args.model_type.startswith("rag"): - retriever = RagRetriever.from_pretrained(checkpoint, **model_kwargs) - model = model_class.from_pretrained(checkpoint, retriever=retriever, **model_kwargs) - model.retriever.init_retrieval() - else: - model = model_class.from_pretrained(checkpoint, **model_kwargs) - model.to(args.device) - - with open(args.evaluation_set, "r") as eval_file, open(args.predictions_path, "w") as preds_file: - questions = [] - for line in tqdm(eval_file): - questions.append(line.strip()) - if len(questions) == args.eval_batch_size: - answers = evaluate_batch_fn(args, model, questions) - preds_file.write("\n".join(answers) + "\n") - preds_file.flush() - questions = [] - if len(questions) > 0: - answers = evaluate_batch_fn(args, model, questions) - preds_file.write("\n".join(answers)) - preds_file.flush() - - score_fn(args, args.predictions_path, args.gold_data_path) - - -if __name__ == "__main__": - args = get_args() - main(args) diff --git a/spaces/chilge/taoli/app.py b/spaces/chilge/taoli/app.py deleted file mode 100644 index 59755594ff4af51cb833e5d9f7a38f582241e8c6..0000000000000000000000000000000000000000 --- a/spaces/chilge/taoli/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import io - -import gradio as gr -import librosa -import numpy as np -import soundfile -import torch -from inference.infer_tool import Svc -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) - -model_name = "logs/32k/G_38000.pth" -config_name = "configs/config.json" - -svc_model = Svc(model_name, config_name) -sid_map = { - "桃李": "taoli" -} - - -def vc_fn(sid, input_audio, vc_transform): - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - duration = audio.shape[0] / sampling_rate - if duration > 45: - return "请上传小于45s的音频,需要转换长音频请本地进行转换", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - print(audio.shape) - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, audio, 16000, format="wav") - out_wav_path.seek(0) - - sid = sid_map[sid] - out_audio, out_sr = svc_model.infer(sid, vc_transform, out_wav_path) - _audio = out_audio.cpu().numpy() - return "Success", (32000, _audio) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("Basic"): - gr.Markdown(value=""" - - - i7000如果要在本地使用该demo,请使用git lfs clone 该仓库,安装requirements.txt后运行app.py即可 - - 项目改写基于 https://huggingface.co/spaces/innnky/nyaru-svc-3.0 - - 
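A standalone sketch (not part of the diffed app.py) of the preprocessing performed in `vc_fn()` above: integer PCM is scaled to float32, stereo is downmixed to mono, audio is resampled to 16 kHz and written to an in-memory WAV. It assumes numpy, librosa and soundfile are installed, as in the original file.

```python
import io

import librosa
import numpy as np
import soundfile

def prepare_wav(audio: np.ndarray, sampling_rate: int) -> io.BytesIO:
    # integer PCM -> float32 in [-1, 1] (assumes an integer dtype, as in vc_fn)
    audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
    if audio.ndim > 1:                         # stereo -> mono
        audio = librosa.to_mono(audio.transpose(1, 0))
    if sampling_rate != 16000:                 # model expects 16 kHz input
        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
    buf = io.BytesIO()
    soundfile.write(buf, audio, 16000, format="wav")
    buf.seek(0)
    return buf
```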
本地合成可以删除26、27两行代码以解除合成45s长度限制""") - sid = gr.Dropdown(label="音色", choices=["桃李"], value="taoli") - vc_input3 = gr.Audio(label="上传音频(长度小于45秒)") - vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - vc_submit = gr.Button("转换", variant="primary") - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform], [vc_output1, vc_output2]) - - app.launch() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/networking.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/networking.py deleted file mode 100644 index 83549a3c4d40aebf3bcfafa343486aa0a2848333..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/networking.py +++ /dev/null @@ -1,208 +0,0 @@ -""" -Defines helper methods useful for setting up ports, launching servers, and -creating tunnels. -""" -from __future__ import annotations - -import os -import socket -import threading -import time -import warnings -from typing import TYPE_CHECKING - -import requests -import uvicorn - -from gradio.exceptions import ServerFailedToStartError -from gradio.routes import App -from gradio.tunneling import Tunnel - -if TYPE_CHECKING: # Only import for type checking (to avoid circular imports). - from gradio.blocks import Blocks - -# By default, the local server will try to open on localhost, port 7860. -# If that is not available, then it will try 7861, 7862, ... 7959. -INITIAL_PORT_VALUE = int(os.getenv("GRADIO_SERVER_PORT", "7860")) -TRY_NUM_PORTS = int(os.getenv("GRADIO_NUM_PORTS", "100")) -LOCALHOST_NAME = os.getenv("GRADIO_SERVER_NAME", "127.0.0.1") -GRADIO_API_SERVER = "https://api.gradio.app/v2/tunnel-request" - - -class Server(uvicorn.Server): - def install_signal_handlers(self): - pass - - def run_in_thread(self): - self.thread = threading.Thread(target=self.run, daemon=True) - self.thread.start() - start = time.time() - while not self.started: - time.sleep(1e-3) - if time.time() - start > 5: - raise ServerFailedToStartError( - "Server failed to start. Please check that the port is available." - ) - - def close(self): - self.should_exit = True - self.thread.join() - - -def get_first_available_port(initial: int, final: int) -> int: - """ - Gets the first open port in a specified range of port numbers - Parameters: - initial: the initial value in the range of port numbers - final: final (exclusive) value in the range of port numbers, should be greater than `initial` - Returns: - port: the first open port in the range - """ - for port in range(initial, final): - try: - s = socket.socket() # create a socket object - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - s.bind((LOCALHOST_NAME, port)) # Bind to the port - s.close() - return port - except OSError: - pass - raise OSError( - f"All ports from {initial} to {final - 1} are in use. Please close a port." 
- ) - - -def configure_app(app: App, blocks: Blocks) -> App: - auth = blocks.auth - if auth is not None: - if not callable(auth): - app.auth = {account[0]: account[1] for account in auth} - else: - app.auth = auth - else: - app.auth = None - app.blocks = blocks - app.cwd = os.getcwd() - app.favicon_path = blocks.favicon_path - app.tokens = {} - return app - - -def start_server( - blocks: Blocks, - server_name: str | None = None, - server_port: int | None = None, - ssl_keyfile: str | None = None, - ssl_certfile: str | None = None, - ssl_keyfile_password: str | None = None, - app_kwargs: dict | None = None, -) -> tuple[str, int, str, App, Server]: - """Launches a local server running the provided Interface - Parameters: - blocks: The Blocks object to run on the server - server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. - server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT. - auth: If provided, username and password (or list of username-password tuples) required to access the Blocks. Can also provide function that takes username and password and returns True if valid login. - ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https. - ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided. - ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https. - app_kwargs: Additional keyword arguments to pass to the gradio.routes.App constructor. - - Returns: - port: the port number the server is running on - path_to_local_server: the complete address that the local server can be accessed at - app: the FastAPI app object - server: the server object that is a subclass of uvicorn.Server (used to close the server) - """ - if ssl_keyfile is not None and ssl_certfile is None: - raise ValueError("ssl_certfile must be provided if ssl_keyfile is provided.") - - server_name = server_name or LOCALHOST_NAME - url_host_name = "localhost" if server_name == "0.0.0.0" else server_name - - # Strip IPv6 brackets from the address if they exist. - # This is needed as http://[::1]:port/ is a valid browser address, - # but not a valid IPv6 address, so asyncio will throw an exception. - if server_name.startswith("[") and server_name.endswith("]"): - host = server_name[1:-1] - else: - host = server_name - - app = App.create_app(blocks, app_kwargs=app_kwargs) - - server_ports = ( - [server_port] - if server_port is not None - else range(INITIAL_PORT_VALUE, INITIAL_PORT_VALUE + TRY_NUM_PORTS) - ) - - for port in server_ports: - try: - # The fastest way to check if a port is available is to try to bind to it with socket. - # If the port is not available, socket will throw an OSError. - s = socket.socket() - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - # Really, we should be checking if (server_name, server_port) is available, but - # socket.bind() doesn't seem to throw an OSError with ipv6 addresses, based on my testing. - # Instead, we just check if the port is available on localhost. - s.bind((LOCALHOST_NAME, port)) - s.close() - - # To avoid race conditions, so we also check if the port by trying to start the uvicorn server. - # If the port is not available, this will throw a ServerFailedToStartError. 
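A minimal standalone sketch (standard library only, not part of the diffed networking.py) of the availability check described in the comments above: bind a throwaway socket to each candidate port and keep the first bind that succeeds.

```python
import socket

def first_free_port(host: str = "127.0.0.1", start: int = 7860, tries: int = 100) -> int:
    for port in range(start, start + tries):
        s = socket.socket()
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))   # raises OSError if the port is already taken
            return port
        except OSError:
            continue
        finally:
            s.close()
    raise OSError(f"No free port in range {start}-{start + tries - 1}")

print(first_free_port())
```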
- config = uvicorn.Config( - app=app, - port=port, - host=host, - log_level="warning", - ssl_keyfile=ssl_keyfile, - ssl_certfile=ssl_certfile, - ssl_keyfile_password=ssl_keyfile_password, - ws_max_size=1024 * 1024 * 1024, # Setting max websocket size to be 1 GB - ) - server = Server(config=config) - server.run_in_thread() - break - except (OSError, ServerFailedToStartError): - pass - else: - raise OSError( - f"Cannot find empty port in range: {min(server_ports)}-{max(server_ports)}. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`." - ) - - if ssl_keyfile is not None: - path_to_local_server = f"https://{url_host_name}:{port}/" - else: - path_to_local_server = f"http://{url_host_name}:{port}/" - - return server_name, port, path_to_local_server, app, server - - -def setup_tunnel(local_host: str, local_port: int, share_token: str) -> str: - response = requests.get(GRADIO_API_SERVER) - if response and response.status_code == 200: - try: - payload = response.json()[0] - remote_host, remote_port = payload["host"], int(payload["port"]) - tunnel = Tunnel( - remote_host, remote_port, local_host, local_port, share_token - ) - address = tunnel.start_tunnel() - return address - except Exception as e: - raise RuntimeError(str(e)) from e - raise RuntimeError("Could not get share link from Gradio API Server.") - - -def url_ok(url: str) -> bool: - try: - for _ in range(5): - with warnings.catch_warnings(): - warnings.filterwarnings("ignore") - r = requests.head(url, timeout=3, verify=False) - if r.status_code in (200, 401, 302): # 401 or 302 if auth is set - return True - time.sleep(0.500) - except (ConnectionError, requests.exceptions.ConnectionError): - return False - return False diff --git a/spaces/cihyFjudo/fairness-paper-search/C-Loc Whatever (1995) A Rare Gem of Underground Rap Music.md b/spaces/cihyFjudo/fairness-paper-search/C-Loc Whatever (1995) A Rare Gem of Underground Rap Music.md deleted file mode 100644 index c880c3bd9985eb4291f7f884c89654843a860da9..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/C-Loc Whatever (1995) A Rare Gem of Underground Rap Music.md +++ /dev/null @@ -1,13 +0,0 @@ - -

      The Second Vatican Council, in a passage which retains all its relevance today, forcefully condemned a number of crimes and attacks against human life. Thirty years later, taking up the words of the Council and with the same forcefulness I repeat that condemnation in the name of the whole Church, certain that I am interpreting the genuine sentiment of every upright conscience: "Whatever is opposed to life itself, such as any type of murder, genocide, abortion, euthanasia, or wilful self-destruction, whatever violates the integrity of the human person, such as mutilation, torments inflicted on body or mind, attempts to coerce the will itself; whatever insults human dignity, such as subhuman living conditions, arbitrary imprisonment, deportation, slavery, prostitution, the selling of women and children; as well as disgraceful working conditions, where people are treated as mere instruments of gain rather than as free and responsible persons; all these things and others like them are infamies indeed. They poison human society, and they do more harm to those who practise them than to those who suffer from the injury. Moreover, they are a supreme dishonour to the Creator".5

      -

      C-Loc Whatever (1995)


      Downloadhttps://tinurli.com/2uwiu8



      -

      The Pharaoh of old, haunted by the presence and increase of the children of Israel, submitted them to every kind of oppression and ordered that every male child born of the Hebrew women was to be killed (cf. Ex 1:7-22). Today not a few of the powerful of the earth act in the same way. They too are haunted by the current demographic growth, and fear that the most prolific and poorest peoples represent a threat for the well-being and peace of their own countries. Consequently, rather than wishing to face and solve these serious problems with respect for the dignity of individuals and families and for every person's inviolable right to life, they prefer to promote and impose by whatever means a massive programme of birth control. Even the economic help which they would be ready to give is unjustly made conditional on the acceptance of an anti-birth policy.

      -

      The biblical text clearly shows the breadth and depth of the lordship which God bestows on man. It is a matter first of all of dominion over the earth and over every living creature, as the Book of Wisdom makes clear: "O God of my fathers and Lord of mercy ... by your wisdom you have formed man, to have dominion over the creatures you have made, and rule the world in holiness and righteousness" (Wis 9:1, 2-3). The Psalmist too extols the dominion given to man as a sign of glory and honour from his Creator: "You have given him dominion over the works of your hands; you have put all things under his feet, all sheep and oxen, and also the beasts of the field, the birds of the air, and the fish of the sea, whatever passes along the paths of the sea" (Ps 8:6-8).

      -

      But today, in many people's consciences, the perception of its gravity has become progressively obscured. The acceptance of abortion in the popular mind, in behaviour and even in law itself, is a telling sign of an extremely dangerous crisis of the moral sense, which is becoming more and more incapable of distinguishing between good and evil, even when the fundamental right to life is at stake. Given such a grave situation, we need now more than ever to have the courage to look the truth in the eye and to call things by their proper name, without yielding to convenient compromises or to the temptation of self-deception. In this regard the reproach of the Prophet is extremely straightforward: "Woe to those who call evil good and good evil, who put darkness for light and light for darkness" (Is 5:20). Especially in the case of abortion there is a widespread use of ambiguous terminology, such as "interruption of pregnancy", which tends to hide abortion's true nature and to attenuate its seriousness in public opinion. Perhaps this linguistic phenomenon is itself a symptom of an uneasiness of conscience. But no word has the power to change the reality of things: procured abortion is the deliberate and direct killing, by whatever means it is carried out, of a human being in the initial phase of his or her existence, extending from conception to birth.

      -

      -

      69. In any case, in the democratic culture of our time it is commonly held that the legal system of any society should limit itself to taking account of and accepting the convictions of the majority. It should therefore be based solely upon what the majority itself considers moral and actually practises. Furthermore, if it is believed that an objective truth shared by all is de facto unattainable, then respect for the freedom of the citizens-who in a democratic system are considered the true rulers-would require that on the legislative level the autonomy of individual consciences be acknowledged. Consequently, when establishing those norms which are absolutely necessary for social coexistence, the only determining factor should be the will of the majority, whatever this may be. Hence every politician, in his or her activity, should clearly separate the realm of private conscience from that of public conduct.

      -

      This celebration thus becomes a service to the Gospel of life, expressed through solidarity as experienced within and around the family in the form of concerned, attentive and loving care shown in the humble, ordinary events of each day. A particularly significant expression of solidarity between families is a willingness to adopt or take in children abandoned by their parents or in situations of serious hardship. True parental love is ready to go beyond the bonds of flesh and blood in order to accept children from other families, offering them whatever is necessary for their well-being and full development. Among the various forms of adoption, consideration should be given to adoption-at-a-distance, preferable in cases where the only reason for giving up the child is the extreme poverty of the child's family. Through this type of adoption, parents are given the help needed to support and raise their children, without their being uprooted from their natural environment.

      -

      Mary thus helps the Church to realize that life is always at the centre of a great struggle between good and evil, between light and darkness. The dragon wishes to devour "the child brought forth" (cf. Rev 12:4), a figure of Christ, whom Mary brought forth "in the fullness of time" (Gal 4:4) and whom the Church must unceasingly offer to people in every age. But in a way that child is also a figure of every person, every child, especially every helpless baby whose life is threatened, because-as the Council reminds us-"by his Incarnation the Son of God has united himself in some fashion with every person".140 It is precisely in the "flesh" of every person that Christ continues to reveal himself and to enter into fellowship with us, so that rejection of human life, in whatever form that rejection takes, is really a rejection of Christ. This is the fascinating but also demanding truth which Christ reveals to us and which his Church continues untiringly to proclaim: "Whoever receives one such child in my name receives me" (Mt 18:5); "Truly, I say to you, as you did it to one of the least of these my brethren, you did it to me" (Mt 25:40).

      -

      I hope to have convinced you today that the 1992 Merger Guidelines, and the consensus view among economists of how to analyze competition in differentiated-product industries, together provide a consistent, valid, and reliable way of evaluating proposed horizontal mergers involving differentiated products. Central to the analysis is the Diversion Ratio, which measures the number of consumers who regard the merging firms' brands as their first and second choices. Elasticities of demand can be estimated econometrically in cases where detailed price and quantity data are available, allowing estimates to be made of the Diversion Ratio. More typically, the estimated Diversion Ratio is based on whatever pieces of evidence are available, including more qualitative information. If there are many consumers who regard the merging brands as their first and second choices, the merger will indeed create an incentive to raise price. This incentive can be undercut by rivals' product repositioning, by entry, or by credible synergies.

      aaccfb2cb3
      -
      -
\ No newline at end of file
      diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/vp9dsp_init.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/vp9dsp_init.h deleted file mode 100644 index 9df1752c627895ad4d52c82f8696b526fc0cfe73..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/vp9dsp_init.h +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Copyright (c) 2017 Google Inc. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_AARCH64_VP9DSP_INIT_H -#define AVCODEC_AARCH64_VP9DSP_INIT_H - -#include "libavcodec/vp9dsp.h" - -void ff_vp9dsp_init_10bpp_aarch64(VP9DSPContext *dsp); -void ff_vp9dsp_init_12bpp_aarch64(VP9DSPContext *dsp); - -#endif /* AVCODEC_AARCH64_VP9DSP_INIT_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac.h deleted file mode 100644 index 05208bbee69aaafa4d9dadb736dd93f14176b3ce..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac.h +++ /dev/null @@ -1,97 +0,0 @@ -/* - * common functions for the ATRAC family of decoders - * - * Copyright (c) 2009-2013 Maxim Poliakovski - * Copyright (c) 2009 Benjamin Larsson - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * ATRAC common header - */ - -#ifndef AVCODEC_ATRAC_H -#define AVCODEC_ATRAC_H - -/** - * Gain control parameters for one subband. - */ -typedef struct AtracGainInfo { - int num_points; ///< number of gain control points - int lev_code[7]; ///< level at corresponding control point - int loc_code[7]; ///< location of gain control points -} AtracGainInfo; - -/** - * Gain compensation context structure. 
- */ -typedef struct AtracGCContext { - float gain_tab1[16]; ///< gain compensation level table - float gain_tab2[31]; ///< gain compensation interpolation table - int id2exp_offset; ///< offset for converting level index into level exponent - int loc_scale; ///< scale of location code = 2^loc_scale samples - int loc_size; ///< size of location code in samples -} AtracGCContext; - -extern float ff_atrac_sf_table[64]; - -/** - * Generate common tables. - */ -void ff_atrac_generate_tables(void); - -/** - * Initialize gain compensation context. - * - * @param gctx pointer to gain compensation context to initialize - * @param id2exp_offset offset for converting level index into level exponent - * @param loc_scale location size factor - */ -void ff_atrac_init_gain_compensation(AtracGCContext *gctx, int id2exp_offset, - int loc_scale); - -/** - * Apply gain compensation and perform the MDCT overlapping part. - * - * @param gctx pointer to gain compensation context - * @param in input buffer - * @param prev previous buffer to perform overlap against - * @param gc_now gain control information for current frame - * @param gc_next gain control information for next frame - * @param num_samples number of samples to process - * @param out output data goes here - */ -void ff_atrac_gain_compensation(AtracGCContext *gctx, float *in, float *prev, - AtracGainInfo *gc_now, AtracGainInfo *gc_next, - int num_samples, float *out); - -/** - * Quadrature mirror synthesis filter. - * - * @param inlo lower part of spectrum - * @param inhi higher part of spectrum - * @param nIn size of spectrum buffer - * @param pOut out buffer - * @param delayBuf delayBuf buffer - * @param temp temp buffer - */ -void ff_atrac_iqmf(float *inlo, float *inhi, unsigned int nIn, float *pOut, - float *delayBuf, float *temp); - -#endif /* AVCODEC_ATRAC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avr32/mathops.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avr32/mathops.h deleted file mode 100644 index 85f42b594d2ec0a3d62798fcd07e8eced853e22c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avr32/mathops.h +++ /dev/null @@ -1,101 +0,0 @@ -/* - * Simple math operations - * Copyright (c) 2009 Mans Rullgard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_AVR32_MATHOPS_H -#define AVCODEC_AVR32_MATHOPS_H - -#include -#include "config.h" -#include "libavutil/common.h" - -#if HAVE_INLINE_ASM - -#define MULL MULL -static inline av_const int MULL(int a, int b, unsigned shift) -{ - union { int64_t x; int hl[2]; } x; - __asm__ ("muls.d %0, %1, %2 \n\t" - "lsr %0, %3 \n\t" - "or %0, %0, %m0<<%4 \n\t" - : "=r"(x) : "r"(b), "r"(a), "i"(shift), "i"(32-shift)); - return x.hl[1]; -} - -#define MULH MULH -static inline av_const int MULH(int a, int b) -{ - union { int64_t x; int hl[2]; } x; - __asm__ ("muls.d %0, %1, %2" : "=r"(x.x) : "r"(a), "r"(b)); - return x.hl[0]; -} - -#define MUL64 MUL64 -static inline av_const int64_t MUL64(int a, int b) -{ - int64_t x; - __asm__ ("muls.d %0, %1, %2" : "=r"(x) : "r"(a), "r"(b)); - return x; -} - -static inline av_const int64_t MAC64(int64_t d, int a, int b) -{ - __asm__ ("macs.d %0, %1, %2" : "+r"(d) : "r"(a), "r"(b)); - return d; -} -#define MAC64(d, a, b) ((d) = MAC64(d, a, b)) -#define MLS64(d, a, b) MAC64(d, -(a), b) - -static inline av_const int MAC16(int d, int a, int b) -{ - __asm__ ("machh.w %0, %1:b, %2:b" : "+r"(d) : "r"(a), "r"(b)); - return d; -} -#define MAC16(d, a, b) ((d) = MAC16(d, a, b)) -#define MLS16(d, a, b) MAC16(d, -(a), b) - -#define MUL16 MUL16 -static inline av_const int MUL16(int a, int b) -{ - int d; - __asm__ ("mulhh.w %0, %1:b, %2:b" : "=r"(d) : "r"(a), "r"(b)); - return d; -} - -#define mid_pred mid_pred -static inline av_const int mid_pred(int a, int b, int c) -{ - int m; - __asm__ ("mov %0, %2 \n\t" - "cp.w %1, %2 \n\t" - "movgt %0, %1 \n\t" - "movgt %1, %2 \n\t" - "cp.w %1, %3 \n\t" - "movle %1, %3 \n\t" - "cp.w %0, %1 \n\t" - "movgt %0, %1 \n\t" - : "=&r"(m), "+r"(a) - : "r"(b), "r"(c)); - return m; -} - -#endif /* HAVE_INLINE_ASM */ - -#endif /* AVCODEC_AVR32_MATHOPS_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/gif.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/gif.c deleted file mode 100644 index 131af6198ad1c1384d090f3e98e458dbdbd28205..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/gif.c +++ /dev/null @@ -1,567 +0,0 @@ -/* - * Copyright (c) 2000 Fabrice Bellard - * Copyright (c) 2002 Francois Revol - * Copyright (c) 2006 Baptiste Coudurier - * Copyright (c) 2018 Bjorn Roche - * Copyright (c) 2018 Paul B Mahol - * - * first version by Francois Revol - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
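An illustrative sketch in plain Python (not FFmpeg code) of what two of the AVR32 inline-asm helpers in mathops.h above compute for 32-bit signed operands; the real macros operate on the full 64-bit product in register pairs, so treat this only as a description of the arithmetic.

```python
def mulh(a: int, b: int) -> int:
    # high 32 bits of the signed 64-bit product (the MULH macro above)
    return (a * b) >> 32

def mull(a: int, b: int, shift: int) -> int:
    # 64-bit signed product shifted right, truncated to a signed 32-bit int
    # (the MULL macro above)
    low = ((a * b) >> shift) & 0xFFFFFFFF
    return low - (1 << 32) if low >= (1 << 31) else low

print(mulh(0x40000000, 0x40000000))   # 268435456  (= 2**28)
print(mull(3 << 20, 5 << 20, 20))     # 15728640   (= 15 << 20)
```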
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * GIF encoder - * @see http://www.w3.org/Graphics/GIF/spec-gif89a.txt - */ - -#include "libavutil/opt.h" -#include "avcodec.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "encode.h" -#include "lzw.h" -#include "gif.h" - -#define DEFAULT_TRANSPARENCY_INDEX 0x1f - -typedef struct GIFContext { - const AVClass *class; - LZWState *lzw; - uint8_t *buf; - uint8_t *shrunk_buf; - int buf_size; - AVFrame *last_frame; - int flags; - int image; - int use_global_palette; - uint32_t palette[AVPALETTE_COUNT]; ///< local reference palette for !pal8 - int palette_loaded; - int transparent_index; - uint8_t *tmpl; ///< temporary line buffer -} GIFContext; - -enum { - GF_OFFSETTING = 1<<0, - GF_TRANSDIFF = 1<<1, -}; - -static void shrink_palette(const uint32_t *src, uint8_t *map, - uint32_t *dst, size_t *palette_count) -{ - size_t colors_seen = 0; - - for (size_t i = 0; i < AVPALETTE_COUNT; i++) { - int seen = 0; - for (size_t c = 0; c < colors_seen; c++) { - if (src[i] == dst[c]) { - seen = 1; - break; - } - } - if (!seen) { - dst[colors_seen] = src[i]; - map[i] = colors_seen; - colors_seen++; - } - } - - *palette_count = colors_seen; -} - -static void remap_frame_to_palette(const uint8_t *src, int src_linesize, - uint8_t *dst, int dst_linesize, - int w, int h, uint8_t *map) -{ - for (int i = 0; i < h; i++) - for (int j = 0; j < w; j++) - dst[i * dst_linesize + j] = map[src[i * src_linesize + j]]; -} - -static int is_image_translucent(AVCodecContext *avctx, - const uint8_t *buf, const int linesize) -{ - GIFContext *s = avctx->priv_data; - int trans = s->transparent_index; - - if (trans < 0) - return 0; - - for (int y = 0; y < avctx->height; y++) { - for (int x = 0; x < avctx->width; x++) { - if (buf[x] == trans) { - return 1; - } - } - buf += linesize; - } - - return 0; -} - -static int get_palette_transparency_index(const uint32_t *palette) -{ - int transparent_color_index = -1; - unsigned i, smallest_alpha = 0xff; - - if (!palette) - return -1; - - for (i = 0; i < AVPALETTE_COUNT; i++) { - const uint32_t v = palette[i]; - if (v >> 24 < smallest_alpha) { - smallest_alpha = v >> 24; - transparent_color_index = i; - } - } - return smallest_alpha < 128 ? 
transparent_color_index : -1; -} - -static int pick_palette_entry(const uint8_t *buf, int linesize, int w, int h) -{ - int histogram[AVPALETTE_COUNT] = {0}; - int x, y, i; - - for (y = 0; y < h; y++) { - for (x = 0; x < w; x++) - histogram[buf[x]]++; - buf += linesize; - } - for (i = 0; i < FF_ARRAY_ELEMS(histogram); i++) - if (!histogram[i]) - return i; - return -1; -} - -static void gif_crop_translucent(AVCodecContext *avctx, - const uint8_t *buf, const int linesize, - int *width, int *height, - int *x_start, int *y_start) -{ - GIFContext *s = avctx->priv_data; - int trans = s->transparent_index; - - /* Crop image */ - if ((s->flags & GF_OFFSETTING) && trans >= 0) { - const int w = avctx->width; - const int h = avctx->height; - int x_end = w - 1, - y_end = h - 1; - - // crop top - while (*y_start < y_end) { - int is_trans = 1; - for (int i = 0; i < w; i++) { - if (buf[linesize * *y_start + i] != trans) { - is_trans = 0; - break; - } - } - - if (!is_trans) - break; - (*y_start)++; - } - - // crop bottom - while (y_end > *y_start) { - int is_trans = 1; - for (int i = 0; i < w; i++) { - if (buf[linesize * y_end + i] != trans) { - is_trans = 0; - break; - } - } - if (!is_trans) - break; - y_end--; - } - - // crop left - while (*x_start < x_end) { - int is_trans = 1; - for (int i = *y_start; i < y_end; i++) { - if (buf[linesize * i + *x_start] != trans) { - is_trans = 0; - break; - } - } - if (!is_trans) - break; - (*x_start)++; - } - - // crop right - while (x_end > *x_start) { - int is_trans = 1; - for (int i = *y_start; i < y_end; i++) { - if (buf[linesize * i + x_end] != trans) { - is_trans = 0; - break; - } - } - if (!is_trans) - break; - x_end--; - } - - *height = y_end + 1 - *y_start; - *width = x_end + 1 - *x_start; - av_log(avctx, AV_LOG_DEBUG,"%dx%d image at pos (%d;%d) [area:%dx%d]\n", - *width, *height, *x_start, *y_start, avctx->width, avctx->height); - } -} - -static void gif_crop_opaque(AVCodecContext *avctx, - const uint32_t *palette, - const uint8_t *buf, const int linesize, - int *width, int *height, int *x_start, int *y_start) -{ - GIFContext *s = avctx->priv_data; - - /* Crop image */ - if ((s->flags & GF_OFFSETTING) && s->last_frame && !palette) { - const uint8_t *ref = s->last_frame->data[0]; - const int ref_linesize = s->last_frame->linesize[0]; - int x_end = avctx->width - 1, - y_end = avctx->height - 1; - - /* skip common lines */ - while (*y_start < y_end) { - if (memcmp(ref + *y_start*ref_linesize, buf + *y_start*linesize, *width)) - break; - (*y_start)++; - } - while (y_end > *y_start) { - if (memcmp(ref + y_end*ref_linesize, buf + y_end*linesize, *width)) - break; - y_end--; - } - *height = y_end + 1 - *y_start; - - /* skip common columns */ - while (*x_start < x_end) { - int same_column = 1; - for (int y = *y_start; y <= y_end; y++) { - if (ref[y*ref_linesize + *x_start] != buf[y*linesize + *x_start]) { - same_column = 0; - break; - } - } - if (!same_column) - break; - (*x_start)++; - } - while (x_end > *x_start) { - int same_column = 1; - for (int y = *y_start; y <= y_end; y++) { - if (ref[y*ref_linesize + x_end] != buf[y*linesize + x_end]) { - same_column = 0; - break; - } - } - if (!same_column) - break; - x_end--; - } - *width = x_end + 1 - *x_start; - - av_log(avctx, AV_LOG_DEBUG,"%dx%d image at pos (%d;%d) [area:%dx%d]\n", - *width, *height, *x_start, *y_start, avctx->width, avctx->height); - } -} - -static int gif_image_write_image(AVCodecContext *avctx, - uint8_t **bytestream, uint8_t *end, - const uint32_t *palette, - const uint8_t *buf, const int 
linesize, - AVPacket *pkt) -{ - GIFContext *s = avctx->priv_data; - int disposal, len = 0, height = avctx->height, width = avctx->width, x, y; - int x_start = 0, y_start = 0, trans = s->transparent_index; - int bcid = -1, honor_transparency = (s->flags & GF_TRANSDIFF) && s->last_frame && !palette; - const uint8_t *ptr; - uint32_t shrunk_palette[AVPALETTE_COUNT]; - uint8_t map[AVPALETTE_COUNT] = { 0 }; - size_t shrunk_palette_count = 0; - - /* - * We memset to 0xff instead of 0x00 so that the transparency detection - * doesn't pick anything after the palette entries as the transparency - * index, and because GIF89a requires us to always write a power-of-2 - * number of palette entries. - */ - memset(shrunk_palette, 0xff, AVPALETTE_SIZE); - - if (!s->image && is_image_translucent(avctx, buf, linesize)) { - gif_crop_translucent(avctx, buf, linesize, &width, &height, &x_start, &y_start); - honor_transparency = 0; - disposal = GCE_DISPOSAL_BACKGROUND; - } else { - gif_crop_opaque(avctx, palette, buf, linesize, &width, &height, &x_start, &y_start); - disposal = GCE_DISPOSAL_INPLACE; - } - - if (s->image || !avctx->frame_num) { /* GIF header */ - const uint32_t *global_palette = palette ? palette : s->palette; - const AVRational sar = avctx->sample_aspect_ratio; - int64_t aspect = 0; - - if (sar.num > 0 && sar.den > 0) { - aspect = sar.num * 64LL / sar.den - 15; - if (aspect < 0 || aspect > 255) - aspect = 0; - } - - bytestream_put_buffer(bytestream, gif89a_sig, sizeof(gif89a_sig)); - bytestream_put_le16(bytestream, avctx->width); - bytestream_put_le16(bytestream, avctx->height); - - bcid = get_palette_transparency_index(global_palette); - - bytestream_put_byte(bytestream, ((uint8_t) s->use_global_palette << 7) | 0x70 | (s->use_global_palette ? 7 : 0)); /* flags: global clut, 256 entries */ - bytestream_put_byte(bytestream, bcid < 0 ? DEFAULT_TRANSPARENCY_INDEX : bcid); /* background color index */ - bytestream_put_byte(bytestream, aspect); - if (s->use_global_palette) { - for (int i = 0; i < 256; i++) { - const uint32_t v = global_palette[i] & 0xffffff; - bytestream_put_be24(bytestream, v); - } - } - } - - if (honor_transparency && trans < 0) { - trans = pick_palette_entry(buf + y_start*linesize + x_start, - linesize, width, height); - if (trans < 0) // TODO, patch welcome - av_log(avctx, AV_LOG_DEBUG, "No available color, can not use transparency\n"); - } - - if (trans < 0) - honor_transparency = 0; - - if (palette || !s->use_global_palette) { - const uint32_t *pal = palette ? palette : s->palette; - shrink_palette(pal, map, shrunk_palette, &shrunk_palette_count); - } - - bcid = honor_transparency || disposal == GCE_DISPOSAL_BACKGROUND ? trans : get_palette_transparency_index(palette); - - /* graphic control extension */ - bytestream_put_byte(bytestream, GIF_EXTENSION_INTRODUCER); - bytestream_put_byte(bytestream, GIF_GCE_EXT_LABEL); - bytestream_put_byte(bytestream, 0x04); /* block size */ - bytestream_put_byte(bytestream, disposal<<2 | (bcid >= 0)); - bytestream_put_le16(bytestream, 5); // default delay - bytestream_put_byte(bytestream, bcid < 0 ? DEFAULT_TRANSPARENCY_INDEX : (shrunk_palette_count ? 
map[bcid] : bcid)); - bytestream_put_byte(bytestream, 0x00); - - /* image block */ - bytestream_put_byte(bytestream, GIF_IMAGE_SEPARATOR); - bytestream_put_le16(bytestream, x_start); - bytestream_put_le16(bytestream, y_start); - bytestream_put_le16(bytestream, width); - bytestream_put_le16(bytestream, height); - - if (palette || !s->use_global_palette) { - unsigned pow2_count = av_log2(shrunk_palette_count - 1); - unsigned i; - - bytestream_put_byte(bytestream, 1<<7 | pow2_count); /* flags */ - for (i = 0; i < 1 << (pow2_count + 1); i++) { - const uint32_t v = shrunk_palette[i]; - bytestream_put_be24(bytestream, v); - } - } else { - bytestream_put_byte(bytestream, 0x00); /* flags */ - } - - bytestream_put_byte(bytestream, 0x08); - - ff_lzw_encode_init(s->lzw, s->buf, s->buf_size, - 12, FF_LZW_GIF, 1); - - if (shrunk_palette_count) { - if (!s->shrunk_buf) { - s->shrunk_buf = av_malloc(avctx->height * linesize); - if (!s->shrunk_buf) { - av_log(avctx, AV_LOG_ERROR, "Could not allocated remapped frame buffer.\n"); - return AVERROR(ENOMEM); - } - } - remap_frame_to_palette(buf, linesize, s->shrunk_buf, linesize, avctx->width, avctx->height, map); - ptr = s->shrunk_buf + y_start*linesize + x_start; - } else { - ptr = buf + y_start*linesize + x_start; - } - if (honor_transparency) { - const int ref_linesize = s->last_frame->linesize[0]; - const uint8_t *ref = s->last_frame->data[0] + y_start*ref_linesize + x_start; - - for (y = 0; y < height; y++) { - memcpy(s->tmpl, ptr, width); - for (x = 0; x < width; x++) - if (ref[x] == ptr[x]) - s->tmpl[x] = trans; - len += ff_lzw_encode(s->lzw, s->tmpl, width); - ptr += linesize; - ref += ref_linesize; - } - } else { - for (y = 0; y < height; y++) { - len += ff_lzw_encode(s->lzw, ptr, width); - ptr += linesize; - } - } - len += ff_lzw_encode_flush(s->lzw); - - ptr = s->buf; - while (len > 0) { - int size = FFMIN(255, len); - bytestream_put_byte(bytestream, size); - if (end - *bytestream < size) - return -1; - bytestream_put_buffer(bytestream, ptr, size); - ptr += size; - len -= size; - } - bytestream_put_byte(bytestream, 0x00); /* end of image block */ - return 0; -} - -static av_cold int gif_encode_init(AVCodecContext *avctx) -{ - GIFContext *s = avctx->priv_data; - - if (avctx->width > 65535 || avctx->height > 65535) { - av_log(avctx, AV_LOG_ERROR, "GIF does not support resolutions above 65535x65535\n"); - return AVERROR(EINVAL); - } - - s->transparent_index = -1; - - s->lzw = av_mallocz(ff_lzw_encode_state_size); - s->buf_size = avctx->width*avctx->height*2 + 1000; - s->buf = av_malloc(s->buf_size); - s->tmpl = av_malloc(avctx->width); - if (!s->tmpl || !s->buf || !s->lzw) - return AVERROR(ENOMEM); - - if (avpriv_set_systematic_pal2(s->palette, avctx->pix_fmt) < 0) - av_assert0(avctx->pix_fmt == AV_PIX_FMT_PAL8); - - return 0; -} - -static int gif_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *pict, int *got_packet) -{ - GIFContext *s = avctx->priv_data; - uint8_t *outbuf_ptr, *end; - const uint32_t *palette = NULL; - int ret; - - if ((ret = ff_alloc_packet(avctx, pkt, avctx->width*avctx->height*7/5 + AV_INPUT_BUFFER_MIN_SIZE)) < 0) - return ret; - outbuf_ptr = pkt->data; - end = pkt->data + pkt->size; - - if (avctx->pix_fmt == AV_PIX_FMT_PAL8) { - palette = (uint32_t*)pict->data[1]; - - if (!s->palette_loaded) { - memcpy(s->palette, palette, AVPALETTE_SIZE); - s->transparent_index = get_palette_transparency_index(palette); - s->palette_loaded = 1; - } else if (!memcmp(s->palette, palette, AVPALETTE_SIZE)) { - palette = NULL; - } - 
} - - gif_image_write_image(avctx, &outbuf_ptr, end, palette, - pict->data[0], pict->linesize[0], pkt); - if (!s->last_frame && !s->image) { - s->last_frame = av_frame_alloc(); - if (!s->last_frame) - return AVERROR(ENOMEM); - } - - if (!s->image) { - av_frame_unref(s->last_frame); - ret = av_frame_ref(s->last_frame, pict); - if (ret < 0) - return ret; - } - - pkt->size = outbuf_ptr - pkt->data; - if (s->image || !avctx->frame_num) - pkt->flags |= AV_PKT_FLAG_KEY; - *got_packet = 1; - - return 0; -} - -static int gif_encode_close(AVCodecContext *avctx) -{ - GIFContext *s = avctx->priv_data; - - av_freep(&s->lzw); - av_freep(&s->buf); - av_freep(&s->shrunk_buf); - s->buf_size = 0; - av_frame_free(&s->last_frame); - av_freep(&s->tmpl); - return 0; -} - -#define OFFSET(x) offsetof(GIFContext, x) -#define FLAGS AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption gif_options[] = { - { "gifflags", "set GIF flags", OFFSET(flags), AV_OPT_TYPE_FLAGS, {.i64 = GF_OFFSETTING|GF_TRANSDIFF}, 0, INT_MAX, FLAGS, "flags" }, - { "offsetting", "enable picture offsetting", 0, AV_OPT_TYPE_CONST, {.i64=GF_OFFSETTING}, INT_MIN, INT_MAX, FLAGS, "flags" }, - { "transdiff", "enable transparency detection between frames", 0, AV_OPT_TYPE_CONST, {.i64=GF_TRANSDIFF}, INT_MIN, INT_MAX, FLAGS, "flags" }, - { "gifimage", "enable encoding only images per frame", OFFSET(image), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS }, - { "global_palette", "write a palette to the global gif header where feasible", OFFSET(use_global_palette), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1, FLAGS }, - { NULL } -}; - -static const AVClass gif_class = { - .class_name = "GIF encoder", - .item_name = av_default_item_name, - .option = gif_options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_gif_encoder = { - .p.name = "gif", - CODEC_LONG_NAME("GIF (Graphics Interchange Format)"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_GIF, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(GIFContext), - .init = gif_encode_init, - FF_CODEC_ENCODE_CB(gif_encode_frame), - .close = gif_encode_close, - .p.pix_fmts = (const enum AVPixelFormat[]){ - AV_PIX_FMT_RGB8, AV_PIX_FMT_BGR8, AV_PIX_FMT_RGB4_BYTE, AV_PIX_FMT_BGR4_BYTE, - AV_PIX_FMT_GRAY8, AV_PIX_FMT_PAL8, AV_PIX_FMT_NONE - }, - .p.priv_class = &gif_class, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dec.h deleted file mode 100644 index 9a1ec1bacec91ff31f3bb327e9628a0188ae5498..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264dec.h +++ /dev/null @@ -1,811 +0,0 @@ -/* - * H.26L/H.264/AVC/JVT/14496-10/... encoder/decoder - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.264 / AVC / MPEG-4 part10 codec. - * @author Michael Niedermayer - */ - -#ifndef AVCODEC_H264DEC_H -#define AVCODEC_H264DEC_H - -#include "libavutil/buffer.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/mem_internal.h" - -#include "cabac.h" -#include "error_resilience.h" -#include "h264_parse.h" -#include "h264_ps.h" -#include "h264_sei.h" -#include "h2645_parse.h" -#include "h264chroma.h" -#include "h264dsp.h" -#include "h264pred.h" -#include "h264qpel.h" -#include "h274.h" -#include "mpegutils.h" -#include "rectangle.h" -#include "videodsp.h" - -#define H264_MAX_PICTURE_COUNT 36 - -/* Compiling in interlaced support reduces the speed - * of progressive decoding by about 2%. */ -#define ALLOW_INTERLACE - -#define FMO 0 - -/** - * The maximum number of slices supported by the decoder. - * must be a power of 2 - */ -#define MAX_SLICES 32 - -#ifdef ALLOW_INTERLACE -#define MB_MBAFF(h) (h)->mb_mbaff -#define MB_FIELD(sl) (sl)->mb_field_decoding_flag -#define FRAME_MBAFF(h) (h)->mb_aff_frame -#define FIELD_PICTURE(h) ((h)->picture_structure != PICT_FRAME) -#define LEFT_MBS 2 -#define LTOP 0 -#define LBOT 1 -#define LEFT(i) (i) -#else -#define MB_MBAFF(h) 0 -#define MB_FIELD(sl) 0 -#define FRAME_MBAFF(h) 0 -#define FIELD_PICTURE(h) 0 -#undef IS_INTERLACED -#define IS_INTERLACED(mb_type) 0 -#define LEFT_MBS 1 -#define LTOP 0 -#define LBOT 0 -#define LEFT(i) 0 -#endif -#define FIELD_OR_MBAFF_PICTURE(h) (FRAME_MBAFF(h) || FIELD_PICTURE(h)) - -#ifndef CABAC -#define CABAC(h) (h)->ps.pps->cabac -#endif - -#define CHROMA(h) ((h)->ps.sps->chroma_format_idc) -#define CHROMA422(h) ((h)->ps.sps->chroma_format_idc == 2) -#define CHROMA444(h) ((h)->ps.sps->chroma_format_idc == 3) - -#define IS_REF0(a) ((a) & MB_TYPE_REF0) -#define IS_8x8DCT(a) ((a) & MB_TYPE_8x8DCT) - -/** - * Memory management control operation. - */ -typedef struct MMCO { - MMCOOpcode opcode; - int short_pic_num; ///< pic_num without wrapping (pic_num & max_pic_num) - int long_arg; ///< index, pic_num, or num long refs depending on opcode -} MMCO; - -typedef struct H264Picture { - AVFrame *f; - ThreadFrame tf; - - AVFrame *f_grain; - - AVBufferRef *qscale_table_buf; - int8_t *qscale_table; - - AVBufferRef *motion_val_buf[2]; - int16_t (*motion_val[2])[2]; - - AVBufferRef *mb_type_buf; - uint32_t *mb_type; - - AVBufferRef *hwaccel_priv_buf; - void *hwaccel_picture_private; ///< hardware accelerator private data - - AVBufferRef *ref_index_buf[2]; - int8_t *ref_index[2]; - - int field_poc[2]; ///< top/bottom POC - int poc; ///< frame POC - int frame_num; ///< frame_num (raw frame_num from slice header) - int mmco_reset; /**< MMCO_RESET set this 1. Reordering code must - not mix pictures before and after MMCO_RESET. 
*/ - int pic_id; /**< pic_num (short -> no wrap version of pic_num, - pic_num & max_pic_num; long -> long_pic_num) */ - int long_ref; ///< 1->long term reference 0->short term reference - int ref_poc[2][2][32]; ///< POCs of the frames/fields used as reference (FIXME need per slice) - int ref_count[2][2]; ///< number of entries in ref_poc (FIXME need per slice) - int mbaff; ///< 1 -> MBAFF frame 0-> not MBAFF - int field_picture; ///< whether or not picture was encoded in separate fields - -/** - * H264Picture.reference has this flag set, - * when the picture is held for delayed output. - */ -#define DELAYED_PIC_REF (1 << 2) - int reference; - int recovered; ///< picture at IDR or recovery point + recovery count - int invalid_gap; - int sei_recovery_frame_cnt; - int needs_fg; ///< whether picture needs film grain synthesis (see `f_grain`) - - AVBufferRef *pps_buf; - const PPS *pps; - - int mb_width, mb_height; - int mb_stride; -} H264Picture; - -typedef struct H264Ref { - uint8_t *data[3]; - int linesize[3]; - - int reference; - int poc; - int pic_id; - - H264Picture *parent; -} H264Ref; - -typedef struct H264SliceContext { - const struct H264Context *h264; - GetBitContext gb; - ERContext *er; - - int slice_num; - int slice_type; - int slice_type_nos; ///< S free slice type (SI/SP are remapped to I/P) - int slice_type_fixed; - - int qscale; - int chroma_qp[2]; // QPc - int qp_thresh; ///< QP threshold to skip loopfilter - int last_qscale_diff; - - // deblock - int deblocking_filter; ///< disable_deblocking_filter_idc with 1 <-> 0 - int slice_alpha_c0_offset; - int slice_beta_offset; - - H264PredWeightTable pwt; - - int prev_mb_skipped; - int next_mb_skipped; - - int chroma_pred_mode; - int intra16x16_pred_mode; - - int8_t intra4x4_pred_mode_cache[5 * 8]; - int8_t(*intra4x4_pred_mode); - - int topleft_mb_xy; - int top_mb_xy; - int topright_mb_xy; - int left_mb_xy[LEFT_MBS]; - - int topleft_type; - int top_type; - int topright_type; - int left_type[LEFT_MBS]; - - const uint8_t *left_block; - int topleft_partition; - - unsigned int topleft_samples_available; - unsigned int top_samples_available; - unsigned int topright_samples_available; - unsigned int left_samples_available; - - ptrdiff_t linesize, uvlinesize; - ptrdiff_t mb_linesize; ///< may be equal to s->linesize or s->linesize * 2, for mbaff - ptrdiff_t mb_uvlinesize; - - int mb_x, mb_y; - int mb_xy; - int resync_mb_x; - int resync_mb_y; - unsigned int first_mb_addr; - // index of the first MB of the next slice - int next_slice_idx; - int mb_skip_run; - int is_complex; - - int picture_structure; - int mb_field_decoding_flag; - int mb_mbaff; ///< mb_aff_frame && mb_field_decoding_flag - - int redundant_pic_count; - - /** - * number of neighbors (top and/or left) that used 8x8 dct - */ - int neighbor_transform_size; - - int direct_spatial_mv_pred; - int col_parity; - int col_fieldoff; - - int cbp; - int top_cbp; - int left_cbp; - - int dist_scale_factor[32]; - int dist_scale_factor_field[2][32]; - int map_col_to_list0[2][16 + 32]; - int map_col_to_list0_field[2][2][16 + 32]; - - /** - * num_ref_idx_l0/1_active_minus1 + 1 - */ - unsigned int ref_count[2]; ///< counts frames or fields, depending on current mb mode - unsigned int list_count; - H264Ref ref_list[2][48]; /**< 0..15: frame refs, 16..47: mbaff field refs. 
- * Reordered version of default_ref_list - * according to picture reordering in slice header */ - struct { - uint8_t op; - uint32_t val; - } ref_modifications[2][32]; - int nb_ref_modifications[2]; - - unsigned int pps_id; - - const uint8_t *intra_pcm_ptr; - - uint8_t *bipred_scratchpad; - uint8_t *edge_emu_buffer; - uint8_t (*top_borders[2])[(16 * 3) * 2]; - int bipred_scratchpad_allocated; - int edge_emu_buffer_allocated; - int top_borders_allocated[2]; - - /** - * non zero coeff count cache. - * is 64 if not available. - */ - DECLARE_ALIGNED(8, uint8_t, non_zero_count_cache)[15 * 8]; - - /** - * Motion vector cache. - */ - DECLARE_ALIGNED(16, int16_t, mv_cache)[2][5 * 8][2]; - DECLARE_ALIGNED(8, int8_t, ref_cache)[2][5 * 8]; - DECLARE_ALIGNED(16, uint8_t, mvd_cache)[2][5 * 8][2]; - uint8_t direct_cache[5 * 8]; - - DECLARE_ALIGNED(8, uint16_t, sub_mb_type)[4]; - - ///< as a DCT coefficient is int32_t in high depth, we need to reserve twice the space. - DECLARE_ALIGNED(16, int16_t, mb)[16 * 48 * 2]; - DECLARE_ALIGNED(16, int16_t, mb_luma_dc)[3][16 * 2]; - ///< as mb is addressed by scantable[i] and scantable is uint8_t we can either - ///< check that i is not too large or ensure that there is some unused stuff after mb - int16_t mb_padding[256 * 2]; - - uint8_t (*mvd_table[2])[2]; - - /** - * Cabac - */ - CABACContext cabac; - uint8_t cabac_state[1024]; - int cabac_init_idc; - - MMCO mmco[H264_MAX_MMCO_COUNT]; - int nb_mmco; - int explicit_ref_marking; - - int frame_num; - int idr_pic_id; - int poc_lsb; - int delta_poc_bottom; - int delta_poc[2]; - int curr_pic_num; - int max_pic_num; -} H264SliceContext; - -/** - * H264Context - */ -typedef struct H264Context { - const AVClass *class; - AVCodecContext *avctx; - VideoDSPContext vdsp; - H264DSPContext h264dsp; - H264ChromaContext h264chroma; - H264QpelContext h264qpel; - H274FilmGrainDatabase h274db; - - H264Picture DPB[H264_MAX_PICTURE_COUNT]; - H264Picture *cur_pic_ptr; - H264Picture cur_pic; - H264Picture last_pic_for_ec; - - H264SliceContext *slice_ctx; - int nb_slice_ctx; - int nb_slice_ctx_queued; - - H2645Packet pkt; - - int pixel_shift; ///< 0 for 8-bit H.264, 1 for high-bit-depth H.264 - - /* coded dimensions -- 16 * mb w/h */ - int width, height; - int chroma_x_shift, chroma_y_shift; - - int droppable; - int coded_picture_number; - - int context_initialized; - int flags; - int workaround_bugs; - int x264_build; - /* Set when slice threading is used and at least one slice uses deblocking - * mode 1 (i.e. across slice boundaries). Then we disable the loop filter - * during normal MB decoding and execute it serially at the end. - */ - int postpone_filter; - - /* - * Set to 1 when the current picture is IDR, 0 otherwise. - */ - int picture_idr; - - /* - * Set to 1 when the current picture contains only I slices, 0 otherwise. - */ - int picture_intra_only; - - int crop_left; - int crop_right; - int crop_top; - int crop_bottom; - - int8_t(*intra4x4_pred_mode); - H264PredContext hpc; - - uint8_t (*non_zero_count)[48]; - -#define LIST_NOT_USED -1 // FIXME rename? - - /** - * block_offset[ 0..23] for frame macroblocks - * block_offset[24..47] for field macroblocks - */ - int block_offset[2 * (16 * 3)]; - - uint32_t *mb2b_xy; // FIXME are these 4 a good idea? 
- uint32_t *mb2br_xy; - int b_stride; // FIXME use s->b4_stride - - uint16_t *slice_table; ///< slice_table_base + 2*mb_stride + 1 - - // interlacing specific flags - int mb_aff_frame; - int picture_structure; - int first_field; - - uint8_t *list_counts; ///< Array of list_count per MB specifying the slice type - - /* 0x100 -> non null luma_dc, 0x80/0x40 -> non null chroma_dc (cb/cr), 0x?0 -> chroma_cbp(0, 1, 2), 0x0? luma_cbp */ - uint16_t *cbp_table; - - /* chroma_pred_mode for i4x4 or i16x16, else 0 */ - uint8_t *chroma_pred_mode_table; - uint8_t (*mvd_table[2])[2]; - uint8_t *direct_table; - - uint8_t scan_padding[16]; - uint8_t zigzag_scan[16]; - uint8_t zigzag_scan8x8[64]; - uint8_t zigzag_scan8x8_cavlc[64]; - uint8_t field_scan[16]; - uint8_t field_scan8x8[64]; - uint8_t field_scan8x8_cavlc[64]; - uint8_t zigzag_scan_q0[16]; - uint8_t zigzag_scan8x8_q0[64]; - uint8_t zigzag_scan8x8_cavlc_q0[64]; - uint8_t field_scan_q0[16]; - uint8_t field_scan8x8_q0[64]; - uint8_t field_scan8x8_cavlc_q0[64]; - - int mb_y; - int mb_height, mb_width; - int mb_stride; - int mb_num; - - // ============================================================= - // Things below are not used in the MB or more inner code - - int nal_ref_idc; - int nal_unit_type; - - int has_slice; ///< slice NAL is found in the packet, set by decode_nal_units, its state does not need to be preserved outside h264_decode_frame() - - /** - * Used to parse AVC variant of H.264 - */ - int is_avc; ///< this flag is != 0 if codec is avc1 - int nal_length_size; ///< Number of bytes used for nal length (1, 2 or 4) - - int bit_depth_luma; ///< luma bit depth from sps to detect changes - int chroma_format_idc; ///< chroma format from sps to detect changes - - H264ParamSets ps; - - uint16_t *slice_table_base; - - H264POCContext poc; - - H264Ref default_ref[2]; - H264Picture *short_ref[32]; - H264Picture *long_ref[32]; - H264Picture *delayed_pic[H264_MAX_DPB_FRAMES + 2]; // FIXME size? - int last_pocs[H264_MAX_DPB_FRAMES]; - H264Picture *next_output_pic; - int next_outputed_poc; - int poc_offset; ///< PicOrderCnt_offset from SMPTE RDD-2006 - - /** - * memory management control operations buffer. - */ - MMCO mmco[H264_MAX_MMCO_COUNT]; - int nb_mmco; - int mmco_reset; - int explicit_ref_marking; - - int long_ref_count; ///< number of actual long term references - int short_ref_count; ///< number of actual short term references - - /** - * @name Members for slice based multithreading - * @{ - */ - /** - * current slice number, used to initialize slice_num of each thread/context - */ - int current_slice; - - /** @} */ - - /** - * Complement sei_pic_struct - * SEI_PIC_STRUCT_TOP_BOTTOM and SEI_PIC_STRUCT_BOTTOM_TOP indicate interlaced frames. - * However, soft telecined frames may have these values. - * This is used in an attempt to flag soft telecine progressive. - */ - int prev_interlaced_frame; - - /** - * Are the SEI recovery points looking valid. - */ - int valid_recovery_point; - - /** - * recovery_frame is the frame_num at which the next frame should - * be fully constructed. - * - * Set to -1 when not expecting a recovery point. - */ - int recovery_frame; - -/** - * We have seen an IDR, so all the following frames in coded order are correctly - * decodable. - */ -#define FRAME_RECOVERED_IDR (1 << 0) -/** - * Sufficient number of frames have been decoded since a SEI recovery point, - * so all the following frames in presentation order are correct. 
- */ -#define FRAME_RECOVERED_SEI (1 << 1) - - int frame_recovered; ///< Initial frame has been completely recovered - - int has_recovery_point; - - int missing_fields; - - /* for frame threading, this is set to 1 - * after finish_setup() has been called, so we cannot modify - * some context properties (which are supposed to stay constant between - * slices) anymore */ - int setup_finished; - - int cur_chroma_format_idc; - int cur_bit_depth_luma; - int16_t slice_row[MAX_SLICES]; ///< to detect when MAX_SLICES is too low - - /* original AVCodecContext dimensions, used to handle container - * cropping */ - int width_from_caller; - int height_from_caller; - - int enable_er; - ERContext er; - int16_t *dc_val_base; - - H264SEIContext sei; - - AVBufferPool *qscale_table_pool; - AVBufferPool *mb_type_pool; - AVBufferPool *motion_val_pool; - AVBufferPool *ref_index_pool; - int ref2frm[MAX_SLICES][2][64]; ///< reference to frame number lists, used in the loop filter, the first 2 are for -2,-1 -} H264Context; - -extern const uint16_t ff_h264_mb_sizes[4]; - -/** - * Reconstruct bitstream slice_type. - */ -int ff_h264_get_slice_type(const H264SliceContext *sl); - -/** - * Allocate tables. - * needs width/height - */ -int ff_h264_alloc_tables(H264Context *h); - -int ff_h264_decode_ref_pic_list_reordering(H264SliceContext *sl, void *logctx); -int ff_h264_build_ref_list(H264Context *h, H264SliceContext *sl); -void ff_h264_remove_all_refs(H264Context *h); - -/** - * Execute the reference picture marking (memory management control operations). - */ -int ff_h264_execute_ref_pic_marking(H264Context *h); - -int ff_h264_decode_ref_pic_marking(H264SliceContext *sl, GetBitContext *gb, - const H2645NAL *nal, void *logctx); - -void ff_h264_hl_decode_mb(const H264Context *h, H264SliceContext *sl); -void ff_h264_decode_init_vlc(void); - -/** - * Decode a macroblock - * @return 0 if OK, ER_AC_ERROR / ER_DC_ERROR / ER_MV_ERROR on error - */ -int ff_h264_decode_mb_cavlc(const H264Context *h, H264SliceContext *sl); - -/** - * Decode a CABAC coded macroblock - * @return 0 if OK, ER_AC_ERROR / ER_DC_ERROR / ER_MV_ERROR on error - */ -int ff_h264_decode_mb_cabac(const H264Context *h, H264SliceContext *sl); - -void ff_h264_init_cabac_states(const H264Context *h, H264SliceContext *sl); - -void ff_h264_direct_dist_scale_factor(const H264Context *const h, H264SliceContext *sl); -void ff_h264_direct_ref_list_init(const H264Context *const h, H264SliceContext *sl); -void ff_h264_pred_direct_motion(const H264Context *const h, H264SliceContext *sl, - int *mb_type); - -void ff_h264_filter_mb_fast(const H264Context *h, H264SliceContext *sl, int mb_x, int mb_y, - uint8_t *img_y, uint8_t *img_cb, uint8_t *img_cr, - unsigned int linesize, unsigned int uvlinesize); -void ff_h264_filter_mb(const H264Context *h, H264SliceContext *sl, int mb_x, int mb_y, - uint8_t *img_y, uint8_t *img_cb, uint8_t *img_cr, - unsigned int linesize, unsigned int uvlinesize); - -/* - * o-o o-o - * / / / - * o-o o-o - * ,---' - * o-o o-o - * / / / - * o-o o-o - */ - -/* Scan8 organization: - * 0 1 2 3 4 5 6 7 - * 0 DY y y y y y - * 1 y Y Y Y Y - * 2 y Y Y Y Y - * 3 y Y Y Y Y - * 4 y Y Y Y Y - * 5 DU u u u u u - * 6 u U U U U - * 7 u U U U U - * 8 u U U U U - * 9 u U U U U - * 10 DV v v v v v - * 11 v V V V V - * 12 v V V V V - * 13 v V V V V - * 14 v V V V V - * DY/DU/DV are for luma/chroma DC. - */ - -#define LUMA_DC_BLOCK_INDEX 48 -#define CHROMA_DC_BLOCK_INDEX 49 - -/** - * Get the chroma qp. 
- */ -static av_always_inline int get_chroma_qp(const PPS *pps, int t, int qscale) -{ - return pps->chroma_qp_table[t][qscale]; -} - -/** - * Get the predicted intra4x4 prediction mode. - */ -static av_always_inline int pred_intra_mode(const H264Context *h, - H264SliceContext *sl, int n) -{ - const int index8 = scan8[n]; - const int left = sl->intra4x4_pred_mode_cache[index8 - 1]; - const int top = sl->intra4x4_pred_mode_cache[index8 - 8]; - const int min = FFMIN(left, top); - - ff_tlog(h->avctx, "mode:%d %d min:%d\n", left, top, min); - - if (min < 0) - return DC_PRED; - else - return min; -} - -static av_always_inline void write_back_intra_pred_mode(const H264Context *h, - H264SliceContext *sl) -{ - int8_t *i4x4 = sl->intra4x4_pred_mode + h->mb2br_xy[sl->mb_xy]; - int8_t *i4x4_cache = sl->intra4x4_pred_mode_cache; - - AV_COPY32(i4x4, i4x4_cache + 4 + 8 * 4); - i4x4[4] = i4x4_cache[7 + 8 * 3]; - i4x4[5] = i4x4_cache[7 + 8 * 2]; - i4x4[6] = i4x4_cache[7 + 8 * 1]; -} - -static av_always_inline void write_back_non_zero_count(const H264Context *h, - H264SliceContext *sl) -{ - const int mb_xy = sl->mb_xy; - uint8_t *nnz = h->non_zero_count[mb_xy]; - uint8_t *nnz_cache = sl->non_zero_count_cache; - - AV_COPY32(&nnz[ 0], &nnz_cache[4 + 8 * 1]); - AV_COPY32(&nnz[ 4], &nnz_cache[4 + 8 * 2]); - AV_COPY32(&nnz[ 8], &nnz_cache[4 + 8 * 3]); - AV_COPY32(&nnz[12], &nnz_cache[4 + 8 * 4]); - AV_COPY32(&nnz[16], &nnz_cache[4 + 8 * 6]); - AV_COPY32(&nnz[20], &nnz_cache[4 + 8 * 7]); - AV_COPY32(&nnz[32], &nnz_cache[4 + 8 * 11]); - AV_COPY32(&nnz[36], &nnz_cache[4 + 8 * 12]); - - if (!h->chroma_y_shift) { - AV_COPY32(&nnz[24], &nnz_cache[4 + 8 * 8]); - AV_COPY32(&nnz[28], &nnz_cache[4 + 8 * 9]); - AV_COPY32(&nnz[40], &nnz_cache[4 + 8 * 13]); - AV_COPY32(&nnz[44], &nnz_cache[4 + 8 * 14]); - } -} - -static av_always_inline void write_back_motion_list(const H264Context *h, - H264SliceContext *sl, - int b_stride, - int b_xy, int b8_xy, - int mb_type, int list) -{ - int16_t(*mv_dst)[2] = &h->cur_pic.motion_val[list][b_xy]; - int16_t(*mv_src)[2] = &sl->mv_cache[list][scan8[0]]; - AV_COPY128(mv_dst + 0 * b_stride, mv_src + 8 * 0); - AV_COPY128(mv_dst + 1 * b_stride, mv_src + 8 * 1); - AV_COPY128(mv_dst + 2 * b_stride, mv_src + 8 * 2); - AV_COPY128(mv_dst + 3 * b_stride, mv_src + 8 * 3); - if (CABAC(h)) { - uint8_t (*mvd_dst)[2] = &sl->mvd_table[list][FMO ? 
8 * sl->mb_xy - : h->mb2br_xy[sl->mb_xy]]; - uint8_t(*mvd_src)[2] = &sl->mvd_cache[list][scan8[0]]; - if (IS_SKIP(mb_type)) { - AV_ZERO128(mvd_dst); - } else { - AV_COPY64(mvd_dst, mvd_src + 8 * 3); - AV_COPY16(mvd_dst + 3 + 3, mvd_src + 3 + 8 * 0); - AV_COPY16(mvd_dst + 3 + 2, mvd_src + 3 + 8 * 1); - AV_COPY16(mvd_dst + 3 + 1, mvd_src + 3 + 8 * 2); - } - } - - { - int8_t *ref_index = &h->cur_pic.ref_index[list][b8_xy]; - int8_t *ref_cache = sl->ref_cache[list]; - ref_index[0 + 0 * 2] = ref_cache[scan8[0]]; - ref_index[1 + 0 * 2] = ref_cache[scan8[4]]; - ref_index[0 + 1 * 2] = ref_cache[scan8[8]]; - ref_index[1 + 1 * 2] = ref_cache[scan8[12]]; - } -} - -static av_always_inline void write_back_motion(const H264Context *h, - H264SliceContext *sl, - int mb_type) -{ - const int b_stride = h->b_stride; - const int b_xy = 4 * sl->mb_x + 4 * sl->mb_y * h->b_stride; // try mb2b(8)_xy - const int b8_xy = 4 * sl->mb_xy; - - if (USES_LIST(mb_type, 0)) { - write_back_motion_list(h, sl, b_stride, b_xy, b8_xy, mb_type, 0); - } else { - fill_rectangle(&h->cur_pic.ref_index[0][b8_xy], - 2, 2, 2, (uint8_t)LIST_NOT_USED, 1); - } - if (USES_LIST(mb_type, 1)) - write_back_motion_list(h, sl, b_stride, b_xy, b8_xy, mb_type, 1); - - if (sl->slice_type_nos == AV_PICTURE_TYPE_B && CABAC(h)) { - if (IS_8X8(mb_type)) { - uint8_t *direct_table = &h->direct_table[4 * sl->mb_xy]; - direct_table[1] = sl->sub_mb_type[1] >> 1; - direct_table[2] = sl->sub_mb_type[2] >> 1; - direct_table[3] = sl->sub_mb_type[3] >> 1; - } - } -} - -static av_always_inline int get_dct8x8_allowed(const H264Context *h, H264SliceContext *sl) -{ - if (h->ps.sps->direct_8x8_inference_flag) - return !(AV_RN64A(sl->sub_mb_type) & - ((MB_TYPE_16x8 | MB_TYPE_8x16 | MB_TYPE_8x8) * - 0x0001000100010001ULL)); - else - return !(AV_RN64A(sl->sub_mb_type) & - ((MB_TYPE_16x8 | MB_TYPE_8x16 | MB_TYPE_8x8 | MB_TYPE_DIRECT2) * - 0x0001000100010001ULL)); -} - -int ff_h264_field_end(H264Context *h, H264SliceContext *sl, int in_setup); - -int ff_h264_ref_picture(H264Context *h, H264Picture *dst, H264Picture *src); -int ff_h264_replace_picture(H264Context *h, H264Picture *dst, const H264Picture *src); -void ff_h264_unref_picture(H264Context *h, H264Picture *pic); - -void ff_h264_slice_context_init(H264Context *h, H264SliceContext *sl); - -void ff_h264_draw_horiz_band(const H264Context *h, H264SliceContext *sl, int y, int height); - -/** - * Submit a slice for decoding. - * - * Parse the slice header, starting a new field/frame if necessary. If any - * slices are queued for the previous field, they are decoded. - */ -int ff_h264_queue_decode_slice(H264Context *h, const H2645NAL *nal); -int ff_h264_execute_decode_slices(H264Context *h); -int ff_h264_update_thread_context(AVCodecContext *dst, - const AVCodecContext *src); -int ff_h264_update_thread_context_for_user(AVCodecContext *dst, - const AVCodecContext *src); - -void ff_h264_flush_change(H264Context *h); - -void ff_h264_free_tables(H264Context *h); - -void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); - -#endif /* AVCODEC_H264DEC_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cooking Master Adventure MOD APK Enjoy Unlimited Features and Fun.md b/spaces/congsaPfin/Manga-OCR/logs/Cooking Master Adventure MOD APK Enjoy Unlimited Features and Fun.md deleted file mode 100644 index f27fbc1e6bf55d21bd3a598a4fe0cbfd14e6f96e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Cooking Master Adventure MOD APK Enjoy Unlimited Features and Fun.md +++ /dev/null @@ -1,97 +0,0 @@ -
      -

      Cooking Master Adventure Game Mod Apk: A Fun and Addictive Cooking Simulation Game

      -

      Do you love cooking games? Do you want to become a master chef and run your own restaurant empire? Do you enjoy time management games and kitchen challenges? If you answered yes to any of these questions, then you should try Cooking Master Adventure Game, a fun and addictive cooking simulation game that will test your culinary skills and restaurant management abilities. In this game, you can cook hundreds of delicious dishes from around the world, open dozens of restaurants and cafes in different locations, complete daily quests and weekly challenges, and interact with customers and characters in an exciting story. But what if you want to enjoy the game without any limitations or restrictions? What if you want to unlock all the features and modes of the game without spending any money or waiting for long hours? That's where Cooking Master Adventure Game Mod Apk comes in handy. In this article, we will tell you everything you need to know about this modified version of the original game, including what it is, how to download and install it, how to play it, how to review it, and some frequently asked questions.

      -

      cooking master adventure game mod apk


Download File: https://urlca.com/2uO8Sz



      -

      What is Cooking Master Adventure Game?

      -

      Cooking Master Adventure Game is a simulation game developed by Mini Stone Games - Chef & Restaurant Cooking Games. It was released in February 2022 and has been downloaded over 1 million times on Google Play Store. It has a rating of 4.6 out of 5 stars, based on more than 9,000 reviews. It is also available on App Store for iOS devices. The game has three main aspects:

      -

      A cooking game with realistic graphics and gameplay

      -

      In this game, you can cook hundreds of delicious food recipes from around the world, such as burgers, pizzas, sushi, pasta, cakes, ice cream, and more. You can use realistic ingredients, utensils, appliances, and decorations to prepare your dishes. You can also customize your kitchen and restaurant with various themes and styles. The game has realistic HD graphics that will make you feel like you are in a real kitchen.

      -

      A time management game with hundreds of recipes and restaurants

      -

      In this game, you have to serve your customers as fast as possible before they get angry and leave. You have to manage your time wisely and use your cooking skills efficiently. You have to deal with different types of customers, such as kids, adults, celebrities, critics, etc. You have to complete various tasks and orders in each level. You can also upgrade your ingredients, utensils, appliances, and decorations to improve your performance. The game has hundreds of levels across dozens of restaurants and cafes in different locations, such as New York, Paris, Tokyo, London, etc.

      -

      A restaurant management game with daily quests and weekly challenges

      -

      In this game, you have to grow your restaurant empire from start-up to success. You have to earn money from your customers and use it to expand your business and open new restaurants and cafes. You have to hire and train your staff, such as chefs, waiters, managers, etc. You have to complete daily quests and weekly challenges to earn extra rewards and bonuses. You have to interact with various characters and customers in an engaging story. The game has a lot of fun and exciting features and modes, such as VIP mode, Chef Club, Cooking Academy, etc.

      -

      cooking master adventure mod apk unlimited money
      -cooking master adventure mod apk download for android
      -cooking master adventure mod apk latest version
      -cooking master adventure mod apk free shopping
      -cooking master adventure mod apk offline
      -cooking master adventure game hack mod apk
      -cooking master adventure game cheats mod apk
      -cooking master adventure game premium mod apk
      -cooking master adventure game unlocked mod apk
      -cooking master adventure game full mod apk
      -cooking master adventure simulation game mod apk
      -cooking master adventure fun game mod apk
      -cooking master adventure casual game mod apk
      -cooking master adventure addictive game mod apk
      -cooking master adventure best game mod apk
      -cooking master adventure 3d game mod apk
      -cooking master adventure hd game mod apk
      -cooking master adventure realistic game mod apk
      -cooking master adventure new game mod apk
      -cooking master adventure update game mod apk
      -cooking master adventure online game mod apk
      -cooking master adventure offline game mod apk
      -cooking master adventure multiplayer game mod apk
      -cooking master adventure single player game mod apk
      -cooking master adventure story mode game mod apk
      -cooking master adventure free game mod apk
      -cooking master adventure paid game mod apk
      -cooking master adventure pro game mod apk
      -cooking master adventure vip game mod apk
      -cooking master adventure mega game mod apk
      -cooking master adventure super game mod apk
      -cooking master adventure ultimate game mod apk
      -cooking master adventure deluxe game mod apk
      -cooking master adventure extreme game mod apk
      -cooking master adventure awesome game mod apk
      -cooking master adventure amazing game mod apk
      -cooking master adventure fantastic game mod apk
      -cooking master adventure incredible game mod apk
      -cooking master adventure wonderful game mod apk
      -cooking master adventure fabulous game mod apk

      -

      What is Cooking Master Adventure Game Mod Apk?

      -

      Cooking Master Adventure Game Mod Apk is a modified version of the original game that allows you to enjoy the game without any limitations or restrictions. It is a file that you can download and install on your Android device to access all the features and modes of the game for free. Some of the benefits of using the mod apk are:

      -

      A modified version of the original game with unlocked features

      -

      With the mod apk, you can unlock all the recipes, ingredients, utensils, appliances, decorations, themes, styles, restaurants, cafes, locations, levels, tasks, orders, customers, characters, features, and modes of the game. You can also unlock all the premium items and resources of the game, such as coins, gems, stars, tickets, etc. You can use them to upgrade your kitchen and restaurant, expand your business, hire and train your staff, complete quests and challenges, etc.

      -

      The benefits of using the mod apk

      -

      With the mod apk, you can enjoy the game without any limitations or restrictions. You can play the game at your own pace and style. You can experiment with different recipes and cuisines. You can customize your kitchen and restaurant according to your preferences. You can explore different restaurants and cafes in different locations. You can complete quests and challenges without any pressure or difficulty. You can interact with characters and customers without any interruption or annoyance. You can have fun and relax with the game without any stress or boredom.

      -

      The risks of using the mod apk

      -

      However, using the mod apk also comes with some risks and drawbacks. Some of them are:

- The mod apk may not be compatible with your device or the latest version of the game.
- The mod apk may contain viruses or malware that may harm your device or data.
- The mod apk may cause errors or glitches in the game or your device.
- The mod apk may violate the terms and conditions of the game or Google Play Store.
- The mod apk may get detected by the game developers or Google Play Store and result in a ban or suspension of your account.

      How to Download and Install Cooking Master Adventure Game Mod Apk?

      -

      If you want to download and install Cooking Master Adventure Game Mod Apk on your Android device, you have to follow these steps:

      -

      The steps to download and install the mod apk

- First, you have to find a reliable and trustworthy website that provides the link to download the mod apk file. You can search for it on Google or any other search engine.
- Next, you have to click on the download link and wait for the file to be downloaded on your device. The file size may vary depending on the website and the version of the mod apk.
- Then, you have to go to your device settings and enable the option to install apps from unknown sources. This will allow you to install the mod apk file on your device.
- After that, you have to locate the downloaded file on your device storage and tap on it to start the installation process. You may have to grant some permissions and accept some terms and conditions before installing the mod apk.
- Finally, you have to wait for the installation process to be completed and then launch the game from your app drawer or home screen.

      The precautions to take before installing the mod apk

- Before installing the mod apk, you should make sure that your device has enough storage space and battery life to run the game smoothly.
- Before installing the mod apk, you should also back up your data and progress from the original game or your Google account. This will help you to restore your data and progress in case something goes wrong with the mod apk or you want to switch back to the original game.
- Before installing the mod apk, you should also scan the file with antivirus or anti-malware software to make sure that it is safe and clean. This will prevent any potential harm to your device or data.

      The alternatives to the mod apk

If you don't want to use the mod apk or you can't find a reliable and trustworthy website to download it, you can also try some alternatives to enjoy the game without any limitations or restrictions. Some of them are:
- You can use a game hacker app or tool, such as Lucky Patcher, Game Guardian, SB Game Hacker, etc. These apps or tools can help you to modify the game data and resources, such as coins, gems, stars, tickets, etc. You can use them to unlock all the features and modes of the game for free. However, these apps or tools may also have some risks and drawbacks, such as compatibility issues, errors or glitches, viruses or malware, ban or suspension, etc.
- You can use a game emulator app or software, such as Bluestacks, Nox Player, LD Player, etc. These apps or software can help you to run the game on your PC or laptop. You can use them to enjoy the game on a bigger screen and with better graphics and performance. You can also use keyboard and mouse controls to play the game more easily and comfortably. However, these apps or software may also have some risks and drawbacks, such as storage space and battery life consumption, installation and configuration difficulties, compatibility issues, errors or glitches, etc.
- You can use a game cheat code or command, such as /give, /unlock, /complete, etc. These cheat codes or commands can help you to access all the features and modes of the game for free. You can use them to unlock all the recipes, ingredients, utensils, appliances, decorations, themes, styles, restaurants, cafes, locations, levels, tasks, orders, customers, characters, features, and modes of the game. You can also get unlimited coins, gems, stars, tickets, etc. You can use them to upgrade your kitchen and restaurant, expand your business, hire and train your staff, complete quests and challenges, etc. However, these cheat codes or commands may also have some risks and drawbacks, such as compatibility issues, errors or glitches, ban or suspension, etc.

      How to Play Cooking Master Adventure Game Mod Apk?

      -

      If you have successfully downloaded and installed Cooking Master Adventure Game Mod Apk on your Android device, you can start playing the game and enjoy all the features and modes of the game for free. Here are some tips on how to play the game:

      -

      The basic gameplay and controls of the game

      -

      The basic gameplay and controls of the game are similar to the original game. You have to tap on the screen to perform various actions, such as selecting ingredients, cooking food, serving customers, collecting money, etc. You have to follow the instructions and recipes on the screen to prepare your dishes. You have to serve your customers as fast as possible before they get angry and leave. You have to complete various tasks and orders in each level to earn coins, gems, stars, tickets, etc. You have to use them to upgrade your kitchen and restaurant, expand your business, hire and train your staff, complete quests and challenges, etc.

      -

      The tips and tricks to master the game

      -

      Here are some tips and tricks to master the game and become a cooking master:

- Plan ahead and prepare your ingredients and utensils in advance. This will help you to save time and avoid mistakes.
- Upgrade your ingredients and utensils as soon as possible. This will help you to improve the quality and quantity of your food.
- Upgrade your appliances and decorations as soon as possible. This will help you to improve your speed and efficiency of cooking.
- Upgrade your staff as soon as possible. This will help you to improve your service and customer satisfaction.
- Complete quests and challenges as soon as possible. This will help you to earn extra rewards and bonuses.
- Use boosters and power-ups wisely. They can help you to overcome difficult situations or achieve higher scores.
- Watch ads or videos occasionally. They can help you to get free coins, gems, stars, tickets, etc.
- Connect with Facebook or Google Play Games. They can help you to save your progress online and sync it across different devices. They can also help you to invite your friends and share your achievements.

      The features and modes of the game

      -

      Here are some of the features and modes of the game that you can enjoy with the mod apk:

- VIP mode: This mode allows you to access exclusive restaurants and cafes with higher profits and rewards. You can also enjoy special benefits such as free boosters, power-ups, coins, gems, stars, tickets, etc.
- Chef Club: This mode allows you to join or create a club with other players from around the world. You can chat with them, exchange gifts with them, compete with them, and help them in various events and activities.
- Cooking Academy: This mode allows you to learn new recipes and cuisines from different countries and cultures. You can also test your knowledge and skills in various quizzes and exams.
- Table: This is a feature that allows you to create and customize your own table with various items and decorations. You can also invite your friends and other players to join your table and chat with them, exchange gifts with them, play mini-games with them, etc.

      How to Review Cooking Master Adventure Game Mod Apk?

      -

      If you have played Cooking Master Adventure Game Mod Apk and enjoyed it, you may want to share your opinion and feedback with other players and the game developers. Here are some tips on how to review the game:

      -

      The pros and cons of the game

      -

      Here are some of the pros and cons of the game that you can mention in your review:

Pros:
- The game has realistic HD graphics and sound effects that make you feel like you are in a real kitchen and restaurant.
- The game has hundreds of recipes, ingredients, utensils, appliances, decorations, themes, styles, restaurants, cafes, locations, levels, tasks, orders, customers, characters, features, and modes that make the game fun and diverse.
- The game has simple and intuitive gameplay and controls that make it easy and comfortable to play.
- The game has a lot of challenges and rewards that make it exciting and rewarding.
- The game has a lot of social features that make it interactive and engaging.

Cons:
- The game may have some errors or glitches that may affect the game performance or experience.
- The game may have some ads or videos that may interrupt or annoy the gameplay or experience.
- The game may have some in-app purchases or subscriptions that may require real money or personal information.
- The game may have some compatibility issues with some devices or versions of the game.
- The game may have some security issues with the mod apk that may harm your device or data.

      The ratings and feedback from other players

      -

      Here are some of the ratings and feedback from other players that you can refer to or compare with in your review:

      - - "This is one of the best cooking games I have ever played. It has so many recipes and cuisines to choose from. It has so many restaurants and cafes to open. It has so many quests and challenges to complete. It has so many features and modes to enjoy. It is so realistic and addictive. I love it." (5 stars) - "This is a good cooking game but it has some problems. It has some errors and glitches that make the game crash or freeze. It has some ads and videos that make the game slow or laggy. It has some in-app purchases or subscriptions that make the game expensive or unfair. It has some compatibility issues with some devices or versions of the game. It needs some improvement and updates." (3 stars) - "This is a bad cooking game and I don't recommend it. It has so many errors and glitches that make the game unplayable or frustrating. It has so many ads and videos that make the game annoying or boring. It has so many in-app purchases or subscriptions that make the game a scam or a rip-off. It has so many security issues with the mod apk that make the game dangerous or illegal. It is a waste of time and money." (1 star)

      The suggestions for improvement and updates

      -

      Here are some of the suggestions for improvement and updates that you can provide in your review:

- The game developers should fix the errors and glitches that affect the game performance or experience.
- The game developers should reduce the ads or videos that interrupt or annoy the gameplay or experience.
- The game developers should balance the in-app purchases or subscriptions that require real money or personal information.
- The game developers should improve compatibility with different devices and versions of the game.
- The game developers should address the security issues with the mod apk that may harm your device or data.
- The game developers should add more recipes, ingredients, utensils, appliances, decorations, themes, styles, restaurants, cafes, locations, levels, tasks, orders, customers, characters, features, and modes to the game.
- The game developers should update the graphics and sound effects to make them more realistic and immersive.
- The game developers should improve the gameplay and controls to make them simpler and more intuitive.

      Conclusion

      -

      Cooking Master Adventure Game Mod Apk is a fun and addictive cooking simulation game that will test your culinary skills and restaurant management abilities. You can cook hundreds of delicious dishes from around the world, open dozens of restaurants and cafes in different locations, complete daily quests and weekly challenges, and interact with customers and characters in an exciting story. You can also enjoy all the features and modes of the game for free with the mod apk. However, you should also be aware of the risks and drawbacks of using the mod apk, such as compatibility issues, errors or glitches, viruses or malware, ban or suspension, etc. You should also follow the steps and precautions to download and install the mod apk safely and securely on your device. You should also review the game honestly and constructively to share your opinion and feedback with other players and the game developers.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Cooking Master Adventure Game Mod Apk:

      -

      Q: Is Cooking Master Adventure Game Mod Apk safe to use?

      -

      A: Cooking Master Adventure Game Mod Apk is not an official version of the original game. It is a modified version that may contain viruses or malware that may harm your device or data. It may also violate the terms and conditions of the game or Google Play Store. It may also get detected by the game developers or Google Play Store and result in a ban or suspension of your account. Therefore, you should use the mod apk at your own risk and discretion.

      -

      Q: How can I update Cooking Master Adventure Game Mod Apk?

      -

A: Cooking Master Adventure Game Mod Apk may not be compatible with the latest version of the original game. It may also stop working or become outdated after some time. Therefore, you should check for updates regularly and download and install the latest version of the mod apk from a reliable and trustworthy website. You should also back up your data and progress before updating the mod apk to avoid any loss or damage.

      -

      Q: How can I uninstall Cooking Master Adventure Game Mod Apk?

      -

      A: If you want to uninstall Cooking Master Adventure Game Mod Apk from your device, you can follow these steps:

- Go to your device settings and select Apps or Applications.
- Find and select Cooking Master Adventure Game Mod Apk from the list of apps.
- Tap on Uninstall and confirm your action.
- Wait for the uninstallation process to be completed and then restart your device.

      Q: Can I play Cooking Master Adventure Game Mod Apk online or offline?

      -

A: Cooking Master Adventure Game Mod Apk can be played both online and offline. However, some features and modes of the game may require an internet connection to work properly. For example, you may need an internet connection to access the VIP mode, Chef Club, Cooking Academy, and Table features, to save your progress online and sync it across different devices, and to invite your friends and share your achievements.

      -

      Q: Can I play Cooking Master Adventure Game Mod Apk with my friends or other players?

      -

      A: Yes, you can play Cooking Master Adventure Game Mod Apk with your friends or other players from around the world. You can connect with Facebook or Google Play Games to invite your friends and join their tables. You can also chat with them, exchange gifts with them, compete with them, and help them in various events and activities. You can also join or create a club with other players in the Chef Club mode. You can also interact with various characters and customers in the game story.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Download The Most Powerful Download Manager and Accelerator.md b/spaces/congsaPfin/Manga-OCR/logs/Download Download The Most Powerful Download Manager and Accelerator.md deleted file mode 100644 index 6e319d7a9a719eba6631989dc84e7de5707c809d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Download The Most Powerful Download Manager and Accelerator.md +++ /dev/null @@ -1,96 +0,0 @@ -
      -

      Download Download: How to Find and Use the Best Free Download Managers

      -

      Downloading files from the internet is a common and essential task for most users. Whether you want to watch a movie, listen to music, read a document, or install a program, you need to download it first.

      -

      However, downloading files can also be frustrating and time-consuming if you encounter slow speeds, broken links, interrupted connections, or corrupted files. That's why you need a download manager to help you optimize your downloads and make them faster, easier, and more secure.

      -

      download download


      DOWNLOAD ✓✓✓ https://urlca.com/2uOfq6



      -

      In this article, we will explain what a download manager is, why you need one, how to choose the best one for your needs, and what are some of the best free download managers of 2023.

      -

      What is a download manager and why do you need one?

      -

      A download manager is a software tool that helps you download files faster, easier, and more securely.

      -

      A download manager is a software tool that acts as an intermediary between your browser and the server that hosts the file you want to download. It takes over the download process from your browser and manages it in a more efficient and effective way.

      -

      A download manager can split the file into several parts and download them simultaneously using multiple connections. It can also resume the download from where it left off in case of a connection loss or a system shutdown. It can also check the integrity of the file and scan it for viruses or malware before saving it to your device.

      -

      Some of the benefits of using a download manager are:

      -

      Faster download speeds by using multiple connections and accelerating algorithms.

      -

      A download manager can increase your download speeds by using multiple connections to the same server or different servers. This way, it can utilize your bandwidth more efficiently and avoid bottlenecks or congestion. A download manager can also use various algorithms to accelerate the download process by compressing the data, caching the files, or pre-fetching the links.
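To make the multi-connection idea concrete, here is a minimal Python sketch of a segmented download built on HTTP Range requests. It is only an illustration of the technique, not any particular product's code: the URL and file name are placeholders, and it assumes the server reports a Content-Length and honors range requests. A real download manager would add retries, error handling, and bandwidth controls on top of this.

```python
import concurrent.futures

import requests


def download_segment(url, start, end, index):
    # Ask the server for only the byte range [start, end] of the file.
    response = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=30)
    response.raise_for_status()
    return index, response.content


def segmented_download(url, output_path, connections=4):
    # Work out how big the file is, then split it into equal byte ranges.
    total_size = int(requests.head(url, timeout=30).headers["Content-Length"])
    chunk = total_size // connections
    ranges = [(i * chunk,
               total_size - 1 if i == connections - 1 else (i + 1) * chunk - 1)
              for i in range(connections)]

    # Download every range in parallel, then write the parts back in order.
    with concurrent.futures.ThreadPoolExecutor(max_workers=connections) as pool:
        futures = [pool.submit(download_segment, url, start, end, i)
                   for i, (start, end) in enumerate(ranges)]
        parts = sorted(future.result() for future in futures)

    with open(output_path, "wb") as out:
        for _, data in parts:
            out.write(data)


segmented_download("https://example.com/big-file.zip", "big-file.zip")  # placeholder URL
```

Each segment arrives over its own connection, which is how a download manager keeps your bandwidth busy instead of waiting on a single stream.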

      -

      Easier download management by organizing, prioritizing, and resuming downloads.

      -

      A download manager can make your downloads more organized by categorizing them according to their type, size, source, or date. You can also create custom folders or labels to sort your downloads as you wish. A download manager can also let you prioritize your downloads according to their importance or urgency. You can also pause or resume your downloads at any time without losing any progress.
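As a rough sketch of how pause and resume work under the hood: if part of the file is already on disk, the client asks the server for only the remaining bytes. Again, the URL and file name are placeholders, and this assumes the server supports range requests (it replies with status 206 when it does; a plain 200 means it started over from the beginning).

```python
import os

import requests


def resume_download(url, output_path):
    # Start from however many bytes are already saved on disk.
    downloaded = os.path.getsize(output_path) if os.path.exists(output_path) else 0
    headers = {"Range": f"bytes={downloaded}-"} if downloaded else {}

    with requests.get(url, headers=headers, stream=True, timeout=30) as response:
        response.raise_for_status()
        # 206 Partial Content means the server resumed; 200 means it restarted.
        mode = "ab" if response.status_code == 206 else "wb"
        with open(output_path, mode) as out:
            for chunk in response.iter_content(chunk_size=64 * 1024):
                out.write(chunk)


resume_download("https://example.com/big-file.zip", "big-file.zip")  # placeholder URL
```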

      -

      download download manager
      -download download accelerator plus
      -download download ninja
      -download download internet manager
      -download download cnet
      -download download software
      -download download games
      -download download videos
      -download download music
      -download download movies
      -download download youtube
      -download download netflix
      -download download chrome
      -download download firefox
      -download download opera
      -download download edge
      -download download windows 10
      -download download ubuntu
      -download download mac os
      -download download android
      -download download ios
      -download download apk
      -download download pdf
      -download download word
      -download download excel
      -download download powerpoint
      -download download winrar
      -download download zip
      -download download rar
      -download download 7zip
      -download download vlc
      -download download media player
      -download download adobe reader
      -download downloa

      -

      More secure downloads by checking for viruses, malware, and corrupted files.

      A download manager can protect your downloads from viruses, malware, and other threats by integrating with your antivirus software and scanning the files before saving them to your device. A download manager can also verify the integrity of the files by comparing their checksums or hashes with the original ones. This way, it can detect and repair any corrupted or damaged files.
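      For example, verifying a download against a published SHA-256 checksum only takes a few lines of Python; the expected hash below is a placeholder for the value the publisher would list next to the download link.

```python
# Sketch of verifying a downloaded file against a published SHA-256 checksum.
# The expected value is a placeholder.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "put-the-published-checksum-here"
if sha256_of("big-file.zip") != expected:
    print("Checksum mismatch: the file may be corrupted or tampered with.")
```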

      -

      More versatile downloads by supporting various file types, formats, and protocols.

      -

      A download manager can handle various types of files, such as documents, images, videos, audio, archives, or torrents. It can also support various formats, such as ZIP, RAR, MP3, MP4, or PDF. A download manager can also support various protocols, such as HTTP, HTTPS, FTP, SFTP, or BitTorrent. This way, it can download any file from any source without any hassle.

      -

      How to choose the best free download manager for your needs?

      -

      There are many free download managers available online, but not all of them are created equal.

      -

      Some free download managers may have limited features, poor performance, annoying ads, or hidden malware. Some free download managers may also have compatibility issues with your operating system or browser. Some free download managers may also have privacy and security risks that expose your personal data or online activity.

      -

      Therefore, you need to be careful and selective when choosing a free download manager for your needs. You need to do some research and comparison before downloading and installing any free download manager.

      -

      Some of the factors you should consider when choosing a free download manager are:

      -

      Compatibility with your operating system and browser.

      -

      You need to make sure that the free download manager you choose is compatible with your operating system and browser. You need to check the system requirements and the supported browsers of the free download manager before downloading it. You also need to check if the free download manager has any extensions or plugins that you can install on your browser to integrate it with the free download manager.

      -

      Features and functionality that suit your preferences and requirements.

      -

      You need to make sure that the free download manager you choose has the features and functionality that you need and want. You need to check what kind of files, formats, and protocols the free download manager can support. You also need to check what kind of options and settings the free download manager has for speed, management, security, and versatility. You also need to check if the free download manager has any extra features that you may find useful or interesting, such as media file previews, file conversion, video grabber, or media streaming.

      -

      User interface and ease of use that match your skill level and style.

      -

      You need to make sure that the free download manager you choose has a user interface and ease of use that match your skill level and style. Check how easy it is to use, how intuitive it is to navigate, and how pleasant it is to look at.

      -

      Privacy and security that protect your personal data and online activity.

      -

      You need to make sure that the free download manager you choose protects your personal data and online activity. Check whether it has a privacy policy or terms of service that explain how it collects, uses, shares, or protects your data; whether it uses encryption or authentication to secure your data; and whether its reputation and reviews suggest it is trustworthy and reliable.

      -

      What are some of the best free download managers of 2023?

      -

      Based on our research and testing, we have selected the following free download managers as the best ones of 2023:

      -

      Download Accelerator Plus: An excellent free version of a premium download manager that offers media file previews, impressive speed, and file conversion.

      -

      Download Accelerator Plus (DAP) is one of the most popular and powerful free download managers available online. It has a sleek and modern user interface that is easy to use and customize. It supports all kinds of files, formats, and protocols. It can accelerate your downloads by up to 400% by using multiple connections and smart algorithms. It can also resume your downloads from any point in case of a connection loss or a system shutdown.

      -

      DAP also has some unique features that make it stand out from other free download managers. It can preview media files before downloading them so you can see what you are getting before saving it to your device. It can also convert media files into different formats so you can play them on any device or platform.

      It can also grab videos from various online platforms such as YouTube, Facebook, or Vimeo and download them to your device. It can also stream media files while downloading them so you can watch or listen to them without waiting for the download to finish.

      -

      DAP is compatible with Windows and Mac operating systems and supports all major browsers such as Chrome, Firefox, Edge, or Safari. It also has a mobile version for Android and iOS devices. It has a privacy policy that states that it does not collect or share any personal data from its users. It also has a good reputation and positive reviews from its users and experts.

      -

      Ninja Download Manager: A powerful and well-designed free download manager that offers super fast downloads, media streaming, sequential file writing, and clipboard monitoring.

      -

      Ninja Download Manager (NDM) is another excellent free download manager that offers a lot of features and functionality. It has a simple and elegant user interface that is easy to use and customize. It supports all kinds of files, formats, and protocols. It can accelerate your downloads by using multiple connections and dynamic speed control. It can also resume your downloads from any point in case of a connection loss or a system shutdown.

      -

      NDM also has some unique features that make it stand out from other free download managers. It can stream media files while downloading them so you can watch or listen to them without waiting for the download to finish. It can also write the files to your device in sequential order so you can access them faster and easier. It can also monitor your clipboard for any download links and automatically add them to the download queue.

      -

      NDM is compatible with Windows operating systems and supports all major browsers such as Chrome, Firefox, Edge, or Opera. It also has a mobile version for Android devices. It has a privacy policy that states that it does not collect or share any personal data from its users. It also has a good reputation and positive reviews from its users and experts.

      -

      Internet Download Manager: A popular and reliable free download manager that offers intelligent dynamic file segmentation, resume capability, schedule feature, and video grabber.

      -

      Internet Download Manager (IDM) is one of the most popular and reliable free download managers available online. It has a classic and functional user interface that is easy to use and customize. It supports all kinds of files, formats, and protocols. It can accelerate your downloads by using multiple connections and intelligent dynamic file segmentation. It can also resume your downloads from any point in case of a connection loss or a system shutdown.

      -

      IDM also has some unique features that make it stand out from other free download managers. It can schedule your downloads for a specific time or date so you can plan ahead and save bandwidth. It can also grab videos from various online platforms such as YouTube, Facebook, or Vimeo and download them to your device. It can also integrate with your antivirus software and scan the files before saving them to your device.

      -

      IDM is compatible with Windows operating systems and supports all major browsers such as Chrome, Firefox, Edge, or Opera. It does not have a mobile version for Android or iOS devices. It has a privacy policy that states that it does not collect or share any personal data from its users. It also has a good reputation and positive reviews from its users and experts.

      -

      Conclusion

      -

      Downloading files from the internet is a common and essential task for most users. However, it can also be frustrating and time-consuming if you encounter slow speeds, broken links, interrupted connections, or corrupted files. That's why you need a download manager to help you optimize your downloads and make them faster, easier, and more secure.

      -

      In this article, we have explained what a download manager is, why you need one, how to choose the best one for your needs, and which free download managers are among the best of 2023. We hope that this article has been helpful and informative for you.

      -

      If you have any questions or comments about this article or about download managers in general, please feel free to leave them below. We would love to hear from you!

      -

      FAQs

      -

      What is the difference between a download manager and a browser's built-in downloader?

      -

      A browser's built-in downloader is a basic tool that allows you to download files from the internet using your browser. However, it has limited features, performance, security, and versatility compared to a download manager. A download manager is a software tool that offers more advanced features, performance, security, and versatility for downloading files from the internet using multiple connections, accelerating algorithms, resuming capability, file checking, file conversion, video grabbing, media streaming, etc.

      -

      Are free download managers safe to use?

      -

      Most free download managers are safe to use as long as they are downloaded from reputable sources, have good reviews, and have clear privacy policies. However, some free download managers may have hidden malware, annoying ads, or privacy risks that may harm your device or data. Therefore, you need to be careful and selective when choosing a free download manager for your needs. You need to do some research and comparison before downloading and installing any free download manager.

      -

      How can I uninstall a free download manager if I don't like it or need it anymore?

      -

      You can uninstall a free download manager from your device by following the same steps as you would uninstall any other software program. You need to go to your control panel or settings and find the list of installed programs. You need to select the free download manager you want to uninstall and click on the uninstall or remove button. You may also need to restart your device after uninstalling the free download manager.

      -

      Can I use more than one free download manager at the same time?

      -

      You can use more than one free download manager at the same time, but it is not recommended. Using more than one free download manager at the same time may cause conflicts, errors, or slowdowns in your downloads. It may also consume more resources and bandwidth from your device and network. Therefore, it is better to use only one free download manager at a time that suits your needs and preferences.

      -

      How can I update my free download manager to the latest version?

      -

      You can update your free download manager to the latest version by checking for updates regularly on the official website of the free download manager or on the software itself. You can also enable automatic updates if the free download manager has that option. Updating your free download manager to the latest version can help you enjoy new features, fix bugs, and improve performance and security.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark World Cracked APK - Everything You Need to Know.md b/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark World Cracked APK - Everything You Need to Know.md deleted file mode 100644 index 62c6e5217e4342ec98a7928ff3441a7da4f929d5..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark World Cracked APK - Everything You Need to Know.md +++ /dev/null @@ -1,152 +0,0 @@ - -

      Hungry Shark World Cracked APK Download: Is It Worth the Risk?

      -

      If you are a fan of shark games, you might have heard of Hungry Shark World, a popular mobile game that lets you control a hungry shark and eat everything in your way. But what if you want to enjoy the game without paying for it or dealing with ads? You might be tempted to download a cracked APK file of the game, which is a modified version that bypasses the original app's security and licensing. But is it safe and legal to do so? In this article, we will explain what Hungry Shark World is, what a cracked APK file is, how to download and install it, and what are the risks and benefits of using it.

      -




      -

      What is Hungry Shark World?

      -

      Hungry Shark World is a 3D action-adventure game developed by Ubisoft Entertainment and released in 2016. It is the sequel to Hungry Shark Evolution, and it features more than 40 species of sharks, 8 different worlds, hundreds of prey and enemies, and various missions and challenges. The game is available for Android and iOS devices, and it has been downloaded over 100 million times on Google Play Store. The game is free to play, but it contains in-app purchases and ads that can enhance or interrupt your gameplay.

      -

      Features of the game

      -

      Some of the main features of Hungry Shark World are:

      -
        -
      • You can choose from a range of sharks in 8 different size tiers, from small fish to giant predators like the Great White Shark.
      • -
      • You can explore the lush Pacific Islands, frozen Arctic Ocean, exotic Arabian Sea, and now the South China Sea, a vibrant urban destination full of fresh, unwary victims.
      • -
      • You can feast on everything from bite-size fish and birds to tasty whales and unwitting humans, but watch out for whales, submarines, and other dangers.
      • -
      • You can customize your sharks with unique skins, gadgets, accessories, and pets that can boost your stats and abilities.
      • -
      • You can take on more than 20 types of missions, including high score challenges, prey hunts, and epic boss fights.
      • -
      • You can compete with other players on the leaderboards and achievements, and sync your progress across your devices with Facebook.
      • -
      -

      How to download and install the game

      -

      To download and install Hungry Shark World on your Android device, you need to follow these steps:

      -
        -
      1. Go to Google Play Store and search for Hungry Shark World.
      2. Tap on the Install button and wait for the download to finish.
      3. Once the download is complete, tap on the Open button to launch the game.
      4. Enjoy playing Hungry Shark World!
      -

      What is a cracked APK file?

      -

      An APK file is an Android Package file that contains all the files and data needed to run an app on an Android device. A cracked APK file is a modified version of an original APK file that has been altered to bypass the app's security and licensing. This means that you can use a cracked APK file to access premium features or remove ads from an app without paying for it or getting permission from the developer. However, this also means that you are violating the app's terms of service and intellectual property rights, which could result in legal consequences or malware infections.
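      As a side note, an APK is a ZIP-based package, so you can peek at what it contains with standard tools. The short Python sketch below lists the first few entries of an APK; the filename is a placeholder.

```python
# An APK is a ZIP-based archive, so the standard zipfile module can list its
# contents (AndroidManifest.xml, classes.dex, resources, and so on).
# The filename is a placeholder.
import zipfile

with zipfile.ZipFile("app.apk") as apk:
    for name in apk.namelist()[:10]:
        print(name)
```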

      -

      Advantages of using cracked APK files

      -

      Some of the advantages of using cracked APK files are:

      -
        -
      • You can save money by getting paid apps or features for free.
      • You can enjoy the game without being interrupted by ads or pop-ups. -
      • You can unlock all the sharks, skins, gadgets, and pets in Hungry Shark World without spending any coins or gems.
      • -
      • You can access the game even if it is not available in your region or device.
      • -
      -

      Disadvantages and risks of using cracked APK files

      -

      Some of the disadvantages and risks of using cracked APK files are:

      -

      -
        -
      • You can get into legal trouble for violating the app's terms of service and intellectual property rights. The developer can sue you for damages or report you to the authorities for piracy.
      • -
      • You can expose your device and data to malware, viruses, spyware, or ransomware that can harm your device, steal your information, or extort you for money.
      • -
      • You can compromise the quality and performance of the game, as the cracked APK file may not be compatible with your device, updated with the latest features, or free from bugs and errors.
      • -
      • You can lose your progress and achievements in the game, as the cracked APK file may not sync with your Facebook account or Google Play Games account.
      • -
      • You can miss out on the fun and satisfaction of playing the game legitimately, as you will not face any challenges, rewards, or surprises in the game.
      • -
      -

      How to download and install Hungry Shark World cracked APK file

      -

      If you still want to download and install Hungry Shark World cracked APK file on your Android device, you need to follow these steps:

      -

      Sources of Hungry Shark World cracked APK file

      -

      There are many websites and platforms that offer Hungry Shark World cracked APK file for free download. However, not all of them are safe and reliable. Some of them may contain malware, viruses, spyware, or ransomware that can infect your device and data. Some of them may also provide outdated, incomplete, or corrupted versions of the game that may not work properly or at all. Therefore, you need to be careful and cautious when choosing a source of Hungry Shark World cracked APK file. Here are some tips to help you find a trustworthy source:

      -
        -
      • Read the reviews and ratings of other users who have downloaded the file from the same source. Look for positive feedback, high ratings, and verified comments.
      • -
      • Check the date and size of the file. Make sure it is updated with the latest version of the game and it is not too small or too large compared to the original file.
      • -
      • Scan the file with an antivirus or anti-malware software before downloading it. Look for any signs of infection, damage, or modification.
      • -
      • Avoid clicking on any suspicious links, pop-ups, ads, or redirects that may lead you to malicious websites or downloads.
      • -
      -

      Steps to download and install Hungry Shark World cracked APK file

      -

      To download and install Hungry Shark World cracked APK file on your Android device, you need to follow these steps:

      -
        -
      1. Go to the website or platform that offers Hungry Shark World cracked APK file for free download.
      2. Tap on the Download button and wait for the download to finish.
      3. Once the download is complete, go to your device's Settings > Security > Unknown Sources and enable the option to allow installation of apps from unknown sources.
      4. Go to your device's File Manager > Downloads and locate the Hungry Shark World cracked APK file.
      5. Tap on the file and follow the instructions to install it on your device.
      6. Once the installation is complete, tap on the Open button to launch the game.
      7. Enjoy playing Hungry Shark World with all the premium features and no ads!
      -

      Conclusion

      -

      Hungry Shark World is a fun and addictive game that lets you control a hungry shark and eat everything in your way. However, if you want to play the game without paying for it or dealing with ads, you might be tempted to download a cracked APK file of the game, which is a modified version that bypasses the original app's security and licensing. In this article, we have explained what Hungry Shark World is, what a cracked APK file is, how to download and install it, and what are the risks and benefits of using it. We have also provided some tips to help you find a trustworthy source of Hungry Shark World cracked APK file and some steps to follow to download and install it on your Android device.

      -

      Summary of the main points

      -

      Here are the main points of this article:

      -
        -
      • Hungry Shark World is a 3D action-adventure game that features more than 40 species of sharks, 8 different worlds, hundreds of prey and enemies, and various missions and challenges.
      • -
      • A cracked APK file is a modified version of an original APK file that has been altered to bypass the app's security and licensing.
      • -
      • Using a cracked APK file can save you money, remove ads, unlock features, and access the game from any region or device, but it can also get you into legal trouble, expose you to malware, compromise the quality and performance of the game, lose your progress and achievements, and miss out on the fun and satisfaction of playing the game legitimately.
      • -
      • To download and install Hungry Shark World cracked APK file on your Android device, you need to find a reliable source, enable unknown sources, locate the file, and follow the instructions.
      • -
      -

      Recommendations and warnings

      -

      Here are some recommendations and warnings for using Hungry Shark World cracked APK file:

      -
        -
      • We do not endorse or encourage the use of cracked APK files, as they are illegal, unethical, and unsafe. We recommend that you support the developers by downloading the original app from Google Play Store and paying for the in-app purchases and ads if you want to enjoy the game fully.
      • -
      • If you decide to use a cracked APK file anyway, do so at your own risk. We are not responsible for any consequences that may arise from your actions. We advise that you use a VPN service to protect your identity and location, backup your device and data regularly, and scan your device for malware frequently.
      • -
      • Be careful when choosing a source of Hungry Shark World cracked APK file. Do not trust any website or platform that offers free downloads without verifying their reputation, security, and quality. Do not click on any suspicious links, pop-ups, ads, or redirects that may lead you to malicious websites or downloads.
      • -
      • Be aware that using a cracked APK file may affect your gameplay experience. You may encounter bugs, errors, crashes, lags, or compatibility issues that may ruin your enjoyment of the game. You may also miss out on the latest updates, features, events, or rewards that the original app offers.
      • -
      -

      Frequently Asked Questions

      -

      Here are some frequently asked questions about Hungry Shark World cracked APK file:

      -

      Q: Is Hungry Shark World cracked APK file safe?

      -

      A: No, it is not safe. A cracked APK file is a modified version of an original APK file that has been altered to bypass the app's security and licensing. This means that it may contain malware, viruses, spyware, or ransomware that can harm your device or data. It also means that it may violate the app's terms of service and intellectual property rights, which could result in legal consequences.

      -

      Q: Is Hungry Shark World cracked APK file legal?

      -

      A: No, it is not legal. A cracked APK file is a modified version of an original APK file that has been altered to bypass the app's security and licensing. This means that it infringes on the app's terms of service and intellectual property rights. The developer can sue you for damages or report you to the authorities for piracy.

      -

      Q: How do I download Hungry Shark World cracked APK file?

      -

      A: To download Hungry Shark World cracked APK file on your Android device, you need to find a reliable source of the file online. Then you need to enable unknown sources on your device settings. Then you need to locate the file on your device's File Manager > Downloads folder. Then you need to tap on the file and follow the instructions to install it on your device.

      -

      Q: How do I uninstall Hungry Shark World cracked APK file?

      -

      A: To uninstall Hungry Shark World cracked APK file from your Android device, you need to follow these steps:

      -
        -
      1. Go to your device's Settings > Apps and find Hungry Shark World.
      2. Tap on the app and select Uninstall.
      3. Wait for the uninstallation to finish.
      4. Go to your device's File Manager > Downloads and delete the Hungry Shark World cracked APK file.
      -

      Q: What are some alternatives to Hungry Shark World cracked APK file?

      -

      A: Some alternatives to Hungry Shark World cracked APK file are:

      -
        -
      • Hungry Shark Evolution: This is the predecessor of Hungry Shark World, and it has similar gameplay and features. You can download it from Google Play Store for free, but it also contains in-app purchases and ads.
      • -
      • Hungry Dragon: This is another game by Ubisoft Entertainment, and it has a similar concept but with dragons instead of sharks. You can download it from Google Play Store for free, but it also contains in-app purchases and ads.
      • -
      • Shark Simulator 2021: This is a game by BigCode Games, and it has a realistic 3D simulation of shark life. You can download it from Google Play Store for free, but it also contains in-app purchases and ads.
      • -
      -

      Q: Where can I get more information about Hungry Shark World?

      -

      A: You can get more information about Hungry Shark World from the following sources:

      -
        -
      • The official website of the game: [https://hungrysharkworld.ubisoft.com/]
      • -
      • The official Facebook page of the game: [https://www.facebook.com/HungrySharkWorld/]
      • -
      • The official YouTube channel of the game: [https://www.youtube.com/channel/UCZrPwHn5ZcVf8x_Zp-P8jHA]
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Save Any YouTube Shorts Video on Your Device with This Simple Tool.md b/spaces/congsaPfin/Manga-OCR/logs/Save Any YouTube Shorts Video on Your Device with This Simple Tool.md deleted file mode 100644 index c3af3ec976d1b27333d4586075f82a88c0119252..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Save Any YouTube Shorts Video on Your Device with This Simple Tool.md +++ /dev/null @@ -1,137 +0,0 @@ -
      -

      How to Download YouTube Shorts Videos

      -

      YouTube Shorts are vertical videos that are 60 seconds or less in length. They are similar to TikTok or Instagram Reels, but they are hosted on YouTube. You can watch them on the YouTube app or website, or on the dedicated YouTube Shorts channel.

      -




      -

      YouTube Shorts are a great way to discover new content, creators, and trends. You might want to download them for various reasons, such as watching them offline, saving them for later, or using them for your own projects. However, before you do that, you should be aware of the legal and ethical issues of downloading YouTube videos.

      -

      According to YouTube's terms of service, you are not allowed to download any content from the platform unless you see a download button or link on the page. Downloading YouTube videos without permission may violate the copyright laws and the rights of the content owners. Therefore, you should only download YouTube videos for your own personal use and not for any commercial or public purposes.

      -

      If you still want to download YouTube Shorts videos, there are several methods you can use. In this article, we will show you three of them: using Open Video Downloader on a computer, using VLC Player on a computer, and using a video downloader app on a mobile device. We will also discuss the pros and cons of each method and answer some frequently asked questions.

      -

      -

      Method 1: Using Open Video Downloader on a Computer

      -

      Open Video Downloader is a free open source tool that makes it easy to download any YouTube video on Windows and macOS. You can get it from here. To use it, follow these steps:

      -
        -
      1. Go to the YouTube video you want to download in a web browser.
      2. Copy the video's URL from the address bar.
      3. Start the Open Video Downloader app.
      4. Right-click the address bar at the top of the app and select Paste.
      5. Click the + button. The tool will scan the video and display some options for download.
      6. Select your download preferences. You can choose the video format, resolution, and quality from the drop-down menus.
      7. Click the green Download button. The video will start downloading to your computer.
      -

      The pros of this method are:

      -
        -
      • It is free and easy to use.
      • -
      • It supports ultra-high-definition videos up to 8K resolution.
      • -
      • It works with over 10,000 sites besides YouTube.
      • -
      -

      The cons of this method are:

      -
        -
      • The free version is limited to 30 downloads per day.
      • -
      • It lacks conversion and optimization tools for different devices.
      • -
      -

      Method 2: Using VLC Player on a Computer

      -

      VLC Player is a popular media player that can also be used to download videos from YouTube. You can get it from here. To use it, follow these steps:

      -
        -
      1. Go to YouTube on your computer.
      2. Find the video you want to download and copy its URL from the address bar.
      3. Start VLC Player and go to Media > Open Network Stream. Paste the URL in the box and click Play.
      4. Go to Tools > Codec Information and copy the Location URL at the bottom of the window.
      5. Open a web browser and paste the URL in the address bar. The video will start playing in the browser.
      6. Right-click on the video and select Save Video As. Choose a name and location for the video file and click Save.
      -

      The pros of this method are:

      -
        -
      • It is free and does not require any additional software.
      • -
      • It works with most YouTube videos, including live streams.
      • -
      • It allows you to preview the video before downloading it.
      • -
      -

      The cons of this method are:

      -
        -
      • It is a bit complicated and requires several steps.
      • -
      • It does not let you choose the video format, resolution, or quality.
      • -
      • It may not work with some protected or encrypted videos.
      • -
      -

      Method 3: Using a Video Downloader App on a Mobile Device

      -

      If you want to download YouTube Shorts videos on your smartphone or tablet, you can use a video downloader app. There are many apps available for both Android and iOS devices, but you should be careful when choosing one. Some apps may contain malware, ads, or in-app purchases that can harm your device or charge you money. You should also check the reviews and ratings of the apps before installing them.

      -

      One of the best video downloader apps for YouTube Shorts is SnapTube. It is a free app that lets you download videos from YouTube and other platforms in various formats and resolutions. You can get it from here. To use it, follow these steps:

      -
        -
      1. Install SnapTube on your device and open it.
      2. Tap on the YouTube icon at the top of the app. You will be redirected to the YouTube app or website.
      3. Find the YouTube Shorts video you want to download and tap on it.
      4. Tap on the red download button at the bottom right corner of the screen. A list of download options will appear.
      5. Select your preferred download option. You can choose the video format, resolution, and quality from the list.
      6. Tap on Download. The video will start downloading to your device.
      -

      The pros of this method are:

      -
        -
      • It is fast and easy to use.
      • -
      • It supports multiple video formats, resolutions, and qualities.
      • -
      • It has a built-in video player and manager that lets you watch and organize your downloaded videos.
      • -
      -

      The cons of this method are:

      -
        -
      • It is not available on the official app stores. You have to download it from a third-party source, which may pose some security risks.
      • -
      • It may not work with some YouTube Shorts videos that have restrictions or DRM protection.
      • -
      -

      Conclusion

      -

      In this article, we have shown you how to download YouTube Shorts videos using three different methods: using Open Video Downloader on a computer, using VLC Player on a computer, and using SnapTube on a mobile device. Each method has its own advantages and disadvantages, so you can choose the one that suits your needs and preferences best. However, remember to respect the rights of the content owners and only download YouTube videos for your own personal use.

      -

      We hope you found this article helpful and informative. If you did, please share it with your friends and family who might also be interested in downloading YouTube Shorts videos. Also, feel free to leave us a comment below if you have any questions or feedback. We would love to hear from you!

      -

      FAQs

      -

      What is the best video downloader app for YouTube Shorts?

      -

      There is no definitive answer to this question, as different apps may have different features, compatibility, and performance. However, some of the most popular and reliable video downloader apps for YouTube Shorts are SnapTube, TubeMate, VidMate, Videoder, and YTD Video Downloader. You can try them out and see which one works best for you.

      -

      How can I download YouTube Shorts videos without an app?

      -

      If you don't want to use an app to download YouTube Shorts videos, you can use a web-based service instead. There are many online tools that allow you to download YouTube videos by simply pasting their URLs. Some of the best ones are SaveFrom.net, Y2mate.com, KeepVid, and ClipConverter.cc. However, be careful when using these services, as some of them may contain ads, pop-ups, or malware that can harm your device or data.

      -

      How can I convert YouTube Shorts videos to MP3 or other formats?

      -

      If you want to convert YouTube Shorts videos to MP3 or other audio formats, you can use a video converter tool. There are many online and offline tools that can help you do that. Some of the best ones are OnlineVideoConverter.com, Freemake Video Converter, Any Video Converter, and HandBrake. You can upload or drag and drop your downloaded YouTube Shorts videos to these tools and choose the output format and quality you want. Then, you can download or save the converted files to your device.
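      If you are comfortable with the command line, one more option (not one of the tools listed above, just an assumption for this sketch) is calling ffmpeg from Python to extract the audio track as MP3. It assumes ffmpeg is installed and on your PATH, and the filenames are placeholders.

```python
# Sketch: extract the audio track of a downloaded clip as MP3 by calling ffmpeg.
# Assumes ffmpeg is installed and on the PATH; filenames are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "shorts_clip.mp4", "-vn", "-acodec", "libmp3lame",
     "-q:a", "2", "shorts_clip.mp3"],
    check=True,
)
```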

      -

      How can I edit YouTube Shorts videos after downloading them?

      -

      If you want to edit YouTube Shorts videos after downloading them, you can use a video editor tool. There are many online and offline tools that can help you do that. Some of the best ones are InVideo, Filmora, iMovie, and Adobe Premiere Pro. You can import or open your downloaded YouTube Shorts videos to these tools and use their features to trim, crop, rotate, add effects, transitions, text, music, and more. Then, you can export or save the edited videos to your device or share them online.

      -

      How can I share YouTube Shorts videos with others?

      -

      If you want to share YouTube Shorts videos with others, you can use a file sharing tool. There are many online and offline tools that can help you do that. Some of the best ones are Google Drive, Dropbox, WeTransfer, and ShareIt. You can upload or send your downloaded YouTube Shorts videos to these tools and generate a link or a code that you can share with others. Alternatively, you can also use social media platforms, messaging apps, or email to share your downloaded YouTube Shorts videos with others.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/UNO APK - Challenge Yourself and Your Friends with UNO on Android.md b/spaces/congsaPfin/Manga-OCR/logs/UNO APK - Challenge Yourself and Your Friends with UNO on Android.md deleted file mode 100644 index a7a65fbf89b9a407cbb6947b1fb6901c7d80ae9d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/UNO APK - Challenge Yourself and Your Friends with UNO on Android.md +++ /dev/null @@ -1,115 +0,0 @@ -
      -

      Uno Apk Download: How to Play the Classic Card Game on Your Mobile Device

      -

      Uno is one of the most popular card games in the world, enjoyed by millions of people of all ages. It is a simple yet fun game that can be played with family, friends, or strangers online. But did you know that you can also play Uno on your mobile device? In this article, we will show you how to download and install Uno apk, a free version of the classic card game that you can play on your Android device. We will also give you some tips on how to play Uno online with other players, and answer some frequently asked questions about the game.

      -

      What is Uno and How to Play It?

      -

      Uno is a card game that was originally developed in 1971 by Merle Robbins, a barber from Ohio. He sold the rights to the game to a group of friends, who formed International Games Inc. to market it. In 1992, Mattel acquired the company and became the official publisher of Uno. Since then, Uno has become one of the best-selling card games in history, with over 150 million copies sold worldwide.

      -




      -

      The History and Rules of Uno

      -

      The aim of Uno is to be the first player to get rid of all their cards by matching them with the card on top of the discard pile. The cards are divided into four colors (red, yellow, green, and blue), each with numbers from zero to nine. There are also special action cards that can change the course of the game, such as Skip, Reverse, Draw Two, Wild, and Wild Draw Four.

      -

      The game begins with each player being dealt seven cards face down. The remaining cards are placed face down in a draw pile. The top card of the draw pile is turned over and placed in a separate discard pile. The player to the left of the dealer starts the game by playing a card that matches the color or number of the card on top of the discard pile. If they have no matching card, they must draw a card from the draw pile. If they can play that card, they do so; otherwise, they pass their turn.

      -

      The game continues clockwise until one player has only one card left in their hand. They must then shout "Uno!" to indicate this. If they fail to do so and another player catches them before their next turn, they must draw two cards as a penalty. The game ends when one player has no cards left in their hand. They score points based on the cards left in their opponents' hands (see scoring table below). The game can be played for multiple rounds until one player reaches a predetermined score (usually 500 points).

      | Card | Points |
      | --- | --- |
      | Numbered cards (0-9) | Face value |
      | Skip, Reverse, Draw Two | 20 points each |
      | Wild, Wild Draw Four | 50 points each |
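      As an illustration of the scoring table above, here is a small Python sketch that totals a round from the cards left in the opponents' hands. Representing cards as plain strings is just an assumption for the example, not how the official app stores them.

```python
# Sketch of scoring a finished Uno round from the cards left in opponents' hands,
# following the table above. Cards are plain strings, e.g. "7", "Skip", "Wild".
ACTION_CARDS = {"Skip", "Reverse", "Draw Two"}   # 20 points each
WILD_CARDS = {"Wild", "Wild Draw Four"}          # 50 points each

def card_points(card):
    if card in WILD_CARDS:
        return 50
    if card in ACTION_CARDS:
        return 20
    return int(card)   # numbered cards score their face value

def round_score(opponent_hands):
    return sum(card_points(card) for hand in opponent_hands for card in hand)

print(round_score([["7", "Skip"], ["Wild", "3"]]))   # 7 + 20 + 50 + 3 = 80
```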
      -

      The Benefits and Features of Uno

      -

      Uno is not only a fun and entertaining game, but also a beneficial one for several reasons. Here are some of them:

      -
        -
      • Uno can help improve your memory, concentration, and strategic thinking skills by requiring you to remember the cards played and plan your moves ahead.
      • -
      • Uno can help enhance your social skills by allowing you to interact with other players, communicate your thoughts and feelings, and cooperate or compete with them.

      • -
      • Uno can help reduce your stress and boredom by providing you with a fun and relaxing activity that can distract you from your worries and problems.
      • -
      • Uno can help you learn about different cultures and languages by exposing you to different versions and variations of the game that are played around the world.
      • -
      -

      Uno also has some features that make it more appealing and enjoyable for players. Here are some of them:

      -
        -
      • Uno is easy to learn and play, as it only requires a deck of cards and a few simple rules.
      • -
      • Uno is adaptable and customizable, as you can modify the rules or create your own house rules to suit your preferences and needs.
      • -
      • Uno is portable and accessible, as you can play it anywhere and anytime with anyone who has a deck of cards or a mobile device.
      • -
      -

      How to Download and Install Uno Apk on Your Android Device

      -

      If you want to play Uno on your mobile device, you have two options: you can either download the official Uno app from Ubisoft, which is available on Google Play Store and App Store, or you can download Uno apk, which is a free version of the game that you can install on your Android device without using the Play Store. In this section, we will show you how to download and install Uno apk on your Android device. Follow these steps:

      -

      Step 1: Find a Reliable Source for Uno Apk

      -

      The first step is to find a reliable source for Uno apk, which is a file that contains the game data and allows you to install it on your device. There are many websites that offer Uno apk for free, but not all of them are safe and trustworthy. Some of them may contain malware, viruses, or other harmful elements that can damage your device or compromise your privacy. Therefore, you should be careful and do some research before downloading Uno apk from any source.

      -

      -

      One of the best sources for Uno apk is APKPure.com, which is a reputable website that provides free and safe apk files for various apps and games. You can visit their website and search for Uno apk, or use this link: [Uno Apk Download].

      -

      Step 2: Enable Unknown Sources on Your Device

      -

      The next step is to enable unknown sources on your device, which is a setting that allows you to install apps from sources other than the Play Store. This is necessary because Uno apk is not available on the Play Store, so you need to install it manually. To enable unknown sources on your device, follow these steps:

      -
        -
      • Go to Settings > Security > Unknown Sources (or Settings > Apps > Special Access > Install Unknown Apps).
      • -
      • Toggle on the switch or check the box to allow installation from unknown sources.
      • -
      • A warning message may appear, telling you that installing apps from unknown sources may harm your device or data. Tap OK or Continue to proceed.
      • -
      -

      Step 3: Download and Install Uno Apk

      -

      The third step is to download and install Uno apk on your device. To do this, follow these steps:

      -
        -
      • Go back to APKPure.com or the link provided above, and tap on the Download APK button.
      • -
      • A pop-up window may appear, asking you to confirm the download. Tap OK or Download to start downloading Uno apk.
      • -
      • Once the download is complete, open the file manager app on your device and locate the downloaded file (usually in the Downloads folder).
      • -
      • Tap on the file to open it, and a prompt may appear, asking you to confirm the installation. Tap Install or Next to start installing Uno apk.
      • -
      • Wait for the installation to finish, and then tap Open or Done to launch or exit Uno apk.
      • -
      -

      Step 4: Launch and Enjoy Uno on Your Mobile Device

      -

      The final step is to launch and enjoy Uno on your mobile device. To do this, follow these steps:

      -
        -
      • Go to your app drawer or home screen and look for the Uno icon.
      • -
      • Tap on the icon to open Uno apk, and accept the permissions and terms of service if prompted.
      • -
      • You will see the main menu of Uno apk, where you can choose to play solo or online with other players.
      • -
      • Select your preferred mode and settings, and start playing Uno on your mobile device.
      • -
      -

      How to Play Uno Online with Friends or Other Players

      -

      If you want to play Uno online with friends or other players, you can choose the online mode from the main menu of Uno apk.

      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/CS Go Aimbot.md b/spaces/contluForse/HuggingGPT/assets/CS Go Aimbot.md deleted file mode 100644 index 03bf2d4e034419379f0a9ff5ec3e1226ebf39ed5..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/CS Go Aimbot.md +++ /dev/null @@ -1,6 +0,0 @@ -

      CS Go Aimbot





      -
      -The most common cheats found in competitive CS:GO servers are wall hacks and aimbots. Most other accusations made in game by other ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Dr fone toolkit android suite torrent How to get the ultimate data recovery solution for android phones and tablets.md b/spaces/contluForse/HuggingGPT/assets/Dr fone toolkit android suite torrent How to get the ultimate data recovery solution for android phones and tablets.md deleted file mode 100644 index cea1a3639f75236e9fdc9fc14252783de1672ed4..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dr fone toolkit android suite torrent How to get the ultimate data recovery solution for android phones and tablets.md +++ /dev/null @@ -1,6 +0,0 @@ -
      -

Meanwhile, the toolkit also includes a few other tools to back up your device, transfer WhatsApp data, record screen activities, wipe the device before recycling, etc. In this sense, Dr. Fone is more like a suite for Android and iOS users in case of any data emergency.

      -




      -

Dr.fone, which started out as an application dedicated solely to data recovery, has evolved over successive versions into a complete suite for restoring and managing Android and iOS devices.

      -
      -
      \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/transformer.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/transformer.py deleted file mode 100644 index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings - -import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning -from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer -from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from annotator.uniformer.mmcv.utils import build_from_cfg -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. ') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. 
- batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. - """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super(MultiheadAttention, self).__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn('The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ') - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. - If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. 
- """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super(FFN, self).__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. 
- """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. ') - ffn_cfgs[new_name] = kwargs[ori_name] - - super(BaseTransformerLayer, self).__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & set( - ['self_attn', 'norm', 'ffn', 'cross_attn']) == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' 
- - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. 
- """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super(TransformerLayerSequence, self).__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/additions/do_catkin_make.sh b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/additions/do_catkin_make.sh deleted file mode 100644 index 0d416fc00282aab146326bbba12a9274e1ba29b8..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/additions/do_catkin_make.sh +++ /dev/null @@ -1,5 +0,0 @@ -mkdir src -catkin_make -source devel/setup.bash -echo $ROS_PACKAGE_PATH -chmod +x ./devel/setup.bash diff --git a/spaces/crimeacs/phase-hunter/phasehunter/data_preparation.py b/spaces/crimeacs/phase-hunter/phasehunter/data_preparation.py deleted file mode 100644 index 97d33ee5f7012f2d4ed291aba9f44623f9facfdc..0000000000000000000000000000000000000000 --- a/spaces/crimeacs/phase-hunter/phasehunter/data_preparation.py +++ /dev/null @@ -1,210 +0,0 @@ -import torch -import numpy as np - -from scipy import signal -from scipy.signal import butter, lfilter, detrend - -# Make bandpass filter -def butter_bandpass(lowcut, highcut, fs, order=5): - nyq = 0.5 * fs # Nyquist frequency - low = lowcut / nyq # Normalized frequency - high = highcut / nyq - b, a = butter(order, [low, high], btype="band") # Bandpass filter - return b, a - - -def butter_bandpass_filter(data, lowcut, highcut, fs, order=5): - b, a = butter_bandpass(lowcut, highcut, fs, order=order) - y = lfilter(b, a, data) - return y - - -def rotate_waveform(waveform, angle): - fft_waveform = np.fft.fft(waveform) # Compute the Fourier transform of the waveform - rotate_factor = np.exp( - 1j * angle - ) # Create a complex exponential with the specified rotation angle - rotated_fft_waveform = ( - fft_waveform * rotate_factor - ) # Multiply the Fourier transform by the rotation factor - rotated_waveform = np.fft.ifft( - rotated_fft_waveform - ) # Compute the inverse Fourier transform to get the rotated waveform in the time domain - - return rotated_waveform - - -def augment(sample): - # SET PARAMETERS: - crop_length = 6000 - padding = 120 - test = False - - waveform = sample["waveform.npy"] - meta = sample["meta.json"] - - if meta["split"] != "train": - test = True - - target_sample_P = meta["trace_p_arrival_sample"] - target_sample_S = meta["trace_s_arrival_sample"] - - if target_sample_P is None: - target_sample_P = 0 - if target_sample_S is None: - target_sample_S = 0 - - # Randomly select a phase to start the crop - current_phases = [x for x in (target_sample_P, target_sample_S) if x > 0] - phase_selector = np.random.randint(0, len(current_phases)) - first_phase = current_phases[phase_selector] - - # Shuffle - if first_phase - (crop_length - padding) > padding: 
- start_indx = int( - first_phase - - torch.randint(low=padding, high=(crop_length - padding), size=(1,)) - ) - if test == True: - start_indx = int(first_phase - 2 * padding) - - elif int(first_phase - padding) > 0: - start_indx = int( - first_phase - - torch.randint(low=0, high=(int(first_phase - padding)), size=(1,)) - ) - if test == True: - start_indx = int(first_phase - padding) - - else: - start_indx = padding - - end_indx = start_indx + crop_length - - if (waveform.shape[-1] - end_indx) < 0: - start_indx += waveform.shape[-1] - end_indx - end_indx = start_indx + crop_length - - # Update target - new_target_P = target_sample_P - start_indx - new_target_S = target_sample_S - start_indx - - # Cut - waveform_cropped = waveform[:, start_indx:end_indx] - - # Preprocess - waveform_cropped = detrend(waveform_cropped) - waveform_cropped = butter_bandpass_filter( - waveform_cropped, lowcut=0.2, highcut=40, fs=100, order=5 - ) - window = signal.windows.tukey(waveform_cropped[-1].shape[0], alpha=0.1) - waveform_cropped = waveform_cropped * window - waveform_cropped = detrend(waveform_cropped) - - if np.isnan(waveform_cropped).any() == True: - waveform_cropped = np.zeros(shape=waveform_cropped.shape) - - new_target_P = 0 - new_target_S = 0 - - if np.sum(waveform_cropped) == 0: - - new_target_P = 0 - new_target_S = 0 - - # Normalize data - max_val = np.max(np.abs(waveform_cropped)) - waveform_cropped_norm = waveform_cropped / max_val - - # Added Z component only - if len(waveform_cropped_norm) < 3: - zeros = np.zeros((3, waveform_cropped_norm.shape[-1])) - zeros[0] = waveform_cropped_norm - - waveform_cropped_norm = zeros - - if test == False: - ##### Rotate waveform ##### - probability = torch.randint(0, 2, size=(1,)).item() - angle = torch.FloatTensor(size=(1,)).uniform_(0.01, 359.9).item() - if probability == 1: - waveform_cropped_norm = rotate_waveform(waveform_cropped_norm, angle).real - - #### Channel DropOUT ##### - probability = torch.randint(0, 2, size=(1,)).item() - channel = torch.randint(1, 3, size=(1,)).item() - if probability == 1: - waveform_cropped_norm[channel, :] = 1e-6 - - # Normalize target - new_target_P = new_target_P / crop_length - new_target_S = new_target_S / crop_length - - if (new_target_P <= 0) or (new_target_P >= 1) or (np.isnan(new_target_P)): - new_target_P = 0 - if (new_target_S <= 0) or (new_target_S >= 1) or (np.isnan(new_target_S)): - new_target_S = 0 - - return waveform_cropped_norm, new_target_P, new_target_S - - -def collation_fn(sample): - waveforms = np.stack([x[0] for x in sample]) - targets_P = np.stack([x[1] for x in sample]) - targets_S = np.stack([x[2] for x in sample]) - - return ( - torch.tensor(waveforms, dtype=torch.float), - torch.tensor(targets_P, dtype=torch.float), - torch.tensor(targets_S, dtype=torch.float), - ) - - -def my_split_by_node(urls): - node_id, node_count = ( - torch.distributed.get_rank(), - torch.distributed.get_world_size(), - ) - return list(urls)[node_id::node_count] - -def prepare_waveform(waveform): - # SET PARAMETERS: - crop_length = 6000 - padding = 120 - - assert waveform.shape[0] <= 3, "Waveform has more than 3 channels" - - if waveform.shape[-1] < crop_length: - waveform = np.pad( - waveform, - ((0, 0), (0, crop_length - waveform.shape[-1])), - mode="constant", - constant_values=0, - ) - if waveform.shape[-1] > crop_length: - waveform = waveform[:, :crop_length] - - # Preprocess - waveform = detrend(waveform) - waveform = butter_bandpass_filter( - waveform, lowcut=0.2, highcut=40, fs=100, order=5 - ) - window = 
signal.windows.tukey(waveform[-1].shape[0], alpha=0.1) - waveform = waveform * window - waveform = detrend(waveform) - - assert np.isnan(waveform).any() != True, "Nan in waveform" - assert np.sum(waveform) != 0, "Sum of waveform sample is zero" - - # Normalize data - max_val = np.max(np.abs(waveform)) - waveform = waveform / max_val - - # Added Z component only - if len(waveform) < 3: - zeros = np.zeros((3, waveform.shape[-1])) - zeros[0] = waveform - - waveform = zeros - - return torch.tensor([waveform]*128, dtype=torch.float) \ No newline at end of file diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp b/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp deleted file mode 100644 index 85ed0a79fb9c75f83470ac834090f03608d998ee..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/fused_act/src/fused_bias_act.cpp +++ /dev/null @@ -1,26 +0,0 @@ -// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_bias_act.cpp -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, - const torch::Tensor& bias, - const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} diff --git a/spaces/cvlab/zero123-live/CLIP/data/country211.md b/spaces/cvlab/zero123-live/CLIP/data/country211.md deleted file mode 100644 index 4cd096005c8e5777e0706d97d182c3bd87b651a9..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/CLIP/data/country211.md +++ /dev/null @@ -1,12 +0,0 @@ -# The Country211 Dataset - -In the paper, we used an image classification dataset called Country211, to evaluate the model's capability on geolocation. To do so, we filtered the YFCC100m dataset that have GPS coordinate corresponding to a [ISO-3166 country code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes) and created a balanced dataset by sampling 150 train images, 50 validation images, and 100 test images images for each country. - -The following command will download an 11GB archive countaining the images and extract into a subdirectory `country211`: - -```bash -wget https://openaipublic.azureedge.net/clip/data/country211.tgz -tar zxvf country211.tgz -``` - -These images are a subset of the YFCC100m dataset. Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/). 
\ No newline at end of file diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/data/sflckr.py b/spaces/cvlab/zero123-live/taming-transformers/taming/data/sflckr.py deleted file mode 100644 index 91101be5953b113f1e58376af637e43f366b3dee..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/taming/data/sflckr.py +++ /dev/null @@ -1,91 +0,0 @@ -import os -import numpy as np -import cv2 -import albumentations -from PIL import Image -from torch.utils.data import Dataset - - -class SegmentationBase(Dataset): - def __init__(self, - data_csv, data_root, segmentation_root, - size=None, random_crop=False, interpolation="bicubic", - n_labels=182, shift_segmentation=False, - ): - self.n_labels = n_labels - self.shift_segmentation = shift_segmentation - self.data_csv = data_csv - self.data_root = data_root - self.segmentation_root = segmentation_root - with open(self.data_csv, "r") as f: - self.image_paths = f.read().splitlines() - self._length = len(self.image_paths) - self.labels = { - "relative_file_path_": [l for l in self.image_paths], - "file_path_": [os.path.join(self.data_root, l) - for l in self.image_paths], - "segmentation_path_": [os.path.join(self.segmentation_root, l.replace(".jpg", ".png")) - for l in self.image_paths] - } - - size = None if size is not None and size<=0 else size - self.size = size - if self.size is not None: - self.interpolation = interpolation - self.interpolation = { - "nearest": cv2.INTER_NEAREST, - "bilinear": cv2.INTER_LINEAR, - "bicubic": cv2.INTER_CUBIC, - "area": cv2.INTER_AREA, - "lanczos": cv2.INTER_LANCZOS4}[self.interpolation] - self.image_rescaler = albumentations.SmallestMaxSize(max_size=self.size, - interpolation=self.interpolation) - self.segmentation_rescaler = albumentations.SmallestMaxSize(max_size=self.size, - interpolation=cv2.INTER_NEAREST) - self.center_crop = not random_crop - if self.center_crop: - self.cropper = albumentations.CenterCrop(height=self.size, width=self.size) - else: - self.cropper = albumentations.RandomCrop(height=self.size, width=self.size) - self.preprocessor = self.cropper - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = dict((k, self.labels[k][i]) for k in self.labels) - image = Image.open(example["file_path_"]) - if not image.mode == "RGB": - image = image.convert("RGB") - image = np.array(image).astype(np.uint8) - if self.size is not None: - image = self.image_rescaler(image=image)["image"] - segmentation = Image.open(example["segmentation_path_"]) - assert segmentation.mode == "L", segmentation.mode - segmentation = np.array(segmentation).astype(np.uint8) - if self.shift_segmentation: - # used to support segmentations containing unlabeled==255 label - segmentation = segmentation+1 - if self.size is not None: - segmentation = self.segmentation_rescaler(image=segmentation)["image"] - if self.size is not None: - processed = self.preprocessor(image=image, - mask=segmentation - ) - else: - processed = {"image": image, - "mask": segmentation - } - example["image"] = (processed["image"]/127.5 - 1.0).astype(np.float32) - segmentation = processed["mask"] - onehot = np.eye(self.n_labels)[segmentation] - example["segmentation"] = onehot - return example - - -class Examples(SegmentationBase): - def __init__(self, size=None, random_crop=False, interpolation="bicubic"): - super().__init__(data_csv="data/sflckr_examples.txt", - data_root="data/sflckr_images", - segmentation_root="data/sflckr_segmentations", - size=size, random_crop=random_crop, 
interpolation=interpolation) diff --git a/spaces/cxeep/whisper-webui/src/utils.py b/spaces/cxeep/whisper-webui/src/utils.py deleted file mode 100644 index 17252fa2ac5ac9e64887184574561fb0f340545a..0000000000000000000000000000000000000000 --- a/spaces/cxeep/whisper-webui/src/utils.py +++ /dev/null @@ -1,115 +0,0 @@ -import textwrap -import unicodedata -import re - -import zlib -from typing import Iterator, TextIO - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = processText(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. - Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = processText(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def processText(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. 
- """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') \ No newline at end of file diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/generation_parameters_copypaste.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/generation_parameters_copypaste.py deleted file mode 100644 index 4ca73680a0941f7aac44356ffe4aa2df3d244ec7..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/generation_parameters_copypaste.py +++ /dev/null @@ -1,92 +0,0 @@ -import re -import gradio as gr - -re_param_code = r"\s*([\w ]+):\s*([^,]+)(?:,|$)" -re_param = re.compile(re_param_code) -re_params = re.compile(r"^(?:" + re_param_code + "){3,}$") -re_imagesize = re.compile(r"^(\d+)x(\d+)$") -type_of_gr_update = type(gr.update()) - - -def parse_generation_parameters(x: str): - """parses generation parameters string, the one you see in text field under the picture in UI: -``` -girl with an artist's beret, determined, blue eyes, desert scene, computer monitors, heavy makeup, by Alphonse Mucha and Charlie Bowater, ((eyeshadow)), (coquettish), detailed, intricate -Negative prompt: ugly, fat, obese, chubby, (((deformed))), [blurry], bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing -Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model hash: 45dee52b -``` - - returns a dict with field values - """ - - res = {} - - prompt = "" - negative_prompt = "" - - done_with_prompt = False - - *lines, lastline = x.strip().split("\n") - if not re_params.match(lastline): - lines.append(lastline) - lastline = '' - - for i, line in enumerate(lines): - line = line.strip() - if line.startswith("Negative prompt:"): - done_with_prompt = True - line = line[16:].strip() - - if done_with_prompt: - negative_prompt += ("" if negative_prompt == "" else "\n") + line - else: - prompt += ("" if prompt == "" else "\n") + line - - if len(prompt) > 0: - res["Prompt"] = prompt - - if len(negative_prompt) > 0: - res["Negative prompt"] = negative_prompt - - for k, v in re_param.findall(lastline): - m = re_imagesize.match(v) - if m is not None: - res[k+"-1"] = m.group(1) - res[k+"-2"] = m.group(2) - else: - res[k] = v - - return res - - -def connect_paste(button, paste_fields, input_comp, js=None): - def paste_func(prompt): - params = parse_generation_parameters(prompt) - res = [] - - for output, key in paste_fields: - if callable(key): - v = key(params) - else: - v = params.get(key, None) - - if v is None: - res.append(gr.update()) - elif isinstance(v, type_of_gr_update): - res.append(v) - else: - try: - valtype = type(output.value) - val = valtype(v) - res.append(gr.update(value=val)) - except Exception: - res.append(gr.update()) - - return res - - button.click( - fn=paste_func, - _js=js, - inputs=[input_comp], - outputs=[x[0] for x in paste_fields], - ) diff --git a/spaces/cynika/taffy/inference/slicer.py b/spaces/cynika/taffy/inference/slicer.py deleted file mode 100644 index 35a888b906e7df8634cfdcec914f650c6cefd26a..0000000000000000000000000000000000000000 --- a/spaces/cynika/taffy/inference/slicer.py +++ /dev/null @@ -1,158 +0,0 @@ -import time - -import numpy as np -import torch -import torchaudio -from scipy.ndimage import maximum_filter1d, uniform_filter1d - - -def timeit(func): - def 
run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -# @timeit -def _window_maximum(arr, win_sz): - return maximum_filter1d(arr, size=win_sz)[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -# @timeit -def _window_rms(arr, win_sz): - filtered = np.sqrt(uniform_filter1d(np.power(arr, 2), win_sz) - np.power(uniform_filter1d(arr, win_sz), 2)) - return filtered[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -def level2db(levels, eps=1e-12): - return 20 * np.log10(np.clip(levels, a_min=eps, a_max=1)) - - -def _apply_slice(audio, begin, end): - if len(audio.shape) > 1: - return audio[:, begin: end] - else: - return audio[begin: end] - - -class Slicer: - def __init__(self, - sr: int, - db_threshold: float = -40, - min_length: int = 5000, - win_l: int = 300, - win_s: int = 20, - max_silence_kept: int = 500): - self.db_threshold = db_threshold - self.min_samples = round(sr * min_length / 1000) - self.win_ln = round(sr * win_l / 1000) - self.win_sn = round(sr * win_s / 1000) - self.max_silence = round(sr * max_silence_kept / 1000) - if not self.min_samples >= self.win_ln >= self.win_sn: - raise ValueError('The following condition must be satisfied: min_length >= win_l >= win_s') - if not self.max_silence >= self.win_sn: - raise ValueError('The following condition must be satisfied: max_silence_kept >= win_s') - - @timeit - def slice(self, audio): - samples = audio - if samples.shape[0] <= self.min_samples: - return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}} - # get absolute amplitudes - abs_amp = np.abs(samples - np.mean(samples)) - # calculate local maximum with large window - win_max_db = level2db(_window_maximum(abs_amp, win_sz=self.win_ln)) - sil_tags = [] - left = right = 0 - while right < win_max_db.shape[0]: - if win_max_db[right] < self.db_threshold: - right += 1 - elif left == right: - left += 1 - right += 1 - else: - if left == 0: - split_loc_l = left - else: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - if len(sil_tags) != 0 and split_loc_l - sil_tags[-1][1] < self.min_samples and right < win_max_db.shape[ - 0] - 1: - right += 1 - left = right - continue - if right == win_max_db.shape[0] - 1: - split_loc_r = right + self.win_ln - else: - sil_right_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_right = level2db(_window_rms(samples[right + self.win_ln - sil_right_n: right + self.win_ln], - win_sz=self.win_sn)) - split_win_r = right + self.win_ln - sil_right_n + np.argmin(rms_db_right) - split_loc_r = split_win_r + np.argmin(abs_amp[split_win_r: split_win_r + self.win_sn]) - sil_tags.append((split_loc_l, split_loc_r)) - right += 1 - left = right - if left != right: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - sil_tags.append((split_loc_l, samples.shape[0])) - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if 
sil_tags[0][0]: - chunks.append({"slice": False, "split_time": f"0,{sil_tags[0][0]}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, "split_time": f"{sil_tags[i - 1][1]},{sil_tags[i][0]}"}) - # 标识所有静音片段 - chunks.append({"slice": True, "split_time": f"{sil_tags[i][0]},{sil_tags[i][1]}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] != len(audio): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1]},{len(audio)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000, win_l=300, win_s=20, max_sil_kept=500): - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - - slicer = Slicer( - sr=sr, - db_threshold=db_thresh, - min_length=min_len, - win_l=win_l, - win_s=win_s, - max_silence_kept=max_sil_kept - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr - - diff --git a/spaces/dataroots/SofaStyler/StyleTransfer/srcTransformer/function.py b/spaces/dataroots/SofaStyler/StyleTransfer/srcTransformer/function.py deleted file mode 100644 index b2c7a0cf46489b911f006a157ce1c20eca1fa2c5..0000000000000000000000000000000000000000 --- a/spaces/dataroots/SofaStyler/StyleTransfer/srcTransformer/function.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch - - -def calc_mean_std(feat, eps=1e-5): - # eps is a small value added to the variance to avoid divide-by-zero. - size = feat.size() - assert len(size) == 4 - N, C = size[:2] - feat_var = feat.view(N, C, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(N, C, 1, 1) - feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1) - return feat_mean, feat_std - - -def calc_mean_std1(feat, eps=1e-5): - # eps is a small value added to the variance to avoid divide-by-zero. 
- size = feat.size() - # assert (len(size) == 4) - WH, N, C = size - feat_var = feat.var(dim=0) + eps - feat_std = feat_var.sqrt() - feat_mean = feat.mean(dim=0) - return feat_mean, feat_std - - -def normal(feat, eps=1e-5): - feat_mean, feat_std = calc_mean_std(feat, eps) - normalized = (feat - feat_mean) / feat_std - return normalized - - -def normal_style(feat, eps=1e-5): - feat_mean, feat_std = calc_mean_std1(feat, eps) - normalized = (feat - feat_mean) / feat_std - return normalized - - -def _calc_feat_flatten_mean_std(feat): - # takes 3D feat (C, H, W), return mean and std of array within channels - assert feat.size()[0] == 3 - assert isinstance(feat, torch.FloatTensor) - feat_flatten = feat.view(3, -1) - mean = feat_flatten.mean(dim=-1, keepdim=True) - std = feat_flatten.std(dim=-1, keepdim=True) - return feat_flatten, mean, std - - -def _mat_sqrt(x): - U, D, V = torch.svd(x) - return torch.mm(torch.mm(U, D.pow(0.5).diag()), V.t()) - - -def coral(source, target): - # assume both source and target are 3D array (C, H, W) - # Note: flatten -> f - - source_f, source_f_mean, source_f_std = _calc_feat_flatten_mean_std(source) - source_f_norm = ( - source_f - source_f_mean.expand_as(source_f) - ) / source_f_std.expand_as(source_f) - source_f_cov_eye = torch.mm(source_f_norm, source_f_norm.t()) + torch.eye(3) - - target_f, target_f_mean, target_f_std = _calc_feat_flatten_mean_std(target) - target_f_norm = ( - target_f - target_f_mean.expand_as(target_f) - ) / target_f_std.expand_as(target_f) - target_f_cov_eye = torch.mm(target_f_norm, target_f_norm.t()) + torch.eye(3) - - source_f_norm_transfer = torch.mm( - _mat_sqrt(target_f_cov_eye), - torch.mm(torch.inverse(_mat_sqrt(source_f_cov_eye)), source_f_norm), - ) - - source_f_transfer = source_f_norm_transfer * target_f_std.expand_as( - source_f_norm - ) + target_f_mean.expand_as(source_f_norm) - - return source_f_transfer.view(source.size()) diff --git a/spaces/davidrd123/Art_Movement/README.md b/spaces/davidrd123/Art_Movement/README.md deleted file mode 100644 index 6424b2ed5eea2101bbcf5c7902e01813cdd66bcd..0000000000000000000000000000000000000000 --- a/spaces/davidrd123/Art_Movement/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Art_Movement -emoji: 🐠 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: artistic-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/parsing/bisenet.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/parsing/bisenet.py deleted file mode 100644 index 3898cab76ae5876459cd4899c54cafa14234971d..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/parsing/bisenet.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .resnet import ResNet18 - - -class ConvBNReLU(nn.Module): - - def __init__(self, in_chan, out_chan, ks=3, stride=1, padding=1): - super(ConvBNReLU, self).__init__() - self.conv = nn.Conv2d(in_chan, out_chan, kernel_size=ks, stride=stride, padding=padding, bias=False) - self.bn = nn.BatchNorm2d(out_chan) - - def forward(self, x): - x = self.conv(x) - x = F.relu(self.bn(x)) - return x - - -class BiSeNetOutput(nn.Module): - - def __init__(self, in_chan, mid_chan, num_class): - super(BiSeNetOutput, self).__init__() - self.conv = ConvBNReLU(in_chan, mid_chan, ks=3, stride=1, padding=1) - self.conv_out = nn.Conv2d(mid_chan, 
num_class, kernel_size=1, bias=False) - - def forward(self, x): - feat = self.conv(x) - out = self.conv_out(feat) - return out, feat - - -class AttentionRefinementModule(nn.Module): - - def __init__(self, in_chan, out_chan): - super(AttentionRefinementModule, self).__init__() - self.conv = ConvBNReLU(in_chan, out_chan, ks=3, stride=1, padding=1) - self.conv_atten = nn.Conv2d(out_chan, out_chan, kernel_size=1, bias=False) - self.bn_atten = nn.BatchNorm2d(out_chan) - self.sigmoid_atten = nn.Sigmoid() - - def forward(self, x): - feat = self.conv(x) - atten = F.avg_pool2d(feat, feat.size()[2:]) - atten = self.conv_atten(atten) - atten = self.bn_atten(atten) - atten = self.sigmoid_atten(atten) - out = torch.mul(feat, atten) - return out - - -class ContextPath(nn.Module): - - def __init__(self): - super(ContextPath, self).__init__() - self.resnet = ResNet18() - self.arm16 = AttentionRefinementModule(256, 128) - self.arm32 = AttentionRefinementModule(512, 128) - self.conv_head32 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1) - self.conv_head16 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1) - self.conv_avg = ConvBNReLU(512, 128, ks=1, stride=1, padding=0) - - def forward(self, x): - feat8, feat16, feat32 = self.resnet(x) - h8, w8 = feat8.size()[2:] - h16, w16 = feat16.size()[2:] - h32, w32 = feat32.size()[2:] - - avg = F.avg_pool2d(feat32, feat32.size()[2:]) - avg = self.conv_avg(avg) - avg_up = F.interpolate(avg, (h32, w32), mode='nearest') - - feat32_arm = self.arm32(feat32) - feat32_sum = feat32_arm + avg_up - feat32_up = F.interpolate(feat32_sum, (h16, w16), mode='nearest') - feat32_up = self.conv_head32(feat32_up) - - feat16_arm = self.arm16(feat16) - feat16_sum = feat16_arm + feat32_up - feat16_up = F.interpolate(feat16_sum, (h8, w8), mode='nearest') - feat16_up = self.conv_head16(feat16_up) - - return feat8, feat16_up, feat32_up # x8, x8, x16 - - -class FeatureFusionModule(nn.Module): - - def __init__(self, in_chan, out_chan): - super(FeatureFusionModule, self).__init__() - self.convblk = ConvBNReLU(in_chan, out_chan, ks=1, stride=1, padding=0) - self.conv1 = nn.Conv2d(out_chan, out_chan // 4, kernel_size=1, stride=1, padding=0, bias=False) - self.conv2 = nn.Conv2d(out_chan // 4, out_chan, kernel_size=1, stride=1, padding=0, bias=False) - self.relu = nn.ReLU(inplace=True) - self.sigmoid = nn.Sigmoid() - - def forward(self, fsp, fcp): - fcat = torch.cat([fsp, fcp], dim=1) - feat = self.convblk(fcat) - atten = F.avg_pool2d(feat, feat.size()[2:]) - atten = self.conv1(atten) - atten = self.relu(atten) - atten = self.conv2(atten) - atten = self.sigmoid(atten) - feat_atten = torch.mul(feat, atten) - feat_out = feat_atten + feat - return feat_out - - -class BiSeNet(nn.Module): - - def __init__(self, num_class): - super(BiSeNet, self).__init__() - self.cp = ContextPath() - self.ffm = FeatureFusionModule(256, 256) - self.conv_out = BiSeNetOutput(256, 256, num_class) - self.conv_out16 = BiSeNetOutput(128, 64, num_class) - self.conv_out32 = BiSeNetOutput(128, 64, num_class) - - def forward(self, x, return_feat=False): - h, w = x.size()[2:] - feat_res8, feat_cp8, feat_cp16 = self.cp(x) # return res3b1 feature - feat_sp = feat_res8 # replace spatial path feature with res3b1 feature - feat_fuse = self.ffm(feat_sp, feat_cp8) - - out, feat = self.conv_out(feat_fuse) - out16, feat16 = self.conv_out16(feat_cp8) - out32, feat32 = self.conv_out32(feat_cp16) - - out = F.interpolate(out, (h, w), mode='bilinear', align_corners=True) - out16 = F.interpolate(out16, (h, w), mode='bilinear', align_corners=True) 
- out32 = F.interpolate(out32, (h, w), mode='bilinear', align_corners=True) - - if return_feat: - feat = F.interpolate(feat, (h, w), mode='bilinear', align_corners=True) - feat16 = F.interpolate(feat16, (h, w), mode='bilinear', align_corners=True) - feat32 = F.interpolate(feat32, (h, w), mode='bilinear', align_corners=True) - return out, out16, out32, feat, feat16, feat32 - else: - return out, out16, out32 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/StaticTabs-42a53876.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/StaticTabs-42a53876.css deleted file mode 100644 index f7a0345231b8c7efb854a87a762d08d6784bb190..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/StaticTabs-42a53876.css +++ /dev/null @@ -1 +0,0 @@ -.tabs.svelte-kqij2n{position:relative}.hide.svelte-kqij2n{display:none}.tab-nav.svelte-kqij2n{display:flex;position:relative;flex-wrap:wrap;border-bottom:1px solid var(--border-color-primary)}button.svelte-kqij2n{margin-bottom:-1px;border:1px solid transparent;border-color:transparent;border-bottom:none;border-top-right-radius:var(--container-radius);border-top-left-radius:var(--container-radius);padding:var(--size-1) var(--size-4);color:var(--body-text-color-subdued);font-weight:var(--section-header-text-weight);font-size:var(--section-header-text-size)}button.svelte-kqij2n:hover{color:var(--body-text-color)}.selected.svelte-kqij2n{border-color:var(--border-color-primary);background:var(--background-fill-primary);color:var(--body-text-color)}.bar.svelte-kqij2n{display:block;position:absolute;bottom:-2px;left:0;z-index:999;background:var(--background-fill-primary);width:100%;height:2px;content:""} diff --git a/spaces/dekk-i386/pdflangchain/Dockerfile b/spaces/dekk-i386/pdflangchain/Dockerfile deleted file mode 100644 index 34302da33028f75da9310283ae239a60799005b6..0000000000000000000000000000000000000000 --- a/spaces/dekk-i386/pdflangchain/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM python:3.10 - -WORKDIR /app - -RUN ls /etc/apt - -RUN apt-get update && apt-get install -y \ - build-essential \ - curl \ - software-properties-common \ - git \ - && rm -rf /var/lib/apt/lists/* - -COPY app.py /app -COPY requirements.txt /app -COPY htmlTemplates.py /app -# COPY .env /app - -RUN pip install -r requirements.txt -EXPOSE 8080 -HEALTHCHECK CMD curl --fail http://localhost:8080/_stcore/health - -# Expose the secret SECRET_EXAMPLE at buildtime and use its value as git remote URL -RUN --mount=type=secret,id=OPENAI_API_KEY,mode=0444,required=true,dst=/app/.env - -ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=8080", "--server.address=0.0.0.0", "--server.enableXsrfProtection=false"] diff --git a/spaces/diacanFperku/AutoGPT/Gambarotta Scienza Delle Costruzioni Pdf 59 !!BETTER!!.md b/spaces/diacanFperku/AutoGPT/Gambarotta Scienza Delle Costruzioni Pdf 59 !!BETTER!!.md deleted file mode 100644 index c16c8e62ea35097f39b15a4d4f2380c6a79dbdd8..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Gambarotta Scienza Delle Costruzioni Pdf 59 !!BETTER!!.md +++ /dev/null @@ -1,14 +0,0 @@ - -

      Scienza delle costruzioni: the book by Gambarotta, Nunziante and Tralli

      -

      Scienza delle costruzioni is a textbook for civil and mechanical engineering students that covers the fundamental principles of the mechanics of solids and structures. The book is written by Luigi Gambarotta, Luciano Nunziante and Antonio Tralli, three professors at the University of Genoa with long teaching and research experience in the field of structural mechanics.

      -

      gambarotta scienza delle costruzioni pdf 59


      Download Ziphttps://gohhs.com/2uFTxv



      -

      The book consists of three volumes: the first deals with the kinematics and statics of material systems and rigid bodies, the second covers the strength of materials and the theories of beams, and the third goes deeper into statically determinate and indeterminate structures, truss structures, and numerical methods for structural analysis. The text is enriched with numerous worked examples and exercises that illustrate the practical applications of the theoretical concepts.

      -

      Scienza delle costruzioni aims to provide a solid base of knowledge for tackling the more advanced disciplines of structural engineering, such as structural dynamics, the mechanics of reinforced-concrete and steel structures, and the mechanics of aeronautical and space structures. The book is also a useful reference for professionals working in the construction sector.

      -

      The book is available in PDF format on the website of the publisher McGraw-Hill Education[^1^], where the printed edition can also be purchased. The price is 59 euros for the first volume, 69 euros for the second volume and 79 euros for the third volume.

      -

      - -

      The book by Gambarotta, Nunziante and Tralli rests on a rigorous mathematical and physical formulation, which makes it possible to treat the problems of structural mechanics with a general, unified approach. It uses tensor notation to describe the kinematic and static quantities of material systems and rigid bodies, and introduces the concepts of strain, stress, constitutive law, the principle of virtual work and Saint-Venant's principle. It also presents the main beam theories, such as the Bernoulli-Euler, Timoshenko and Vlasov theories.
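
      As a quick illustration of the kind of result those beam theories lead to (this is the standard textbook form, not a passage taken from the book itself), the Bernoulli-Euler relation between the transverse deflection w(x) of a beam, its bending stiffness EI and a distributed load q(x) reads:

      ```latex
      % Bernoulli-Euler beam equation (standard form, shown only as an illustration;
      % EI = bending stiffness, w(x) = transverse deflection, q(x) = distributed load)
      \frac{\mathrm{d}^2}{\mathrm{d}x^2}\left( EI \, \frac{\mathrm{d}^2 w}{\mathrm{d}x^2} \right) = q(x)
      ```

      The Timoshenko theory mentioned above extends this picture by accounting for shear deformation, which becomes important for short, stocky beams.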

      -

      The book is the fruit of the authors' long teaching and research experience; they have contributed significantly to the development of structural mechanics in Italy and abroad. It has been adopted by many Italian and foreign universities as the reference text for courses on the subject, and it has been widely praised by students and lecturers for its clear exposition, the completeness of its contents and the up-to-date treatment of its topics.

      -

      The book by Gambarotta, Nunziante and Tralli is therefore an essential work for anyone who wants to study structural mechanics in depth, a discipline of great importance from both a theoretical and an applied point of view. It offers a broad, thorough view of the principles and methods of the mechanics of solids and structures, with particular attention to practical implications and to the real problems of structural engineering.

      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Kubo 1965 Statistical Mechanics Pdf.md b/spaces/diacanFperku/AutoGPT/Kubo 1965 Statistical Mechanics Pdf.md deleted file mode 100644 index 6f919ad6587df4a6fde8c4511e36215209f80f1d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Kubo 1965 Statistical Mechanics Pdf.md +++ /dev/null @@ -1,37 +0,0 @@ - -

      How to Download Kubo 1965 Statistical Mechanics Pdf for Free

      -

      If you are looking for a comprehensive and advanced textbook on statistical mechanics, you might be interested in Kubo 1965 Statistical Mechanics Pdf. This book, written by the renowned Japanese physicist Ryōgo Kubo, covers topics such as thermodynamics, quantum statistics, transport phenomena, fluctuations, and irreversible processes. It also includes many problems and solutions to help you master the concepts and applications of statistical mechanics.

      -

      Kubo 1965 Statistical Mechanics Pdf


DOWNLOAD: https://gohhs.com/2uFUgN



      -

      However, finding a copy of Kubo 1965 Statistical Mechanics Pdf online can be challenging, as it is an old and rare publication. The original publisher, North-Holland Pub. Co., no longer sells it, and most online bookstores do not have it in stock. Moreover, buying a used or new copy can be very expensive, as it is considered a classic and valuable reference in the field of physics.

      -

      Fortunately, there are some ways to download Kubo 1965 Statistical Mechanics Pdf for free from the internet. In this article, we will show you three websites that offer free access to this book in PDF format. These websites are:

      - -

      Let's take a look at each of these websites and how to download Kubo 1965 Statistical Mechanics Pdf from them.

      - -

      Archive.org

      -

      Archive.org is a non-profit digital library that provides free access to millions of books, movies, music, software, and other media. It also preserves historical and cultural artifacts for future generations. One of the books that you can find on Archive.org is Kubo 1965 Statistical Mechanics Pdf.

      -

      -

      To download Kubo 1965 Statistical Mechanics Pdf from Archive.org, follow these steps:

      -
        -
1. Go to this link, which will take you to the page of the book on Archive.org.
2. On the right side of the page, you will see a box that says "Download Options". Click on the "PDF" option to download the book as a PDF file.
3. A new window will open with a preview of the book. You can either read it online or click on the "Download" button at the top right corner to save it to your device.
      -

      You can also use this alternative link to download Kubo 1965 Statistical Mechanics Pdf from Archive.org. The steps are similar to the ones above.

      - -

      Scribd.com

      -

      Scribd.com is a popular online platform that allows users to upload and share documents, books, audiobooks, magazines, and other publications. It also offers a subscription service that gives unlimited access to its library of content. However, you can also download some documents and books for free from Scribd.com without signing up or paying anything.

      -

      To download Kubo 1965 Statistical Mechanics Pdf from Scribd.com, follow these steps:

      -
        -
1. Go to this link, which will take you to the page of the book on Scribd.com.
2. On the top right corner of the page, you will see a button that says "Download". Click on it to download the book as a PDF file.
3. A new window will open with a preview of the book. You can either read it online or click on the "Download" button at the bottom right corner to save it to your device.
      - -

      Conclusion

      -

Kubo 1965 Statistical Mechanics Pdf is a valuable and classic textbook on statistical mechanics that covers many topics and problems in depth. However, finding a copy of it online can be difficult and expensive. That's why we have pointed you to the websites above, where you can read or download it in PDF format for free.

      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Pipe Flow Expert Keygen Download Mediafirel TOP.md b/spaces/diacanFperku/AutoGPT/Pipe Flow Expert Keygen Download Mediafirel TOP.md deleted file mode 100644 index d2c958134787a338cbd36ee410bf121039c00831..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pipe Flow Expert Keygen Download Mediafirel TOP.md +++ /dev/null @@ -1,10 +0,0 @@ -

      Pipe Flow Expert Keygen Download Mediafirel


Download File: https://gohhs.com/2uFV6b



- -Pipe Flow Expert 7.40 Crack And Full Version Download Latest 2022
      -
      -
      -

      diff --git a/spaces/diego2554/RemBG_super/rembg/sessions/__init__.py b/spaces/diego2554/RemBG_super/rembg/sessions/__init__.py deleted file mode 100644 index 08ca20a54275ee2b24f0698151dd6fdae96bd0ac..0000000000000000000000000000000000000000 --- a/spaces/diego2554/RemBG_super/rembg/sessions/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -from importlib import import_module -from inspect import isclass -from pathlib import Path -from pkgutil import iter_modules - -from .base import BaseSession - -sessions_class = [] -sessions_names = [] - -package_dir = Path(__file__).resolve().parent -for _b, module_name, _p in iter_modules([str(package_dir)]): - module = import_module(f"{__name__}.{module_name}") - for attribute_name in dir(module): - attribute = getattr(module, attribute_name) - if ( - isclass(attribute) - and issubclass(attribute, BaseSession) - and attribute != BaseSession - ): - sessions_class.append(attribute) - sessions_names.append(attribute.name()) diff --git a/spaces/dineshreddy/WALT/mmdet/models/utils/transformer.py b/spaces/dineshreddy/WALT/mmdet/models/utils/transformer.py deleted file mode 100644 index 83870eead42f4b0bf73c9e19248d5512d3d044c5..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/utils/transformer.py +++ /dev/null @@ -1,860 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import (Linear, build_activation_layer, build_norm_layer, - xavier_init) - -from .builder import TRANSFORMER - - -class MultiheadAttention(nn.Module): - """A warpper for torch.nn.MultiheadAttention. - - This module implements MultiheadAttention with residual connection, - and positional encoding used in DETR is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - dropout (float): A Dropout layer on attn_output_weights. Default 0.0. - """ - - def __init__(self, embed_dims, num_heads, dropout=0.0): - super(MultiheadAttention, self).__init__() - assert embed_dims % num_heads == 0, 'embed_dims must be ' \ - f'divisible by num_heads. got {embed_dims} and {num_heads}.' - self.embed_dims = embed_dims - self.num_heads = num_heads - self.dropout = dropout - self.attn = nn.MultiheadAttention(embed_dims, num_heads, dropout) - self.dropout = nn.Dropout(dropout) - - def forward(self, - x, - key=None, - value=None, - residual=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None): - """Forward function for `MultiheadAttention`. - - Args: - x (Tensor): The input query with shape [num_query, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - key (Tensor): The key tensor with shape [num_key, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - Default None. If None, the `query` will be used. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Default None. - If None, the `key` will be used. - residual (Tensor): The tensor used for addition, with the - same shape as `x`. Default None. If None, `x` will be used. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. Default None. If not None, it will - be added to `x` before forward function. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Default None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. 
- attn_mask (Tensor): ByteTensor mask with shape [num_query, - num_key]. Same in `nn.MultiheadAttention.forward`. - Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `nn.MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - query = x - if key is None: - key = query - if value is None: - value = key - if residual is None: - residual = x - if key_pos is None: - if query_pos is not None and key is not None: - if query_pos.shape == key.shape: - key_pos = query_pos - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - out = self.attn( - query, - key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'dropout={self.dropout})' - return repr_str - - -class FFN(nn.Module): - """Implements feed-forward networks (FFNs) with residual connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. - feedforward_channels (int): The hidden dimension of FFNs. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Defaults to 2. - act_cfg (dict, optional): The activation config for FFNs. - dropout (float, optional): Probability of an element to be - zeroed. Default 0.0. - add_residual (bool, optional): Add resudual connection. - Defaults to True. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - dropout=0.0, - add_residual=True): - super(FFN, self).__init__() - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.dropout = dropout - self.activate = build_activation_layer(act_cfg) - - layers = nn.ModuleList() - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - nn.Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(dropout))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - self.layers = nn.Sequential(*layers) - self.dropout = nn.Dropout(dropout) - self.add_residual = add_residual - - def forward(self, x, residual=None): - """Forward function for `FFN`.""" - out = self.layers(x) - if not self.add_residual: - return out - if residual is None: - residual = x - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'add_residual={self.add_residual})' - return repr_str - - -class TransformerEncoderLayer(nn.Module): - """Implements one encoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as `FFN`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - dropout (float): Probability of an element to be zeroed. Default 0.0. 
- order (tuple[str]): The order for encoder layer. Valid examples are - ('selfattn', 'norm', 'ffn', 'norm') and ('norm', 'selfattn', - 'norm', 'ffn'). Default ('selfattn', 'norm', 'ffn', 'norm'). - act_cfg (dict): The activation config for FFNs. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - num_fcs (int): The number of fully-connected layers for FFNs. - Default 2. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoderLayer`. - - Args: - x (Tensor): The input query with shape [num_key, bs, - embed_dims]. Same in `MultiheadAttention.forward`. - pos (Tensor): The positional encoding for query. Default None. - Same as `query_pos` in `MultiheadAttention.forward`. - attn_mask (Tensor): ByteTensor mask with shape [num_key, - num_key]. Same in `MultiheadAttention.forward`. Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_key, bs, embed_dims]. - """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - # self attention - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos=pos, - key_pos=pos, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoderLayer(nn.Module): - """Implements one decoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as - `TransformerEncoderLayer`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): The order for decoder layer. 
Valid examples are - ('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', 'norm') and - ('norm', 'selfattn', 'norm', 'multiheadattn', 'norm', 'ffn'). - Default the former. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerDecoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.multihead_attn = MultiheadAttention(embed_dims, num_heads, - dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - # 3 norm layers in official DETR's TransformerDecoderLayer - for _ in range(3): - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoderLayer`. - - Args: - x (Tensor): Input query with shape [num_query, bs, embed_dims]. - memory (Tensor): Tensor got from `TransformerEncoder`, with shape - [num_key, bs, embed_dims]. - memory_pos (Tensor): The positional encoding for `memory`. Default - None. Same as `key_pos` in `MultiheadAttention.forward`. - query_pos (Tensor): The positional encoding for `query`. Default - None. Same as `query_pos` in `MultiheadAttention.forward`. - memory_attn_mask (Tensor): ByteTensor mask for `memory`, with - shape [num_key, num_key]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - target_attn_mask (Tensor): ByteTensor mask for `x`, with shape - [num_query, num_query]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - memory_key_padding_mask (Tensor): ByteTensor for `memory`, with - shape [bs, num_key]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - target_key_padding_mask (Tensor): ByteTensor for `x`, with shape - [bs, num_query]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. 
- """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=query_pos, - attn_mask=target_attn_mask, - key_padding_mask=target_key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'multiheadattn': - query = x - key = value = memory - x = self.multihead_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=memory_pos, - attn_mask=memory_attn_mask, - key_padding_mask=memory_key_padding_mask) - inp_residual = x - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerEncoder(nn.Module): - """Implements the encoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerEncoderLayer`. - embed_dims (int): Same as `TransformerEncoderLayer`. - num_heads (int): Same as `TransformerEncoderLayer`. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerEncoderLayer`. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerEncoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerEncoderLayer`. Default 2. - """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerEncoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, - embed_dims)[1] if self.pre_norm else None - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoder`. - - Args: - x (Tensor): Input query. Same in `TransformerEncoderLayer.forward`. - pos (Tensor): Positional encoding for query. Default None. - Same in `TransformerEncoderLayer.forward`. - attn_mask (Tensor): ByteTensor attention mask. Default None. - Same in `TransformerEncoderLayer.forward`. - key_padding_mask (Tensor): Same in - `TransformerEncoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_key, bs, embed_dims]. 
- """ - for layer in self.layers: - x = layer(x, pos, attn_mask, key_padding_mask) - if self.norm is not None: - x = self.norm(x) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoder(nn.Module): - """Implements the decoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerDecoderLayer`. - embed_dims (int): Same as `TransformerDecoderLayer`. - num_heads (int): Same as `TransformerDecoderLayer`. - feedforward_channels (int): Same as `TransformerDecoderLayer`. - dropout (float): Same as `TransformerDecoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerDecoderLayer`. - act_cfg (dict): Same as `TransformerDecoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerDecoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerDecoderLayer`. Default 2. - """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - return_intermediate=False): - super(TransformerDecoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.return_intermediate = return_intermediate - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerDecoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoder`. - - Args: - x (Tensor): Input query. Same in `TransformerDecoderLayer.forward`. - memory (Tensor): Same in `TransformerDecoderLayer.forward`. - memory_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - query_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - memory_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - memory_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_query, bs, embed_dims]. 
- """ - intermediate = [] - for layer in self.layers: - x = layer(x, memory, memory_pos, query_pos, memory_attn_mask, - target_attn_mask, memory_key_padding_mask, - target_key_padding_mask) - if self.return_intermediate: - intermediate.append(self.norm(x)) - if self.norm is not None: - x = self.norm(x) - if self.return_intermediate: - intermediate.pop() - intermediate.append(x) - if self.return_intermediate: - return torch.stack(intermediate) - return x.unsqueeze(0) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'return_intermediate={self.return_intermediate})' - return repr_str - - -@TRANSFORMER.register_module() -class Transformer(nn.Module): - """Implements the DETR transformer. - - Following the official DETR implementation, this module copy-paste - from torch.nn.Transformer with modifications: - - * positional encodings are passed in MultiheadAttention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - num_encoder_layers (int): Number of `TransformerEncoderLayer`. - num_decoder_layers (int): Number of `TransformerDecoderLayer`. - feedforward_channels (int): The hidden dimension for FFNs used in both - encoder and decoder. - dropout (float): Probability of an element to be zeroed. Default 0.0. - act_cfg (dict): Activation config for FFNs used in both encoder - and decoder. Default ReLU. - norm_cfg (dict): Config dict for normalization used in both encoder - and decoder. Default layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs, which is - used for both encoder and decoder. - pre_norm (bool): Whether the normalization layer is ordered - first in the encoder and decoder. Default False. - return_intermediate_dec (bool): Whether to return the intermediate - output from each TransformerDecoderLayer or only the last - TransformerDecoderLayer. Default False. If False, the returned - `hs` has shape [num_decoder_layers, bs, num_query, embed_dims]. - If True, the returned `hs` will have shape [1, bs, num_query, - embed_dims]. 
- """ - - def __init__(self, - embed_dims=512, - num_heads=8, - num_encoder_layers=6, - num_decoder_layers=6, - feedforward_channels=2048, - dropout=0.0, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - pre_norm=False, - return_intermediate_dec=False): - super(Transformer, self).__init__() - self.embed_dims = embed_dims - self.num_heads = num_heads - self.num_encoder_layers = num_encoder_layers - self.num_decoder_layers = num_decoder_layers - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = pre_norm - self.return_intermediate_dec = return_intermediate_dec - if self.pre_norm: - encoder_order = ('norm', 'selfattn', 'norm', 'ffn') - decoder_order = ('norm', 'selfattn', 'norm', 'multiheadattn', - 'norm', 'ffn') - else: - encoder_order = ('selfattn', 'norm', 'ffn', 'norm') - decoder_order = ('selfattn', 'norm', 'multiheadattn', 'norm', - 'ffn', 'norm') - self.encoder = TransformerEncoder(num_encoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, encoder_order, act_cfg, - norm_cfg, num_fcs) - self.decoder = TransformerDecoder(num_decoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, decoder_order, act_cfg, - norm_cfg, num_fcs, - return_intermediate_dec) - - def init_weights(self, distribution='uniform'): - """Initialize the transformer weights.""" - # follow the official DETR to init parameters - for m in self.modules(): - if hasattr(m, 'weight') and m.weight.dim() > 1: - xavier_init(m, distribution=distribution) - - def forward(self, x, mask, query_embed, pos_embed): - """Forward function for `Transformer`. - - Args: - x (Tensor): Input query with shape [bs, c, h, w] where - c = embed_dims. - mask (Tensor): The key_padding_mask used for encoder and decoder, - with shape [bs, h, w]. - query_embed (Tensor): The query embedding for decoder, with shape - [num_query, c]. - pos_embed (Tensor): The positional encoding for encoder and - decoder, with the same shape as `x`. - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - out_dec: Output from decoder. If return_intermediate_dec \ - is True output has shape [num_dec_layers, bs, - num_query, embed_dims], else has shape [1, bs, \ - num_query, embed_dims]. - - memory: Output results from encoder, with shape \ - [bs, embed_dims, h, w]. 
- """ - bs, c, h, w = x.shape - x = x.flatten(2).permute(2, 0, 1) # [bs, c, h, w] -> [h*w, bs, c] - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat( - 1, bs, 1) # [num_query, dim] -> [num_query, bs, dim] - mask = mask.flatten(1) # [bs, h, w] -> [bs, h*w] - memory = self.encoder( - x, pos=pos_embed, attn_mask=None, key_padding_mask=mask) - target = torch.zeros_like(query_embed) - # out_dec: [num_layers, num_query, bs, dim] - out_dec = self.decoder( - target, - memory, - memory_pos=pos_embed, - query_pos=query_embed, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=mask, - target_key_padding_mask=None) - out_dec = out_dec.transpose(1, 2) - memory = memory.permute(1, 2, 0).reshape(bs, c, h, w) - return out_dec, memory - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'num_encoder_layers={self.num_encoder_layers}, ' - repr_str += f'num_decoder_layers={self.num_decoder_layers}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'pre_norm={self.pre_norm}, ' - repr_str += f'return_intermediate_dec={self.return_intermediate_dec})' - return repr_str - - -@TRANSFORMER.register_module() -class DynamicConv(nn.Module): - """Implements Dynamic Convolution. - - This module generate parameters for each sample and - use bmm to implement 1*1 convolution. Code is modified - from the `official github repo `_ . - - Args: - in_channels (int): The input feature channel. - Defaults to 256. - feat_channels (int): The inner feature channel. - Defaults to 64. - out_channels (int, optional): The output feature channel. - When not specified, it will be set to `in_channels` - by default - input_feat_shape (int): The shape of input feature. - Defaults to 7. - act_cfg (dict): The activation config for DynamicConv. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - """ - - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')): - super(DynamicConv, self).__init__() - self.in_channels = in_channels - self.feat_channels = feat_channels - self.out_channels_raw = out_channels - self.input_feat_shape = input_feat_shape - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.out_channels = out_channels if out_channels else in_channels - - self.num_params_in = self.in_channels * self.feat_channels - self.num_params_out = self.out_channels * self.feat_channels - self.dynamic_layer = nn.Linear( - self.in_channels, self.num_params_in + self.num_params_out) - - self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1] - self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1] - - self.activation = build_activation_layer(act_cfg) - - num_output = self.out_channels * input_feat_shape**2 - self.fc_layer = nn.Linear(num_output, self.out_channels) - self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1] - - def forward(self, param_feature, input_feature): - """Forward function for `DynamicConv`. 
- - Args: - param_feature (Tensor): The feature can be used - to generate the parameter, has shape - (num_all_proposals, in_channels). - input_feature (Tensor): Feature that - interact with parameters, has shape - (num_all_proposals, in_channels, H, W). - - Returns: - Tensor: The output feature has shape - (num_all_proposals, out_channels). - """ - num_proposals = param_feature.size(0) - input_feature = input_feature.view(num_proposals, self.in_channels, - -1).permute(2, 0, 1) - - input_feature = input_feature.permute(1, 0, 2) - parameters = self.dynamic_layer(param_feature) - - param_in = parameters[:, :self.num_params_in].view( - -1, self.in_channels, self.feat_channels) - param_out = parameters[:, -self.num_params_out:].view( - -1, self.feat_channels, self.out_channels) - - # input_feature has shape (num_all_proposals, H*W, in_channels) - # param_in has shape (num_all_proposals, in_channels, feat_channels) - # feature has shape (num_all_proposals, H*W, feat_channels) - features = torch.bmm(input_feature, param_in) - features = self.norm_in(features) - features = self.activation(features) - - # param_out has shape (batch_size, feat_channels, out_channels) - features = torch.bmm(features, param_out) - features = self.norm_out(features) - features = self.activation(features) - - features = features.flatten(1) - features = self.fc_layer(features) - features = self.fc_norm(features) - features = self.activation(features) - - return features - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(in_channels={self.in_channels}, ' - repr_str += f'feat_channels={self.feat_channels}, ' - repr_str += f'out_channels={self.out_channels_raw}, ' - repr_str += f'input_feat_shape={self.input_feat_shape}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg})' - return repr_str diff --git a/spaces/dirge/voicevox/voicevox_engine/morphing.py b/spaces/dirge/voicevox/voicevox_engine/morphing.py deleted file mode 100644 index d857aa11d8857772c4e119edfd57730932ced6fa..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/morphing.py +++ /dev/null @@ -1,208 +0,0 @@ -from copy import deepcopy -from dataclasses import dataclass -from itertools import chain -from typing import Dict, List, Tuple - -import numpy as np -import pyworld as pw -from scipy.signal import resample - -from .metas.Metas import Speaker, SpeakerSupportPermittedSynthesisMorphing, StyleInfo -from .metas.MetasStore import construct_lookup -from .model import AudioQuery, MorphableTargetInfo, SpeakerNotFoundError -from .synthesis_engine import SynthesisEngine - - -# FIXME: ndarray type hint, https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder/blob/2b64f86197573497c685c785c6e0e743f407b63e/pyworld/pyworld.pyx#L398 # noqa -@dataclass(frozen=True) -class MorphingParameter: - fs: int - frame_period: float - base_f0: np.ndarray - base_aperiodicity: np.ndarray - base_spectrogram: np.ndarray - target_spectrogram: np.ndarray - - -def create_morphing_parameter( - base_wave: np.ndarray, - target_wave: np.ndarray, - fs: int, -) -> MorphingParameter: - frame_period = 1.0 - base_f0, base_time_axis = pw.harvest(base_wave, fs, frame_period=frame_period) - base_spectrogram = pw.cheaptrick(base_wave, base_f0, base_time_axis, fs) - base_aperiodicity = pw.d4c(base_wave, base_f0, base_time_axis, fs) - - target_f0, morph_time_axis = pw.harvest(target_wave, fs, frame_period=frame_period) - target_spectrogram = 
pw.cheaptrick(target_wave, target_f0, morph_time_axis, fs) - target_spectrogram.resize(base_spectrogram.shape) - - return MorphingParameter( - fs=fs, - frame_period=frame_period, - base_f0=base_f0, - base_aperiodicity=base_aperiodicity, - base_spectrogram=base_spectrogram, - target_spectrogram=target_spectrogram, - ) - - -def get_morphable_targets( - speakers: List[Speaker], - base_speakers: List[int], -) -> List[Dict[int, MorphableTargetInfo]]: - """ - speakers: 全話者の情報 - base_speakers: モーフィング可能か判定したいベースの話者リスト(スタイルID) - """ - speaker_lookup = construct_lookup(speakers) - - morphable_targets_arr = [] - for base_speaker in base_speakers: - morphable_targets = dict() - for style in chain.from_iterable(speaker.styles for speaker in speakers): - morphable_targets[style.id] = MorphableTargetInfo( - is_morphable=is_synthesis_morphing_permitted( - speaker_lookup=speaker_lookup, - base_speaker=base_speaker, - target_speaker=style.id, - ) - ) - morphable_targets_arr.append(morphable_targets) - - return morphable_targets_arr - - -def is_synthesis_morphing_permitted( - speaker_lookup: Dict[int, Tuple[Speaker, StyleInfo]], - base_speaker: int, - target_speaker: int, -) -> bool: - """ - 指定されたspeakerがモーフィング可能かどうか返す - speakerが見つからない場合はSpeakerNotFoundErrorを送出する - """ - - base_speaker_data = speaker_lookup[base_speaker] - target_speaker_data = speaker_lookup[target_speaker] - - if base_speaker_data is None or target_speaker_data is None: - raise SpeakerNotFoundError( - base_speaker if base_speaker_data is None else target_speaker - ) - - base_speaker_info, _ = base_speaker_data - target_speaker_info, _ = target_speaker_data - - base_speaker_uuid = base_speaker_info.speaker_uuid - target_speaker_uuid = target_speaker_info.speaker_uuid - - base_speaker_morphing_info: SpeakerSupportPermittedSynthesisMorphing = ( - base_speaker_info.supported_features.permitted_synthesis_morphing - ) - - target_speaker_morphing_info: SpeakerSupportPermittedSynthesisMorphing = ( - target_speaker_info.supported_features.permitted_synthesis_morphing - ) - - # 禁止されている場合はFalse - if ( - base_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.NOTHING - or target_speaker_morphing_info - == SpeakerSupportPermittedSynthesisMorphing.NOTHING - ): - return False - # 同一話者のみの場合は同一話者判定 - if ( - base_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.SELF_ONLY - or target_speaker_morphing_info - == SpeakerSupportPermittedSynthesisMorphing.SELF_ONLY - ): - return base_speaker_uuid == target_speaker_uuid - # 念のため許可されているかチェック - return ( - base_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.ALL - and target_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.ALL - ) - - -def synthesis_morphing_parameter( - engine: SynthesisEngine, - query: AudioQuery, - base_speaker: int, - target_speaker: int, -) -> MorphingParameter: - query = deepcopy(query) - - # 不具合回避のためデフォルトのサンプリングレートでWORLDに掛けた後に指定のサンプリングレートに変換する - query.outputSamplingRate = engine.default_sampling_rate - - # WORLDに掛けるため合成はモノラルで行う - query.outputStereo = False - - base_wave = engine.synthesis(query=query, speaker_id=base_speaker).astype("float") - target_wave = engine.synthesis(query=query, speaker_id=target_speaker).astype( - "float" - ) - - return create_morphing_parameter( - base_wave=base_wave, - target_wave=target_wave, - fs=query.outputSamplingRate, - ) - - -def synthesis_morphing( - morph_param: MorphingParameter, - morph_rate: float, - output_fs: int, - output_stereo: bool = False, -) -> np.ndarray: - """ - 
指定した割合で、パラメータをもとにモーフィングした音声を生成します。 - - Parameters - ---------- - morph_param : MorphingParameter - `synthesis_morphing_parameter`または`create_morphing_parameter`で作成したパラメータ - - morph_rate : float - モーフィングの割合 - 0.0でベースの話者、1.0でターゲットの話者に近づきます。 - - Returns - ------- - generated : np.ndarray - モーフィングした音声 - - Raises - ------- - ValueError - morph_rate ∈ [0, 1] - """ - - if morph_rate < 0.0 or morph_rate > 1.0: - raise ValueError("morph_rateは0.0から1.0の範囲で指定してください") - - morph_spectrogram = ( - morph_param.base_spectrogram * (1.0 - morph_rate) - + morph_param.target_spectrogram * morph_rate - ) - - y_h = pw.synthesize( - morph_param.base_f0, - morph_spectrogram, - morph_param.base_aperiodicity, - morph_param.fs, - morph_param.frame_period, - ) - - # TODO: synthesis_engine.py でのリサンプル処理と共通化する - if output_fs != morph_param.fs: - y_h = resample(y_h, output_fs * len(y_h) // morph_param.fs) - - if output_stereo: - y_h = np.array([y_h, y_h]).T - - return y_h diff --git a/spaces/dirge/voicevox/voicevox_engine/synthesis_engine/synthesis_engine.py b/spaces/dirge/voicevox/voicevox_engine/synthesis_engine/synthesis_engine.py deleted file mode 100644 index f617e94a2589e5bb1ce1210af6a24178070b24c7..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/synthesis_engine/synthesis_engine.py +++ /dev/null @@ -1,502 +0,0 @@ -import threading -from itertools import chain -from typing import List, Optional, Tuple - -import numpy -from scipy.signal import resample - -from ..acoustic_feature_extractor import OjtPhoneme -from ..model import AccentPhrase, AudioQuery, Mora -from .core_wrapper import CoreWrapper, OldCoreError -from .synthesis_engine_base import SynthesisEngineBase - -unvoiced_mora_phoneme_list = ["A", "I", "U", "E", "O", "cl", "pau"] -mora_phoneme_list = ["a", "i", "u", "e", "o", "N"] + unvoiced_mora_phoneme_list - - -# TODO: move mora utility to mora module -def to_flatten_moras(accent_phrases: List[AccentPhrase]) -> List[Mora]: - """ - accent_phrasesに含まれるMora(とpause_moraがあればそれも)を - すべて一つのリストに結合する - Parameters - ---------- - accent_phrases : List[AccentPhrase] - AccentPhraseのリスト - Returns - ------- - moras : List[Mora] - 結合されたMoraのリストを返す - """ - return list( - chain.from_iterable( - accent_phrase.moras - + ( - [accent_phrase.pause_mora] - if accent_phrase.pause_mora is not None - else [] - ) - for accent_phrase in accent_phrases - ) - ) - - -def to_phoneme_data_list(phoneme_str_list: List[str]): - """ - phoneme文字列のリストを、OjtPhonemeクラスのリストに変換する - Parameters - ---------- - phoneme_str_list : List[str] - phoneme文字列のリスト - Returns - ------- - phoneme_list : List[OjtPhoneme] - 変換されたOjtPhonemeクラスのリスト - """ - phoneme_data_list = [ - OjtPhoneme(phoneme=p, start=i, end=i + 1) - for i, p in enumerate(phoneme_str_list) - ] - phoneme_data_list = OjtPhoneme.convert(phoneme_data_list) - return phoneme_data_list - - -def split_mora(phoneme_list: List[OjtPhoneme]): - """ - OjtPhonemeのリストから、 - 母音の位置(vowel_indexes) - 母音の音素列(vowel_phoneme_list) - 子音の音素列(consonant_phoneme_list) - を生成し、返す - Parameters - ---------- - phoneme_list : List[OjtPhoneme] - phonemeクラスのリスト - Returns - ------- - consonant_phoneme_list : List[OjtPhoneme] - 子音の音素列 - vowel_phoneme_list : List[OjtPhoneme] - 母音の音素列 - vowel_indexes : : List[int] - 母音の位置 - """ - vowel_indexes = [ - i for i, p in enumerate(phoneme_list) if p.phoneme in mora_phoneme_list - ] - vowel_phoneme_list = [phoneme_list[i] for i in vowel_indexes] - # postとprevのvowel_indexの差として考えられる値は1か2 - # 理由としてはphoneme_listは、consonant、vowelの組み合わせか、vowel一つの連続であるから - # 
1の場合はconsonant(子音)が存在しない=母音のみ(a/i/u/e/o/N/cl/pau)で構成されるモーラ(音)である - # 2の場合はconsonantが存在するモーラである - # なので、2の場合(else)でphonemeを取り出している - consonant_phoneme_list: List[Optional[OjtPhoneme]] = [None] + [ - None if post - prev == 1 else phoneme_list[post - 1] - for prev, post in zip(vowel_indexes[:-1], vowel_indexes[1:]) - ] - return consonant_phoneme_list, vowel_phoneme_list, vowel_indexes - - -def pre_process( - accent_phrases: List[AccentPhrase], -) -> Tuple[List[Mora], List[OjtPhoneme]]: - """ - AccentPhraseモデルのリストを整形し、処理に必要なデータの原型を作り出す - Parameters - ---------- - accent_phrases : List[AccentPhrase] - AccentPhraseモデルのリスト - Returns - ------- - flatten_moras : List[Mora] - AccentPhraseモデルのリスト内に含まれるすべてのMoraをリスト化したものを返す - phoneme_data_list : List[OjtPhoneme] - flatten_morasから取り出したすべてのPhonemeをOjtPhonemeに変換したものを返す - """ - flatten_moras = to_flatten_moras(accent_phrases) - - phoneme_each_mora = [ - ([mora.consonant] if mora.consonant is not None else []) + [mora.vowel] - for mora in flatten_moras - ] - phoneme_str_list = list(chain.from_iterable(phoneme_each_mora)) - phoneme_str_list = ["pau"] + phoneme_str_list + ["pau"] - - phoneme_data_list = to_phoneme_data_list(phoneme_str_list) - - return flatten_moras, phoneme_data_list - - -class SynthesisEngine(SynthesisEngineBase): - def __init__( - self, - core: CoreWrapper, - ): - """ - core.yukarin_s_forward: 音素列から、音素ごとの長さを求める関数 - length: 音素列の長さ - phoneme_list: 音素列 - speaker_id: 話者番号 - return: 音素ごとの長さ - - core.yukarin_sa_forward: モーラごとの音素列とアクセント情報から、モーラごとの音高を求める関数 - length: モーラ列の長さ - vowel_phoneme_list: 母音の音素列 - consonant_phoneme_list: 子音の音素列 - start_accent_list: アクセントの開始位置 - end_accent_list: アクセントの終了位置 - start_accent_phrase_list: アクセント句の開始位置 - end_accent_phrase_list: アクセント句の終了位置 - speaker_id: 話者番号 - return: モーラごとの音高 - - core.decode_forward: フレームごとの音素と音高から波形を求める関数 - length: フレームの長さ - phoneme_size: 音素の種類数 - f0: フレームごとの音高 - phoneme: フレームごとの音素 - speaker_id: 話者番号 - return: 音声波形 - - speakers: coreから取得したspeakersに関するjsonデータの文字列 - - supported_devices: - coreから取得した対応デバイスに関するjsonデータの文字列 - Noneの場合はコアが情報の取得に対応していないため、対応デバイスは不明 - """ - super().__init__() - self.core = core - self._speakers = self.core.metas() - self.mutex = threading.Lock() - try: - self._supported_devices = self.core.supported_devices() - except OldCoreError: - self._supported_devices = None - self.default_sampling_rate = 24000 - - @property - def speakers(self) -> str: - return self._speakers - - @property - def supported_devices(self) -> Optional[str]: - return self._supported_devices - - def initialize_speaker_synthesis(self, speaker_id: int, skip_reinit: bool): - try: - with self.mutex: - # 以下の条件のいずれかを満たす場合, 初期化を実行する - # 1. 引数 skip_reinit が False の場合 - # 2. 
話者が初期化されていない場合 - if (not skip_reinit) or (not self.core.is_model_loaded(speaker_id)): - self.core.load_model(speaker_id) - except OldCoreError: - pass # コアが古い場合はどうしようもないので何もしない - - def is_initialized_speaker_synthesis(self, speaker_id: int) -> bool: - try: - return self.core.is_model_loaded(speaker_id) - except OldCoreError: - return True # コアが古い場合はどうしようもないのでTrueを返す - - def replace_phoneme_length( - self, accent_phrases: List[AccentPhrase], speaker_id: int - ) -> List[AccentPhrase]: - """ - accent_phrasesの母音・子音の長さを設定する - Parameters - ---------- - accent_phrases : List[AccentPhrase] - アクセント句モデルのリスト - speaker_id : int - 話者ID - Returns - ------- - accent_phrases : List[AccentPhrase] - 母音・子音の長さが設定されたアクセント句モデルのリスト - """ - # モデルがロードされていない場合はロードする - self.initialize_speaker_synthesis(speaker_id, skip_reinit=True) - # phoneme - # AccentPhraseをすべてMoraおよびOjtPhonemeの形に分解し、処理可能な形にする - flatten_moras, phoneme_data_list = pre_process(accent_phrases) - # OjtPhonemeの形に分解されたもの(phoneme_data_list)から、vowel(母音)の位置を抜き出す - _, _, vowel_indexes_data = split_mora(phoneme_data_list) - - # yukarin_s - # OjtPhonemeのリストからOjtPhonemeのPhoneme ID(OpenJTalkにおける音素のID)のリストを作る - phoneme_list_s = numpy.array( - [p.phoneme_id for p in phoneme_data_list], dtype=numpy.int64 - ) - # Phoneme IDのリスト(phoneme_list_s)をyukarin_s_forwardにかけ、推論器によって適切な音素の長さを割り当てる - with self.mutex: - phoneme_length = self.core.yukarin_s_forward( - length=len(phoneme_list_s), - phoneme_list=phoneme_list_s, - speaker_id=numpy.array(speaker_id, dtype=numpy.int64).reshape(-1), - ) - - # yukarin_s_forwarderの結果をaccent_phrasesに反映する - # flatten_moras変数に展開された値を変更することでコード量を削減しつつaccent_phrases内のデータを書き換えている - for i, mora in enumerate(flatten_moras): - mora.consonant_length = ( - phoneme_length[vowel_indexes_data[i + 1] - 1] - if mora.consonant is not None - else None - ) - mora.vowel_length = phoneme_length[vowel_indexes_data[i + 1]] - - return accent_phrases - - def replace_mora_pitch( - self, accent_phrases: List[AccentPhrase], speaker_id: int - ) -> List[AccentPhrase]: - """ - accent_phrasesの音高(ピッチ)を設定する - Parameters - ---------- - accent_phrases : List[AccentPhrase] - アクセント句モデルのリスト - speaker_id : int - 話者ID - Returns - ------- - accent_phrases : List[AccentPhrase] - 音高(ピッチ)が設定されたアクセント句モデルのリスト - """ - # モデルがロードされていない場合はロードする - self.initialize_speaker_synthesis(speaker_id, skip_reinit=True) - # numpy.concatenateが空リストだとエラーを返すのでチェック - if len(accent_phrases) == 0: - return [] - - # phoneme - # AccentPhraseをすべてMoraおよびOjtPhonemeの形に分解し、処理可能な形にする - flatten_moras, phoneme_data_list = pre_process(accent_phrases) - - # accent - def _create_one_hot(accent_phrase: AccentPhrase, position: int): - """ - 単位行列(numpy.eye)を応用し、accent_phrase内でone hotな配列(リスト)を作る - 例えば、accent_phraseのmorasの長さが12、positionが1なら - [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] - morasの長さが同じく12、positionが-1なら - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] - のような配列を生成する - accent_phraseがpause_moraを含む場合はさらに後ろに0が足される - Parameters - ---------- - accent_phrase : AccentPhrase - アクセント句モデル - position : int - one hotにするindex - Returns - ------- - one_hot : numpy.ndarray - one hotな配列(リスト) - """ - return numpy.r_[ - numpy.eye(len(accent_phrase.moras))[position], - (0 if accent_phrase.pause_mora is not None else []), - ] - - # accent_phrasesから、アクセントの開始位置のリストを作る - start_accent_list = numpy.concatenate( - [ - # accentはプログラミング言語におけるindexのように0始まりではなく1始まりなので、 - # accentが1の場合は0番目を指定している - # accentが1ではない場合、accentはend_accent_listに用いられる - _create_one_hot(accent_phrase, 0 if accent_phrase.accent == 1 else 1) - for accent_phrase in accent_phrases - ] - 
) - - # accent_phrasesから、アクセントの終了位置のリストを作る - end_accent_list = numpy.concatenate( - [ - # accentはプログラミング言語におけるindexのように0始まりではなく1始まりなので、1を引いている - _create_one_hot(accent_phrase, accent_phrase.accent - 1) - for accent_phrase in accent_phrases - ] - ) - - # accent_phrasesから、アクセント句の開始位置のリストを作る - # これによって、yukarin_sa_forwarder内でアクセント句を区別できる - start_accent_phrase_list = numpy.concatenate( - [_create_one_hot(accent_phrase, 0) for accent_phrase in accent_phrases] - ) - - # accent_phrasesから、アクセント句の終了位置のリストを作る - end_accent_phrase_list = numpy.concatenate( - [_create_one_hot(accent_phrase, -1) for accent_phrase in accent_phrases] - ) - - # 最初と最後に0を付け加える。これによってpau(前後の無音のためのもの)を付け加えたことになる - start_accent_list = numpy.r_[0, start_accent_list, 0] - end_accent_list = numpy.r_[0, end_accent_list, 0] - start_accent_phrase_list = numpy.r_[0, start_accent_phrase_list, 0] - end_accent_phrase_list = numpy.r_[0, end_accent_phrase_list, 0] - - # アクセント・アクセント句関連のデータをyukarin_sa_forwarderに渡すための最終処理、リスト内のデータをint64に変換する - start_accent_list = numpy.array(start_accent_list, dtype=numpy.int64) - end_accent_list = numpy.array(end_accent_list, dtype=numpy.int64) - start_accent_phrase_list = numpy.array( - start_accent_phrase_list, dtype=numpy.int64 - ) - end_accent_phrase_list = numpy.array(end_accent_phrase_list, dtype=numpy.int64) - - # phonemeに関するデータを取得(変換)する - ( - consonant_phoneme_data_list, - vowel_phoneme_data_list, - _, - ) = split_mora(phoneme_data_list) - - # yukarin_sa - # Phoneme関連のデータをyukarin_sa_forwarderに渡すための最終処理、リスト内のデータをint64に変換する - vowel_phoneme_list = numpy.array( - [p.phoneme_id for p in vowel_phoneme_data_list], dtype=numpy.int64 - ) - consonant_phoneme_list = numpy.array( - [ - p.phoneme_id if p is not None else -1 - for p in consonant_phoneme_data_list - ], - dtype=numpy.int64, - ) - - # 今までに生成された情報をyukarin_sa_forwardにかけ、推論器によってモーラごとに適切な音高(ピッチ)を割り当てる - with self.mutex: - f0_list = self.core.yukarin_sa_forward( - length=vowel_phoneme_list.shape[0], - vowel_phoneme_list=vowel_phoneme_list[numpy.newaxis], - consonant_phoneme_list=consonant_phoneme_list[numpy.newaxis], - start_accent_list=start_accent_list[numpy.newaxis], - end_accent_list=end_accent_list[numpy.newaxis], - start_accent_phrase_list=start_accent_phrase_list[numpy.newaxis], - end_accent_phrase_list=end_accent_phrase_list[numpy.newaxis], - speaker_id=numpy.array(speaker_id, dtype=numpy.int64).reshape(-1), - )[0] - - # 無声母音を含むMoraに関しては、音高(ピッチ)を0にする - for i, p in enumerate(vowel_phoneme_data_list): - if p.phoneme in unvoiced_mora_phoneme_list: - f0_list[i] = 0 - - # yukarin_sa_forwarderの結果をaccent_phrasesに反映する - # flatten_moras変数に展開された値を変更することでコード量を削減しつつaccent_phrases内のデータを書き換えている - for i, mora in enumerate(flatten_moras): - mora.pitch = f0_list[i + 1] - - return accent_phrases - - def _synthesis_impl(self, query: AudioQuery, speaker_id: int): - """ - 音声合成クエリから音声合成に必要な情報を構成し、実際に音声合成を行う - Parameters - ---------- - query : AudioQuery - 音声合成クエリ - speaker_id : int - 話者ID - Returns - ------- - wave : numpy.ndarray - 音声合成結果 - """ - # モデルがロードされていない場合はロードする - self.initialize_speaker_synthesis(speaker_id, skip_reinit=True) - # phoneme - # AccentPhraseをすべてMoraおよびOjtPhonemeの形に分解し、処理可能な形にする - flatten_moras, phoneme_data_list = pre_process(query.accent_phrases) - - # OjtPhonemeのリストからOjtPhonemeのPhoneme ID(OpenJTalkにおける音素のID)のリストを作る - phoneme_list_s = numpy.array( - [p.phoneme_id for p in phoneme_data_list], dtype=numpy.int64 - ) - - # length - # 音素の長さをリストに展開・結合する。ここには前後の無音時間も含まれる - phoneme_length_list = ( - [query.prePhonemeLength] - + [ - length - for mora in 
flatten_moras - for length in ( - [mora.consonant_length] if mora.consonant is not None else [] - ) - + [mora.vowel_length] - ] - + [query.postPhonemeLength] - ) - # floatにキャスト - phoneme_length = numpy.array(phoneme_length_list, dtype=numpy.float32) - - # lengthにSpeed Scale(話速)を適用する - phoneme_length /= query.speedScale - - # pitch - # モーラの音高(ピッチ)を展開・結合し、floatにキャストする - f0_list = [0] + [mora.pitch for mora in flatten_moras] + [0] - f0 = numpy.array(f0_list, dtype=numpy.float32) - # 音高(ピッチ)の調節を適用する(2のPitch Scale乗を掛ける) - f0 *= 2**query.pitchScale - - # 有声音素(音高(ピッチ)が0より大きいもの)か否かを抽出する - voiced = f0 > 0 - # 有声音素の音高(ピッチ)の平均値を求める - mean_f0 = f0[voiced].mean() - # 平均値がNaNではないとき、抑揚を適用する - # 抑揚は音高と音高の平均値の差に抑揚を掛けたもの((f0 - mean_f0) * Intonation Scale)に抑揚の平均値(mean_f0)を足したもの - if not numpy.isnan(mean_f0): - f0[voiced] = (f0[voiced] - mean_f0) * query.intonationScale + mean_f0 - - # OjtPhonemeの形に分解された音素リストから、vowel(母音)の位置を抜き出し、numpyのarrayにする - _, _, vowel_indexes_data = split_mora(phoneme_data_list) - vowel_indexes = numpy.array(vowel_indexes_data) - - # forward decode - # 音素の長さにrateを掛け、intにキャストする - rate = 24000 / 256 - phoneme_bin_num = numpy.round(phoneme_length * rate).astype(numpy.int32) - - # Phoneme IDを音素の長さ分繰り返す - phoneme = numpy.repeat(phoneme_list_s, phoneme_bin_num) - # f0を母音と子音の長さの合計分繰り返す - f0 = numpy.repeat( - f0, - [a.sum() for a in numpy.split(phoneme_bin_num, vowel_indexes[:-1] + 1)], - ) - - # phonemeの長さとOjtPhonemeのnum_phoneme(45)分の0で初期化された2次元配列を用意する - array = numpy.zeros((len(phoneme), OjtPhoneme.num_phoneme), dtype=numpy.float32) - # 初期化された2次元配列の各行をone hotにする - array[numpy.arange(len(phoneme)), phoneme] = 1 - phoneme = array - - # 今まで生成された情報をdecode_forwardにかけ、推論器によって音声波形を生成する - with self.mutex: - wave = self.core.decode_forward( - length=phoneme.shape[0], - phoneme_size=phoneme.shape[1], - f0=f0[:, numpy.newaxis], - phoneme=phoneme, - speaker_id=numpy.array(speaker_id, dtype=numpy.int64).reshape(-1), - ) - - # volume: ゲイン適用 - wave *= query.volumeScale - - # 出力サンプリングレートがデフォルト(decode forwarderによるもの、24kHz)でなければ、それを適用する - if query.outputSamplingRate != self.default_sampling_rate: - wave = resample( - wave, - query.outputSamplingRate * len(wave) // self.default_sampling_rate, - ) - - # ステレオ変換 - # 出力設定がステレオなのであれば、ステレオ化する - if query.outputStereo: - wave = numpy.array([wave, wave]).T - - return wave diff --git a/spaces/dongyi/MMFS/configs/base_config.py b/spaces/dongyi/MMFS/configs/base_config.py deleted file mode 100644 index 1d2e39e62bdfeb661f542af16dc75a6be96b93b9..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/configs/base_config.py +++ /dev/null @@ -1,160 +0,0 @@ -import yaml -import copy -from typing import Union - -class BaseConfig(): - - def __init__(self): - self.__config_dict = {} - self.__check_func_dict = {} - - is_greater_than_0 = lambda x: x > 0 - - # common config - self._add_option('common', 'name', str, 'style_master') - self._add_option('common', 'model', str, 'cycle_gan') - self._add_option('common', 'phase', str, 'train', check_func=lambda x: x in ['train', 'test']) - self._add_option('common', 'gpu_ids', list, [0]) - self._add_option('common', 'verbose', bool, False) - - # model config - self._add_option('model', 'input_nc', int, 3, check_func=is_greater_than_0) - self._add_option('model', 'output_nc', int, 3, check_func=is_greater_than_0) - - # dataset config - # common dataset options - self._add_option('dataset', 'use_absolute_datafile', bool, True) - self._add_option('dataset', 'batch_size', int, 1, check_func=is_greater_than_0) - 
self._add_option('dataset', 'n_threads', int, 4, check_func=is_greater_than_0) - self._add_option('dataset', 'dataroot', str, './') - self._add_option('dataset', 'drop_last', bool, False) - self._add_option('dataset', 'landmark_scale', list, None) - self._add_option('dataset', 'check_all_data', bool, False) - self._add_option('dataset', 'accept_data_error', bool, True) # Upon loading a bad data, if this is true, - # dataloader will throw an exception and - # load the next good data. - # If this is false, process will crash. - - self._add_option('dataset', 'train_data', dict, {}) - self._add_option('dataset', 'val_data', dict, {}) - - # paired data config - self._add_option('dataset', 'paired_trainA_folder', str, '') - self._add_option('dataset', 'paired_trainB_folder', str, '') - self._add_option('dataset', 'paired_train_filelist', str, '') - self._add_option('dataset', 'paired_valA_folder', str, '') - self._add_option('dataset', 'paired_valB_folder', str, '') - self._add_option('dataset', 'paired_val_filelist', str, '') - - # unpaired data config - self._add_option('dataset', 'unpaired_trainA_folder', str, '') - self._add_option('dataset', 'unpaired_trainB_folder', str, '') - self._add_option('dataset', 'unpaired_trainA_filelist', str, '') - self._add_option('dataset', 'unpaired_trainB_filelist', str, '') - self._add_option('dataset', 'unpaired_valA_folder', str, '') - self._add_option('dataset', 'unpaired_valB_folder', str, '') - self._add_option('dataset', 'unpaired_valA_filelist', str, '') - self._add_option('dataset', 'unpaired_valB_filelist', str, '') - - # custom data - self._add_option('dataset', 'custom_train_data', dict, {}) - self._add_option('dataset', 'custom_val_data', dict, {}) - - # training config - self._add_option('training', 'checkpoints_dir', str, './checkpoints') - self._add_option('training', 'log_dir', str, './logs') - self._add_option('training', 'use_new_log', bool, False) - self._add_option('training', 'continue_train', bool, False) - self._add_option('training', 'which_epoch', str, 'latest') - self._add_option('training', 'n_epochs', int, 100, check_func=is_greater_than_0) - self._add_option('training', 'n_epochs_decay', int, 100, check_func=is_greater_than_0) - self._add_option('training', 'save_latest_freq', int, 5000, check_func=is_greater_than_0) - self._add_option('training', 'print_freq', int, 200, check_func=is_greater_than_0) - self._add_option('training', 'save_epoch_freq', int, 5, check_func=is_greater_than_0) - self._add_option('training', 'epoch_as_iter', bool, False) - self._add_option('training', 'lr', float, 2e-4, check_func=is_greater_than_0) - self._add_option('training', 'lr_policy', str, 'linear', - check_func=lambda x: x in ['linear', 'step', 'plateau', 'cosine']) - self._add_option('training', 'lr_decay_iters', int, 50, check_func=is_greater_than_0) - self._add_option('training', 'DDP', bool, False) - self._add_option('training', 'num_nodes', int, 1, check_func=is_greater_than_0) - self._add_option('training', 'DDP_address', str, '127.0.0.1') - self._add_option('training', 'DDP_port', str, '29700') - self._add_option('training', 'find_unused_parameters', bool, False) # a DDP option that allows backward on a subgraph of the model - self._add_option('training', 'val_percent', float, 5.0, check_func=is_greater_than_0) # Uses x% of training data to validate - self._add_option('training', 'val', bool, True) # perform validation every epoch - self._add_option('training', 'save_training_progress', bool, False) # save images to create a training 
progression video - - # testing config - self._add_option('testing', 'results_dir', str, './results') - self._add_option('testing', 'load_size', int, 512, check_func=is_greater_than_0) - self._add_option('testing', 'crop_size', int, 512, check_func=is_greater_than_0) - self._add_option('testing', 'preprocess', list, ['scale_width']) - self._add_option('testing', 'visual_names', list, []) - self._add_option('testing', 'num_test', int, 999999, check_func=is_greater_than_0) - self._add_option('testing', 'image_format', str, 'jpg', check_func=lambda x: x in ['input', 'jpg', 'jpeg', 'png']) - - def _add_option(self, group_name, option_name, value_type, default_value, check_func=None): - # check name type - if not type(group_name) is str or not type(option_name) is str: - raise Exception('Type of {} and {} must be str.'.format(group_name, option_name)) - - # add group - if not group_name in self.__config_dict: - self.__config_dict[group_name] = {} - self.__check_func_dict[group_name] = {} - - # check type & default value - if not type(value_type) is type: - try: - if value_type.__origin__ is not Union: - raise Exception('{} is not a type.'.format(value_type)) - except Exception as e: - print(e) - if not type(default_value) is value_type: - try: - if value_type.__origin__ is not Union: - raise Exception('Type of {} must be {}.'.format(default_value, value_type)) - except Exception as e: - print(e) - - # add option to dict - if not option_name in self.__config_dict[group_name]: - if not check_func is None and not check_func(default_value): - raise Exception('Checking {}/{} failed.'.format(group_name, option_name)) - self.__config_dict[group_name][option_name] = default_value - self.__check_func_dict[group_name][option_name] = check_func - else: - raise Exception('{} has been already added.'.format(option_name)) - - def parse_config(self, cfg_file): - # load config from yaml file - with open(cfg_file, 'r') as f: - yaml_config = yaml.safe_load(f) - if not type(yaml_config) is dict: - raise Exception('Loading yaml file failed.') - - # replace default options - config_dict = copy.deepcopy(self.__config_dict) - for group in config_dict: - if group in yaml_config: - for option in config_dict[group]: - if option in yaml_config[group]: - value = yaml_config[group][option] - if not type(value) is type(config_dict[group][option]): - try: # if is not union, it won't have __origin__ attribute. So will throw an error. - # The line below is necessary because we check if has __origin__ attribute. - if config_dict[group][option].__origin__ is Union: - # check to see if type of belongs to a type in the union. - if not isinstance(value, config_dict[group][option].__args__): - raise Exception('Type of {}/{} must be {}.'.format(group, option, - config_dict[group][option].__args__)) - except Exception as e: # if the error was thrown, we know there's a type error. 
- print(e) - else: - check_func = self.__check_func_dict[group][option] - if not check_func is None and not check_func(value): - raise Exception('Checking {}/{} failed.'.format(group, option)) - config_dict[group][option] = value - return config_dict - diff --git a/spaces/dreji18/Text-Classification-App/app.py b/spaces/dreji18/Text-Classification-App/app.py deleted file mode 100644 index 5b7ace0f12ac2f445a3c2d123c97c0b58248b0cb..0000000000000000000000000000000000000000 --- a/spaces/dreji18/Text-Classification-App/app.py +++ /dev/null @@ -1,240 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Tue Jan 12 08:28:35 2021 - -@author: rejid4996 -""" - -# packages -import os -import re -import time -import base64 -import pickle -import numpy as np -import pandas as pd -import streamlit as st -from io import BytesIO -import preprocessor as p -from textblob.classifiers import NaiveBayesClassifier - -# custum function to clean the dataset (combining tweet_preprocessor and reguar expression) -def clean_tweets(df): - #set up punctuations we want to be replaced - REPLACE_NO_SPACE = re.compile("(\.)|(\;)|(\:)|(\!)|(\')|(\?)|(\,)|(\")|(\|)|(\()|(\))|(\[)|(\])|(\%)|(\$)|(\>)|(\<)|(\{)|(\})") - REPLACE_WITH_SPACE = re.compile("(Download file' # decode b'abc' => abc - -def download_model(model): - output_model = pickle.dumps(model) - b64 = base64.b64encode(output_model).decode() - href = f'Download Model .pkl File' - st.markdown(href, unsafe_allow_html=True) - -def main(): - """NLP App with Streamlit""" - - from PIL import Image - - wallpaper = Image.open('file.jpg') - wallpaper = wallpaper.resize((700,350)) - - st.sidebar.title("Text Classification App 1.0") - st.sidebar.success("Please reach out to https://www.linkedin.com/in/deepak-john-reji/ for more queries") - st.sidebar.subheader("Classifier using Textblob ") - - st.info("For more contents subscribe to my Youtube Channel https://www.youtube.com/channel/UCgOwsx5injeaB_TKGsVD5GQ") - st.image(wallpaper) - - options = ("Train the model", "Test the model", "Predict for a new data") - a = st.sidebar.empty() - value = a.radio("what do you wanna do", options, 0) - - if value == "Train the model": - - uploaded_file = st.file_uploader("*Upload your file, make sure you have a column for text that has to be classified and the label", type="xlsx") - - if uploaded_file: - - df = pd.read_excel(uploaded_file) - - option1 = st.sidebar.selectbox( - 'Select the text column', - tuple(df.columns.to_list())) - - option2 = st.sidebar.selectbox( - 'Select the label column', - tuple(df.columns.to_list())) - - # clean training data - df[option1] = clean_tweets(df[option1]) - - # Enter the label names - label1 = st.sidebar.text_input("Enter the label for '0' value") - label2 = st.sidebar.text_input("Enter the label for '1' value") - - # replace value with pos and neg - df[option2] = df[option2].map({0:label1, 1:label2}) - - gcr_config = st.sidebar.slider(label="choose the training size, longer the size longer the training time", - min_value=100, - max_value=10000, - step=10) - - #subsetting based on classes - df1 = df[df[option2] == label1][0:int(gcr_config/2)] - df2 = df[df[option2] == label2][0:int(gcr_config/2)] - - df_new = pd.concat([df1, df2]).reset_index(drop=True) - - - # convert in the format - training_list = [] - for i in df_new.index: - value = (df_new[option1][i], df_new[option2][i]) - training_list.append(value) - - # run classification - run_button = st.sidebar.button(label='Start Training') - - if run_button: - - # Train using Naive Bayes - start = time.time() 
# start time - cl = NaiveBayesClassifier(training_list[0:gcr_config]) - - st.success("Congratulations!!! Model trained successfully with an accuracy of "+str(cl.accuracy(training_list) * 100) + str("%")) - st.write("Total Time taken for Training :" + str((time.time()-start)/60) + " minutes") - - # download the model - download_model(cl) - - # testing the model - if value == "Test the model": - uploaded_file = st.file_uploader("*Upload your model file, make sure its in the right format (currently pickle file)", type="pkl") - if uploaded_file: - model = pickle.load(uploaded_file) - st.success("Congratulations!!! Model upload successfull") - - if model: - value1 = "" - test_sentence = st.text_input("Enter the testing sentence") - - #predict_button = st.button(label='Predict') - - if test_sentence: - st.info("Model Prediction is : " + model.classify(test_sentence)) - - "\n" - st.write("### 🎲 Help me train the model better. How is the prediction?") - "\n" - correct = st.checkbox("Correct") - wrong = st.checkbox("Incorrect") - - if correct: - st.success("Great!!! I am happy for you") - st.write("If you would like please try out for more examples") - - if wrong: - st.write("### 🎲 Dont worry!!! Lets add this new data to the model and retrain. ") - label = st.text_input("Could you write the actual label, please note the label name should be the same while you trained") - #retrain_button = st.button(label='Retrain') - if label: - new_data = [(test_sentence, label)] - model.update(new_data) - - st.write("### 🎲 Lets classify and see whether model had learned from this example ") - - st.write("Sentence : " + test_sentence) - st.info("New Model Prediction is : " + model.classify(test_sentence)) - - sec_wrong3 = st.checkbox("It's Correct") - sec_wrong1 = st.checkbox("Still Incorrect") - sec_wrong2 = st.checkbox("I will go ahead and change the data in excel and retrain the model") - - - if sec_wrong1: - st.write("### 🎲 Lets try training with some sentences of this sort") - new_sentence = st.text_input("Enter the training sentence") - new_label = st.text_input("Enter the training label") - - st.write("Lets try one last time ") - retrain_button1 = st.button(label='Retrain again!') - - if retrain_button1: - new_data1 = [(new_sentence, new_label)] - model.update(new_data1) - - st.write("Sentence : " + new_sentence) - st.info("New Model Prediction is : " + model.classify(new_sentence)) - - # download the model - download_model(model) - - if sec_wrong2: - st.info("Great!!! Fingers Crossed") - st.write("### 🎲 Please return to your excel file and add more sentences and Train the model again") - - if sec_wrong3: - st.info("Wow!!! Awesome") - st.write("Now lets download the updated model") - # download the model - download_model(model) - - # predicting for new data - if value == "Predict for a new data": - uploaded_file3 = st.file_uploader("*Upload your model file, make sure its in the right format (currently pickle file)", type="pkl") - if uploaded_file3: - model1 = pickle.load(uploaded_file3) - st.success("Congratulations!!! Model uploaded successfully") - - uploaded_file1 = st.file_uploader("*Upload your new data which you have to predict", type="xlsx") - if uploaded_file1: - st.success("Congratulations!!! 
Data uploaded successfully") - - df_valid = pd.read_excel(uploaded_file1) - - option3 = st.selectbox( - 'Select the text column which needs to be predicted', - tuple(df_valid.columns.to_list())) - - predict_button1 = st.button(label='Predict for new data') - - if predict_button1: - start1 = time.time() # start time - df_valid['predicted'] = df_valid[option3].apply(lambda tweet: model1.classify(tweet)) - - st.write("### 🎲 Prediction Successfull !!!") - - st.write("Total No. of sentences: "+ str(len(df_valid))) - st.write("Total Time taken for Prediction :" + str((time.time()-start1)/60) + " minutes") - - st.markdown(get_table_download_link(df_valid), unsafe_allow_html=True) - -if __name__ == "__main__": - main() diff --git a/spaces/dvitel/codebleu/utils.py b/spaces/dvitel/codebleu/utils.py deleted file mode 100644 index a5dcb39b510f960649c08a4a5b15117e52a166e2..0000000000000000000000000000000000000000 --- a/spaces/dvitel/codebleu/utils.py +++ /dev/null @@ -1,106 +0,0 @@ -# Natural Language Toolkit: Utility functions -# -# Copyright (C) 2001-2020 NLTK Project -# Author: Steven Bird -# URL: -# For license information, see LICENSE.TXT - -from itertools import chain - -def pad_sequence( - sequence, - n, - pad_left=False, - pad_right=False, - left_pad_symbol=None, - right_pad_symbol=None, -): - """ - Returns a padded sequence of items before ngram extraction. - >>> list(pad_sequence([1,2,3,4,5], 2, pad_left=True, pad_right=True, left_pad_symbol='', right_pad_symbol='')) - ['', 1, 2, 3, 4, 5, ''] - >>> list(pad_sequence([1,2,3,4,5], 2, pad_left=True, left_pad_symbol='')) - ['', 1, 2, 3, 4, 5] - >>> list(pad_sequence([1,2,3,4,5], 2, pad_right=True, right_pad_symbol='')) - [1, 2, 3, 4, 5, ''] - :param sequence: the source data to be padded - :type sequence: sequence or iter - :param n: the degree of the ngrams - :type n: int - :param pad_left: whether the ngrams should be left-padded - :type pad_left: bool - :param pad_right: whether the ngrams should be right-padded - :type pad_right: bool - :param left_pad_symbol: the symbol to use for left padding (default is None) - :type left_pad_symbol: any - :param right_pad_symbol: the symbol to use for right padding (default is None) - :type right_pad_symbol: any - :rtype: sequence or iter - """ - sequence = iter(sequence) - if pad_left: - sequence = chain((left_pad_symbol,) * (n - 1), sequence) - if pad_right: - sequence = chain(sequence, (right_pad_symbol,) * (n - 1)) - return sequence - - -# add a flag to pad the sequence so we get peripheral ngrams? - - -def ngrams( - sequence, - n, - pad_left=False, - pad_right=False, - left_pad_symbol=None, - right_pad_symbol=None, -): - """ - Return the ngrams generated from a sequence of items, as an iterator. - For example: - >>> from nltk.util import ngrams - >>> list(ngrams([1,2,3,4,5], 3)) - [(1, 2, 3), (2, 3, 4), (3, 4, 5)] - Wrap with list for a list version of this function. 
Set pad_left - or pad_right to true in order to get additional ngrams: - >>> list(ngrams([1,2,3,4,5], 2, pad_right=True)) - [(1, 2), (2, 3), (3, 4), (4, 5), (5, None)] - >>> list(ngrams([1,2,3,4,5], 2, pad_right=True, right_pad_symbol='')) - [(1, 2), (2, 3), (3, 4), (4, 5), (5, '')] - >>> list(ngrams([1,2,3,4,5], 2, pad_left=True, left_pad_symbol='')) - [('', 1), (1, 2), (2, 3), (3, 4), (4, 5)] - >>> list(ngrams([1,2,3,4,5], 2, pad_left=True, pad_right=True, left_pad_symbol='', right_pad_symbol='')) - [('', 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, '')] - :param sequence: the source data to be converted into ngrams - :type sequence: sequence or iter - :param n: the degree of the ngrams - :type n: int - :param pad_left: whether the ngrams should be left-padded - :type pad_left: bool - :param pad_right: whether the ngrams should be right-padded - :type pad_right: bool - :param left_pad_symbol: the symbol to use for left padding (default is None) - :type left_pad_symbol: any - :param right_pad_symbol: the symbol to use for right padding (default is None) - :type right_pad_symbol: any - :rtype: sequence or iter - """ - sequence = pad_sequence( - sequence, n, pad_left, pad_right, left_pad_symbol, right_pad_symbol - ) - - history = [] - while n > 1: - # PEP 479, prevent RuntimeError from being raised when StopIteration bubbles out of generator - try: - next_item = next(sequence) - except StopIteration: - # no more data, terminate the generator - return - history.append(next_item) - n -= 1 - for item in sequence: - history.append(item) - yield tuple(history) - del history[0] \ No newline at end of file diff --git a/spaces/elumamai/openai-whisper-large/app.py b/spaces/elumamai/openai-whisper-large/app.py deleted file mode 100644 index 0d7ff1647cd2be49d72e567ea588323d68b37ae5..0000000000000000000000000000000000000000 --- a/spaces/elumamai/openai-whisper-large/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/openai/whisper-large").launch() \ No newline at end of file diff --git a/spaces/erwann/Face-editor/masking.py b/spaces/erwann/Face-editor/masking.py deleted file mode 100644 index 262c74f2f028a05ec2032f0cde51a7710367a5f3..0000000000000000000000000000000000000000 --- a/spaces/erwann/Face-editor/masking.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import sys - -import matplotlib.pyplot as plt -import torch -from backend import ImagePromptEditor, ImageState, ProcessorGradientFlow -from loaders import load_default -from transformers import CLIPModel - -if __name__ == "__main__": - sys.path.append("taming-transformers") - device = "cuda" - - vqgan = load_default(device) - vqgan.eval() - - processor = ProcessorGradientFlow(device=device) - clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") - clip.to(device) - - promptoptim = ImagePromptEditor(vqgan, clip, processor, quantize=True) - state = ImageState(vqgan, promptoptim) - mask = torch.load("eyebrow_mask.pt") - x = state.blend("./test_data/face.jpeg", "./test_data/face2.jpeg", 0.5) - plt.imshow(x) - plt.show() - state.apply_prompts( - "a picture of a woman with big eyebrows", "", 0.009, 40, None, mask=mask - ) - print("done") diff --git a/spaces/ethan-ai/goofyai-3d_render_style_xl/app.py b/spaces/ethan-ai/goofyai-3d_render_style_xl/app.py deleted file mode 100644 index 4f2d3011c603b276c7800e5d1e9de8bf628eeda2..0000000000000000000000000000000000000000 --- a/spaces/ethan-ai/goofyai-3d_render_style_xl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - 
-gr.Interface.load("models/goofyai/3d_render_style_xl").launch() \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Cambiar Idioma Adobe Flash Cs6 Crack [Extra Quality].md b/spaces/falterWliame/Face_Mask_Detection/Cambiar Idioma Adobe Flash Cs6 Crack [Extra Quality].md deleted file mode 100644 index 31533d1a6f2f26fa95dbd562dd89a03505f88bd6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Cambiar Idioma Adobe Flash Cs6 Crack [Extra Quality].md +++ /dev/null @@ -1,34 +0,0 @@ -

      cambiar idioma adobe flash cs6 crack


      DOWNLOADhttps://urlca.com/2uDd7S



      -
-[Cambiar idioma adobe flash cs6 - Oct 14, 2012 05:12 AM] - - - -A: - -First of all, no, you cannot change the language of an application from Windows. - -The only option you have is to add the language options for your App to the application itself (in the linked file: Properties > General > Accessible Info > Tab Accessible Info) - -Something like this: - -Alternatively, if you read the application's documentation you will see that you can modify those accessibility entries for your own App: - -Apart from that, if that data lives in the App itself (which is not what I assume your problem is) and you do not want to modify the App itself, you can use the command: - -msiexec /i MyApp.app "C:\Ejemplo-Oculto.txt" - -This creates a file "C:\Ejemplo-Oculto.txt" with the accessibility entries that had been assigned to that file. If the file already existed, it took them over as its own entries and changed them to the size you now want. - -Be careful with what you insert into that file. If you put in text that is not written in your App's language (it takes a lot of patience) you may get errors, and then you will have to modify your App so that it has a safe language. - -Given that it is not an application that belongs to the system, you do not have to be too careful about what you insert into the file. - -Example (changing the file's language to Spanish): - -[Cambiar idioma adobe flash cs6 - Oct 14, 2012 04:54 AM] - -In that regard, you can see that it appears in Esp 4fefd39f24
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/Martyrologium Romanum 2004 Pdf 1 [CRACKED].md b/spaces/falterWliame/Face_Mask_Detection/Martyrologium Romanum 2004 Pdf 1 [CRACKED].md deleted file mode 100644 index 1e18349d421fa5a1bf02e9e1e1f0c032cc60a0da..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Martyrologium Romanum 2004 Pdf 1 [CRACKED].md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

      the book has been translated into english (london: burns and oates, 1908); into italian (napoli: francesco monaci, 1925); into french (paris: o. praemieres, 1945); into spanish (buenos aires: lacuna editorial, 1961); into german (munich: volk und welt , 1965; köln: benziger verlag, 1984); into italian and into latin, both in the patrologia latina (città del vaticano: bretschneider, 1966); into polish (przemyśl: wydawnictwo morzecki, 1976); and into portuguese (rio de janeiro: círio da sé, 1995). the 1988 edition was produced under the direction of real-encyklopädie deutscher literatur und kunst from a text of a. wiesel.

      -

      Martyrologium Romanum 2004 Pdf 1


      Downloadhttps://urlca.com/2uDc6o



      -

      the diocesan diocesan bishops of the roman province at the time the martyrologium romanum was compiled made up of the five former roman senatorial provinces of the 5th century, gaul, africa, the two italian provinces, and spain and illyricum. the diocesan bishop then was nominated by the pope. the diocesan bishop had the authority to promote or depose bishops in his own diocese (unless they had been promoted by the pope). to give you an idea of the early medieval roman world, there were over 250 bishops in the roman catholic church in the first century. the diocesan bishops met at their provincial synods and sent representatives to the provincial synod at their head or metropolitans met at their metropolitan synod. within the province, the presbyterium had the authority to depose a bishop in his own diocese, and many of these lay elected officers often became bishops.

      -

      the missal was composed by augustine of hippo (354-430), cardinal priest of the late roman empire and father of the christian school of thought known as the church of the african (c. 370-430), and his disciple macarius the great (342-459). macarius was a syrian priest who lived in egypt during the nestorian era. he wrote a letter to the nestorian patriarchs (432) asking them to send missionaries to preach the gospel. after receiving no word from the patriarchs, macarius began the work of writing a spiritual tome known as the marian psalter.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Quete D Ewilan Epub To Pdf __TOP__.md b/spaces/falterWliame/Face_Mask_Detection/Quete D Ewilan Epub To Pdf __TOP__.md deleted file mode 100644 index 66c61a79b1d985d4ab3dc9db871ee032e3387d60..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Quete D Ewilan Epub To Pdf __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Quete D Ewilan Epub To Pdf


      Download Ziphttps://urlca.com/2uDccQ



      -
-E-book La quête d'Ewilan Volume 1. Available in PDF, ePUB and MOBI formats. The author of the book is Pierre Bottero. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Ak arklar - En Gzel Ak arklarn cretsiz ndir.md b/spaces/fatiXbelha/sd/Ak arklar - En Gzel Ak arklarn cretsiz ndir.md deleted file mode 100644 index 2b98d48b84e1bbf0a1fb0b8d8e148b481713eb6d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Ak arklar - En Gzel Ak arklarn cretsiz ndir.md +++ /dev/null @@ -1,54 +0,0 @@ - -

      Aşk Indir: How to Download Love Songs from YouTube


      Introduction

      Do you love listening to romantic songs? Do you want to download your favorite love songs from YouTube and enjoy them offline? If yes, then you are in the right place. In this article, we will show you how to use aşk indir, a free and easy tool that lets you download any love song from YouTube in MP3 format. Whether you want to create a playlist for your partner, express your feelings with a song, or just relax with some soothing melodies, aşk indir can help you do it. Read on to find out how.

      -

      aşk indir


      DOWNLOAD 🗸 https://urllie.com/2uNE17




      What is aşk indir?

      Aşk indir is a Turkish word that means "love download". It is also the name of a website that allows you to download any love song from YouTube in MP3 format. Aşk indir is fast, simple, and free to use. You don't need to register, install any software, or pay any fees. All you need is a device with an internet connection and a browser.


      How does aşk indir work?

      Aşk indir works by converting YouTube videos into MP3 files that you can save on your device. You can choose from different quality options depending on your preference and storage space. Aşk indir supports downloading multiple songs at once, so you can create your own love playlist in minutes.


      How to use aşk indir?

      Using aşk indir is very easy. Just follow these simple steps:

      -

      aşk şarkıları indir
      -aşk romanları indir
      -aşk oyunları indir
      -aşk testi indir
      -aşk filmi indir
      -aşk acısı indir
      -aşk sözleri indir
      -aşk hikayeleri indir
      -aşk şiirleri indir
      -aşk resimleri indir
      -aşk mesajları indir
      -aşk videoları indir
      -aşk kitapları indir
      -aşk yemini indir
      -aşk rehberi indir
      -aşk dersleri indir
      -aşk tarifi indir
      -aşk olsun indir
      -aşk izle indir
      -aşk defteri indir
      -aşk mektupları indir
      -aşk radyosu indir
      -aşk müziği indir
      -aşk duaları indir
      -aşk türküleri indir
      -aşk oyunu indir
      -aşk sevgiliye indir
      -aşk günlüğü indir
      -aşk atasözleri indir
      -aşk felsefesi indir
      -aşk psikolojisi indir
      -aşk astrolojisi indir
      -aşk burçları indir
      -aşk falı indir
      -aşk numarası indir
      -aşk yüzüğü indir
      -aşk kolyesi indir
      -aşk bileziği indir
      -aşk küpesi indir
      -aşk çiçeği indir
      -aşk pastası indir
      -aşk çikolatası indir
      -aşk kokteyli indir
      -aşk parfümü indir
      -aşk mumu indir
      -aşk masajı indir
      -aşk tatili indir
      -aşk hediyeleri indir
      -aşk alışverişi indir

      | | H3: Step 1: Find the love song you want to download on YouTube |

      Step 1: Find the love song you want to download on YouTube

      Go to YouTube and search for the love song you want to download. You can use keywords like "love songs", "romantic songs", "aşk şarkıları", or the name of the artist or the song. You can also browse through different categories and playlists on YouTube to find the best love songs for your mood.

      | | H3: Step 2: Copy the URL of the YouTube video |

      Step 2: Copy the URL of the YouTube video

      Once you find the love song you want to download, click on it to open it in a new tab. Then, copy the URL of the YouTube video from the address bar of your browser. The URL should look something like this: https://www.youtube.com/watch?v=xxxxxxxxxxx

      | | H3: Step 3: Paste the URL into the search box of aşk indir |

      Step 3: Paste the URL into the search box of aşk indir

      Go to https://askindir.net/ and paste the URL of the YouTube video into the search box. Then, click on the "Download" button next to it.

      | | H3: Step 4: Choose the quality and format of the MP3 file |

      Step 4: Choose the quality and format of the MP3 file

      A new page will open with different options for downloading the MP3 file. You can choose the quality and format of the MP3 file according to your preference and storage space. You can choose from low, medium, or high quality, and from MP3 or M4A format. The higher the quality, the larger the file size. The MP3 format is more compatible with most devices, while the M4A format is more suitable for Apple devices.

      | | H3: Step 5: Download the MP3 file to your device |

      Step 5: Download the MP3 file to your device

      After you choose the quality and format of the MP3 file, click on the "Download" button below it. A new window will pop up asking you to save the file to your device. Choose a location where you want to save the file, and click on "Save". The download will start automatically and will take a few seconds or minutes depending on your internet speed and file size.

      | | H2: Benefits of using aşk indir |

      Benefits of using aşk indir

      There are many benefits of using aşk indir to download love songs from YouTube. Here are some of them:

      | | H3: You can enjoy your favorite love songs offline |

      You can enjoy your favorite love songs offline

      By downloading love songs from YouTube, you can listen to them anytime and anywhere without worrying about internet connection, data usage, or ads. You can also transfer them to other devices, such as your phone, tablet, laptop, or MP3 player. You can create your own love playlist and enjoy it with your partner or by yourself.

      | | H3: You can discover new love songs from different genres and languages |

      You can discover new love songs from different genres and languages

      YouTube has a huge collection of love songs from different genres and languages. You can find love songs from pop, rock, jazz, classical, country, rap, R&B, soul, folk, indie, and more. You can also find love songs from Turkish, English, Spanish, French, Arabic, Hindi, Chinese, Japanese, Korean, and more. You can explore new love songs that suit your taste and mood.

      | | H3: You can express your feelings with a song |

      You can express your feelings with a song

      Sometimes, words are not enough to convey your emotions. A song can help you express your feelings better than words. You can use a song to tell someone how much you love them, how much you miss them, how much you appreciate them, or how much you are sorry for hurting them. You can also use a song to cheer someone up, make someone laugh, or make someone cry. A song can be a powerful way to communicate with your partner or crush.

      | | H2: Conclusion |

      Conclusion

      Aşk indir is a great tool for downloading love songs from YouTube in MP3 format. It is fast, simple, and free to use. You don't need to register, install any software, or pay any fees. You can download any love song from YouTube in minutes and enjoy it offline. You can also discover new love songs from different genres and languages and express your feelings with a song. Aşk indir is the best way to download love songs from YouTube.

      | | H2: FAQs |

      FAQs

      Here are some frequently asked questions about aşk indir:

      | | H4: Q: Is aşk indir legal? |

      Q: Is aşk indir legal?

      | | H4: A: Aşk indir is legal as long as you use it for personal and non-commercial purposes. You should respect the rights of the original creators and owners of the YouTube videos and songs. You should not distribute or sell the downloaded MP3 files without their permission. |

      A: Aşk indir is legal as long as you use it for personal and non-commercial purposes. You should respect the rights of the original creators and owners of the YouTube videos and songs. You should not distribute or sell the downloaded MP3 files without their permission.

      | | H4: Q: Is aşk indir safe? |

      Q: Is aşk indir safe?

      | | H4: A: Aşk indir is safe to use. It does not contain any viruses, malware, spyware, or ads. It does not collect any personal information or data from you or your device. It does not harm your device or affect its performance. |

      A: Aşk indir safe?

      | | H4: Q: How many love songs can I download with aşk indir? |

      Q: How many love songs can I download with aşk indir?

      | | H4: A: You can download as many love songs as you want with aşk indir. There is no limit to the number of songs you can download. However, you should be mindful of the storage space on your device and the bandwidth of your internet connection. |

      A: You can download as many love songs as you want with aşk indir. There is no limit to the number of songs you can download. However, you should be mindful of the storage space on your device and the bandwidth of your internet connection.

      | | H4: Q: What are some of the best love songs to download with aşk indir? |

      Q: What are some of the best love songs to download with aşk indir?

      | | H4: A: There are many great love songs to download with aşk indir. Some of them are: |

      A: There are many great love songs to download with aşk indir. Some of them are:

      | | H5: - Seni Seviyorum by Tarkan |
      - Seni Seviyorum by Tarkan
      | | H5: - I Will Always Love You by Whitney Houston |
      - I Will Always Love You by Whitney Houston
      | | H5: - Despacito by Luis Fonsi and Daddy Yankee |
      - Despacito by Luis Fonsi and Daddy Yankee
      | | H5: - La Vie En Rose by Edith Piaf |
      - La Vie En Rose by Edith Piaf
      | | H5: - Tum Hi Ho by Arijit Singh |
      - Tum Hi Ho by Arijit Singh
      | | H4: Q: How can I share the love songs I downloaded with aşk indir? |

      Q: How can I share the love songs I downloaded with aşk indir?

      | | H4: A: You can share the love songs you downloaded with aşk indir with your friends, family, or partner in different ways. You can send them via email, WhatsApp, Facebook, Instagram, or any other social media platform. You can also burn them on a CD, put them on a USB drive, or play them on a speaker. You can also create a QR code for the MP3 file and let others scan it to download it. |

      A: You can share the love songs you downloaded with aşk indir with your friends, family, or partner in different ways. You can send them via email, WhatsApp, Facebook, Instagram, or any other social media platform. You can also burn them on a CD, put them on a USB drive, or play them on a speaker. You can also create a QR code for the MP3 file and let others scan it to download it.

      |

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Brawl with Your Friends in Brawl Stars Season 1 - Download Now.md b/spaces/fatiXbelha/sd/Brawl with Your Friends in Brawl Stars Season 1 - Download Now.md deleted file mode 100644 index dac2c0ed9632d7fc48949ed5afb0f9711a2e3a8b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Brawl with Your Friends in Brawl Stars Season 1 - Download Now.md +++ /dev/null @@ -1,119 +0,0 @@ - -

      How to Download and Play Brawl Stars Season 1

      -

      If you are looking for a fast-paced, fun, and competitive multiplayer game for your mobile device, you might want to check out Brawl Stars. This game is developed by Supercell, the makers of popular games like Clash of Clans and Clash Royale. In this article, we will tell you everything you need to know about Brawl Stars Season 1, the first season of the game's Brawl Pass system. We will also show you how to download and play Brawl Stars Season 1 on your device.

      -

      What is Brawl Stars?

      -

      A brief introduction to the game and its features

      -

      Brawl Stars is a multiplayer online battle arena (MOBA) and third-person hero shooter game that was released worldwide on December 12, 2018, on iOS and Android. The game features various game modes, each with a different objective. Players can choose from a selection of characters called Brawlers that they have unlocked through Boxes, the Brawl Pass, the Trophy Road, or purchased through the Shop to use in battles. Each Brawler has different abilities, stats, and weapons that give way to endless strategy possibilities.

      -

      brawl stars season 1 download


      Download Zip https://urllie.com/2uNCWS



      -

      Some of the game modes in Brawl Stars are:

      -
        -
      • Gem Grab (3v3): Team up and out-strategize the opposing team. Collect and hold 10 gems to win, but get fragged and lose your gems.
      • -
      • Showdown (Solo/Duo): A battle royale style fight for survival. Collect power ups for your Brawler. Grab a friend or play solo - be the last Brawler standing in the rowdiest battle royale yet. Winner take all!
      • -
      • Brawl Ball (3v3): It's a whole new Brawl game! Show off your soccer/football skills and score two goals before the other team. There are no red cards here.
      • -
      • Bounty (3v3): Take out opponents to earn stars, but don’t let them pick you off. The squad with the most stars wins the match!
      • -
      • Heist (3v3): Protect your team’s safe and try to crack open your opponents’. Navigate the map to sneak, blast and blow your way clear to the enemies treasure.
      • -
      • Special Events: Limited time special PvE and PvP game modes.
      • -
      • Championship Challenge: Join Brawl Stars' esports scene with in-game qualifiers!
      • -
      -

      What is Brawl Pass?

      -

      A description of the progression system and its rewards

      -

      In May 2020, a game update added a new reward system called “Brawl Pass”. The Brawl Pass is the game's version of a battle pass. When players compete in battles, they earn Tokens to progress along the Brawl Pass, which unlock tiers that reward players with Gems, Power Points, Coins, Bling, Credits, or Chroma Credits; alternatively, 30 Gems can be spent to instantly unlock a tier.

      -

      It costs 169 Gems to buy the paid version of the Brawl Pass for the season and unlock exclusive rewards, including the Chromatic Brawler of the season, one exclusive skin, and Pins, both for the Chromatic Brawler and the Chromatic Brawler's exclusive skin, as well as additional standard rewards such as Coins, Power Points, and Credits. The Brawl Pass lasts for the duration of the season, which is usually around two months.

      -

      What is Brawl Stars Season 1?

      -

      An overview of the first season of Brawl Pass and its exclusive content

      -

      The first season of Brawl Pass was called "Tara's Bazaar" and it ran from May 13, 2020 to July 6, 2020. The theme of the season was inspired by the Middle Eastern culture and the Brawler Tara. The season introduced a new Chromatic Brawler named Gale, a grumpy old man who works as a janitor at Mr. P's hotel. Gale has a snow blower that shoots snowballs that can push back enemies and a super that creates a gust of wind that knocks back and damages enemies.

      -

      The exclusive rewards for the paid version of the Brawl Pass for Season 1 were:

      -
        -
      • Gale (Chromatic Brawler)
      • -
      • Mercenary Gale (Exclusive Skin)
      • -
      • Street Ninja Tara (Exclusive Skin)
      • -
      • Pins for Gale and Mercenary Gale
      • -
      -

      The free version of the Brawl Pass also offered some rewards, such as Gems, Coins, Power Points, and Boxes.

      -

      How to Download Brawl Stars Season 1?

      -

      A step-by-step guide on how to download and install the game on different devices

      -

      Brawl Stars is available for free on both iOS and Android devices. You can download it from the App Store or Google Play Store, depending on your device. Here are the steps to download and install Brawl Stars Season 1 on your device:

      -
        -
      1. Open the App Store or Google Play Store on your device.
      2. -
      3. Search for "Brawl Stars" in the search bar.
      4. -
      5. Tap on the icon of the game and then tap on "Install" or "Get".
      6. -
      7. Wait for the game to download and install on your device.
      8. -
      9. Once the game is installed, tap on "Open" or find the game icon on your home screen and tap on it.
      10. -
      11. You will see a loading screen with the Brawl Stars logo and then a tutorial video that explains the basics of the game.
      12. -
      13. After watching the video, you will be asked to choose a name for your account and agree to the terms of service and privacy policy.
      14. -
      15. You will then enter the game and be greeted by a friendly robot named Colette, who will guide you through your first match.
      16. -
      17. Congratulations! You have successfully downloaded and installed Brawl Stars Season 1 on your device.
      18. -
      -

      How to Play Brawl Stars Season 1?

      -

      Some tips and tricks on how to enjoy the game modes and characters of the first season

      -

      Now that you have downloaded and installed Brawl Stars Season 1, you might be wondering how to play it and have fun. Here are some tips and tricks that will help you get started:

      -

      How to download brawl stars season 1 on android
      -Brawl stars season 1 apk download free
      -Brawl stars season 1 download for ios devices
      -Brawl stars season 1 gameplay and features
      -Brawl stars season 1 release date and patch notes
      -Brawl stars season 1 tips and tricks for beginners
      -Brawl stars season 1 best brawlers and strategies
      -Brawl stars season 1 skins and gadgets unlock guide
      -Brawl stars season 1 brawl pass rewards and quests
      -Brawl stars season 1 review and ratings
      -Brawl stars season 1 vs clash royale comparison
      -Brawl stars season 1 mod apk download with unlimited gems
      -Brawl stars season 1 download for pc and mac
      -Brawl stars season 1 cheats and hacks for unlimited coins
      -Brawl stars season 1 online multiplayer and battle royale mode
      -Brawl stars season 1 offline mode and bots
      -Brawl stars season 1 events and special modes
      -Brawl stars season 1 maps and map maker tool
      -Brawl stars season 1 codes and redeem coupons
      -Brawl stars season 1 wallpapers and fan art
      -Brawl stars season 1 memes and funny videos
      -Brawl stars season 1 challenges and achievements
      -Brawl stars season 1 news and updates
      -Brawl stars season 1 rumors and leaks
      -Brawl stars season 1 fan theories and speculations
      -Brawl stars season 1 supercell support and feedback
      -Brawl stars season 1 esports and tournaments
      -Brawl stars season 1 community and clubs
      -Brawl stars season 1 merchandise and collectibles
      -Brawl stars season 1 soundtrack and voice actors
      -How to download brawl stars season 1 on iphone
      -Brawl stars season 1 obb file download for android
      -Brawl stars season 1 download size and requirements
      -Brawl stars season 1 trailer and teaser videos
      -Brawl stars season 1 new brawlers and super abilities
      -Brawl stars season 1 star powers and balance changes
      -Brawl stars season 1 best team compositions and synergies
      -Brawl stars season 1 skins and gadgets tier list
      -Brawl stars season 1 brawl pass cost and value analysis
      -Brawl stars season 1 pros and cons evaluation
      -Brawl stars season 1 alternatives and similar games
      -Brawl stars season 1 hack apk download with mod menu
      -Brawl stars season 1 download for windows and linux
      -Brawl stars season 1 glitches and bugs report
      -Brawl stars season 1 solo and duo showdown strategies
      -Brawl stars season 1 gem grab and heist tips
      -Brawl stars season 1 brawl ball and bounty tricks
      -Brawl stars season 1 hot zone and siege guides
      -Brawl stars season 1 custom maps and game modes
      -Brawl stars season 1 gift cards and free gems offers

      - - Try out different game modes and find your favorite one. Each game mode has its own rules, objectives, and strategies. You can switch between game modes by tapping on the "Game Mode" button at the bottom left corner of the screen. You can also see how much time is left until the next game mode rotation by tapping on the "i" button next to it. - Experiment with different Brawlers and find your favorite one. Each Brawler has its own strengths, weaknesses, abilities, and play style. You can unlock new Brawlers by opening Boxes, buying them from the Shop, or earning them from the Brawl Pass or Trophy Road. You can also upgrade your Brawlers by collecting Power Points and Coins. To change your Brawler, tap on the "Brawler" button at the bottom right corner of the screen. - Learn how to use your Brawler's attacks and super. Each Brawler has two types of attacks: a normal attack and a super attack. To use your normal attack, tap on the screen or drag your finger to aim and release to shoot. To use your super attack, you need to charge it up by hitting enemies with your normal attack. Once it is fully charged, you will see a yellow circle around your Brawler. To use your super attack, tap on the yellow button or drag it to aim and release to unleash it. - Collect Power Cubes in Showdown mode to increase your strength. In Showdown mode, you can find Power Cubes scattered around the map or dropped by defeated enemies. Power Cubes are glowing purple boxes that increase your health and damage by 10% each. The more Power Cubes you have , the stronger you become. However, be careful not to get eliminated by other players or the shrinking poison gas. - Use the environment to your advantage. In Brawl Stars, you can interact with the environment in different ways. You can hide behind bushes to ambush enemies or avoid detection. You can break walls and obstacles with some attacks to create new paths or expose enemies. You can also use bounce pads, teleporters, ropes, and other objects to move around the map faster or escape from danger. - Team up with your friends or other players. Brawl Stars is more fun when you play with others. You can invite your friends to join your team by tapping on the "Team" button at the top right corner of the screen. You can also join a club or create your own by tapping on the "Social" button at the bottom of the screen. Clubs are groups of players who can chat, play, and compete together. You can also play with random players by tapping on the "Play" button at the center of the screen. - Complete quests and challenges to earn rewards. Brawl Stars offers various quests and challenges that you can complete to earn Tokens, Boxes, Power Points, Coins, Gems, and other rewards. You can see your active quests and challenges by tapping on the "Quests" button at the top left corner of the screen. You can also see your progress on the Brawl Pass and Trophy Road by tapping on the "Brawl Pass" button at the top of the screen.

      Conclusion

      -

      A summary of the main points and a call to action for the readers

      -

      Brawl Stars is an exciting and addictive game that you can play on your mobile device. It offers various game modes, characters, and rewards that will keep you entertained for hours. Brawl Stars Season 1 was the first season of the game's Brawl Pass system that introduced a new Chromatic Brawler named Gale and exclusive skins and pins for him and Tara. To download and play Brawl Stars Season 1, you just need to follow the simple steps we have provided in this article.

      -

      So what are you waiting for? Download Brawl Stars Season 1 today and join millions of players around the world in this epic brawling adventure. And don't forget to share your feedback and experiences with us in the comments section below. Happy brawling!

      -

      FAQs

      -

      Some common questions and answers about Brawl Stars Season 1

      -
        -
      • Q: How much does it cost to buy the paid version of the Brawl Pass for Season 1?
      • -
      • A: It costs 169 Gems to buy the paid version of the Brawl Pass for Season 1. Gems are the premium currency of the game that you can buy with real money or earn from Boxes, Brawl Pass, or special offers.
      • -
      • Q: How long does Brawl Stars Season 1 last?
      • -
      • A: Brawl Stars Season 1 lasts for about two months, from May 13, 2020 to July 6, 2020.
      • -
      • Q: How can I unlock new Brawlers in Brawl Stars?
      • -
      • A: You can unlock new Brawlers by opening Boxes, buying them from the Shop, earning them from the Brawl Pass or Trophy Road, or winning them from special events.
      • -
      • Q: What are Pins and how can I use them?
      • -
      • A: Pins are cosmetic items that you can use to express yourself in chat or in-game. You can unlock Pins by opening Boxes or buying them from the Shop or Brawl Pass. You can equip up to three Pins per Brawler by tapping on the "Pins" button at the bottom right corner of the screen.
      • -
      • Q: What are Credits and Chroma Credits and how can I use them?
      • -
      • A: Credits and Chroma Credits are currencies that you can use to buy Bling for your Brawlers. Bling are cosmetic items that change the appearance of your Brawler's weapons or projectiles. You can earn Credits from Boxes or Brawl Pass and Chroma Credits from special events or offers. You can buy Bling by tapping on the "Bling" button at the bottom left corner of the screen.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fb700/chat3/crazy_functions/test_project/cpp/cppipc/ipc.cpp b/spaces/fb700/chat3/crazy_functions/test_project/cpp/cppipc/ipc.cpp deleted file mode 100644 index c713b852ea5a51fbeb4729b64561da482caaf351..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/crazy_functions/test_project/cpp/cppipc/ipc.cpp +++ /dev/null @@ -1,701 +0,0 @@ - -#include -#include -#include -#include // std::pair, std::move, std::forward -#include -#include // aligned_storage_t -#include -#include -#include -#include - -#include "libipc/ipc.h" -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/pool_alloc.h" -#include "libipc/queue.h" -#include "libipc/policy.h" -#include "libipc/rw_lock.h" -#include "libipc/waiter.h" - -#include "libipc/utility/log.h" -#include "libipc/utility/id_pool.h" -#include "libipc/utility/scope_guard.h" -#include "libipc/utility/utility.h" - -#include "libipc/memory/resource.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_array.h" - -namespace { - -using msg_id_t = std::uint32_t; -using acc_t = std::atomic; - -template -struct msg_t; - -template -struct msg_t<0, AlignSize> { - msg_id_t cc_id_; - msg_id_t id_; - std::int32_t remain_; - bool storage_; -}; - -template -struct msg_t : msg_t<0, AlignSize> { - std::aligned_storage_t data_ {}; - - msg_t() = default; - msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size) - : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} { - if (this->storage_) { - if (data != nullptr) { - // copy storage-id - *reinterpret_cast(&data_) = - *static_cast(data); - } - } - else std::memcpy(&data_, data, size); - } -}; - -template -ipc::buff_t make_cache(T& data, std::size_t size) { - auto ptr = ipc::mem::alloc(size); - std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size)); - return { ptr, size, ipc::mem::free }; -} - -struct cache_t { - std::size_t fill_; - ipc::buff_t buff_; - - cache_t(std::size_t f, ipc::buff_t && b) - : fill_(f), buff_(std::move(b)) - {} - - void append(void const * data, std::size_t size) { - if (fill_ >= buff_.size() || data == nullptr || size == 0) return; - auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size()); - std::memcpy(static_cast(buff_.data()) + fill_, data, new_fill - fill_); - fill_ = new_fill; - } -}; - -auto cc_acc() { - static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t)); - return static_cast(acc_h.get()); -} - -IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept { - return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align; -} - -IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept { - return ipc::make_align(alignof(std::max_align_t), align_chunk_size( - ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)) + size)); -} - -struct chunk_t { - std::atomic &conns() noexcept { - return *reinterpret_cast *>(this); - } - - void *data() noexcept { - return reinterpret_cast(this) - + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)); - } -}; - -struct chunk_info_t { - ipc::id_pool<> pool_; - ipc::spin_lock lock_; - - IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept { - return ipc::id_pool<>::max_count * chunk_size; - } - - ipc::byte_t *chunks_mem() noexcept { - return reinterpret_cast(this + 1); - } - - chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept { - if (id < 0) return nullptr; - return reinterpret_cast(chunks_mem() + 
(chunk_size * id)); - } -}; - -auto& chunk_storages() { - class chunk_handle_t { - ipc::shm::handle handle_; - - public: - chunk_info_t *get_info(std::size_t chunk_size) { - if (!handle_.valid() && - !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(), - sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) { - ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - auto info = static_cast(handle_.get()); - if (info == nullptr) { - ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - return info; - } - }; - static ipc::map chunk_hs; - return chunk_hs; -} - -chunk_info_t *chunk_storage_info(std::size_t chunk_size) { - auto &storages = chunk_storages(); - std::decay_t::iterator it; - { - static ipc::rw_lock lock; - IPC_UNUSED_ std::shared_lock guard {lock}; - if ((it = storages.find(chunk_size)) == storages.end()) { - using chunk_handle_t = std::decay_t::value_type::second_type; - guard.unlock(); - IPC_UNUSED_ std::lock_guard guard {lock}; - it = storages.emplace(chunk_size, chunk_handle_t{}).first; - } - } - return it->second.get_info(chunk_size); -} - -std::pair acquire_storage(std::size_t size, ipc::circ::cc_t conns) { - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return {}; - - info->lock_.lock(); - info->pool_.prepare(); - // got an unique id - auto id = info->pool_.acquire(); - info->lock_.unlock(); - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return {}; - chunk->conns().store(conns, std::memory_order_relaxed); - return { id, chunk->data() }; -} - -void *find_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return nullptr; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return nullptr; - return info->at(chunk_size, id)->data(); -} - -void release_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool sub_rc(ipc::wr, - std::atomic &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept { - return true; -} - -template -bool sub_rc(ipc::wr, - std::atomic &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept { - auto last_conns = curr_conns & ~conn_id; - for (unsigned k = 0;;) { - auto chunk_conns = conns.load(std::memory_order_acquire); - if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) { - return (chunk_conns & last_conns) == 0; - } - ipc::yield(k); - } -} - -template -void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) { - if (id < 0) { - ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return; - - if (!sub_rc(Flag{}, 
chunk->conns(), curr_conns, conn_id)) { - return; - } - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool clear_message(void* p) { - auto msg = static_cast(p); - if (msg->storage_) { - std::int32_t r_size = static_cast(ipc::data_length) + msg->remain_; - if (r_size <= 0) { - ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size); - return true; - } - release_storage( - *reinterpret_cast(&msg->data_), - static_cast(r_size)); - } - return true; -} - -struct conn_info_head { - - ipc::string name_; - msg_id_t cc_id_; // connection-info id - ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_; - ipc::shm::handle acc_h_; - - conn_info_head(char const * name) - : name_ {name} - , cc_id_ {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)} - , cc_waiter_{("__CC_CONN__" + name_).c_str()} - , wt_waiter_{("__WT_CONN__" + name_).c_str()} - , rd_waiter_{("__RD_CONN__" + name_).c_str()} - , acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} { - } - - void quit_waiting() { - cc_waiter_.quit_waiting(); - wt_waiter_.quit_waiting(); - rd_waiter_.quit_waiting(); - } - - auto acc() { - return static_cast(acc_h_.get()); - } - - auto& recv_cache() { - thread_local ipc::unordered_map tls; - return tls; - } -}; - -template -bool wait_for(W& waiter, F&& pred, std::uint64_t tm) { - if (tm == 0) return !pred(); - for (unsigned k = 0; pred();) { - bool ret = true; - ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] { - ret = waiter.wait_if(std::forward(pred), tm); - k = 0; - }); - if (!ret) return false; // timeout or fail - if (k == 0) break; // k has been reset - } - return true; -} - -template -struct queue_generator { - - using queue_t = ipc::queue, Policy>; - - struct conn_info_t : conn_info_head { - queue_t que_; - - conn_info_t(char const * name) - : conn_info_head{name} - , que_{("__QU_CONN__" + - ipc::to_string(DataSize) + "__" + - ipc::to_string(AlignSize) + "__" + name).c_str()} { - } - - void disconnect_receiver() { - bool dis = que_.disconnect(); - this->quit_waiting(); - if (dis) { - this->recv_cache().clear(); - } - } - }; -}; - -template -struct detail_impl { - -using policy_t = Policy; -using flag_t = typename policy_t::flag_t; -using queue_t = typename queue_generator::queue_t; -using conn_info_t = typename queue_generator::conn_info_t; - -constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept { - return static_cast(h); -} - -constexpr static queue_t* queue_of(ipc::handle_t h) noexcept { - return (info_of(h) == nullptr) ? 
nullptr : &(info_of(h)->que_); -} - -/* API implementations */ - -static void disconnect(ipc::handle_t h) { - auto que = queue_of(h); - if (que == nullptr) { - return; - } - que->shut_sending(); - assert(info_of(h) != nullptr); - info_of(h)->disconnect_receiver(); -} - -static bool reconnect(ipc::handle_t * ph, bool start_to_recv) { - assert(ph != nullptr); - assert(*ph != nullptr); - auto que = queue_of(*ph); - if (que == nullptr) { - return false; - } - if (start_to_recv) { - que->shut_sending(); - if (que->connect()) { // wouldn't connect twice - info_of(*ph)->cc_waiter_.broadcast(); - return true; - } - return false; - } - // start_to_recv == false - if (que->connected()) { - info_of(*ph)->disconnect_receiver(); - } - return que->ready_sending(); -} - -static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) { - assert(ph != nullptr); - if (*ph == nullptr) { - *ph = ipc::mem::alloc(name); - } - return reconnect(ph, start_to_recv); -} - -static void destroy(ipc::handle_t h) { - disconnect(h); - ipc::mem::free(info_of(h)); -} - -static std::size_t recv_count(ipc::handle_t h) noexcept { - auto que = queue_of(h); - if (que == nullptr) { - return ipc::invalid_value; - } - return que->conn_count(); -} - -static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - return false; - } - return wait_for(info_of(h)->cc_waiter_, [que, r_count] { - return que->conn_count() < r_count; - }, tm); -} - -template -static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) { - if (data == nullptr || size == 0) { - ipc::error("fail: send(%p, %zd)\n", data, size); - return false; - } - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: send, queue_of(h) == nullptr\n"); - return false; - } - if (que->elems() == nullptr) { - ipc::error("fail: send, queue_of(h)->elems() == nullptr\n"); - return false; - } - if (!que->ready_sending()) { - ipc::error("fail: send, que->ready_sending() == false\n"); - return false; - } - ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed); - if (conns == 0) { - ipc::error("fail: send, there is no receiver on this connection.\n"); - return false; - } - // calc a new message id - auto acc = info_of(h)->acc(); - if (acc == nullptr) { - ipc::error("fail: send, info_of(h)->acc() == nullptr\n"); - return false; - } - auto msg_id = acc->fetch_add(1, std::memory_order_relaxed); - auto try_push = std::forward(gen_push)(info_of(h), que, msg_id); - if (size > ipc::large_msg_limit) { - auto dat = acquire_storage(size, conns); - void * buf = dat.second; - if (buf != nullptr) { - std::memcpy(buf, data, size); - return try_push(static_cast(size) - - static_cast(ipc::data_length), &(dat.first), 0); - } - // try using message fragment - //ipc::log("fail: shm::handle for big message. 
msg_id: %zd, size: %zd\n", msg_id, size); - } - // push message fragment - std::int32_t offset = 0; - for (std::int32_t i = 0; i < static_cast(size / ipc::data_length); ++i, offset += ipc::data_length) { - if (!try_push(static_cast(size) - offset - static_cast(ipc::data_length), - static_cast(data) + offset, ipc::data_length)) { - return false; - } - } - // if remain > 0, this is the last message fragment - std::int32_t remain = static_cast(size) - offset; - if (remain > 0) { - if (!try_push(remain - static_cast(ipc::data_length), - static_cast(data) + offset, - static_cast(remain))) { - return false; - } - } - return true; -} - -static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size); - if (!que->force_push( - clear_message, - info->cc_id_, msg_id, remain, data, size)) { - return false; - } - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - return false; - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: recv, queue_of(h) == nullptr\n"); - return {}; - } - if (!que->connected()) { - // hasn't connected yet, just return. - return {}; - } - auto& rc = info_of(h)->recv_cache(); - for (;;) { - // pop a new message - typename queue_t::value_t msg; - if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] { - return !que->pop(msg); - }, tm)) { - // pop failed, just return. 
- return {}; - } - info_of(h)->wt_waiter_.broadcast(); - if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) { - continue; // ignore message to self - } - // msg.remain_ may minus & abs(msg.remain_) < data_length - std::int32_t r_size = static_cast(ipc::data_length) + msg.remain_; - if (r_size <= 0) { - ipc::error("fail: recv, r_size = %d\n", (int)r_size); - return {}; - } - std::size_t msg_size = static_cast(r_size); - // large message - if (msg.storage_) { - ipc::storage_id_t buf_id = *reinterpret_cast(&msg.data_); - void* buf = find_storage(buf_id, msg_size); - if (buf != nullptr) { - struct recycle_t { - ipc::storage_id_t storage_id; - ipc::circ::cc_t curr_conns; - ipc::circ::cc_t conn_id; - } *r_info = ipc::mem::alloc(recycle_t{ - buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id() - }); - if (r_info == nullptr) { - ipc::log("fail: ipc::mem::alloc.\n"); - return ipc::buff_t{buf, msg_size}; // no recycle - } else { - return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) { - auto r_info = static_cast(p_info); - IPC_UNUSED_ auto finally = ipc::guard([r_info] { - ipc::mem::free(r_info); - }); - recycle_storage(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id); - }, r_info}; - } - } else { - ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size); - continue; - } - } - // find cache with msg.id_ - auto cac_it = rc.find(msg.id_); - if (cac_it == rc.end()) { - if (msg_size <= ipc::data_length) { - return make_cache(msg.data_, msg_size); - } - // gc - if (rc.size() > 1024) { - std::vector need_del; - for (auto const & pair : rc) { - auto cmp = std::minmax(msg.id_, pair.first); - if (cmp.second - cmp.first > 8192) { - need_del.push_back(pair.first); - } - } - for (auto id : need_del) rc.erase(id); - } - // cache the first message fragment - rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) }); - } - // has cached before this message - else { - auto& cac = cac_it->second; - // this is the last message fragment - if (msg.remain_ <= 0) { - cac.append(&(msg.data_), msg_size); - // finish this message, erase it from cache - auto buff = std::move(cac.buff_); - rc.erase(cac_it); - return buff; - } - // there are remain datas after this message - cac.append(&(msg.data_), ipc::data_length); - } - } -} - -static ipc::buff_t try_recv(ipc::handle_t h) { - return recv(h, 0); -} - -}; // detail_impl - -template -using policy_t = ipc::policy::choose; - -} // internal-linkage - -namespace ipc { - -template -ipc::handle_t chan_impl::inited() { - ipc::detail::waiter::init(); - return nullptr; -} - -template -bool chan_impl::connect(ipc::handle_t * ph, char const * name, unsigned mode) { - return detail_impl>::connect(ph, name, mode & receiver); -} - -template -bool chan_impl::reconnect(ipc::handle_t * ph, unsigned mode) { - return detail_impl>::reconnect(ph, mode & receiver); -} - -template -void chan_impl::disconnect(ipc::handle_t h) { - detail_impl>::disconnect(h); -} - -template -void chan_impl::destroy(ipc::handle_t h) { - detail_impl>::destroy(h); -} - -template -char const * chan_impl::name(ipc::handle_t h) { - auto info = detail_impl>::info_of(h); - return (info == nullptr) ? 
nullptr : info->name_.c_str(); -} - -template -std::size_t chan_impl::recv_count(ipc::handle_t h) { - return detail_impl>::recv_count(h); -} - -template -bool chan_impl::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - return detail_impl>::wait_for_recv(h, r_count, tm); -} - -template -bool chan_impl::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::send(h, data, size, tm); -} - -template -buff_t chan_impl::recv(ipc::handle_t h, std::uint64_t tm) { - return detail_impl>::recv(h, tm); -} - -template -bool chan_impl::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::try_send(h, data, size, tm); -} - -template -buff_t chan_impl::try_recv(ipc::handle_t h) { - return detail_impl>::try_recv(h); -} - -template struct chan_impl>; -// template struct chan_impl>; // TBD -// template struct chan_impl>; // TBD -template struct chan_impl>; -template struct chan_impl>; - -} // namespace ipc diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/utils/face_enhancer.py b/spaces/fb700/chatglm-fitness-RLHF/src/utils/face_enhancer.py deleted file mode 100644 index 15851a15966c963d7bd04f35eebdaa6b22a3d966..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/utils/face_enhancer.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import torch - -from gfpgan import GFPGANer - -from tqdm import tqdm - -from src.utils.videoio import load_video_to_cv2 - -import cv2 - - -class GeneratorWithLen(object): - """ From https://stackoverflow.com/a/7460929 """ - - def __init__(self, gen, length): - self.gen = gen - self.length = length - - def __len__(self): - return self.length - - def __iter__(self): - return self.gen - -def enhancer_list(images, method='gfpgan', bg_upsampler='realesrgan'): - gen = enhancer_generator_no_len(images, method=method, bg_upsampler=bg_upsampler) - return list(gen) - -def enhancer_generator_with_len(images, method='gfpgan', bg_upsampler='realesrgan'): - """ Provide a generator with a __len__ method so that it can passed to functions that - call len()""" - - if os.path.isfile(images): # handle video to images - # TODO: Create a generator version of load_video_to_cv2 - images = load_video_to_cv2(images) - - gen = enhancer_generator_no_len(images, method=method, bg_upsampler=bg_upsampler) - gen_with_len = GeneratorWithLen(gen, len(images)) - return gen_with_len - -def enhancer_generator_no_len(images, method='gfpgan', bg_upsampler='realesrgan'): - """ Provide a generator function so that all of the enhanced images don't need - to be stored in memory at the same time. This can save tons of RAM compared to - the enhancer function. 
""" - - print('face enhancer....') - if not isinstance(images, list) and os.path.isfile(images): # handle video to images - images = load_video_to_cv2(images) - - # ------------------------ set up GFPGAN restorer ------------------------ - if method == 'gfpgan': - arch = 'clean' - channel_multiplier = 2 - model_name = 'GFPGANv1.4' - url = 'https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth' - elif method == 'RestoreFormer': - arch = 'RestoreFormer' - channel_multiplier = 2 - model_name = 'RestoreFormer' - url = 'https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth' - elif method == 'codeformer': # TODO: - arch = 'CodeFormer' - channel_multiplier = 2 - model_name = 'CodeFormer' - url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth' - else: - raise ValueError(f'Wrong model version {method}.') - - - # ------------------------ set up background upsampler ------------------------ - if bg_upsampler == 'realesrgan': - if not torch.cuda.is_available(): # CPU - import warnings - warnings.warn('The unoptimized RealESRGAN is slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') - bg_upsampler = None - else: - from basicsr.archs.rrdbnet_arch import RRDBNet - from realesrgan import RealESRGANer - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - bg_upsampler = RealESRGANer( - scale=2, - model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth', - model=model, - tile=400, - tile_pad=10, - pre_pad=0, - half=True) # need to set False in CPU mode - else: - bg_upsampler = None - - # determine model paths - model_path = os.path.join('gfpgan/weights', model_name + '.pth') - - if not os.path.isfile(model_path): - model_path = os.path.join('checkpoints', model_name + '.pth') - - if not os.path.isfile(model_path): - # download pre-trained models from url - model_path = url - - restorer = GFPGANer( - model_path=model_path, - upscale=2, - arch=arch, - channel_multiplier=channel_multiplier, - bg_upsampler=bg_upsampler) - - # ------------------------ restore ------------------------ - for idx in tqdm(range(len(images)), 'Face Enhancer:'): - - img = cv2.cvtColor(images[idx], cv2.COLOR_RGB2BGR) - - # restore faces and background if necessary - cropped_faces, restored_faces, r_img = restorer.enhance( - img, - has_aligned=False, - only_center_face=False, - paste_back=True) - - r_img = cv2.cvtColor(r_img, cv2.COLOR_BGR2RGB) - yield r_img diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Green Farm 3 Mod APK and Build Your Dream Farm.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Green Farm 3 Mod APK and Build Your Dream Farm.md deleted file mode 100644 index 526a5eae097f8f17c424d12f0a3616f8cc282208..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Green Farm 3 Mod APK and Build Your Dream Farm.md +++ /dev/null @@ -1,108 +0,0 @@ - -

      Download Green Farm Mod APK and Enjoy Farming Like Never Before

      -

      Do you love farming games but hate the limitations and restrictions that come with them? Do you want to experience the joy of cultivating your own crops, raising your own animals, and managing your own farm without spending real money or watching annoying ads? If you answered yes to any of these questions, then you should download Green Farm Mod APK right now!

      -

      What is Green Farm Mod APK?

      -

      Green Farm Mod APK is a modified version of the original Green Farm 3 game, which is a popular farming simulation game developed by Gameloft. In this game, you inherit an old manor from your uncle and you have to restore it to its former glory by farming, harvesting, crafting, and selling your products. You can also interact with other characters, join a community, and explore new locations.

      -

      download green farm mod apk


      Download Zip ✸✸✸ https://gohhs.com/2uPn6L



      -

      Features of Green Farm Mod APK

      -

      Green Farm Mod APK has many features that make it superior to the original game. Here are some of them:

      -

      Unlimited money

      -

      With Green Farm Mod APK, you don't have to worry about running out of money. You can buy anything you want, from seeds and tools to decorations and buildings. You can also upgrade your farm and expand your land without any limitations.

      -

      Unlimited resources

      -

      With Green Farm Mod APK, you don't have to wait for your crops to grow or your animals to produce. You can harvest them anytime you want and get unlimited resources. You can also use them to craft new items or sell them for more money.

      -

      No ads

      -

      With Green Farm Mod APK, you don't have to watch any ads that interrupt your gameplay. You can enjoy the game without any distractions or annoyances.

      -

      Easy controls

      -

      With Green Farm Mod APK, you don't have to struggle with complicated controls. You can play the game with simple taps and swipes on your screen. You can also customize the settings to suit your preferences.

      -

      Beautiful graphics

      -

      With Green Farm Mod APK, you don't have to settle for low-quality graphics. You can enjoy the game with stunning visuals and animations that make your farm look realistic and lively. You can also change the weather and time of day to create different atmospheres.

      -

      How to download and install Green Farm Mod APK?

      -

      Downloading and installing Green Farm Mod APK is very easy and fast. Just follow these steps:

      -

      download green farm 3 mod apk unlimited money and cash
      -download green farm 3 mod apk latest version
      -download green farm 3 mod apk android 1
      -download green farm 3 mod apk offline
      -download green farm 3 mod apk revdl
      -download green farm 3 mod apk unlimited coins and cash
      -download green farm 3 mod apk for pc
      -download green farm 3 mod apk hack
      -download green farm 3 mod apk free shopping
      -download green farm 3 mod apk unlimited everything
      -download green farm 4 mod apk
      -download green farm 4 mod apk unlimited money and cash
      -download green farm 4 mod apk latest version
      -download green farm 4 mod apk android 1
      -download green farm 4 mod apk offline
      -download green farm 4 mod apk revdl
      -download green farm 4 mod apk unlimited coins and cash
      -download green farm 4 mod apk for pc
      -download green farm 4 mod apk hack
      -download green farm 4 mod apk free shopping
      -download green farm old version mod apk
      -download green farm old version mod apk unlimited money and cash
      -download green farm old version mod apk latest version
      -download green farm old version mod apk android 1
      -download green farm old version mod apk offline
      -download green farm old version mod apk revdl
      -download green farm old version mod apk unlimited coins and cash
      -download green farm old version mod apk for pc
      -download green farm old version mod apk hack
      -download green farm old version mod apk free shopping
      -how to download green farm mod apk
      -how to download green farm 3 mod apk unlimited money and cash
      -how to download green farm 3 mod apk latest version
      -how to download green farm 3 mod apk android 1
      -how to download green farm 3 mod apk offline
      -how to download green farm 3 mod apk revdl
      -how to download green farm 3 mod apk unlimited coins and cash
      -how to download green farm 3 mod apk for pc
      -how to download green farm 3 mod apk hack
      -how to download green farm 3 mod apk free shopping
      -how to download green farm 4 mod apk
      -how to download green farm 4 mod apk unlimited money and cash
      -how to download green farm 4 mod apk latest version
      -how to download green farm 4 mod apk android 1
      -how to download green farm 4 mod apk offline
      -how to download green farm 4 mod apk revdl
      -how to download green farm 4 mod apk unlimited coins and cash
      -how to download green farm 4 mod apk for pc
      -how to download green farm 4 mod apk hack
      -how to download green farm 4 mod apk free shopping

      -

      Step 1: Download the APK file from a trusted source

      -

You can download the APK file from [this link], which is a safe and reliable source. The file size is about 14 MB, so it won't take long to download.

      -

      Step 2: Enable unknown sources on your device

      -

      Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

      -

      Step 3: Install the APK file and launch the game

      -

      After you have enabled unknown sources, you can install the APK file by tapping on it and following the instructions on the screen. Once the installation is complete, you can launch the game and enjoy farming like never before.
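If you prefer to install from a computer instead of tapping the file on your phone, the same APK can be sideloaded over adb. The snippet below is only a minimal sketch: it assumes the Android platform-tools (adb) are installed and on your PATH, USB debugging is enabled on the device, and green_farm_mod.apk is a placeholder for whatever file name you actually downloaded.

```python
# Minimal sideloading sketch (assumes adb is installed and a device is connected).
# "green_farm_mod.apk" is a placeholder file name, not the real download.
import subprocess

def sideload_apk(apk_path: str) -> None:
    # "adb install -r" installs the package and replaces any existing version.
    result = subprocess.run(["adb", "install", "-r", apk_path],
                            capture_output=True, text=True)
    if result.returncode == 0:
        print("Install finished:", result.stdout.strip())
    else:
        print("Install failed:", result.stderr.strip())

sideload_apk("green_farm_mod.apk")
```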

      -

      Why should you download Green Farm Mod APK?

      -

      If you are still not convinced that Green Farm Mod APK is the best farming game for you, here are some more reasons why you should download it:

      -

      Benefits of playing Green Farm Mod APK

      -

      Relaxing and fun gameplay

      -

      Green Farm Mod APK is a game that will make you feel relaxed and happy. You can escape from the stress and worries of your daily life and immerse yourself in a peaceful and charming farm. You can also have fun by trying new things, experimenting with different crops and animals, and discovering new secrets and surprises.

      -

      Customize your farm and character

      -

      Green Farm Mod APK is a game that will let you express your creativity and personality. You can customize your farm and character to suit your style and taste. You can choose from a variety of options, such as colors, shapes, sizes, patterns, and accessories. You can also change them anytime you want to create a new look.

      -

      Interact with other players and animals

      -

      Green Farm Mod APK is a game that will make you feel connected and social. You can interact with other players and animals in the game. You can chat with them, visit their farms, help them out, trade with them, or compete with them. You can also pet, feed, play with, or breed your animals to make them happy and loyal.

      -

      Complete missions and challenges

      -

      Green Farm Mod APK is a game that will challenge you and reward you. You can complete missions and challenges in the game to earn money, experience, items, and achievements. You can also unlock new levels, locations, features, and modes as you progress in the game. You will never get bored or run out of things to do in Green Farm Mod APK.

      -

      Learn about farming and nature

      -

      Green Farm Mod APK is a game that will educate you and inspire you. You can learn about farming and nature in the game by reading tips, facts, and trivia. You can also learn about different types of crops, animals, products, and processes that are involved in farming. You will gain a deeper appreciation and respect for the environment and the people who work in it.

      -

      Conclusion

      -

      Green Farm Mod APK is a game that will give you everything you want from a farming game and more. It has unlimited money, unlimited resources, no ads, easy controls, beautiful graphics, relaxing and fun gameplay, customization options, social interaction, missions and challenges, and educational content. It is a game that will make you fall in love with farming and nature.

      -

      If you are ready to download Green Farm Mod APK and enjoy farming like never before, click on [this link] now and start your adventure!

      -

      FAQs

      -

      Here are some frequently asked questions about Green Farm Mod APK:

      -
        -
      • Is Green Farm Mod APK safe to download?
      • -

        Yes, Green Farm Mod APK is safe to download as long as you use a trusted source like [this one]. It does not contain any viruses or malware that can harm your device or data.

        -
      • Is Green Farm Mod APK compatible with my device?
      • -

        Yes, Green Farm Mod APK is compatible with most Android devices that have Android 4.0.3 or higher. However, some devices may experience some performance issues or glitches due to different specifications or settings.

        -
      • Can I play Green Farm Mod APK offline?
      • -

        Yes, you can play Green Farm Mod APK offline without any internet connection. However, some features like social interaction or cloud saving may not work properly when offline.

        -
      • Can I update Green Farm Mod APK?
      • -

        No, you cannot update Green Farm Mod APK through the Google Play Store or any other official source. If you want to update the game, you have to download the latest version of the modded APK file from [this link] or any other trusted source.

        -
      • Can I restore my progress if I uninstall Green Farm Mod APK?
      • -

        No, you cannot restore your progress if you uninstall Green Farm Mod APK unless you have backed up your data using cloud saving or any other method. If you uninstall the game without backing up your data, you will lose all your progress and start from scratch.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download TikTok US APK for Android and Enjoy Fun Videos.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download TikTok US APK for Android and Enjoy Fun Videos.md deleted file mode 100644 index aac28d75f84988a4faeafc2c89549f456bdeabd5..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download TikTok US APK for Android and Enjoy Fun Videos.md +++ /dev/null @@ -1,100 +0,0 @@ -
      -

      TikTok US APK: How to Download and Install the Latest Version of the Popular Social Network

      -

      TikTok is one of the most popular social networks in the world, with over 500 million active users. It allows you to create and share short videos with music, filters, effects, stickers, and more. You can also watch millions of videos from different genres, such as comedy, gaming, DIY, food, sports, memes, pets, etc. Whether you want to express yourself, showcase your talent, learn something new, or just have fun, there's something for everyone on TikTok.

      -

      tiktok us apk


      DOWNLOAD >>>>> https://gohhs.com/2uPqO3



      -

      However, if you want to enjoy TikTok to the fullest, you may need to download and install a modified version of the app called TikTok US APK. This is a special app that gives you access to more features and content than the official app. In this article, we will explain what TikTok US APK is, how it differs from the official app, how to download and install it on your Android device, and what are the risks and disadvantages of using it.

      -

      What is TikTok and why is it so popular?

      -

      TikTok is a social network that lets you create and share short videos with music and effects

      -

      TikTok is an app that lets you record videos of up to 60 seconds long and add music and effects to them. You can choose from a huge library of songs and sounds from different genres and artists. You can also use filters, stickers, emojis, text, transitions, time-lapses, slow-motion, reverse, zoom, beauty mode, face swap, duet, react, green screen, voice changer, etc. You can also edit your videos with easy-to-use tools to trim, cut, merge, duplicate, rotate, crop, etc.

      -

      TikTok has millions of users and content creators from different genres and interests

      -

TikTok is not only a platform for making videos but also for watching them. You can discover millions of videos from different categories on your personalized feed based on what you watch, like, comment on, and share. You can also explore videos by hashtags, trends, challenges, topics, etc. You can also follow your favorite creators and interact with them by liking and commenting on their videos. You can also send them messages or join their live streams.

      TikTok is also a source of entertainment, inspiration, and education for many people

      -

      TikTok is not only a fun and creative app but also a platform where you can learn new things, get inspired, and be entertained. You can find videos on various topics, such as cooking, fitness, beauty, fashion, travel, art, music, dance, education, business, etc. You can also watch videos from celebrities, influencers, experts, professionals, etc. You can also join live streams and chat with other users or creators. You can also participate in challenges, contests, events, etc. and win prizes or rewards.

      -

      What is TikTok US APK and how is it different from the official app?

      -

      TikTok US APK is a modified version of the official app that bypasses some restrictions and limitations

      -

      TikTok US APK is an app that has been modified by some developers to offer more features and content than the official app. It is also known as TikTok Mod APK or TikTok Premium APK. It is not an official or authorized app by TikTok or its parent company ByteDance. It is a third-party app that has been created by hacking or altering the original app's code.

      -

      TikTok US APK allows you to access content from other regions, download videos without watermarks, and use more features and tools

      -

      Some of the features and benefits of using TikTok US APK are:

      -
        -
      • You can access content from other regions or countries that may be blocked or restricted by the official app. For example, you can watch videos from the US, UK, India, Japan, etc.
      • -
      • You can download videos without watermarks or logos and save them to your device or share them with others.
      • -
• You can use more features and tools that may be missing or limited in the official app. For example, you can use more filters, effects, stickers, emojis, etc.
      • -
      • You can remove ads and enjoy a smoother and faster experience.
      • -
• You can unlock premium features and content that may require a subscription or payment in the official app. For example, you can access exclusive music, sounds, effects, etc.
      • -
      -

      TikTok US APK is not available on the Google Play Store or the official website, so you need to download it from a third-party source

      -

      Since TikTok US APK is not an official or authorized app, you cannot find it on the Google Play Store or the official website of TikTok. You need to download it from a third-party source that provides the APK file. However, you need to be careful and cautious when downloading and installing TikTok US APK as it may contain malware or viruses that can harm your device or compromise your data security.

      How to download and install TikTok US APK on your Android device?

      -

      Step 1: Find a reliable and safe source to download the TikTok US APK file

      -

      The first step to download and install TikTok US APK is to find a trustworthy and secure source that provides the APK file. You can search online for websites or blogs that offer the latest version of the app. However, you need to be careful and avoid downloading from sources that may have fake or malicious links. You can also check the reviews and ratings of the sources to see if they are reliable and safe.
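Since this same article warns that third-party APKs can carry malware, one extra check is worth doing at this step: if the download page publishes a checksum, compare it with the hash of the file you actually received before installing. The sketch below is only an illustration; the file name and EXPECTED_SHA256 value are placeholders, and a matching hash only proves the file was not corrupted or swapped in transit, not that the APK is safe.

```python
# Hypothetical integrity check: compare the SHA-256 of the downloaded APK with the
# checksum published by the source. File name and expected hash are placeholders.
import hashlib

EXPECTED_SHA256 = "paste-the-published-checksum-here"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("tiktok_us.apk")
print("Checksum OK" if actual == EXPECTED_SHA256 else f"Checksum mismatch: {actual}")
```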

      -

      Step 2: Enable the installation of apps from unknown sources on your device settings

      -

The second step is to enable the installation of apps from unknown sources in your device settings. Because TikTok US APK is not an official or authorized app, you need to allow your device to install it from a third-party source. To do this, go to Settings > Security > Unknown Sources and toggle the option on.

      -

      Step 3: Locate the downloaded file and tap on it to start the installation process

      -

      The third step is to locate the downloaded file and tap on it to start the installation process. You can find the file in your device's download folder or in the notification bar. Once you tap on the file, you will see a pop-up window that asks you to confirm the installation. Tap on install and wait for a few seconds until the app is installed.

      -

      tiktok us apk download
      -tiktok us apk latest version
      -tiktok us apk mod
      -tiktok us apk for android
      -tiktok us apk mirror
      -tiktok us apk pure
      -tiktok us apk no watermark
      -tiktok us apk 2023
      -tiktok us apk uptodown
      -tiktok us apk ios
      -tiktok us apk ban
      -tiktok us apk without ads
      -tiktok us apk hack
      -tiktok us apk old version
      -tiktok us apk free
      -tiktok us apk premium
      -tiktok us apk update
      -tiktok us apk for pc
      -tiktok us apk online
      -tiktok us apk install
      -tiktok us apk file
      -tiktok us apk cracked
      -tiktok us apk pro
      -tiktok us apk unlocked
      -tiktok us apk original
      -tiktok us apk 2022
      -tiktok us apk reddit
      -tiktok us apk safe
      -tiktok us apk link
      -tiktok us apk size
      -tiktok us apk review
      -tiktok us apk alternative
      -tiktok us apk beta
      -tiktok us apk vip
      -tiktok us apk unlimited coins
      -tiktok us apk video downloader
      -tiktok us apk music downloader
      -tiktok us apk editor
      -tiktok us apk creator
      -tiktok us apk analytics tool[^1^]

      -

      Step 4: Follow the instructions on the screen and grant the necessary permissions to the app

      -

      The fourth step is to follow the instructions on the screen and grant the necessary permissions to the app. You may need to grant some permissions to the app, such as access to your camera, microphone, storage, location, etc. These permissions are required for the app to function properly and offer you its features and benefits. You can also customize some settings, such as language, region, notifications, etc.

      -

      Step 5: Enjoy using TikTok US APK and explore its features and benefits

      -

      The fifth and final step is to enjoy using TikTok US APK and explore its features and benefits. You can now launch the app and sign in with your account or create a new one. You can also browse, watch, create, and share videos with music and effects. You can also access content from other regions, download videos without watermarks, use more features and tools, remove ads, unlock premium features and content, etc.

      What are the risks and disadvantages of using TikTok US APK?

      -

      TikTok US APK is not an official or authorized app, so it may violate some terms and conditions of the original app

      -

      One of the risks of using TikTok US APK is that it may violate some terms and conditions of the original app. This means that you may be breaking some rules or policies that TikTok or its parent company ByteDance has set for its users and content creators. This may result in some consequences, such as getting your account banned, suspended, or deleted. You may also lose access to some features or content that are exclusive to the official app.

      -

      TikTok US APK may not be compatible with some devices or updates, and it may cause some errors or glitches

      -

      Another risk of using TikTok US APK is that it may not be compatible with some devices or updates, and it may cause some errors or glitches. This is because TikTok US APK is not an official or authorized app, so it may not be updated or optimized regularly. This may lead to some problems, such as crashing, freezing, lagging, or malfunctioning of the app. You may also miss out on some new features or improvements that the official app may offer.

      -

      TikTok US APK may contain malware or viruses that can harm your device or compromise your data security

      -

      A third risk of using TikTok US APK is that it may contain malware or viruses that can harm your device or compromise your data security. This is because TikTok US APK is not an official or authorized app, so it may not be verified or scanned for any malicious code or software. This may expose your device to some threats, such as hacking, phishing, spying, stealing, etc. You may also lose your personal information, such as your account details, passwords, contacts, photos, videos, etc.

      -

      TikTok US APK may not offer the same level of quality or support as the official app, and it may have some bugs or issues

      -

      A fourth risk of using TikTok US APK is that it may not offer the same level of quality or support as the official app, and it may have some bugs or issues. This is because TikTok US APK is not an official or authorized app, so it may not be tested or reviewed for any errors or defects. This may affect the performance and functionality of the app. You may also encounter some bugs or issues that may affect your user experience. You may also not get any customer service or technical support from TikTok or its parent company ByteDance.

      -

      Conclusion

      -

      TikTok US APK is a modified version of the official app that gives you access to more features and content than the original app. It allows you to access content from other regions, download videos without watermarks, use more features and tools, remove ads, unlock premium features and content, etc. However, TikTok US APK is not an official or authorized app, so it has some risks and disadvantages. It may violate some terms and conditions of the original app, cause some errors or glitches, contain malware or viruses, and have some bugs or issues. Therefore, you need to be careful and cautious when downloading and installing TikTok US APK on your Android device.

      -

      FAQs

      -
        -
      • Q: Is TikTok US APK legal?
      • -
      • A: TikTok US APK is not an official or authorized app by TikTok or its parent company ByteDance. It is a third-party app that has been modified by some developers to offer more features and content than the original app. Therefore, it may not be legal in some countries or regions where TikTok is banned or restricted.
      • -
      • Q: Is TikTok US APK safe?
      • -
      • A: TikTok US APK is not an official or authorized app by TikTok or its parent company ByteDance. It is a third-party app that has been modified by some developers to offer more features and content than the original app. Therefore, it may not be safe for your device or data security. It may contain malware or viruses that can harm your device or compromise your data security.
      • -
      • Q: How to update TikTok US APK?
      • -
      • A: TikTok US APK is not an official or authorized app by TikTok or its parent company ByteDance. It is a third-party app that has been modified by some developers to offer more features and content than the original app. Therefore, it may not be updated regularly or automatically by the Google Play Store or the official website of TikTok. You need to check online for any new versions of the app and download them from a reliable and safe source.
      • -
      • Q: How to uninstall TikTok US APK?
      • -
      • A: TikTok US APK is not an official or authorized app by TikTok or its parent company ByteDance. It is a third-party app that has been modified by some developers to offer more features and content than the original app. Therefore, you can uninstall it like any other app on your device. You need to go to your device settings, then apps, then TikTok US APK, and then tap on uninstall. You can also delete the APK file from your device storage.
      • -
      • Q: Can I use TikTok US APK on iOS devices?
      • -
      • A: TikTok US APK is an app that has been designed for Android devices only. It is not compatible with iOS devices, such as iPhones or iPads. Therefore, you cannot use it on iOS devices. You need to use the official app or an alternative app that is available for iOS devices.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fffffu/bing/src/lib/bots/bing/sr.ts b/spaces/fffffu/bing/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/seanet.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. - """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. 
- true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. 
- final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/querystring.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/querystring.d.ts deleted file mode 100644 index e1185478461f4b15444b7b2ae114c8a6819a992a..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/querystring.d.ts +++ /dev/null @@ -1,131 +0,0 @@ -/** - * The `querystring` module provides utilities for parsing and formatting URL - * query strings. It can be accessed using: - * - * ```js - * const querystring = require('querystring'); - * ``` - * - * `querystring` is more performant than `URLSearchParams` but is not a - * standardized API. Use `URLSearchParams` when performance is not critical - * or when compatibility with browser code is desirable. - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/querystring.js) - */ -declare module 'querystring' { - interface StringifyOptions { - encodeURIComponent?: ((str: string) => string) | undefined; - } - interface ParseOptions { - maxKeys?: number | undefined; - decodeURIComponent?: ((str: string) => string) | undefined; - } - interface ParsedUrlQuery extends NodeJS.Dict {} - interface ParsedUrlQueryInput extends NodeJS.Dict | ReadonlyArray | ReadonlyArray | null> {} - /** - * The `querystring.stringify()` method produces a URL query string from a - * given `obj` by iterating through the object's "own properties". 
- * - * It serializes the following types of values passed in `obj`:[string](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type) | - * [number](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type) | - * [bigint](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) | - * [boolean](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Boolean_type) | - * [string\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type) | - * [number\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type) | - * [bigint\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) | - * [boolean\[\]](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Boolean_type) The numeric values must be finite. Any other input values will be coerced to - * empty strings. - * - * ```js - * querystring.stringify({ foo: 'bar', baz: ['qux', 'quux'], corge: '' }); - * // Returns 'foo=bar&baz=qux&baz=quux&corge=' - * - * querystring.stringify({ foo: 'bar', baz: 'qux' }, ';', ':'); - * // Returns 'foo:bar;baz:qux' - * ``` - * - * By default, characters requiring percent-encoding within the query string will - * be encoded as UTF-8\. If an alternative encoding is required, then an alternative`encodeURIComponent` option will need to be specified: - * - * ```js - * // Assuming gbkEncodeURIComponent function already exists, - * - * querystring.stringify({ w: '中文', foo: 'bar' }, null, null, - * { encodeURIComponent: gbkEncodeURIComponent }); - * ``` - * @since v0.1.25 - * @param obj The object to serialize into a URL query string - * @param [sep='&'] The substring used to delimit key and value pairs in the query string. - * @param [eq='='] . The substring used to delimit keys and values in the query string. - */ - function stringify(obj?: ParsedUrlQueryInput, sep?: string, eq?: string, options?: StringifyOptions): string; - /** - * The `querystring.parse()` method parses a URL query string (`str`) into a - * collection of key and value pairs. - * - * For example, the query string `'foo=bar&abc=xyz&abc=123'` is parsed into: - * - * ```js - * { - * foo: 'bar', - * abc: ['xyz', '123'] - * } - * ``` - * - * The object returned by the `querystring.parse()` method _does not_prototypically inherit from the JavaScript `Object`. This means that typical`Object` methods such as `obj.toString()`, - * `obj.hasOwnProperty()`, and others - * are not defined and _will not work_. - * - * By default, percent-encoded characters within the query string will be assumed - * to use UTF-8 encoding. If an alternative character encoding is used, then an - * alternative `decodeURIComponent` option will need to be specified: - * - * ```js - * // Assuming gbkDecodeURIComponent function already exists... - * - * querystring.parse('w=%D6%D0%CE%C4&foo=bar', null, null, - * { decodeURIComponent: gbkDecodeURIComponent }); - * ``` - * @since v0.1.25 - * @param str The URL query string to parse - * @param [sep='&'] The substring used to delimit key and value pairs in the query string. - * @param [eq='='] . The substring used to delimit keys and values in the query string. - */ - function parse(str: string, sep?: string, eq?: string, options?: ParseOptions): ParsedUrlQuery; - /** - * The querystring.encode() function is an alias for querystring.stringify(). 
- */ - const encode: typeof stringify; - /** - * The querystring.decode() function is an alias for querystring.parse(). - */ - const decode: typeof parse; - /** - * The `querystring.escape()` method performs URL percent-encoding on the given`str` in a manner that is optimized for the specific requirements of URL - * query strings. - * - * The `querystring.escape()` method is used by `querystring.stringify()` and is - * generally not expected to be used directly. It is exported primarily to allow - * application code to provide a replacement percent-encoding implementation if - * necessary by assigning `querystring.escape` to an alternative function. - * @since v0.1.25 - */ - function escape(str: string): string; - /** - * The `querystring.unescape()` method performs decoding of URL percent-encoded - * characters on the given `str`. - * - * The `querystring.unescape()` method is used by `querystring.parse()` and is - * generally not expected to be used directly. It is exported primarily to allow - * application code to provide a replacement decoding implementation if - * necessary by assigning `querystring.unescape` to an alternative function. - * - * By default, the `querystring.unescape()` method will attempt to use the - * JavaScript built-in `decodeURIComponent()` method to decode. If that fails, - * a safer equivalent that does not throw on malformed URLs will be used. - * @since v0.1.25 - */ - function unescape(str: string): string; -} -declare module 'node:querystring' { - export * from 'querystring'; -} diff --git a/spaces/fffiloni/text-to-gif/README.md b/spaces/fffiloni/text-to-gif/README.md deleted file mode 100644 index 6542ffeace82781cbcf8ac0178ad56b51aeb7760..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/text-to-gif/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Text To Gif -emoji: 🔥 -colorFrom: purple -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/frncscp/Patacotron/pages/Resultados.py b/spaces/frncscp/Patacotron/pages/Resultados.py deleted file mode 100644 index e363f267c895ed13015c55b996e17d5ba43d4e7d..0000000000000000000000000000000000000000 --- a/spaces/frncscp/Patacotron/pages/Resultados.py +++ /dev/null @@ -1,46 +0,0 @@ -import streamlit as st - -st.set_page_config( - page_title = 'Patacotrón', - layout= 'wide', - initial_sidebar_state = 'collapsed', - menu_items = { - "About" : 'Proyecto ideado para la investigación de "Clasificación de imágenes de una sola clase con algortimos de Inteligencia Artificial".', - "Report a Bug" : 'https://docs.google.com/forms/d/e/1FAIpQLScH0ZxAV8aSqs7TPYi86u0nkxvQG3iuHCStWNB-BoQnSW2V0g/viewform?usp=sf_link' - } -) -statistics = 'statistics.jpg' - -with st.sidebar: - st.write("contact@patacotron.tech") - -cnn, vit, zero_shot, autoencoder, svm, iforest, gan = st.tabs(["CNN", "ViT", "Zero-Shot", "Autoencoder", "OC-SVM", 'iForest', 'GAN']) - -with cnn: - - col_a, col_b = st.columns(2) - - with col_a: - st.title("Resultados") - st.markdown( - f""" - ### Se usaron 4 carpetas distintas que suman +15000 archivos: - -Patacón-True/Frames: imágenes de patacones. - - -Bias/Almost-Patacón: objetos similares a patacones o con características que puedan sesgar al modelo. 
- """) - with col_b: - st.image(statistics) - -with vit: - st.write('Próximamente') -with zero_shot: - st.write('Próximamente') -with autoencoder: - st.write('Próximamente') -with gan: - st.write('Próximamente') -with svm: - st.write('Próximamente') -with iforest: - st.write('Próximamente') \ No newline at end of file diff --git a/spaces/fuckyoudeki/AutoGPT/tests/integration/weaviate_memory_tests.py b/spaces/fuckyoudeki/AutoGPT/tests/integration/weaviate_memory_tests.py deleted file mode 100644 index 015eab05484f485aeb8ee035e92ad7811e9dddd4..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/tests/integration/weaviate_memory_tests.py +++ /dev/null @@ -1,117 +0,0 @@ -import os -import sys -import unittest -from unittest import mock -from uuid import uuid4 - -from weaviate import Client -from weaviate.util import get_valid_uuid - -from autogpt.config import Config -from autogpt.memory.base import get_ada_embedding -from autogpt.memory.weaviate import WeaviateMemory - - -class TestWeaviateMemory(unittest.TestCase): - cfg = None - client = None - index = None - - @classmethod - def setUpClass(cls): - # only create the connection to weaviate once - cls.cfg = Config() - - if cls.cfg.use_weaviate_embedded: - from weaviate.embedded import EmbeddedOptions - - cls.client = Client( - embedded_options=EmbeddedOptions( - hostname=cls.cfg.weaviate_host, - port=int(cls.cfg.weaviate_port), - persistence_data_path=cls.cfg.weaviate_embedded_path, - ) - ) - else: - cls.client = Client( - f"{cls.cfg.weaviate_protocol}://{cls.cfg.weaviate_host}:{self.cfg.weaviate_port}" - ) - - cls.index = WeaviateMemory.format_classname(cls.cfg.memory_index) - - """ - In order to run these tests you will need a local instance of - Weaviate running. Refer to https://weaviate.io/developers/weaviate/installation/docker-compose - for creating local instances using docker. 
- Alternatively in your .env file set the following environmental variables to run Weaviate embedded (see: https://weaviate.io/developers/weaviate/installation/embedded): - - USE_WEAVIATE_EMBEDDED=True - WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate" - """ - - def setUp(self): - try: - self.client.schema.delete_class(self.index) - except: - pass - - self.memory = WeaviateMemory(self.cfg) - - def test_add(self): - doc = "You are a Titan name Thanos and you are looking for the Infinity Stones" - self.memory.add(doc) - result = self.client.query.get(self.index, ["raw_text"]).do() - actual = result["data"]["Get"][self.index] - - self.assertEqual(len(actual), 1) - self.assertEqual(actual[0]["raw_text"], doc) - - def test_get(self): - doc = "You are an Avenger and swore to defend the Galaxy from a menace called Thanos" - - with self.client.batch as batch: - batch.add_data_object( - uuid=get_valid_uuid(uuid4()), - data_object={"raw_text": doc}, - class_name=self.index, - vector=get_ada_embedding(doc), - ) - - batch.flush() - - actual = self.memory.get(doc) - - self.assertEqual(len(actual), 1) - self.assertEqual(actual[0], doc) - - def test_get_stats(self): - docs = [ - "You are now about to count the number of docs in this index", - "And then you about to find out if you can count correctly", - ] - - [self.memory.add(doc) for doc in docs] - - stats = self.memory.get_stats() - - self.assertTrue(stats) - self.assertTrue("count" in stats) - self.assertEqual(stats["count"], 2) - - def test_clear(self): - docs = [ - "Shame this is the last test for this class", - "Testing is fun when someone else is doing it", - ] - - [self.memory.add(doc) for doc in docs] - - self.assertEqual(self.memory.get_stats()["count"], 2) - - self.memory.clear() - - self.assertEqual(self.memory.get_stats()["count"], 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/furiosa-ai/ocr/static/js/main.c58ced1d.js b/spaces/furiosa-ai/ocr/static/js/main.c58ced1d.js deleted file mode 100644 index be30674a8e3acacb3648574df194ce400bd0361b..0000000000000000000000000000000000000000 --- a/spaces/furiosa-ai/ocr/static/js/main.c58ced1d.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! 
For license information please see main.c58ced1d.js.LICENSE.txt */ -!function(){var e={174:function(e,t,n){var r;e=n.nmd(e),function(a){var o=t,i=(e&&e.exports,"object"==typeof n.g&&n.g);i.global!==i&&i.window;var l=function(e){this.message=e};(l.prototype=new Error).name="InvalidCharacterError";var u=function(e){throw new l(e)},s="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",c=/[\t\n\f\r ]/g,f={encode:function(e){e=String(e),/[^\0-\xFF]/.test(e)&&u("The string to be encoded contains characters outside of the Latin1 range.");for(var t,n,r,a,o=e.length%3,i="",l=-1,c=e.length-o;++l>18&63)+s.charAt(a>>12&63)+s.charAt(a>>6&63)+s.charAt(63&a);return 2==o?(t=e.charCodeAt(l)<<8,n=e.charCodeAt(++l),i+=s.charAt((a=t+n)>>10)+s.charAt(a>>4&63)+s.charAt(a<<2&63)+"="):1==o&&(a=e.charCodeAt(l),i+=s.charAt(a>>2)+s.charAt(a<<4&63)+"=="),i},decode:function(e){var t=(e=String(e).replace(c,"")).length;t%4==0&&(t=(e=e.replace(/==?$/,"")).length),(t%4==1||/[^+a-zA-Z0-9/]/.test(e))&&u("Invalid character: the string to be decoded is not correctly encoded.");for(var n,r,a=0,o="",i=-1;++i>(-2*a&6)));return o},version:"1.0.0"};void 0===(r=function(){return f}.call(t,n,t,e))||(e.exports=r)}()},110:function(e,t,n){"use strict";var r=n(309),a={childContextTypes:!0,contextType:!0,contextTypes:!0,defaultProps:!0,displayName:!0,getDefaultProps:!0,getDerivedStateFromError:!0,getDerivedStateFromProps:!0,mixins:!0,propTypes:!0,type:!0},o={name:!0,length:!0,prototype:!0,caller:!0,callee:!0,arguments:!0,arity:!0},i={$$typeof:!0,compare:!0,defaultProps:!0,displayName:!0,propTypes:!0,type:!0},l={};function u(e){return r.isMemo(e)?i:l[e.$$typeof]||a}l[r.ForwardRef]={$$typeof:!0,render:!0,defaultProps:!0,displayName:!0,propTypes:!0},l[r.Memo]=i;var s=Object.defineProperty,c=Object.getOwnPropertyNames,f=Object.getOwnPropertySymbols,d=Object.getOwnPropertyDescriptor,p=Object.getPrototypeOf,h=Object.prototype;e.exports=function e(t,n,r){if("string"!==typeof n){if(h){var a=p(n);a&&a!==h&&e(t,a,r)}var i=c(n);f&&(i=i.concat(f(n)));for(var l=u(t),v=u(n),m=0;m