diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/10 Endrathukulla Full [UPD] Movie Download 720p.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/10 Endrathukulla Full [UPD] Movie Download 720p.md
deleted file mode 100644
index 4039ddd0bed05157cbf04a6f6d015a2a03461352..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/10 Endrathukulla Full [UPD] Movie Download 720p.md
+++ /dev/null
@@ -1,80 +0,0 @@
-## 10 endrathukulla full movie download 720p
-
-
-
-
-
-
-
-
-
-**Download File ===> [https://eromdesre.blogspot.com/?d=2txKKP](https://eromdesre.blogspot.com/?d=2txKKP)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# 10 Endrathukulla Full Movie Download 720p: A Thrilling Road Action Comedy
-
-
-
-If you are looking for a movie that combines action, comedy, and adventure, then you might want to check out **10 Endrathukulla**, a 2015 Tamil-language film starring Vikram and Samantha Ruth Prabhu. The movie is written and directed by Vijay Milton and produced by A. R. Murugadoss under the banner A. R. Murugadoss Productions and Fox Star Studios.
-
-
-
-The movie follows the story of an extreme driver (Vikram) who is on a mission to deliver his boss's goods to the rightful man. Along the way, he meets a mysterious woman (Samantha) who joins him on his journey. However, he soon finds himself pulled onto a path filled with twists and turns as he faces various challenges and enemies. The movie is packed with thrilling car chases, stunts, and humor, as well as a surprising revelation at the end.
-
-
-
-If you want to watch **10 Endrathukulla** full movie in 720p quality, you can download it from various online sources. However, be careful of illegal or pirated websites that may harm your device or violate copyright laws. We recommend using legal and safe platforms that offer high-quality streaming or downloading options for the **10 Endrathukulla** full movie.
-
-
-
-Some of the legal and safe platforms that you can use to watch **10 Endrathukulla** full movie in 720p are:
-
-
-
-- [Hotstar](https://www.hotstar.com/in/movies/10-endrathukulla/1000074620/watch): This is a popular streaming service that offers a variety of movies and shows in different languages. You can watch **10 Endrathukulla** full movie in 720p on Hotstar with a subscription plan or a VIP access.
-
-- [YouTube](https://www.youtube.com/watch?v=Q6kVU8uNdic): This is a free platform that allows you to watch videos of various genres and categories. You can watch **10 Endrathukulla** full movie in 720p on YouTube for free, but you may have to deal with some ads and interruptions.
-
-- [Amazon Prime Video](https://www.amazon.com/10-Endrathukulla-Vikram/dp/B01M7YJ4ZL): This is a premium streaming service that offers a wide range of movies and shows from different countries and languages. You can watch **10 Endrathukulla** full movie in 720p on Amazon Prime Video with a subscription plan or a rental fee.
-
-
-
-We hope you enjoy watching **10 Endrathukulla** full movie in 720p and have a great time with this entertaining road action comedy.
-
-
-
-If you want to know more about **10 Endrathukulla** and its cast and crew, here are some interesting facts and trivia that you might find useful.
-
-
-
-- **10 Endrathukulla** is the second collaboration between Vikram and A. R. Murugadoss, after the 2005 blockbuster **Ghajini**.
-
-- The movie was shot in various locations across India, including Chennai, Hyderabad, Rajasthan, Sikkim, and Nepal.
-
-- The movie features a cameo appearance by Bollywood actor Abhimanyu Singh, who plays the role of a corrupt cop.
-
-- The movie was originally titled **Paththu Enradhukulla**, which means "before I count to ten" in Tamil. However, the title was later changed to **10 Endrathukulla**, which is a shorter and catchier version.
-
-- The movie was released on October 21, 2015, coinciding with the festival of Dussehra. It received mixed reviews from critics and audiences, but was praised for its action sequences and performances.
-
-
-
-We hope you learned something new about **10 Endrathukulla** and its making. If you have any feedback or suggestions for us, please feel free to leave a comment below. We would love to hear from you.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ekb License Siemens Download.rar [UPDATED].md b/spaces/1gistliPinn/ChatGPT4/Examples/Ekb License Siemens Download.rar [UPDATED].md
deleted file mode 100644
index b8cd40c3b931a5a705866f6a172cccaad8737a1b..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Ekb License Siemens Download.rar [UPDATED].md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
How to Download and Install SIM EKB for Siemens Software
-
SIM EKB is a tool that allows you to activate Siemens software products without buying a license. It is mainly used by students and hobbyists who want to learn and experiment with Siemens software. However, it is not recommended for professional use, as it may violate Siemens' terms and conditions. In this article, we will show you how to download and install SIM EKB for Siemens software.
-
Step 1: Download SIM EKB Install
-
The latest version of SIM EKB as of April 2023 is SIM EKB Install 2022 11 27, which supports all the software in the TIA PORTAL V18 package along with many other upgrades. You can download it from the following link. The password to extract the file is plc4me.com.
-
Step 2: Delete old keys
-
If you have previously installed any Siemens software products, you may need to delete the old keys before installing new ones. To do this, go to the hidden folder C:\AX NF ZZ and delete all the files inside it. You may need to enable the option to show hidden files and folders in Windows Explorer.
-
Step 3: Run SIM EKB Install
-
After extracting the file, run the SIM EKB Install.exe file as administrator to open the main key installer window.
-
-
Select the software products that you want to activate from the list on the left. You can use the search box to find them quickly. The unlocked software will be highlighted in blue. Then click the Install button in the bottom-right corner.
-
Step 4: Enjoy your Siemens software
-
After installing the keys, you can launch your Siemens software and use it without any limitations. However, remember that this is only for educational purposes and not for commercial use. If you need professional support or updates, you should contact Siemens and buy a license.
Siemens offers a wide range of software products for various industrial applications. Some of the most popular ones are:
-
-
TIA Portal: This is an integrated engineering framework that allows you to program, configure, and commission Siemens automation devices such as PLCs, HMIs, drives, and networks. It supports various standards and protocols such as OPC UA, PROFINET, and EtherNet/IP. It also includes simulation and testing tools to help you optimize your system performance and reliability.
-
STEP 7: This is a programming software for Siemens PLCs that supports different languages such as Ladder Logic, Structured Text, Function Block Diagram, and Statement List. It allows you to create and edit programs, monitor and debug variables, and download and upload programs to PLCs. It can be used as a standalone software or as part of TIA Portal.
-
WinCC: This is a visualization software for Siemens HMIs that allows you to create and edit graphical user interfaces for your machines and processes. It supports various features such as animations, alarms, trends, recipes, and scripts. It can be used as a standalone software or as part of TIA Portal.
-
SINAMICS Startdrive: This is a commissioning software for Siemens drives that allows you to configure and optimize the parameters of your drive systems. It supports various types of drives such as frequency converters, servo drives, and motion controllers. It can be used as a standalone software or as part of TIA Portal.
-
SIMATIC PCS 7: This is a process control system software that allows you to design, implement, and operate complex process plants. It supports various functions such as distributed control, batch control, advanced process control, safety instrumented systems, and plant asset management. It also integrates with other Siemens software products such as SIMATIC NET, SIMATIC S7-400H/FH, and SIMATIC WinCC.
-
-
These are just some of the Siemens software products that you can activate with SIM EKB. However, there are many more that you can explore on the Siemens website or on the SIM EKB Install window.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars 2017 APK The Best Way to Relive the First Edition of the Game on Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars 2017 APK The Best Way to Relive the First Edition of the Game on Android.md
deleted file mode 100644
index 5189255ed073a8fb878eb525abbf24a969728293..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars 2017 APK The Best Way to Relive the First Edition of the Game on Android.md
+++ /dev/null
@@ -1,143 +0,0 @@
-
-
2017 Brawl Stars APK: How to Download and Play the Epic Mobile Game
-
If you are a fan of mobile games, you might have heard of Brawl Stars, a fast-paced multiplayer game from Supercell, the makers of Clash of Clans and Clash Royale. Brawl Stars was released globally in 2018, but before that, it was available in a few countries as a beta version in 2017. If you want to experience the original version of the game, you can download and install the 2017 Brawl Stars APK on your Android device. In this article, we will show you how to do that and also give you some tips on how to play the game.
Brawl Stars is a game that combines elements of shooter, MOBA, and battle royale genres. You can team up with your friends or play solo in various game modes, each with a different objective. You can also unlock and upgrade dozens of characters, called Brawlers, each with a unique ability and style. You can collect skins, pins, and trophies to show off your achievements and personality.
-
Different game modes and characters to choose from
-
Brawl Stars has several game modes, including Smash & Grab, Heist, Showdown, Bounty, and Brawl Ball. Each mode has its own rules and strategies, so you need to adapt your gameplay accordingly. Here is a brief overview of each mode:
-
-
Smash & Grab (3v3): Team up and out-strategize the opposing team. Collect and hold 10 gems to win, but get fragged and lose your gems.
-
Showdown (Solo/Duo): A battle royale style fight for survival. Collect power ups for your Brawler. Grab a friend or play solo - be the last Brawler standing in the rowdiest battle royale yet. Winner take all!
-
Brawl Ball (3v3): It's a whole new Brawl game! Show off your soccer/football skills and score two goals before the other team. There are no red cards here.
-
Bounty (3v3): Take out opponents to earn stars, but don’t let them pick you off. The squad with the most stars wins the match!
-
Heist (3v3): Protect your team’s safe and try to crack open your opponents’. Navigate the map to sneak, blast and blow your way clear to the enemies treasure.
-
Special Events: Limited time special PvE and PvP game modes.
-
Championship Challenge: Join Brawl Stars' esports scene with in-game qualifiers!
-
-
Brawl Stars also has 22 different Brawlers that you can unlock and use in any game mode. Each Brawler has a basic attack, a super ability, a star power, and a gadget. You can level up your Brawlers by collecting power points and coins, and unlock new skins by earning gems or buying them with real money. Some of the Brawlers are:
-
-
| Name | Type | Ability |
| --- | --- | --- |
| Shelly | Common | A shotgunner who can blast enemies at close range and charge her super to unleash a powerful shot that can destroy obstacles. |
| Nita | Common | A fighter who can summon a big bear to fight by her side. |
| Colt | Common | A sharpshooter who can fire a burst of bullets with great accuracy. |
| Bull | Common | A tank who can charge forward and deal massive damage with his double-barreled shotgun. |
| Jessie | Common | An inventor who can build a turret that shoots at enemies. |
| Brock | Rare | A rocket launcher who can fire long-range missiles that explode on impact. |
| Dynamike | Rare | A miner who can throw sticks of dynamite and a big barrel bomb. |
| Bo | Rare | A bowman who can shoot explosive arrows and plant hidden mines. |
| Tick | Rare | A metal ball of mischief who can detach and toss his head, which explodes after a few seconds. |
| 8-Bit | Rare | A retro arcade machine who can shoot laser beams and boost his and his allies' damage with his booster. |
| Emz | Rare | A social media star who can spray a cloud of hairspray that damages enemies over time. |
| El Primo | Super Rare | A wrestler who can punch enemies with his fiery fists and leap into the fray with his super. |
| Barley | Super Rare | A bartender who can toss bottles of flaming liquid that leave a burning area on the ground. |
| Poco | Super Rare | A musician who can heal himself and his allies with his soothing tunes. |
| Rosa | Super Rare | A botanist who can punch enemies with her boxing gloves and shield herself with her plant barrier. |
| Rico | Super Rare | A bouncy ball machine who can shoot bullets that bounce off walls and obstacles. |
| Darryl | Super Rare | A barrel robot who can roll into enemies and blast them with his double shotguns. |
| Penny | Epic | A pirate who can fire a bag of coins that splits into three on impact and build a cannon that shoots at enemies. |
| Piper | Epic | A sniper who can deal more damage the farther her bullets travel and drop bombs when she uses her umbrella to fly away. |
| Pam | Epic | A junker who can spray scrap metal at enemies and deploy a healing turret for her allies. |
| Frank | Epic | A zombie who can smash enemies with his hammer and stun them with his super. |
-
Other unlockable Brawlers include Mr. P, Sprout, Crow, Spike, Gale, and Colette.
-
How to download and install the 2017 Brawl Stars APK
-
The requirements and risks of using an APK file
-
An APK file is an Android application package that contains all the files and data needed to run an app on an Android device. You can download APK files from various sources online, but you need to be careful about the quality and security of the files. Some APK files may contain malware or viruses that can harm your device or steal your personal information. You also need to make sure that the APK file is compatible with your device and Android version.
-
To download and install the 2017 Brawl Stars APK, you will need an Android device that runs on Android 4.1 or higher, has at least 1 GB of RAM, and has enough storage space. You will also need to enable the option to install apps from unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. However, this also means that you are responsible for the safety and performance of your device. You should only download APK files from trusted sources and scan them for viruses before installing them.
-
The steps to download and install the APK file
-
Here are the steps to download and install the 2017 Brawl Stars APK on your Android device:
-
-
Go to a reliable website that offers the 2017 Brawl Stars APK file, such as [APKPure] or [APKMirror].
-
Find the 2017 Brawl Stars APK file and tap on the download button. The file size is about 100 MB, so make sure you have a stable internet connection and enough battery life.
-
Once the download is complete, locate the APK file in your device's file manager and tap on it to start the installation process.
-
Follow the instructions on the screen and grant the necessary permissions to the app.
-
Wait for the installation to finish and then launch the app from your home screen or app drawer.
-
Enjoy playing Brawl Stars!
-
-
How to play Brawl Stars on your Android device
-
The basic controls and gameplay mechanics
-
Brawl Stars is easy to learn but hard to master. The game has simple controls that you can customize according to your preference. You can use either a joystick or tap mode to move your Brawler around the map. You can also use either auto-aim or manual aim to shoot at enemies. To use your super ability, you need to fill up your super meter by hitting enemies with your basic attack. You can also use a gadget once per match if you have unlocked it for your Brawler.
-
2017 brawl stars apk download for android
-2017 brawl stars apk mod unlimited gems
-2017 brawl stars apk latest version
-2017 brawl stars apk free download uptodown
-2017 brawl stars apk old version
-2017 brawl stars apk hack no root
-2017 brawl stars apk offline installer
-2017 brawl stars apk update new features
-2017 brawl stars apk file size
-2017 brawl stars apk compatible devices
-2017 brawl stars apk gameplay tips
-2017 brawl stars apk review and rating
-2017 brawl stars apk best characters
-2017 brawl stars apk how to install
-2017 brawl stars apk error fix
-2017 brawl stars apk online multiplayer mode
-2017 brawl stars apk fun and addictive
-2017 brawl stars apk unlock all skins
-2017 brawl stars apk safe and secure
-2017 brawl stars apk original from Supercell
-2017 brawl stars apk cheats and tricks
-2017 brawl stars apk requirements and specifications
-2017 brawl stars apk alternative download links
-2017 brawl stars apk beta version testing
-2017 brawl stars apk support and feedback
-2017 brawl stars apk new maps and modes
-2017 brawl stars apk events and challenges
-2017 brawl stars apk rewards and trophies
-2017 brawl stars apk clans and friends
-2017 brawl stars apk ranking and leaderboard
-2017 brawl stars apk skins and customizations
-2017 brawl stars apk coins and gems generator
-2017 brawl stars apk patch notes and changelog
-2017 brawl stars apk bugs and glitches report
-2017 brawl stars apk videos and screenshots
-2017 brawl stars apk guides and tutorials
-2017 brawl stars apk forums and communities
-2017 brawl stars apk news and updates
-2017 brawl stars apk comparison with other games
-2017 brawl stars apk pros and cons analysis
-
The game has different gameplay mechanics depending on the game mode you choose. For example, in Smash & Grab, you need to collect gems from the center of the map and hold them until the countdown ends. If you die, you will drop all your gems, so you need to be careful and protect yourself and your teammates. In Showdown, you need to survive as long as possible by avoiding enemies, collecting power ups, and hiding in bushes or behind walls. The map will shrink over time, forcing you to confront other players. The last one standing wins.
-
Some tips and tricks to improve your skills
-
Brawl Stars is a game that requires strategy, teamwork, and skill. Here are some tips and tricks that can help you improve your skills:
-
-
Choose a Brawler that suits your play style and the game mode. For example, if you like close-range combat, you can use Shelly or Bull. If you prefer long-range sniping, you can use Piper or Brock. If you like to heal and support your allies, you can use Poco or Pam.
-
Learn the strengths and weaknesses of each Brawler and how to counter them. For example, if you are facing a tanky Brawler like El Primo or Rosa, you can use a Brawler that can deal high damage or pierce through their shield, like Colt or Spike. If you are facing a long-range Brawler like Piper or Brock, you can use a Brawler that can dodge their shots or close the gap, like Mortis or Leon.
-
Communicate and cooperate with your teammates. You can use the in-game chat or voice chat to coordinate your moves and strategies. You can also use the quick chat buttons to send simple messages like "Attack", "Defend", or "Help". You can also use the ping system to mark enemies, gems, power ups, or locations on the map.
-
Use the environment to your advantage. You can hide in bushes or behind walls to ambush enemies or escape from danger. You can also destroy obstacles with your attacks or super to create new paths or expose enemies. You can also use the jump pads, teleporters, or water to move around the map faster or surprise enemies.
-
Practice and experiment with different Brawlers and game modes. You can play friendly matches with your friends or club members to test your skills and have fun. You can also play solo or duo Showdown to improve your survival skills and learn how to deal with different situations. You can also watch replays of your matches or other players' matches to learn from your mistakes or get inspired by their strategies.
-
-
Conclusion
-
Brawl Stars is a fun and addictive game that you can play on your Android device. If you want to experience the original version of the game from 2017, you can download and install the 2017 Brawl Stars APK file from a reliable source. However, you need to be careful about the quality and security of the APK file and enable the option to install apps from unknown sources on your device. You also need to learn how to play the game well and use the best Brawlers and strategies for each game mode. With some practice and teamwork, you can become a Brawl Star!
-
FAQs
-
Here are some frequently asked questions about Brawl Stars and the 2017 Brawl Stars APK:
-
-
What is the difference between the 2017 Brawl Stars APK and the current version of the game?
-
The 2017 Brawl Stars APK is the beta version of the game that was released in a few countries before the global launch in 2018. The 2017 version has some differences from the current version, such as fewer Brawlers, game modes, skins, maps, features, and updates. The 2017 version also has some bugs and glitches that may affect your gameplay experience.
-
Is it safe to download and install the 2017 Brawl Stars APK?
-
It depends on where you download the APK file from. Some websites may offer fake or malicious APK files that can harm your device or steal your personal information. You should only download APK files from trusted sources that have positive reviews and ratings from other users. You should also scan the APK file for viruses before installing it on your device.
-
Will I get banned for using the 2017 Brawl Stars APK?
-
No, you will not get banned for using the 2017 Brawl Stars APK as long as you do not use any cheats, hacks, mods, or third-party tools that give you an unfair advantage over other players. However, you may not be able to access some features or events that are exclusive to the current version of the game.
-
Can I play with my friends who have the current version of the game?
-
No, you cannot play with your friends who have the current version of the game because they are on different servers. You can only play with other players who have the same version of the game as you.
-
Can I update the 2017 Brawl Stars APK to the current version of the game?
-
No, you cannot update the 2017 Brawl Stars APK to the current version of the game. You will need to uninstall the 2017 Brawl Stars APK and download the current version of the game from the Google Play Store or another reliable source.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Apkfew Whatsapp Tracker Free APK Download - Track Online Activity and Chat History.md b/spaces/1phancelerku/anime-remove-background/Apkfew Whatsapp Tracker Free APK Download - Track Online Activity and Chat History.md
deleted file mode 100644
index 795d826dbf72fa1b4ee33eedb26683bdf5ddda67..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Apkfew Whatsapp Tracker Free APK Download - Track Online Activity and Chat History.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
How to Download Apkfew Whatsapp Tracker and Why You Need It
-
Do you want to track the online activity and chat history of any WhatsApp user? Do you want to know who viewed your profile and who deleted their account? If yes, then you need Apkfew Whatsapp Tracker, a powerful and reliable app that lets you monitor any WhatsApp account discreetly and remotely. In this article, we will show you how to download Apkfew Whatsapp Tracker for Android devices and how to use it effectively. We will also compare it with other similar apps and answer some frequently asked questions.
Apkfew Whatsapp Tracker is a free app that allows you to track the online status, last seen, chat messages, media files, profile visits, and deleted accounts of any WhatsApp user. You can use it to spy on your spouse, children, friends, employees, or anyone else who uses WhatsApp. You can also use it to protect your privacy and security by knowing who is stalking you or trying to hack your account.
-
Features of Apkfew Whatsapp Tracker
-
-
Track online status and last seen of any WhatsApp user, even if they hide it or block you.
-
Monitor chat messages and media files of any WhatsApp user, even if they delete them or use end-to-end encryption.
-
View profile visits and deleted accounts of any WhatsApp user, even if they disable read receipts or change their number.
-
Get instant notifications and reports on your phone or email whenever there is any activity on the target account.
-
Access all the data remotely from a web-based dashboard that is easy to use and secure.
-
-
Benefits of Apkfew Whatsapp Tracker
-
-
Apkfew Whatsapp Tracker is free to download and use, unlike other apps that charge you monthly or yearly fees.
-
Apkfew Whatsapp Tracker is compatible with all Android devices, regardless of the model or version.
-
Apkfew Whatsapp Tracker is undetectable and untraceable, as it does not require rooting or jailbreaking the target device or installing any software on it.
-
Apkfew Whatsapp Tracker is reliable and accurate, as it uses advanced algorithms and techniques to collect and analyze the data.
-
Apkfew Whatsapp Tracker is ethical and legal, as it does not violate the privacy or security of the target user or anyone else involved.
-
-
How to Download Apkfew Whatsapp Tracker for Android
-
To download Apkfew Whatsapp Tracker for Android devices, you need to follow these simple steps:
-
Step 1: Enable Unknown Sources
-
Since Apkfew Whatsapp Tracker is not available on the Google Play Store, you need to enable unknown sources on your device to install it. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from sources other than the Google Play Store.
-
Step 2: Visit the Apkfew Website
-
The next step is to visit the official website of Apkfew at https://apkcombo.com/search/apkfew-whatsapp-tracker-free. Here you will find the latest version of the app along with its description and reviews. You can also check out other apps from Apkfew that offer similar features.
-
Step 3: Download and Install the Apk File
Once you are on the website, click on the download button and wait for the apk file to be downloaded on your device. The file size is about 10 MB and it should take a few minutes depending on your internet speed. After the download is complete, locate the file in your downloads folder and tap on it to start the installation process. Follow the instructions on the screen and agree to the terms and conditions to finish the installation.
-
Step 4: Launch the App and Grant Permissions
-
The final step is to launch the app and grant it the necessary permissions to access your device's data and functions. To do this, open the app from your app drawer or home screen and sign up with your email and password. You will then be asked to enter the phone number of the WhatsApp user you want to track. You will also need to grant the app permissions to access your contacts, storage, location, camera, microphone, and notifications. These permissions are essential for the app to work properly and collect the data you need.
-
How to Use Apkfew Whatsapp Tracker
-
Now that you have downloaded and installed Apkfew Whatsapp Tracker, you can start using it to monitor any WhatsApp account you want. Here are some of the things you can do with the app:
-
Track Online Status and Last Seen
-
With Apkfew Whatsapp Tracker, you can track the online status and last seen of any WhatsApp user, even if they hide it or block you. You can see when they are online or offline, how long they stay online, and how often they change their status. You can also see their last seen time and date, even if they disable it in their settings. This way, you can know their activity patterns and habits, and find out if they are lying or cheating on you.
-
Monitor Chat Messages and Media Files
-
Another feature of Apkfew Whatsapp Tracker is that it allows you to monitor the chat messages and media files of any WhatsApp user, even if they delete them or use end-to-end encryption. You can read their text messages, voice messages, images, videos, documents, stickers, emojis, and more. You can also see who they are chatting with, what they are talking about, and when they are sending or receiving messages. This way, you can know their interests, preferences, opinions, and secrets.
-
View Profile Visits and Deleted Accounts
-
A third feature of Apkfew Whatsapp Tracker is that it enables you to view the profile visits and deleted accounts of any WhatsApp user, even if they disable read receipts or change their number. You can see who visited their profile, how many times they visited it, and when they visited it. You can also see who deleted their account, why they deleted it, and when they deleted it. This way, you can know who is stalking them or trying to hack their account.
-
Comparison Table of Apkfew Whatsapp Tracker and Other Apps
-
To give you a better idea of how Apkfew Whatsapp Tracker compares with other similar apps in the market, we have created a comparison table that shows some of the key features and differences between them. Here is the table:
-
-
-
| App Name | Price | Compatibility | Detectability | Rooting/Jailbreaking Required | Data Collected |
| --- | --- | --- | --- | --- | --- |
| Apkfew Whatsapp Tracker | Free | All Android devices | Undetectable | No | Online status, last seen, chat messages, media files, profile visits, deleted accounts |
| mSpy | $29.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
| Spyzie | $39.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
| FoneMonitor | $29.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
| Cocospy | $39.99/month | All Android devices (rooted), all iOS devices (jailbroken) | Detectable | Yes | Online status, last seen, chat messages, media files |
-
-
-
As you can see, Apkfew Whatsapp Tracker is the best app among the four, as it offers more features, better compatibility, higher security, and lower cost. It is the only app that does not require rooting or jailbreaking the target device, and it is the only app that can track profile visits and deleted accounts. It is also the only app that is free to download and use, while the others charge you hefty fees. Therefore, we recommend you to choose Apkfew Whatsapp Tracker over the other apps.
-
Conclusion
-
In conclusion, Apkfew Whatsapp Tracker is a free app that lets you track the online activity and chat history of any WhatsApp user. You can use it to spy on your spouse, children, friends, employees, or anyone else who uses WhatsApp. You can also use it to protect your privacy and security by knowing who is stalking you or trying to hack your account. To download Apkfew Whatsapp Tracker for Android devices, you need to enable unknown sources, visit the Apkfew website, download and install the apk file, and launch the app and grant permissions. To use Apkfew Whatsapp Tracker, you need to enter the phone number of the WhatsApp user you want to track, and then you can access all the data remotely from a web-based dashboard. Apkfew Whatsapp Tracker is better than other similar apps in terms of features, compatibility, security, and cost. It is the best app for WhatsApp tracking that you can find in the market.
-
FAQs
-
Here are some of the frequently asked questions about Apkfew Whatsapp Tracker:
-
Q: Is Apkfew Whatsapp Tracker safe to use?
-
A: Yes, Apkfew Whatsapp Tracker is safe to use, as it does not contain any viruses, malware, spyware, or adware. It also does not collect or store any personal or sensitive information from your device or the target device. It only accesses the data that is relevant for WhatsApp tracking and does not share it with anyone else.
-
Q: Is Apkfew Whatsapp Tracker legal to use?
-
A: Yes, Apkfew Whatsapp Tracker is legal to use, as long as you follow the laws and regulations of your country and respect the privacy and security of the target user. You should not use Apkfew Whatsapp Tracker for any illegal or unethical purposes, such as blackmailing, harassing, threatening, or harming anyone. You should also inform and obtain consent from the target user before using Apkfew Whatsapp Tracker on their device.
-
Q: Does Apkfew Whatsapp Tracker work on iOS devices?
-
A: No, Apkfew Whatsapp Tracker does not work on iOS devices, as it is designed for Android devices only. However, you can still use Apkfew Whatsapp Tracker to track an iOS device if you have access to its WhatsApp web login credentials. You can then scan the QR code from your Android device and access all the data from the web-based dashboard.
-
Q: How can I contact Apkfew Whatsapp Tracker support team?
-
A: If you have any questions, issues, feedbacks, or suggestions about Apkfew Whatsapp Tracker, you can contact their support team by sending an email to [support@apkfew.com]. They will respond to you within 24 hours and help you resolve any problems.
-
Q: How can I update Apkfew Whatsapp Tracker to the latest version?
-
A: To update Apkfew Whatsapp Tracker to the latest version, you need to visit their website at [https://apkcombo.com/search/apkfew-whatsapp-tracker-free] and download and install the new apk file over the old one. You do not need to uninstall or reinstall the app. The update will automatically apply and improve the performance and functionality of the app.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/7hao/bingo/src/lib/hooks/chat-history.ts b/spaces/7hao/bingo/src/lib/hooks/chat-history.ts
deleted file mode 100644
index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/lib/hooks/chat-history.ts
+++ /dev/null
@@ -1,62 +0,0 @@
-import { zip } from 'lodash-es'
-import { ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { Storage } from '../storage'
-
-/**
- * conversations:$botId => Conversation[]
- * conversation:$botId:$cid:messages => ChatMessageModel[]
- */
-
-interface Conversation {
- id: string
- createdAt: number
-}
-
-type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] }
-
-async function loadHistoryConversations(botId: BotId): Promise<Conversation[]> {
- const key = `conversations:${botId}`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-async function deleteHistoryConversation(botId: BotId, cid: string) {
- const conversations = await loadHistoryConversations(botId)
- const newConversations = conversations.filter((c) => c.id !== cid)
- await Storage.set({ [`conversations:${botId}`]: newConversations })
-}
-
-async function loadConversationMessages(botId: BotId, cid: string): Promise<ChatMessageModel[]> {
- const key = `conversation:${botId}:${cid}:messages`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) {
- const conversations = await loadHistoryConversations(botId)
- if (!conversations.some((c) => c.id === cid)) {
- conversations.unshift({ id: cid, createdAt: Date.now() })
- await Storage.set({ [`conversations:${botId}`]: conversations })
- }
- const key = `conversation:${botId}:${cid}:messages`
- await Storage.set({ [key]: messages })
-}
-
-export async function loadHistoryMessages(botId: BotId): Promise<ConversationWithMessages[]> {
- const conversations = await loadHistoryConversations(botId)
- const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id)))
- return zip(conversations, messagesList).map(([c, messages]) => ({
- id: c!.id,
- createdAt: c!.createdAt,
- messages: messages!,
- }))
-}
-
-export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) {
- const messages = await loadConversationMessages(botId, conversationId)
- const newMessages = messages.filter((m) => m.id !== messageId)
- await setConversationMessages(botId, conversationId, newMessages)
- if (!newMessages.length) {
- await deleteHistoryConversation(botId, conversationId)
- }
-}
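The deleted chat-history module keys its persisted data with two families of entries, `conversations:$botId` for the conversation index and `conversation:$botId:$cid:messages` for each message list. The following is a minimal, self-contained TypeScript sketch of that keying scheme; it uses an in-memory `Map` and simplified types in place of the removed `Storage` wrapper, so the names here are illustrative assumptions rather than the module's actual API.

```ts
// Illustrative sketch only: an in-memory Map stands in for the removed Storage wrapper.
type Conversation = { id: string; createdAt: number }
type ChatMessage = { id: string; text: string }

const store = new Map<string, unknown>()

function saveMessages(botId: string, cid: string, messages: ChatMessage[]) {
  // Index entry: conversations:$botId -> Conversation[]
  const indexKey = `conversations:${botId}`
  const conversations = (store.get(indexKey) as Conversation[] | undefined) ?? []
  if (!conversations.some((c) => c.id === cid)) {
    conversations.unshift({ id: cid, createdAt: Date.now() })
    store.set(indexKey, conversations)
  }
  // Payload entry: conversation:$botId:$cid:messages -> ChatMessage[]
  store.set(`conversation:${botId}:${cid}:messages`, messages)
}

saveMessages('bing', 'c1', [{ id: 'm1', text: 'hello' }])
console.log(store.get('conversations:bing'))
console.log(store.get('conversation:bing:c1:messages'))
```

Keeping the index separate from the per-conversation message lists lets a UI enumerate conversations without loading every message payload.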
diff --git a/spaces/A00001/bingothoo/src/lib/bots/bing/utils.ts b/spaces/A00001/bingothoo/src/lib/bots/bing/utils.ts
deleted file mode 100644
index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/lib/bots/bing/utils.ts
+++ /dev/null
@@ -1,87 +0,0 @@
-import { ChatResponseMessage, BingChatResponse } from './types'
-
-export function convertMessageToMarkdown(message: ChatResponseMessage): string {
- if (message.messageType === 'InternalSearchQuery') {
- return message.text
- }
- for (const card of message.adaptiveCards??[]) {
- for (const block of card.body) {
- if (block.type === 'TextBlock') {
- return block.text
- }
- }
- }
- return ''
-}
-
-const RecordSeparator = String.fromCharCode(30)
-
-export const websocketUtils = {
- packMessage(data: any) {
- return `${JSON.stringify(data)}${RecordSeparator}`
- },
- unpackMessage(data: string | ArrayBuffer | Blob) {
- if (!data) return {}
- return data
- .toString()
- .split(RecordSeparator)
- .filter(Boolean)
- .map((s) => {
- try {
- return JSON.parse(s)
- } catch (e) {
- return {}
- }
- })
- },
-}
-
-export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise<string | undefined> {
- const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`,
- {
- method: 'HEAD',
- headers,
- redirect: 'manual'
- },
- );
-
- if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) {
- throw new Error('请求异常,请检查 cookie 是否有效')
- }
-
- const resultId = RegExp.$1;
- let count = 0
- const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`;
-
- do {
- await sleep(3000);
- const content = await fetch(imageThumbUrl, { headers, method: 'GET' })
-
- // @ts-ignore
- if (content.headers.get('content-length') > 1) {
- const text = await content.text()
- return (text?.match(/<img[^>]+src="[^"]+/g) || []).map(target => target?.split('src="').pop()?.replace(/&amp;/g, '&'))
- .map(img => `<img src="${img}" />`).join(' ')
- }
- } while(count ++ < 10);
-}
-
-
-export async function* streamAsyncIterable(stream: ReadableStream) {
- const reader = stream.getReader()
- try {
- while (true) {
- const { done, value } = await reader.read()
- if (done) {
- return
- }
- yield value
- }
- } finally {
- reader.releaseLock()
- }
-}
-
-export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))
-
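The `websocketUtils` helpers above frame messages the way the Bing chat websocket expects: every JSON payload is terminated by the ASCII record separator (0x1E), and an incoming buffer may carry several such records that have to be split apart again. A small stand-alone sketch of that framing (illustrative only, not the deleted module itself):

```ts
// Illustrative sketch only: round-trips two records through the 0x1E framing.
const RecordSeparator = String.fromCharCode(30)

const packMessage = (data: unknown): string => `${JSON.stringify(data)}${RecordSeparator}`

const unpackMessage = (raw: string): unknown[] =>
  raw
    .split(RecordSeparator)
    .filter(Boolean)
    .map((s) => {
      try {
        return JSON.parse(s)
      } catch {
        return {}
      }
    })

const wire = packMessage({ type: 6 }) + packMessage({ type: 4, invocationId: '1' })
console.log(unpackMessage(wire)) // [ { type: 6 }, { type: 4, invocationId: '1' } ]
```

Filtering out empty segments before parsing handles the trailing separator that `packMessage` always appends.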
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-lbs_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-lbs_in1k.py
deleted file mode 100644
index 2f24f9a0f2c54a2bb634c1f374bc1b534d63697f..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_32xb64-warmup-lbs_in1k.py
+++ /dev/null
@@ -1,12 +0,0 @@
-_base_ = ['./resnet50_32xb64-warmup_in1k.py']
-model = dict(
- head=dict(
- type='LinearClsHead',
- num_classes=1000,
- in_channels=2048,
- loss=dict(
- type='LabelSmoothLoss',
- loss_weight=1.0,
- label_smooth_val=0.1,
- num_classes=1000),
- ))
diff --git a/spaces/Abhaykoul/BardCookies-AI_Query/README.md b/spaces/Abhaykoul/BardCookies-AI_Query/README.md
deleted file mode 100644
index 515ca2ea6c4cbd164e7468d3ba92ccb9496ab99e..0000000000000000000000000000000000000000
--- a/spaces/Abhaykoul/BardCookies-AI_Query/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI With Realtime Data
-emoji: 🐠
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Aibn.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Aibn.py
deleted file mode 100644
index 3399d613fd4c40ab594154a8e9c5f0ec04054a4e..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Aibn.py
+++ /dev/null
@@ -1,52 +0,0 @@
-from __future__ import annotations
-
-import time
-import hashlib
-
-from ..typing import AsyncGenerator
-from ..requests import StreamSession
-from .base_provider import AsyncGeneratorProvider
-
-
-class Aibn(AsyncGeneratorProvider):
- url = "https://aibn.cc"
- supports_gpt_35_turbo = True
- working = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- timeout: int = 30,
- **kwargs
- ) -> AsyncGenerator:
- async with StreamSession(impersonate="chrome107", timeout=timeout) as session:
- timestamp = int(time.time())
- data = {
- "messages": messages,
- "pass": None,
- "sign": generate_signature(timestamp, messages[-1]["content"]),
- "time": timestamp
- }
- async with session.post(f"{cls.url}/api/generate", json=data) as response:
- response.raise_for_status()
- async for chunk in response.iter_content():
- yield chunk.decode()
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
-
-
-def generate_signature(timestamp: int, message: str, secret: str = "undefined"):
- data = f"{timestamp}:{message}:{secret}"
- return hashlib.sha256(data.encode()).hexdigest()
\ No newline at end of file
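The deleted Aibn provider signs each request with a SHA-256 digest over `timestamp:content:secret`, as `generate_signature` above shows. For illustration, a TypeScript equivalent of that signing step (assuming a Node.js runtime for `node:crypto`; not part of the original provider) could look like this:

```ts
import { createHash } from "node:crypto";

// Mirrors the provider's scheme: sha256 over "timestamp:message:secret".
function generateSignature(timestamp: number, message: string, secret = "undefined"): string {
  return createHash("sha256").update(`${timestamp}:${message}:${secret}`).digest("hex");
}

const timestamp = Math.floor(Date.now() / 1000);
console.log(generateSignature(timestamp, "Hello"));
```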
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptLogin.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptLogin.py
deleted file mode 100644
index 3eb55a64568c28df41f14051002ade95ca8dbcec..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptLogin.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from __future__ import annotations
-
-import os, re
-from aiohttp import ClientSession
-
-from .base_provider import AsyncProvider, format_prompt
-
-
-class ChatgptLogin(AsyncProvider):
- url = "https://opchatgpts.net"
- supports_gpt_35_turbo = True
- working = True
- _nonce = None
-
- @classmethod
- async def create_async(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> str:
- headers = {
- "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
- "Accept" : "*/*",
- "Accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
- "Origin" : "https://opchatgpts.net",
- "Alt-Used" : "opchatgpts.net",
- "Referer" : "https://opchatgpts.net/chatgpt-free-use/",
- "Sec-Fetch-Dest" : "empty",
- "Sec-Fetch-Mode" : "cors",
- "Sec-Fetch-Site" : "same-origin",
- }
- async with ClientSession(
- headers=headers
- ) as session:
- if not cls._nonce:
- async with session.get(
- "https://opchatgpts.net/chatgpt-free-use/",
- params={"id": os.urandom(6).hex()},
- ) as response:
- result = re.search(r'data-nonce="(.*?)"', await response.text())
- if not result:
- raise RuntimeError("No nonce value")
- cls._nonce = result.group(1)
- data = {
- "_wpnonce": cls._nonce,
- "post_id": 28,
- "url": "https://opchatgpts.net/chatgpt-free-use",
- "action": "wpaicg_chat_shortcode_message",
- "message": format_prompt(messages),
- "bot_id": 0
- }
- async with session.post("https://opchatgpts.net/wp-admin/admin-ajax.php", data=data) as response:
- response.raise_for_status()
- data = await response.json()
- if "data" in data:
- return data["data"]
- elif "msg" in data:
- raise RuntimeError(data["msg"])
- else:
- raise RuntimeError(f"Response: {data}")
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/flip-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/flip-plugin.js
deleted file mode 100644
index 9b82b16fabb55c225b0fd74f357f1ea23a7c786a..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/flip-plugin.js
+++ /dev/null
@@ -1,19 +0,0 @@
-import Flip from './flip.js';
-
-class FlipPlugin extends Phaser.Plugins.BasePlugin {
-
- constructor(pluginManager) {
- super(pluginManager);
- }
-
- start() {
- var eventEmitter = this.game.events;
- eventEmitter.on('destroy', this.destroy, this);
- }
-
- add(gameObject, config) {
- return new Flip(gameObject, config);
- }
-}
-
-export default FlipPlugin;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Factory.js
deleted file mode 100644
index 2ebb9ed46855bfa8ab1f26785e232f2d9c2249a6..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Factory.js
+++ /dev/null
@@ -1,11 +0,0 @@
-import Shake from './Shake.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('shake', function (gameObject, config) {
- return new Shake(gameObject, config);
-});
-
-SetValue(window, 'RexPlugins.UI.Shake', Shake);
-
-export default Shake;
\ No newline at end of file
diff --git a/spaces/Ailexcoder/GPT4ALL1/README.md b/spaces/Ailexcoder/GPT4ALL1/README.md
deleted file mode 100644
index 0171abc807b3d45293b6841c6fa63e349b9b0710..0000000000000000000000000000000000000000
--- a/spaces/Ailexcoder/GPT4ALL1/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Gpt4all
-emoji: 🦀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-duplicated_from: Ailexcoder/GPT4ALL
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/base.py b/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/base.py
deleted file mode 100644
index 546427a1e9f91fceecea94913b23e46fc1787289..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/instagram-filter-removal/modeling/base.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from torch import nn
-
-
-class BaseNetwork(nn.Module):
- def __init__(self):
- super(BaseNetwork, self).__init__()
-
- def forward(self, x, y):
- pass
-
- def print_network(self):
- if isinstance(self, list):
- self = self[0]
- num_params = 0
- for param in self.parameters():
- num_params += param.numel()
- print('Network [%s] was created. Total number of parameters: %.1f million. '
- 'To see the architecture, do print(network).'
- % (type(self).__name__, num_params / 1000000))
-
- def set_requires_grad(self, requires_grad=False):
- """Set requies_grad=Fasle for all the networks to avoid unnecessary computations
- Parameters:
- requires_grad (bool) -- whether the networks require gradients or not
- """
- for param in self.parameters():
- param.requires_grad = requires_grad
-
- def init_weights(self, init_type='xavier', gain=0.02):
- def init_func(m):
- classname = m.__class__.__name__
- if classname.find('BatchNorm2d') != -1:
- if hasattr(m, 'weight') and m.weight is not None:
- nn.init.normal_(m.weight.data, 1.0, gain)
- if hasattr(m, 'bias') and m.bias is not None:
- nn.init.constant_(m.bias.data, 0.0)
- elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
- if init_type == 'normal':
- nn.init.normal_(m.weight.data, 0.0, gain)
- elif init_type == 'xavier':
- nn.init.xavier_normal_(m.weight.data, gain=gain)
- elif init_type == 'xavier_uniform':
- nn.init.xavier_uniform_(m.weight.data, gain=1.0)
- elif init_type == 'kaiming':
- nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
- elif init_type == 'orthogonal':
- nn.init.orthogonal_(m.weight.data, gain=gain)
- elif init_type == 'none': # uses pytorch's default init method
- m.reset_parameters()
- else:
- raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
- if hasattr(m, 'bias') and m.bias is not None:
- nn.init.constant_(m.bias.data, 0.0)
-
- self.apply(init_func)
-
- # propagate to children
- for m in self.children():
- if hasattr(m, 'init_weights'):
- m.init_weights(init_type, gain)
diff --git a/spaces/Ame42/rwms/main.py b/spaces/Ame42/rwms/main.py
deleted file mode 100644
index 13c9aa15d9a8d93025188414ac4c6b47c55044e6..0000000000000000000000000000000000000000
--- a/spaces/Ame42/rwms/main.py
+++ /dev/null
@@ -1,401 +0,0 @@
-# 'dataset' holds the input data for this script
-import os.path
-
-import gradio as gr
-import numpy
-import pandas
-from sklearn.ensemble import RandomForestRegressor
-from sklearn.linear_model import LinearRegression
-from sklearn.metrics import explained_variance_score, max_error, mean_absolute_error, mean_squared_error, \
- mean_squared_log_error, median_absolute_error, mean_absolute_percentage_error, r2_score, mean_poisson_deviance, \
- mean_gamma_deviance, mean_tweedie_deviance, d2_tweedie_score, mean_pinball_loss, d2_pinball_score, \
- d2_absolute_error_score
-from sklearn.model_selection import train_test_split
-from sklearn.preprocessing import StandardScaler
-
-import datastore
-from local_utils import *
-
-MAX_DEPTH = 20
-N_EST = 10
-
-mode = {"app": test_mode, "data": all_mode, "regen": False}
-
-
-def clean_prepare_train(data_i, train_size=0.015, test_size=0.005):
- # drop sparse column THP BLIND then drop empty rows for all remaining columns
- data_i.drop(axis=1, columns=[blind_col], inplace=True)
- data_i.dropna(axis=0, inplace=True, how="any")
- data_i.reset_index(inplace=True)
-
- # change well_id to dummies
- dummies = pandas.get_dummies(data_i[well_col])
- data_i = pandas.concat([data_i, dummies], axis=1).reindex(data_i.index)
- data_i.drop(columns=[well_col], axis=1, inplace=True)
-
- # remove useless columns
- data_i = keep_useful_cols(data_i, [ro_col, dur_col, man_col, blind_col, temp_col] + dummies.columns.tolist())
-
- # get x and y
- y = data_i[ro_col]
- x_i = data_i.drop(axis=1, columns=[ro_col])
-
- # verify data row count
- print(f"\n{x_i.shape[0]} rows")
-
- # fit scaler
- scaler_i = StandardScaler(copy=False)
- scaler_i.fit(x_i)
- x_fit = pandas.DataFrame(scaler_i.transform(x_i), columns=x_i.columns)
-
- # data split
- x_train, x_test, y_train, y_test = \
- train_test_split(x_fit, y, random_state=30, train_size=train_size, test_size=test_size)
-
- # model
- model_i = RandomForestRegressor(n_estimators=N_EST, random_state=30, max_depth=MAX_DEPTH)
- model_i.fit(x_train, y_train)
- # print([est.get_depth() for est in model_i.estimators_])
-
- # testing
- y_pred = model_i.predict(x_test)
- score_i = r2_score(y_test, y_pred)
- # print("explained_variance_score:", explained_variance_score(y_test, y_pred))
- # print("max_error:", max_error(y_test, y_pred))
- # print("mean_absolute_error:", mean_absolute_error(y_test, y_pred))
- # print("mean_squared_error:", mean_squared_error(y_test, y_pred))
- # print("mean_squared_log_error:", mean_squared_log_error(y_test, y_pred))
- # print("median_absolute_error:", median_absolute_error(y_test, y_pred))
- # print("mean_absolute_percentage_error:", mean_absolute_percentage_error(y_test, y_pred))
- # print("r2_score:", r2_score(y_test, y_pred))
- # print("mean_poisson_deviance:", mean_poisson_deviance(y_test, y_pred))
- # print("mean_gamma_deviance:", mean_gamma_deviance(y_test, y_pred))
- # print("mean_tweedie_deviance:", mean_tweedie_deviance(y_test, y_pred))
- # print("d2_tweedie_score:", d2_tweedie_score(y_test, y_pred))
- # print("mean_pinball_loss:", mean_pinball_loss(y_test, y_pred))
- # print("d2_pinball_score:", d2_pinball_score(y_test, y_pred))
- # print("d2_absolute_error_score:", d2_absolute_error_score(y_test, y_pred))
-
- # create power_bi data payload
- x_test, y_test, y_pred = (pandas.DataFrame(x_test).reset_index(),
- pandas.DataFrame(y_test).reset_index(),
- pandas.DataFrame(y_pred, columns=[sim_col]).reset_index())
- data_run = pandas.concat([x_test, y_test, y_pred], axis=1).drop("index", axis=1)
-
- return model_i, scaler_i, score_i, x_i, data_run
-
-
-def report_on(model_i, scaler_i, score_i, x_i):
- print(f"""
- \033[1;31mAI generalization stats\033[0m
- Model performance (rms score): \033[0;35m{score_i * 100:.2f}%\033[0m
- """)
-
- tests = [WellDataPoint(thp=661.84, day_sec=54100, man_pres=143.93, temp=93.9, _l1=0, _s1=1, _l2=0, _s2=0),
- WellDataPoint(thp=1118.456, day_sec=86050, man_pres=166.063, temp=79.706, _l1=1, _s1=0, _l2=0, _s2=0),
- WellDataPoint(thp=609.08, day_sec=42600, man_pres=137.2, temp=95.477, _l1=0, _s1=0, _l2=0, _s2=1),
- WellDataPoint(thp=1118.07, day_sec=49400, man_pres=146.44, temp=98.5, _l1=0, _s1=0, _l2=1, _s2=0)]
-
- for test in tests:
- print(f"\n{test}")
- try:
- test_x = pandas.DataFrame(scaler_i.transform(pandas.DataFrame([test.get_x()], columns=x_i.columns)),
- columns=x_i.columns)
- y_vis_pred = model_i.predict(test_x)
- print(f"Real: \033[0;35m{test.get_y():.2f} psi\033[0m vs. "
- f"Prediction: \033[0;35m{y_vis_pred[0]:.2f} psi\033[0m", flush=True)
- except ValueError:
- print(x_i.columns, flush=True)
-
-
-def train(mode, best=(25, 10, 54, 0, 0)):
- if mode == day_mode:
- data = datastore.get_22_data()
- model, scaler, score, x, results = clean_prepare_train(data, train_size=0.75, test_size=0.25)
- write_state_files(model, scaler)
- results.to_csv(f"{out_folder}POWER_BI_DATA_DAY.csv", index_label=id_col)
- report_on(model, scaler, score, x)
- else:
- # get data payload
- if not os.path.exists(f"{out_folder}data_opt_balanced.csv"):
- data_dict = datastore.get_all_data()
-
- # search for the best offset combination model
- # best = find_best(data_dict, model_search, best)
- print(f"\033[1;31mFinal offsets\033[0m\n{s1}: {best[0]}, {l1}: {best[1]}, {s2}: {best[2]}, {l2}: {best[3]}")
- data = datastore.offset_wells(data_dict, [x for x in best[:4]])
-
- # remove unnecessary id columns
- data = keep_useful_cols(data)
-
- # balance it by oversampling
- data = oversample_balance(data)
-
- # dump it
- data.to_csv(f"{out_folder}data_opt_balanced.csv", index_label=id_col)
- else:
- data = pandas.read_csv(f"{out_folder}data_opt_balanced.csv")
-
- # create model
- model, scaler, score, x, results = clean_prepare_train(keep_useful_cols(data), train_size=0.75, test_size=0.25)
- write_state_files(model, scaler)
- results.to_csv(f"{out_folder}POWER_BI_DATA.csv", index_label=id_col)
- report_on(model, scaler, score, x)
-
- return model
-
-
-def model_search(dt_dict, s_1, l_1, s_2, l_2, current_best):
- dt = datastore.offset_wells(dt_dict, [s_1, l_1, s_2, l_2])
- _, _, scr, _, _ = clean_prepare_train(dt, train_size=0.75, test_size=0.25)
- scores_i = (s_1, l_1, s_2, l_2, scr)
- print(f"s1: {s_1}, l1: {l_1}, s2: {s_2}, l2: {l_2}, \033[0;35mscore: {scr * 100}\033[0m vs. "
- f"\033[1;31mbest: {current_best[4] * 100}\033[0m")
- return scores_i if scr > current_best[4] else current_best
-
-
-def find_best(data_dict, model_search, best):
- for i in range(60):
- best = model_search(data_dict, i, best[1], best[2], best[3], best)
- for j in range(60):
- best = model_search(data_dict, best[0], j, best[2], best[3], best)
- for k in range(60):
- best = model_search(data_dict, best[0], best[1], k, best[3], best)
- for n in range(180):
- best = model_search(data_dict, best[0], best[1], best[2], n, best)
- return best
-
-
-def app(hours, mins, secs, man_pres, temp, well, thp=None, regen=False, full_text_reply=True):
- global test_x, y_vis_pred
-
- dur_sec = to_sec(hours, mins, secs)
-
- if regen or not (os.path.exists(f"{model_file}.mdl") and os.path.exists(f"{scaler_file}.sts")):
- train(mode['data'])
-
- mdl, scl = read_state_files(model_file, scaler_file)
-
- thp = 0 if thp is None else thp
-
- _l1, _l2, _s1, _s2 = change_well_to_dummy(well)
-
- test = WellDataPoint(thp=thp, day_sec=dur_sec, man_pres=man_pres, temp=temp, _l1=_l1, _s1=_s1, _l2=_l2, _s2=_s2)
- columns = ['Daylight duration (SEC)', 'Manifold Pressure (PSI)', 'TEMP (°F)', '1L', '1S', '2L', '2S']
- try:
- test_x = pandas.DataFrame(scl.transform(pandas.DataFrame([test.get_x()], columns=columns)), columns=columns)
- y_vis_pred = mdl.predict(test_x)
- print(f"Real: \033[0;35m{test.get_y():.2f} psi\033[0m vs. "
- f"Prediction: \033[0;35m{y_vis_pred[0]:.2f} psi\033[0m")
- except ValueError:
- print(test, flush=True)
- raise
-
- return f"{test.__plain__()}\nReal: {test.get_y():.2f} psi vs. Prediction: {y_vis_pred[0]:.2f} psi" if \
- full_text_reply else y_vis_pred
-
-
-def i_app(wl, pres):
-    # match the well to its row in the conversion-factor table
-    factor = factors.loc[factors["Well"] == wl[6:]]
-
-    # retrieve the conversion and flow factors as scalars
-    c_factor = factor["Conversion Factor"].iloc[0]
-    f_factor = factor["Flow Factor"].iloc[0]
-
-    # return the formatted result
-    return f"""\
-Testing data
-    Manifold pressure: {pres} psi
-    Well: {wl}
-
-Flowing tubing head pressure: {pres + c_factor:.2f} psi
-Q-liquid: {pres * f_factor:.2f} bbl/day"""
-
-
-scroll_data = pandas.read_csv(f"{out_folder}data_opt_balanced.csv") # pandas.DataFrame()
-n_real = 0
-n_sim = 0
-mn = 0
-mx = 0
-_, _, _, _, results = clean_prepare_train(scroll_data, train_size=0.50, test_size=0.50)
-state_var = False
-results.insert(0, id_col, numpy.array(range(results.shape[0])), False)
-
-# randomize data rows and reset index
-scroll_data = scroll_data.sample(frac=1)
-scroll_data.drop([id_col, "index"], axis=1, inplace=True, errors="ignore")
-scroll_data.insert(0, id_col, numpy.array(range(scroll_data.shape[0])), False)
-y_range = min(scroll_data[ro_col]), max(scroll_data[ro_col])
-
-
-# async def load_data():
-# global state_var
-# if not state_var:
-# state_var = True
-# global scroll_data
-# data = pandas.read_csv(f"{out_folder}data_opt_balanced.csv")
-# model, scaler, score, x, results = clean_prepare_train(keep_useful_cols(data), train_size=0.50, test_size=0.50)
-# i = 0
-#
-# while i < results.shape[0]:
-# await asyncio.sleep(1)
-# i += 1
-# new_row = results.iloc[[i]]
-# print(new_row)
-# scroll_data = pandas.concat([scroll_data, new_row], ignore_index=True)
-# if scroll_data.shape[0] > 100:
-# scroll_data.drop(0, axis=0, inplace=True)
-# print(scroll_data.shape)
-
-
-# URL = "https://docs.google.com/spreadsheets/d/1ZQbeOeCaiLMidenqmwq7wC-ni7rdtUYQXH1XER6XyyQ/edit#gid=0"
-# csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')
-#
-#
-# def get_data():
-# return pandas.read_csv(csv_url)
-
-
-def get_real_data() -> pandas.DataFrame:
- global results
- global mn
- global mx
- mx += 1
- mn = 0 if mx - 50 < 0 else mx - 50
- sl = results.iloc[mn:mx]
- sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False)
- return gr.LinePlot.update(value=sl) # scroll_data
-
-
-def get_sim_data() -> pandas.DataFrame:
- global results
- sl = results.iloc[mn:mx]
- sl.insert(0, time_col, numpy.array([from_sec(r) for r in sl[id_col].tolist()]), False)
- return gr.LinePlot.update(value=sl) # scroll_data
-
-
-x_real = 0
-x_pres = 0
-x_ql = 0
-
-
-def get_x_real_data() -> pandas.DataFrame:
- global results
- sl = scroll_data.iloc[mn:mx]
- sl = sl.drop(time_col, axis=1, errors="ignore")
- sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False)
- return gr.LinePlot.update(value=sl) # scroll_data
-
-
-def get_x_sim_pres_data() -> pandas.DataFrame:
- global results
- sl = scroll_data.iloc[mn:mx]
- sl = sl.drop(sim_col, axis=1, errors="ignore")
- sl = sl.drop(time_col, axis=1, errors="ignore")
- sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False)
- sl.insert(0, sim_col, numpy.array([calc_excel(r)[0] for r in sl[man_col].tolist()]), False)
- return gr.LinePlot.update(value=sl) # scroll_data
-
-
-def get_x_sim_ql_data() -> pandas.DataFrame:
- global results
- sl = scroll_data.iloc[mn:mx]
- sl = sl.drop(time_col, axis=1, errors="ignore")
- sl.insert(0, time_col, numpy.array([from_sec(int(r)) for r in sl[id_col].tolist()]), False)
- sl.insert(0, ql_col, numpy.array([calc_excel(r)[1] for r in sl[man_col].tolist()]), False)
- return gr.LinePlot.update(value=sl) # scroll_data
-
-
-# get conversion factors
-factors = datastore.get_conversion_factors()
-
-if mode['app'] == train_mode:
- app(23, 59, 40, 143.96, 79.523, parse_well_id(s2))
- app(17, 2, 0, 144.41, 97.278, parse_well_id(l1), regen=mode['regen'])
-else:
- with gr.Blocks() as demo:
- gr.Markdown("#")
- with gr.Tab("Dashboard"):
- mx = 50
- # pull data into line plot
- with gr.Row():
- with gr.Column():
- gr.Markdown("# Our AI-powered calculator (Accuracy: 99.61%)")
- # Real Tubing Head Pressure
- real_ai = gr.LinePlot(y=ro_col, x=time_col, label="Awoba Well X", title="Real Tubing Head Pressure",
- y_title=ro_col, x_title=time_col, every=1, height=150, width=600)
- demo.load(fn=get_real_data, inputs=None, outputs=real_ai)
-
- # Calculated Tubing Head Pressure
- sim_ai = gr.LinePlot(y=sim_col, x=time_col, label="Awoba Well X",
- title="Calculated Tubing Head Pressure",
- y_title=sim_col, x_title=time_col, every=1, height=150, width=600)
- demo.load(fn=get_sim_data, inputs=None, outputs=sim_ai)
-
-
- with gr.Column():
- gr.Markdown("###")
- gr.Markdown("### Excel formulae (Accuracy: 27.53%)")
- # Real Tubing Head Pressure
- real_x = gr.LinePlot(y=ro_col, x=time_col, label="Abura Well X", title="Real Tubing Head Pressure",
- y_title=ro_col, x_title=time_col, every=1, height=150, width=600, y_lim=y_range
- )
- demo.load(fn=get_x_real_data, inputs=None, outputs=real_x)
-
- # Calculated Tubing Head Pressure
- sim_x = gr.LinePlot(y=sim_col, x=time_col, label="Abura Well X", title="Calculated Tubing Head Pressure"
- , y_title=sim_col, x_title=time_col, every=1, height=150, width=600,
- y_lim=y_range)
- demo.load(fn=get_x_sim_pres_data, inputs=None, outputs=sim_x)
-
- # Calculated Production
- sim_ql_x = gr.LinePlot(y=ql_col, x=time_col, label="Abura Well X", title="Calculated Production",
- y_title=ql_col, x_title=time_col, every=1, height=150, width=600)
- demo.load(fn=get_x_sim_ql_data, inputs=None, outputs=sim_ql_x)
- with gr.Tab("AI approach"):
- hours = gr.Number(label="Hours (24-hour format)", value=23)
- mins = gr.Number(label="Minutes", value=59)
- secs = gr.Number(label="Seconds", value=40)
- man_pres = gr.Number(label=man_col, value=143.96)
- temp = gr.Number(label=temp_col, value=79.523)
- well = gr.Radio(
- [parse_well_id(w) for w in [l1, s1, l2, s2]],
- value=parse_well_id(s2),
- label="Select a well"
- )
- thp = gr.Number(label=ro_col, value=641.98)
- greet_btn = gr.Button("Simulate")
- greet_btn.style(full_width=True)
- output = gr.Textbox(label="Results")
- greet_btn.click(fn=app, inputs=[hours, mins, secs, man_pres, temp, well, thp], outputs=output)
-
- with gr.Tab("Excel approach"):
- # build interface to take in well selection and manifold pressure
- i_man_pres = gr.Number(label=man_col, value=143.96)
- i_well = gr.Radio(
- [parse_well_id_2(w) for w in factors["Well"]],
- label="Select a well"
- )
- i_greet_btn = gr.Button("Simulate")
- i_greet_btn.style(full_width=True)
- i_output = gr.Textbox(label="Results")
-
- # call i_app function with params on button click
- i_greet_btn.click(fn=i_app, inputs=[i_well, i_man_pres], outputs=i_output)
-
-
- # demo.load(fn=get_real_data, inputs=None, outputs=real_ai)
- # with gr.Column():
- # with gr.Row():
- # gr.LinePlot(value=get_real_data, y=ro_col, x=id_col, label="Real Tubing Head Pressure",
- # y_title=ro_col, x_title=time_col, every=1, height=80, width=600)
- # gr.LinePlot(value=get_sim_data, y=sim_col, x=id_col, label="Calculated Tubing Head Pressure",
- # y_title=sim_col, x_title=time_col, every=1, height=80, width=600)
- # with gr.Row():
- # gr.LinePlot(value=get_real_data, y=ro_col, x=id_col, label="Real Tubing Head Pressure",
- # y_title=ro_col, x_title=time_col, every=1, height=80, width=600)
- # gr.LinePlot(value=get_sim_data, y=sim_col, x=id_col, label="Calculated Tubing Head Pressure",
- # y_title=sim_col, x_title=time_col, every=1, height=80, width=600)
-
- demo.launch(enable_queue=True, share=False)
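To make the prediction path in the app above easier to follow, here is a minimal, self-contained sketch of the transform-then-predict pattern used by report_on() and app(). It assumes a scikit-learn style scaler/regressor pair (the actual estimator built by clean_prepare_train() is defined earlier in the file and may differ), and the training rows are made up, standing in for the well datastore.

import pandas
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

# hypothetical feature columns and training rows standing in for datastore output
columns = ['Daylight duration (SEC)', 'Manifold Pressure (PSI)', 'TEMP (°F)', '1L', '1S', '2L', '2S']
x_train = pandas.DataFrame([[54100, 143.93, 93.9, 0, 1, 0, 0],
                            [86050, 166.06, 79.7, 1, 0, 0, 0],
                            [42600, 137.20, 95.5, 0, 0, 0, 1]], columns=columns)
y_train = [661.84, 1118.46, 609.08]  # tubing head pressures (psi)

# fit the scaler and the model once, roughly as the training code above does
scaler = StandardScaler().fit(x_train)
model = RandomForestRegressor(n_estimators=10, random_state=0).fit(scaler.transform(x_train), y_train)

# single-row inference: wrap one observation in a one-row DataFrame, scale it, predict
test_x = pandas.DataFrame([[49400, 146.44, 98.5, 0, 0, 1, 0]], columns=columns)
print(f"Prediction: {model.predict(scaler.transform(test_x))[0]:.2f} psi")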
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py
deleted file mode 100644
index c5ef06997d3c16368f9c105476a77ae65a655f99..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py
+++ /dev/null
@@ -1,720 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Any, Callable, Dict, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet3DConditionModel
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline
-from . import TextToVideoSDPipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
- >>> from diffusers.utils import export_to_video
-
- >>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
- >>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
- >>> pipe.to("cuda")
-
- >>> prompt = "spiderman running in the desert"
- >>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames
-        >>> # save the low-res video
- >>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4")
-
- >>> # let's offload the text-to-image model
- >>> pipe.to("cpu")
-
- >>> # and load the image-to-image model
- >>> pipe = DiffusionPipeline.from_pretrained(
- ... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15"
- ... )
- >>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
- >>> pipe.enable_model_cpu_offload()
-
- >>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode
- >>> pipe.vae.enable_slicing()
-
-        >>> # now let's upscale it
-        >>> from PIL import Image
-        >>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]
-
- >>> # and denoise it
- >>> video_frames = pipe(prompt, video=video, strength=0.6).frames
- >>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4")
- >>> video_path
- ```
-"""
-
-
-def tensor2vid(video: torch.Tensor, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -> List[np.ndarray]:
- # This code is copied from https://github.com/modelscope/modelscope/blob/1509fdb973e5871f37148a4b5e5964cafd43e64d/modelscope/pipelines/multi_modal/text_to_video_synthesis_pipeline.py#L78
- # reshape to ncfhw
- mean = torch.tensor(mean, device=video.device).reshape(1, -1, 1, 1, 1)
- std = torch.tensor(std, device=video.device).reshape(1, -1, 1, 1, 1)
- # unnormalize back to [0,1]
- video = video.mul_(std).add_(mean)
- video.clamp_(0, 1)
- # prepare the final outputs
- i, c, f, h, w = video.shape
- images = video.permute(2, 3, 0, 4, 1).reshape(
- f, h, i * w, c
- ) # 1st (frames, h, batch_size, w, c) 2nd (frames, h, batch_size * w, c)
-    images = images.unbind(dim=0)  # prepare a list of individual (consecutive) frames
- images = [(image.cpu().numpy() * 255).astype("uint8") for image in images] # f h w c
- return images
-
-
-def preprocess_video(video):
- supported_formats = (np.ndarray, torch.Tensor, PIL.Image.Image)
-
- if isinstance(video, supported_formats):
- video = [video]
- elif not (isinstance(video, list) and all(isinstance(i, supported_formats) for i in video)):
- raise ValueError(
- f"Input is in incorrect format: {[type(i) for i in video]}. Currently, we only support {', '.join(supported_formats)}"
- )
-
- if isinstance(video[0], PIL.Image.Image):
- video = [np.array(frame) for frame in video]
-
- if isinstance(video[0], np.ndarray):
- video = np.concatenate(video, axis=0) if video[0].ndim == 5 else np.stack(video, axis=0)
-
- if video.dtype == np.uint8:
- video = np.array(video).astype(np.float32) / 255.0
-
- if video.ndim == 4:
- video = video[None, ...]
-
- video = torch.from_numpy(video.transpose(0, 4, 1, 2, 3))
-
- elif isinstance(video[0], torch.Tensor):
- video = torch.cat(video, axis=0) if video[0].ndim == 5 else torch.stack(video, axis=0)
-
- # don't need any preprocess if the video is latents
- channel = video.shape[1]
- if channel == 4:
- return video
-
- # move channels before num_frames
- video = video.permute(0, 2, 1, 3, 4)
-
- # normalize video
- video = 2.0 * video - 1.0
-
- return video
-
-
-class VideoToVideoSDPipeline(DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin):
- r"""
- Pipeline for text-guided video-to-video generation.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- tokenizer (`CLIPTokenizer`):
- A [`~transformers.CLIPTokenizer`] to tokenize text.
- unet ([`UNet3DConditionModel`]):
- A [`UNet3DConditionModel`] to denoise the encoded video latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet3DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
- compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
- processing larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
- time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
- Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
- iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.text_encoder, self.vae, self.unet]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
-            # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- prompt_embeds = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- prompt_embeds = prompt_embeds[0]
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
-            # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- negative_prompt_embeds = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- negative_prompt_embeds = negative_prompt_embeds[0]
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
-
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- # Copied from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
- def decode_latents(self, latents):
- latents = 1 / self.vae.config.scaling_factor * latents
-
- batch_size, channels, num_frames, height, width = latents.shape
- latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width)
-
- image = self.vae.decode(latents).sample
- video = (
- image[None, :]
- .reshape(
- (
- batch_size,
- num_frames,
- -1,
- )
- + image.shape[2:]
- )
- .permute(0, 2, 1, 3, 4)
- )
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- video = video.float()
- return video
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.check_inputs
- def check_inputs(
- self, prompt, strength, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None
- ):
- if strength < 0 or strength > 1:
- raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.get_timesteps
- def get_timesteps(self, num_inference_steps, strength, device):
- # get the original timestep using init_timestep
- init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep, 0)
- timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :]
-
- return timesteps, num_inference_steps - t_start
-
- def prepare_latents(self, video, timestep, batch_size, dtype, device, generator=None):
- video = video.to(device=device, dtype=dtype)
-
-        # change from (b, c, f, h, w) -> (b * f, c, h, w)
-        bsz, channel, frames, height, width = video.shape
-        video = video.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, height, width)
-
- if video.shape[1] == 4:
- init_latents = video
- else:
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- elif isinstance(generator, list):
- init_latents = [
- self.vae.encode(video[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
- ]
- init_latents = torch.cat(init_latents, dim=0)
- else:
- init_latents = self.vae.encode(video).latent_dist.sample(generator)
-
- init_latents = self.vae.config.scaling_factor * init_latents
-
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
- raise ValueError(
- f"Cannot duplicate `video` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
- )
- else:
- init_latents = torch.cat([init_latents], dim=0)
-
- shape = init_latents.shape
- noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
- latents = init_latents
-
- latents = latents[None, :].reshape((bsz, frames, latents.shape[1]) + latents.shape[2:]).permute(0, 2, 1, 3, 4)
-
- return latents
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- video: Union[List[np.ndarray], torch.FloatTensor] = None,
- strength: float = 0.6,
- num_inference_steps: int = 50,
- guidance_scale: float = 15.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "np",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- video (`List[np.ndarray]` or `torch.FloatTensor`):
- `video` frames or tensor representing a video batch to be used as the starting point for the process.
-                Can also accept video latents as `video`; if latents are passed directly, they are not encoded again.
-            strength (`float`, *optional*, defaults to 0.6):
- Indicates extent to transform the reference `video`. Must be between 0 and 1. `video` is used as a
- starting point, adding more noise to it the larger the `strength`. The number of denoising steps
- depends on the amount of noise initially added. When `strength` is 1, added noise is maximum and the
- denoising process runs for the full number of iterations specified in `num_inference_steps`. A value of
- 1 essentially ignores `video`.
- num_inference_steps (`int`, *optional*, defaults to 50):
-                The number of denoising steps. More denoising steps usually lead to higher-quality videos at the
- expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 15.0):
-                A higher guidance scale value encourages the model to generate videos closely linked to the text
-                `prompt` at the expense of lower video quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide what to not include in video generation. If not defined, you need to
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
- `(batch_size, num_channel, num_frames, height, width)`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
- provided, text embeddings are generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
- not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- output_type (`str`, *optional*, defaults to `"np"`):
-                The output format of the generated video. Choose between `torch.FloatTensor` and `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] instead
- of a plain tuple.
- callback (`Callable`, *optional*):
-                A function called every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
- [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-
- Examples:
-
- Returns:
- [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput`] is
- returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.
- """
-        # 0. Fix the number of images generated per prompt
- num_images_per_prompt = 1
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(prompt, strength, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds)
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_encoder_lora_scale = (
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
- )
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- lora_scale=text_encoder_lora_scale,
- )
-
- # 4. Preprocess video
- video = preprocess_video(video)
-
- # 5. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
-
-        # 6. Prepare latent variables
- latents = self.prepare_latents(video, latent_timestep, batch_size, prompt_embeds.dtype, device, generator)
-
-        # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
-        # 8. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # reshape latents
-                bsz, channel, frames, height, width = latents.shape
-                latents = latents.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, height, width)
-                noise_pred = noise_pred.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, height, width)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # reshape latents back
-                latents = latents[None, :].reshape(bsz, frames, channel, height, width).permute(0, 2, 1, 3, 4)
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- if output_type == "latent":
- return TextToVideoSDPipelineOutput(frames=latents)
-
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.unet.to("cpu")
-
- video_tensor = self.decode_latents(latents)
-
- if output_type == "pt":
- video = video_tensor
- else:
- video = tensor2vid(video_tensor)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (video,)
-
- return TextToVideoSDPipelineOutput(frames=video)
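As a sanity check on the denoising loop above, here is a tiny plain-PyTorch sketch, with hypothetical sizes, of the latent reshape round-trip: the (batch, channels, frames, height, width) latents are folded into (batch * frames, channels, height, width) for the scheduler step and then restored without reordering frames.

import torch

b, c, f, h, w = 2, 4, 3, 8, 8
latents = torch.randn(b, c, f, h, w)

# fold frames into the batch dimension, as done before the scheduler step
flat = latents.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)

# unfold back to (b, c, f, h, w), as done after the step
restored = flat[None, :].reshape(b, f, c, h, w).permute(0, 2, 1, 3, 4)

assert torch.equal(latents, restored)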
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/test_ddim.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/test_ddim.py
deleted file mode 100644
index de513fe234fd6b1e6a900149205171cf9acff7f2..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/test_ddim.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-
-import numpy as np
-import torch
-
-from diffusers import DDIMPipeline, DDIMScheduler, UNet2DModel
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, slow, torch_device
-
-from ..pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS
-from ..test_pipelines_common import PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-class DDIMPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
- pipeline_class = DDIMPipeline
- params = UNCONDITIONAL_IMAGE_GENERATION_PARAMS
- required_optional_params = PipelineTesterMixin.required_optional_params - {
- "num_images_per_prompt",
- "latents",
- "callback",
- "callback_steps",
- }
- batch_params = UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=3,
- out_channels=3,
- down_block_types=("DownBlock2D", "AttnDownBlock2D"),
- up_block_types=("AttnUpBlock2D", "UpBlock2D"),
- )
- scheduler = DDIMScheduler()
- components = {"unet": unet, "scheduler": scheduler}
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "batch_size": 1,
- "generator": generator,
- "num_inference_steps": 2,
- "output_type": "numpy",
- }
- return inputs
-
- def test_inference(self):
- device = "cpu"
-
- components = self.get_dummy_components()
- pipe = self.pipeline_class(**components)
- pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- self.assertEqual(image.shape, (1, 32, 32, 3))
- expected_slice = np.array(
- [1.000e00, 5.717e-01, 4.717e-01, 1.000e00, 0.000e00, 1.000e00, 3.000e-04, 0.000e00, 9.000e-04]
- )
- max_diff = np.abs(image_slice.flatten() - expected_slice).max()
- self.assertLessEqual(max_diff, 1e-3)
-
- def test_dict_tuple_outputs_equivalent(self):
- super().test_dict_tuple_outputs_equivalent(expected_max_difference=3e-3)
-
- def test_save_load_local(self):
- super().test_save_load_local(expected_max_difference=3e-3)
-
- def test_save_load_optional_components(self):
- super().test_save_load_optional_components(expected_max_difference=3e-3)
-
- def test_inference_batch_single_identical(self):
- super().test_inference_batch_single_identical(expected_max_diff=3e-3)
-
-
-@slow
-@require_torch_gpu
-class DDIMPipelineIntegrationTests(unittest.TestCase):
- def test_inference_cifar10(self):
- model_id = "google/ddpm-cifar10-32"
-
- unet = UNet2DModel.from_pretrained(model_id)
- scheduler = DDIMScheduler()
-
- ddim = DDIMPipeline(unet=unet, scheduler=scheduler)
- ddim.to(torch_device)
- ddim.set_progress_bar_config(disable=None)
-
- generator = torch.manual_seed(0)
- image = ddim(generator=generator, eta=0.0, output_type="numpy").images
-
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 32, 32, 3)
- expected_slice = np.array([0.1723, 0.1617, 0.1600, 0.1626, 0.1497, 0.1513, 0.1505, 0.1442, 0.1453])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_inference_ema_bedroom(self):
- model_id = "google/ddpm-ema-bedroom-256"
-
- unet = UNet2DModel.from_pretrained(model_id)
- scheduler = DDIMScheduler.from_pretrained(model_id)
-
- ddpm = DDIMPipeline(unet=unet, scheduler=scheduler)
- ddpm.to(torch_device)
- ddpm.set_progress_bar_config(disable=None)
-
- generator = torch.manual_seed(0)
- image = ddpm(generator=generator, output_type="numpy").images
-
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 256, 256, 3)
- expected_slice = np.array([0.0060, 0.0201, 0.0344, 0.0024, 0.0018, 0.0002, 0.0022, 0.0000, 0.0069])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/utils.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/utils.py
deleted file mode 100644
index c88208291ab2a605bee9fe6c1a28a443b74c6372..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/utils.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import mmcv
-import numpy as np
-import pycocotools.mask as mask_util
-
-
-def split_combined_polys(polys, poly_lens, polys_per_mask):
- """Split the combined 1-D polys into masks.
-
- A mask is represented as a list of polys, and a poly is represented as
- a 1-D array. In dataset, all masks are concatenated into a single 1-D
- tensor. Here we need to split the tensor into original representations.
-
- Args:
- polys (list): a list (length = image num) of 1-D tensors
- poly_lens (list): a list (length = image num) of poly length
- polys_per_mask (list): a list (length = image num) of poly number
- of each mask
-
- Returns:
- list: a list (length = image num) of list (length = mask num) of \
- list (length = poly num) of numpy array.
- """
- mask_polys_list = []
- for img_id in range(len(polys)):
- polys_single = polys[img_id]
- polys_lens_single = poly_lens[img_id].tolist()
- polys_per_mask_single = polys_per_mask[img_id].tolist()
-
- split_polys = mmcv.slice_list(polys_single, polys_lens_single)
- mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single)
- mask_polys_list.append(mask_polys)
- return mask_polys_list
-
-
-# TODO: move this function to more proper place
-def encode_mask_results(mask_results):
- """Encode bitmap mask to RLE code.
-
- Args:
- mask_results (list | tuple[list]): bitmap mask results.
- In mask scoring rcnn, mask_results is a tuple of (segm_results,
- segm_cls_score).
-
- Returns:
- list | tuple: RLE encoded mask.
- """
- if isinstance(mask_results, tuple): # mask scoring
- cls_segms, cls_mask_scores = mask_results
- else:
- cls_segms = mask_results
- num_classes = len(cls_segms)
- encoded_mask_results = [[] for _ in range(num_classes)]
- for i in range(len(cls_segms)):
- for cls_segm in cls_segms[i]:
- encoded_mask_results[i].append(
- mask_util.encode(
- np.array(
- cls_segm[:, :, np.newaxis], order='F',
- dtype='uint8'))[0]) # encoded with RLE
- if isinstance(mask_results, tuple):
- return encoded_mask_results, cls_mask_scores
- else:
- return encoded_mask_results
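Here is a hedged round-trip sketch of the per-segment RLE encoding that encode_mask_results applies, assuming pycocotools is installed; the mask is passed in Fortran order, matching the order='F' call above.

import numpy as np
import pycocotools.mask as mask_util

# a tiny binary mask with one filled rectangle
mask = np.zeros((4, 6), dtype=np.uint8)
mask[1:3, 2:5] = 1

# encode expects a Fortran-ordered (h, w, n) uint8 array and returns one RLE dict per mask
rles = mask_util.encode(np.array(mask[:, :, np.newaxis], order='F', dtype='uint8'))
decoded = mask_util.decode(rles)[:, :, 0]

assert np.array_equal(decoded, mask)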
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/text.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/text.py
deleted file mode 100644
index 87b1a3eca9595a130121526f8b4c29915387ab35..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/text.py
+++ /dev/null
@@ -1,256 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import datetime
-import os
-import os.path as osp
-from collections import OrderedDict
-
-import torch
-import torch.distributed as dist
-
-import annotator.uniformer.mmcv as mmcv
-from annotator.uniformer.mmcv.fileio.file_client import FileClient
-from annotator.uniformer.mmcv.utils import is_tuple_of, scandir
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class TextLoggerHook(LoggerHook):
- """Logger hook in text.
-
- In this logger hook, the information will be printed on terminal and
- saved in json file.
-
- Args:
- by_epoch (bool, optional): Whether EpochBasedRunner is used.
- Default: True.
- interval (int, optional): Logging interval (every k iterations).
- Default: 10.
-        ignore_last (bool, optional): Ignore the log of the last iterations in each
-            epoch if fewer than :attr:`interval` of them remain. Default: True.
- reset_flag (bool, optional): Whether to clear the output buffer after
- logging. Default: False.
- interval_exp_name (int, optional): Logging interval for experiment
- name. This feature is to help users conveniently get the experiment
- information from screen or log file. Default: 1000.
- out_dir (str, optional): Logs are saved in ``runner.work_dir`` default.
- If ``out_dir`` is specified, logs will be copied to a new directory
- which is the concatenation of ``out_dir`` and the last level
- directory of ``runner.work_dir``. Default: None.
- `New in version 1.3.16.`
- out_suffix (str or tuple[str], optional): Those filenames ending with
- ``out_suffix`` will be copied to ``out_dir``.
- Default: ('.log.json', '.log', '.py').
- `New in version 1.3.16.`
- keep_local (bool, optional): Whether to keep local log when
- :attr:`out_dir` is specified. If False, the local log will be
- removed. Default: True.
- `New in version 1.3.16.`
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
- `New in version 1.3.16.`
- """
-
- def __init__(self,
- by_epoch=True,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- interval_exp_name=1000,
- out_dir=None,
- out_suffix=('.log.json', '.log', '.py'),
- keep_local=True,
- file_client_args=None):
- super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag,
- by_epoch)
- self.by_epoch = by_epoch
- self.time_sec_tot = 0
- self.interval_exp_name = interval_exp_name
-
- if out_dir is None and file_client_args is not None:
- raise ValueError(
-                'file_client_args should be "None" when `out_dir` is not '
- 'specified.')
- self.out_dir = out_dir
-
- if not (out_dir is None or isinstance(out_dir, str)
- or is_tuple_of(out_dir, str)):
-            raise TypeError('out_dir should be "None" or string or tuple of '
-                            f'string, but got {out_dir}')
- self.out_suffix = out_suffix
-
- self.keep_local = keep_local
- self.file_client_args = file_client_args
- if self.out_dir is not None:
- self.file_client = FileClient.infer_client(file_client_args,
- self.out_dir)
-
- def before_run(self, runner):
- super(TextLoggerHook, self).before_run(runner)
-
- if self.out_dir is not None:
- self.file_client = FileClient.infer_client(self.file_client_args,
- self.out_dir)
- # The final `self.out_dir` is the concatenation of `self.out_dir`
- # and the last level directory of `runner.work_dir`
- basename = osp.basename(runner.work_dir.rstrip(osp.sep))
- self.out_dir = self.file_client.join_path(self.out_dir, basename)
- runner.logger.info(
- (f'Text logs will be saved to {self.out_dir} by '
- f'{self.file_client.name} after the training process.'))
-
- self.start_iter = runner.iter
- self.json_log_path = osp.join(runner.work_dir,
- f'{runner.timestamp}.log.json')
- if runner.meta is not None:
- self._dump_log(runner.meta, runner)
-
- def _get_max_memory(self, runner):
- device = getattr(runner.model, 'output_device', None)
- mem = torch.cuda.max_memory_allocated(device=device)
- mem_mb = torch.tensor([mem / (1024 * 1024)],
- dtype=torch.int,
- device=device)
- if runner.world_size > 1:
- dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX)
- return mem_mb.item()
-
- def _log_info(self, log_dict, runner):
- # print exp name for users to distinguish experiments
- # at every ``interval_exp_name`` iterations and the end of each epoch
- if runner.meta is not None and 'exp_name' in runner.meta:
- if (self.every_n_iters(runner, self.interval_exp_name)) or (
- self.by_epoch and self.end_of_epoch(runner)):
- exp_info = f'Exp name: {runner.meta["exp_name"]}'
- runner.logger.info(exp_info)
-
- if log_dict['mode'] == 'train':
- if isinstance(log_dict['lr'], dict):
- lr_str = []
- for k, val in log_dict['lr'].items():
- lr_str.append(f'lr_{k}: {val:.3e}')
- lr_str = ' '.join(lr_str)
- else:
- lr_str = f'lr: {log_dict["lr"]:.3e}'
-
- # by epoch: Epoch [4][100/1000]
- # by iter: Iter [100/100000]
- if self.by_epoch:
- log_str = f'Epoch [{log_dict["epoch"]}]' \
- f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t'
- else:
- log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t'
- log_str += f'{lr_str}, '
-
- if 'time' in log_dict.keys():
- self.time_sec_tot += (log_dict['time'] * self.interval)
- time_sec_avg = self.time_sec_tot / (
- runner.iter - self.start_iter + 1)
- eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1)
- eta_str = str(datetime.timedelta(seconds=int(eta_sec)))
- log_str += f'eta: {eta_str}, '
- log_str += f'time: {log_dict["time"]:.3f}, ' \
- f'data_time: {log_dict["data_time"]:.3f}, '
- # statistic memory
- if torch.cuda.is_available():
- log_str += f'memory: {log_dict["memory"]}, '
- else:
- # val/test time
- # here 1000 is the length of the val dataloader
- # by epoch: Epoch[val] [4][1000]
- # by iter: Iter[val] [1000]
- if self.by_epoch:
- log_str = f'Epoch({log_dict["mode"]}) ' \
- f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t'
- else:
- log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t'
-
- log_items = []
- for name, val in log_dict.items():
- # TODO: resolve this hack
- # these items have been in log_str
- if name in [
- 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time',
- 'memory', 'epoch'
- ]:
- continue
- if isinstance(val, float):
- val = f'{val:.4f}'
- log_items.append(f'{name}: {val}')
- log_str += ', '.join(log_items)
-
- runner.logger.info(log_str)
-
- def _dump_log(self, log_dict, runner):
- # dump log in json format
- json_log = OrderedDict()
- for k, v in log_dict.items():
- json_log[k] = self._round_float(v)
- # only append log at last line
- if runner.rank == 0:
- with open(self.json_log_path, 'a+') as f:
- mmcv.dump(json_log, f, file_format='json')
- f.write('\n')
-
- def _round_float(self, items):
- if isinstance(items, list):
- return [self._round_float(item) for item in items]
- elif isinstance(items, float):
- return round(items, 5)
- else:
- return items
-
- def log(self, runner):
- if 'eval_iter_num' in runner.log_buffer.output:
- # this doesn't modify runner.iter and is regardless of by_epoch
- cur_iter = runner.log_buffer.output.pop('eval_iter_num')
- else:
- cur_iter = self.get_iter(runner, inner_iter=True)
-
- log_dict = OrderedDict(
- mode=self.get_mode(runner),
- epoch=self.get_epoch(runner),
- iter=cur_iter)
-
- # only record lr of the first param group
- cur_lr = runner.current_lr()
- if isinstance(cur_lr, list):
- log_dict['lr'] = cur_lr[0]
- else:
- assert isinstance(cur_lr, dict)
- log_dict['lr'] = {}
- for k, lr_ in cur_lr.items():
- assert isinstance(lr_, list)
- log_dict['lr'].update({k: lr_[0]})
-
- if 'time' in runner.log_buffer.output:
- # statistic memory
- if torch.cuda.is_available():
- log_dict['memory'] = self._get_max_memory(runner)
-
- log_dict = dict(log_dict, **runner.log_buffer.output)
-
- self._log_info(log_dict, runner)
- self._dump_log(log_dict, runner)
- return log_dict
-
- def after_run(self, runner):
- # copy or upload logs to self.out_dir
- if self.out_dir is not None:
- for filename in scandir(runner.work_dir, self.out_suffix, True):
- local_filepath = osp.join(runner.work_dir, filename)
- out_filepath = self.file_client.join_path(
- self.out_dir, filename)
- with open(local_filepath, 'r') as f:
- self.file_client.put_text(f.read(), out_filepath)
-
- runner.logger.info(
- (f'The file {local_filepath} has been uploaded to '
- f'{out_filepath}.'))
-
- if not self.keep_local:
- os.remove(local_filepath)
- runner.logger.info(
- (f'{local_filepath} was removed due to the '
- '`self.keep_local=False`'))
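Here is a minimal sketch of the JSON-lines log that _dump_log appends to the {timestamp}.log.json file; the field names below are illustrative. One JSON object is written per logging step, so the file can be replayed line by line.

import json

records = [
    {"mode": "train", "epoch": 1, "iter": 10, "lr": 0.001, "loss": 0.4213, "time": 0.512},
    {"mode": "val", "epoch": 1, "iter": 1000, "accuracy": 0.87},
]

# append one JSON object per line, as the hook does on rank 0
with open("example.log.json", "a+") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# replay the log line by line
with open("example.log.json") as f:
    replayed = [json.loads(line) for line in f if line.strip()]
print(replayed[-1])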
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/check.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/check.py
deleted file mode 100644
index 584df9f55c5d63d632f375d703f858e18c0acf2c..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/check.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import logging
-from optparse import Values
-from typing import List
-
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.status_codes import ERROR, SUCCESS
-from pip._internal.operations.check import (
- check_package_set,
- create_package_set_from_installed,
-)
-from pip._internal.utils.misc import write_output
-
-logger = logging.getLogger(__name__)
-
-
-class CheckCommand(Command):
- """Verify installed packages have compatible dependencies."""
-
- usage = """
- %prog [options]"""
-
- def run(self, options: Values, args: List[str]) -> int:
- package_set, parsing_probs = create_package_set_from_installed()
- missing, conflicting = check_package_set(package_set)
-
- for project_name in missing:
- version = package_set[project_name].version
- for dependency in missing[project_name]:
- write_output(
- "%s %s requires %s, which is not installed.",
- project_name,
- version,
- dependency[0],
- )
-
- for project_name in conflicting:
- version = package_set[project_name].version
- for dep_name, dep_version, req in conflicting[project_name]:
- write_output(
- "%s %s has requirement %s, but you have %s %s.",
- project_name,
- version,
- req,
- dep_name,
- dep_version,
- )
-
- if missing or conflicting or parsing_probs:
- return ERROR
- else:
- write_output("No broken requirements found.")
- return SUCCESS
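
For reference, the internal helpers used by the `check` command above can also be driven programmatically. The following is a minimal sketch against pip's private `pip._internal` API (not a supported public interface), assuming the signatures shown in the command above:

```python
# Minimal sketch, assuming the pip-internal helpers imported by the command above.
# pip._internal is not a public API, so treat this as illustrative only.
from pip._internal.operations.check import (
    check_package_set,
    create_package_set_from_installed,
)


def report_broken_requirements() -> bool:
    """Return True if the current environment has missing or conflicting deps."""
    package_set, parsing_problems = create_package_set_from_installed()
    missing, conflicting = check_package_set(package_set)

    for project_name, deps in missing.items():
        for dep in deps:
            print(f"{project_name} is missing dependency {dep[0]}")

    for project_name, conflicts in conflicting.items():
        for dep_name, dep_version, req in conflicts:
            print(f"{project_name} requires {req}, but {dep_name} {dep_version} is installed")

    return bool(missing or conflicting or parsing_problems)


if __name__ == "__main__":
    print("Broken requirements found." if report_broken_requirements() else "No broken requirements found.")
```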
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/version.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/version.py
deleted file mode 100644
index c5e9d85cd75884b129d4ab8d0453c0e50d0c1f68..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/version.py
+++ /dev/null
@@ -1,9 +0,0 @@
-"""
-This module exists only to simplify retrieving the version number of chardet
-from within setuptools and from chardet subpackages.
-
-:author: Dan Blanchard (dan.blanchard@gmail.com)
-"""
-
-__version__ = "5.1.0"
-VERSION = __version__.split(".")
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/structures.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/structures.py
deleted file mode 100644
index 188e13e4829591facb23ae0e2eda84b9807cb818..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/structures.py
+++ /dev/null
@@ -1,99 +0,0 @@
-"""
-requests.structures
-~~~~~~~~~~~~~~~~~~~
-
-Data structures that power Requests.
-"""
-
-from collections import OrderedDict
-
-from .compat import Mapping, MutableMapping
-
-
-class CaseInsensitiveDict(MutableMapping):
- """A case-insensitive ``dict``-like object.
-
- Implements all methods and operations of
- ``MutableMapping`` as well as dict's ``copy``. Also
- provides ``lower_items``.
-
- All keys are expected to be strings. The structure remembers the
- case of the last key to be set, and ``iter(instance)``,
- ``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
- will contain case-sensitive keys. However, querying and contains
- testing is case insensitive::
-
- cid = CaseInsensitiveDict()
- cid['Accept'] = 'application/json'
- cid['aCCEPT'] == 'application/json' # True
- list(cid) == ['Accept'] # True
-
- For example, ``headers['content-encoding']`` will return the
- value of a ``'Content-Encoding'`` response header, regardless
- of how the header name was originally stored.
-
- If the constructor, ``.update``, or equality comparison
- operations are given keys that have equal ``.lower()``s, the
- behavior is undefined.
- """
-
- def __init__(self, data=None, **kwargs):
- self._store = OrderedDict()
- if data is None:
- data = {}
- self.update(data, **kwargs)
-
- def __setitem__(self, key, value):
- # Use the lowercased key for lookups, but store the actual
- # key alongside the value.
- self._store[key.lower()] = (key, value)
-
- def __getitem__(self, key):
- return self._store[key.lower()][1]
-
- def __delitem__(self, key):
- del self._store[key.lower()]
-
- def __iter__(self):
- return (casedkey for casedkey, mappedvalue in self._store.values())
-
- def __len__(self):
- return len(self._store)
-
- def lower_items(self):
- """Like iteritems(), but with all lowercase keys."""
- return ((lowerkey, keyval[1]) for (lowerkey, keyval) in self._store.items())
-
- def __eq__(self, other):
- if isinstance(other, Mapping):
- other = CaseInsensitiveDict(other)
- else:
- return NotImplemented
- # Compare insensitively
- return dict(self.lower_items()) == dict(other.lower_items())
-
- # Copy is required
- def copy(self):
- return CaseInsensitiveDict(self._store.values())
-
- def __repr__(self):
- return str(dict(self.items()))
-
-
-class LookupDict(dict):
- """Dictionary lookup object."""
-
- def __init__(self, name=None):
- self.name = name
- super().__init__()
-
- def __repr__(self):
-        return f"<lookup '{self.name}'>"
-
- def __getitem__(self, key):
- # We allow fall-through here, so values default to None
-
- return self.__dict__.get(key, None)
-
- def get(self, key, default=None):
- return self.__dict__.get(key, default)
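
A short usage sketch of the `CaseInsensitiveDict` behaviour described in the docstring above; it assumes the stand-alone `requests` package is installed, but the copy vendored here behaves the same way:

```python
# Minimal usage sketch of CaseInsensitiveDict.
# Assumes the `requests` package is installed; pip's vendored copy is identical.
from requests.structures import CaseInsensitiveDict

headers = CaseInsensitiveDict()
headers["Content-Encoding"] = "gzip"

# Lookups and membership tests ignore case...
assert headers["content-encoding"] == "gzip"
assert "CONTENT-ENCODING" in headers

# ...but iteration preserves the case of the last key that was set.
headers["content-encoding"] = "br"
assert list(headers) == ["content-encoding"]

# Equality against another mapping is also case-insensitive.
assert headers == {"Content-ENCODING": "br"}
```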
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py
deleted file mode 100644
index fd3a7b79b6b7a3608ad7cb3918de020a5a600d2f..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_box2box_transform.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-import torch
-
-from detectron2.modeling.box_regression import (
- Box2BoxTransform,
- Box2BoxTransformLinear,
- Box2BoxTransformRotated,
-)
-from detectron2.utils.testing import random_boxes
-
-logger = logging.getLogger(__name__)
-
-
-class TestBox2BoxTransform(unittest.TestCase):
- def test_reconstruction(self):
- weights = (5, 5, 10, 10)
- b2b_tfm = Box2BoxTransform(weights=weights)
- src_boxes = random_boxes(10)
- dst_boxes = random_boxes(10)
-
- devices = [torch.device("cpu")]
- if torch.cuda.is_available():
- devices.append(torch.device("cuda"))
- for device in devices:
- src_boxes = src_boxes.to(device=device)
- dst_boxes = dst_boxes.to(device=device)
- deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes)
- dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes)
- self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed))
-
- def test_apply_deltas_tracing(self):
- weights = (5, 5, 10, 10)
- b2b_tfm = Box2BoxTransform(weights=weights)
-
- with torch.no_grad():
- func = torch.jit.trace(b2b_tfm.apply_deltas, (torch.randn(10, 20), torch.randn(10, 4)))
-
- o = func(torch.randn(10, 20), torch.randn(10, 4))
- self.assertEqual(o.shape, (10, 20))
- o = func(torch.randn(5, 20), torch.randn(5, 4))
- self.assertEqual(o.shape, (5, 20))
-
-
-def random_rotated_boxes(mean_box, std_length, std_angle, N):
- return torch.cat(
- [torch.rand(N, 4) * std_length, torch.rand(N, 1) * std_angle], dim=1
- ) + torch.tensor(mean_box, dtype=torch.float)
-
-
-class TestBox2BoxTransformRotated(unittest.TestCase):
- def test_reconstruction(self):
- weights = (5, 5, 10, 10, 1)
- b2b_transform = Box2BoxTransformRotated(weights=weights)
- src_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10)
- dst_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10)
-
- devices = [torch.device("cpu")]
- if torch.cuda.is_available():
- devices.append(torch.device("cuda"))
- for device in devices:
- src_boxes = src_boxes.to(device=device)
- dst_boxes = dst_boxes.to(device=device)
- deltas = b2b_transform.get_deltas(src_boxes, dst_boxes)
- dst_boxes_reconstructed = b2b_transform.apply_deltas(deltas, src_boxes)
- assert torch.allclose(dst_boxes[:, :4], dst_boxes_reconstructed[:, :4], atol=1e-5)
- # angle difference has to be normalized
- assert torch.allclose(
- (dst_boxes[:, 4] - dst_boxes_reconstructed[:, 4] + 180.0) % 360.0 - 180.0,
- torch.zeros_like(dst_boxes[:, 4]),
- atol=1e-4,
- )
-
-
-class TestBox2BoxTransformLinear(unittest.TestCase):
- def test_reconstruction(self):
- b2b_tfm = Box2BoxTransformLinear()
- src_boxes = random_boxes(10)
- dst_boxes = torch.tensor([0, 0, 101, 101] * 10).reshape(10, 4).float()
-
- devices = [torch.device("cpu")]
- if torch.cuda.is_available():
- devices.append(torch.device("cuda"))
- for device in devices:
- src_boxes = src_boxes.to(device=device)
- dst_boxes = dst_boxes.to(device=device)
- deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes)
- dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes)
- self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed, atol=1e-3))
-
-
-if __name__ == "__main__":
- unittest.main()
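
One detail worth highlighting from the rotated-box test: angles wrap every 360 degrees, so the reconstruction check maps the angle difference into [-180, 180) before comparing it to zero. A tiny standalone illustration of that normalization, using the same expression as the test but plain Python floats:

```python
# Standalone illustration of the angle normalization used in
# TestBox2BoxTransformRotated above: map an angle difference into [-180, 180).
def normalized_angle_diff(a: float, b: float) -> float:
    return (a - b + 180.0) % 360.0 - 180.0


# 350 deg and -10 deg describe the same orientation, so the normalized
# difference is 0 even though the raw difference is 360.
assert normalized_angle_diff(350.0, -10.0) == 0.0
# A raw difference of 200 deg is really -160 deg after wrapping.
assert normalized_angle_diff(30.0, -170.0) == -160.0
```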
diff --git a/spaces/Benson/text-generation/Examples/Bus Simulator Indonesia Nuevo Mapa Descargar.md b/spaces/Benson/text-generation/Examples/Bus Simulator Indonesia Nuevo Mapa Descargar.md
deleted file mode 100644
index f59e9475e9a89ac66fa61c700a078eb5b755ddbd..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Bus Simulator Indonesia Nuevo Mapa Descargar.md
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
-# Bus Simulator Indonesia: How to Download and Enjoy New Maps
-
-Bus Simulator Indonesia (aka BUSSID) is a popular simulation game that lets you experience what it is like to be a bus driver in Indonesia in a fun and authentic way. BUSSID may not be the first bus simulator, but it is probably one of the few with this many features and the most authentic Indonesian setting.
-
-Some of the main features of BUSSID are:
-
-- Cool and fun horns, including the iconic "Om Telolet Om!" horn
-- High-quality, detailed 3D graphics
-- No obstructive ads while driving
-- Leaderboard and online data saving
-- Use your own 3D model through the vehicle mod system
-- Online multiplayer convoys
-
-To play BUSSID, you choose a bus, a livery, and a route. Then you drive your bus along the route, pick up and drop off passengers, earn money, and avoid accidents. You can also customize your bus, upgrade your garage, and join online convoys with other players.
-
-One of the benefits of playing BUSSID is that you can download new maps for the game, which add more variety, challenge, and fun to your driving experience. New maps can have different themes, such as extreme, off-road, or scenic. They can also have different features, such as sharp curves, steep hills, or realistic landmarks. New maps can make you feel like you are driving in different regions of Indonesia or even in other countries.
-
-But how do you download new maps for BUSSID, and how do you enjoy them? In this article, we show you how to do both in a few easy steps. Let's get started!
-
-## How to Download New Maps for Bus Simulator Indonesia
-
-One of the best sources of mod maps for BUSSID is [MediaRale]( 1 ), a website that provides mods for various games, including BUSSID. MediaRale has a section dedicated to BUSSID mod maps, where you can find many options to choose from. You can browse by category, such as extreme, off-road, or scenic, and see screenshots, descriptions, ratings, and download links for each mod map.
-
-Once you have found a mod map you like, download it to your device. The mod map file usually comes in ZIP or RAR format, which means you need to extract it with a file manager or archive extractor app. You can find many free apps for this purpose on the Google Play Store or the App Store.
-
-After extracting the mod map file, copy it to the BUSSID mod folder. The mod folder is located in your device's internal storage, under Android/data/com.maleo.bussimulatorid/files/mod. You can use a file manager app to navigate to this folder and paste the mod map file there (a scriptable sketch of this extract-and-copy step follows below).
-
-The last step is to launch the game and select the mod map from the map menu. Tap the map icon in the top-right corner of the screen, scroll down to find the mod map you downloaded, tap it to select it, and then tap the start button to begin your trip.
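
If you prefer to script the extract-and-copy step (for example from a terminal app on the device), the minimal Python sketch below shows the idea. The download path and archive name are hypothetical placeholders; only the mod folder path comes from the steps above, and a RAR archive would need a different extractor:

```python
import zipfile
from pathlib import Path

# Hypothetical download location and archive name; adjust to your device.
DOWNLOADED_ARCHIVE = Path("/sdcard/Download/extreme_map.zip")
# BUSSID mod folder, as given in the steps above.
MOD_FOLDER = Path("/sdcard/Android/data/com.maleo.bussimulatorid/files/mod")


def install_mod_map(archive: Path, mod_folder: Path) -> None:
    """Extract a downloaded mod-map archive into the BUSSID mod folder."""
    mod_folder.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(mod_folder)  # drops the map file(s) into the mod folder
    print(f"Installed {archive.name} into {mod_folder}")


if __name__ == "__main__":
    install_mod_map(DOWNLOADED_ARCHIVE, MOD_FOLDER)
```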
-
-## How to Enjoy New Maps for Bus Simulator Indonesia
-
-Now that you have downloaded and installed a new map for BUSSID, you can enjoy it by driving your bus on it. There are a few tips that can help you get the most out of the experience. Here are some of them:
-
-Tip 1: Choose a suitable bus and livery for the map.
-
-Tip 2: Follow the traffic rules and respect other drivers. Even though you are playing on a mod map, you still have to follow the traffic rules and respect other drivers on the road. This means obeying the speed limit, stopping at red lights, signalling before turning, and avoiding collisions. This will not only make your driving more realistic and safe, but also more enjoyable and rewarding.
-
-Tip 3: Use the horn and other features to interact with the environment. One of the most fun aspects of BUSSID is that you can use the horn and other features to interact with your surroundings. For example, you can use the horn to greet other drivers, pedestrians, or animals. You can also use the wipers, headlights, indicators, and doors to communicate with others or express yourself. You can even use the "Om Telolet Om!" horn to make people cheer for you.
-
-Tip 4: Explore different routes and landmarks on the map. Another way to enjoy new maps for BUSSID is to explore the different routes and landmarks they offer. You can follow the GPS navigation or choose your own path. You may discover new places, scenery, or challenges you have not seen before, and you may find hidden secrets or Easter eggs that the map creator has left for you.
-
-Tip 5: Join online multiplayer convoys with other players. The best way to enjoy new maps for BUSSID is to join online multiplayer convoys. Tap the convoy icon in the top-left corner of the screen and choose a convoy that is playing on the same map as you, or create your own convoy and invite your friends or other players to join you. By joining a convoy, you can chat with other players, share your experiences, and have fun together.
-
-
-## Conclusion
-
-In this article, we have shown you how to download and enjoy new maps for Bus Simulator Indonesia. To download new maps for BUSSID, follow these steps:
-
-- Find a mod map you like on MediaRale
-- Download the mod map file and extract it if necessary
-- Copy the mod map file to the BUSSID mod folder
-- Launch the game and select the mod map from the map menu
-
-To enjoy new maps for BUSSID, you can follow these tips:
-
-- Choose a suitable bus and livery for the map
-- Follow the traffic rules and respect other drivers
-- Use the horn and other features to interact with the environment
-- Explore different routes and landmarks on the map
-- Join online multiplayer convoys with other players
-
-By following these steps and tips, you can download and enjoy new maps for BUSSID and have fun driving your bus on them. If you have not tried BUSSID yet, you can download it for free from the Google Play Store or the App Store. You can also visit the official BUSSID website to learn more about the game and its features. Happy driving!
-
-## Frequently Asked Questions
-
-Here are some frequently asked questions about new maps for BUSSID:
-
-Q: How many new maps are available for BUSSID?
-
-A: There is no exact number of new maps for BUSSID, because new mod maps are constantly being created and uploaded by users. However, you can find hundreds of mod maps for BUSSID on MediaRale, ranging from extreme, off-road, and scenic maps to realistic ones.
-
-Q: How do I know whether a mod map is compatible with my version of BUSSID?
-
-A: You can check the compatibility of a mod map by looking at its description, rating, and comments on MediaRale. You can also compare the mod map's upload date with the date of the latest BUSSID update; in general, mod maps uploaded after the latest BUSSID update are more likely to be compatible.
-
-Q: How do I uninstall a mod map from BUSSID?
-
-A: Delete the mod map file from the BUSSID mod folder using a file manager app. The map will no longer appear in the map menu the next time you launch the game.
-
-Q: How do I report a problem or a bug with a mod map?
-
-A: If you find a problem or a bug with a mod map, you can report it to the mod map creator or to MediaRale. You can find the creator's contact information on their profile page on MediaRale, and you can also leave a comment or a rating on the mod map's page to share your feedback.
-
-Q: How can I create my own mod map for BUSSID?
-
-A: To create your own mod map for BUSSID, you need 3D modelling software such as Blender, SketchUp, or Maya, and you need to follow BUSSID's guidelines and specifications for mod maps. You can find more information and tutorials on how to create mod maps for BUSSID on the official BUSSID website or on YouTube.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CODE_OF_CONDUCT.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CODE_OF_CONDUCT.md
deleted file mode 100644
index 0f7ad8bfc173eac554f0b6ef7c684861e8014bbe..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Code of Conduct
-
-Facebook has adopted a Code of Conduct that we expect project participants to adhere to.
-Please read the [full text](https://code.fb.com/codeofconduct/)
-so that you can understand what actions will and will not be tolerated.
diff --git a/spaces/CVPR/LIVE/thrust/cmake/ThrustMultiConfig.cmake b/spaces/CVPR/LIVE/thrust/cmake/ThrustMultiConfig.cmake
deleted file mode 100644
index 2b3a40284e6f9fd5515b0fe708b42a0bcc9d3bf2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/cmake/ThrustMultiConfig.cmake
+++ /dev/null
@@ -1,127 +0,0 @@
-# This file defines thrust_configure_multiconfig(), which sets up and handles
-# the MultiConfig options that allow multiple host/device/dialect configurations
-# to be generated from a single thrust build.
-
-function(thrust_configure_multiconfig)
- option(THRUST_ENABLE_MULTICONFIG "Enable multiconfig options for coverage testing." OFF)
-
- # Dialects:
- set(THRUST_CPP_DIALECT_OPTIONS
- 11 14 17
- CACHE INTERNAL "C++ dialects supported by Thrust." FORCE
- )
-
- if (THRUST_ENABLE_MULTICONFIG)
- # Handle dialect options:
- foreach (dialect IN LISTS THRUST_CPP_DIALECT_OPTIONS)
- set(default_value OFF)
- if (dialect EQUAL 14) # Default to just 14 on:
- set(default_value ON)
- endif()
- option(THRUST_MULTICONFIG_ENABLE_DIALECT_CPP${dialect}
- "Generate C++${dialect} build configurations."
- ${default_value}
- )
- endforeach()
-
- # Supported versions of MSVC do not distinguish between C++11 and C++14.
- # Warn the user that they may be generating a ton of redundant targets.
- if ("MSVC" STREQUAL "${CMAKE_CXX_COMPILER_ID}" AND
- THRUST_MULTICONFIG_ENABLE_DIALECT_CPP11)
- message(WARNING
- "Supported versions of MSVC (2017+) do not distinguish between C++11 "
- "and C++14. The requested C++11 targets will be built with C++14."
- )
- endif()
-
- # Systems:
- option(THRUST_MULTICONFIG_ENABLE_SYSTEM_CPP "Generate build configurations that use CPP." ON)
- option(THRUST_MULTICONFIG_ENABLE_SYSTEM_CUDA "Generate build configurations that use CUDA." ON)
- option(THRUST_MULTICONFIG_ENABLE_SYSTEM_OMP "Generate build configurations that use OpenMP." OFF)
- option(THRUST_MULTICONFIG_ENABLE_SYSTEM_TBB "Generate build configurations that use TBB." OFF)
-
- # CMake added C++17 support for CUDA targets in 3.18:
- if (THRUST_MULTICONFIG_ENABLE_DIALECT_CPP17 AND
- THRUST_MULTICONFIG_ENABLE_SYSTEM_CUDA)
- cmake_minimum_required(VERSION 3.18)
- endif()
-
- # Workload:
- # - `SMALL`: [3 configs] Minimal coverage and validation of each device system against the `CPP` host.
- # - `MEDIUM`: [6 configs] Cheap extended coverage.
- # - `LARGE`: [8 configs] Expensive extended coverage. Include all useful build configurations.
- # - `FULL`: [12 configs] The complete cross product of all possible build configurations.
- #
- # Config | Workloads | Value | Expense | Note
- # ---------|-----------|------------|-----------|-----------------------------
- # CPP/CUDA | F L M S | Essential | Expensive | Validates CUDA against CPP
- # CPP/OMP | F L M S | Essential | Cheap | Validates OMP against CPP
- # CPP/TBB | F L M S | Essential | Cheap | Validates TBB against CPP
- # CPP/CPP | F L M | Important | Cheap | Tests CPP as device
- # OMP/OMP | F L M | Important | Cheap | Tests OMP as host
- # TBB/TBB | F L M | Important | Cheap | Tests TBB as host
- # TBB/CUDA | F L | Important | Expensive | Validates TBB/CUDA interop
- # OMP/CUDA | F L | Important | Expensive | Validates OMP/CUDA interop
- # TBB/OMP | F | Not useful | Cheap | Mixes CPU-parallel systems
- # OMP/TBB | F | Not useful | Cheap | Mixes CPU-parallel systems
- # TBB/CPP | F | Not Useful | Cheap | Parallel host, serial device
- # OMP/CPP | F | Not Useful | Cheap | Parallel host, serial device
-
- set(THRUST_MULTICONFIG_WORKLOAD SMALL CACHE STRING
- "Limit host/device configs: SMALL (up to 3 h/d combos per dialect), MEDIUM(6), LARGE(8), FULL(12)"
- )
- set_property(CACHE THRUST_MULTICONFIG_WORKLOAD PROPERTY STRINGS
- SMALL MEDIUM LARGE FULL
- )
- set(THRUST_MULTICONFIG_WORKLOAD_SMALL_CONFIGS
- CPP_OMP CPP_TBB CPP_CUDA
- CACHE INTERNAL "Host/device combos enabled for SMALL workloads." FORCE
- )
- set(THRUST_MULTICONFIG_WORKLOAD_MEDIUM_CONFIGS
- ${THRUST_MULTICONFIG_WORKLOAD_SMALL_CONFIGS}
- CPP_CPP TBB_TBB OMP_OMP
- CACHE INTERNAL "Host/device combos enabled for MEDIUM workloads." FORCE
- )
- set(THRUST_MULTICONFIG_WORKLOAD_LARGE_CONFIGS
- ${THRUST_MULTICONFIG_WORKLOAD_MEDIUM_CONFIGS}
- OMP_CUDA TBB_CUDA
- CACHE INTERNAL "Host/device combos enabled for LARGE workloads." FORCE
- )
- set(THRUST_MULTICONFIG_WORKLOAD_FULL_CONFIGS
- ${THRUST_MULTICONFIG_WORKLOAD_LARGE_CONFIGS}
- OMP_CPP TBB_CPP OMP_TBB TBB_OMP
- CACHE INTERNAL "Host/device combos enabled for FULL workloads." FORCE
- )
-
- # Hide the single config options if they exist from a previous run:
- if (DEFINED THRUST_HOST_SYSTEM)
- set_property(CACHE THRUST_HOST_SYSTEM PROPERTY TYPE INTERNAL)
- set_property(CACHE THRUST_DEVICE_SYSTEM PROPERTY TYPE INTERNAL)
- endif()
- if (DEFINED THRUST_CPP_DIALECT)
- set_property(CACHE THRUST_CPP_DIALECT PROPERTY TYPE INTERNAL)
- endif()
-
- else() # Single config:
- # Restore system option visibility if these cache options already exist
- # from a previous run.
- if (DEFINED THRUST_HOST_SYSTEM)
- set_property(CACHE THRUST_HOST_SYSTEM PROPERTY TYPE STRING)
- set_property(CACHE THRUST_DEVICE_SYSTEM PROPERTY TYPE STRING)
- endif()
-
- set(THRUST_CPP_DIALECT 14
- CACHE STRING "The C++ standard to target: ${THRUST_CPP_DIALECT_OPTIONS}"
- )
- set_property(CACHE THRUST_CPP_DIALECT
- PROPERTY STRINGS
- ${THRUST_CPP_DIALECT_OPTIONS}
- )
-
- # CMake added C++17 support for CUDA targets in 3.18:
- if (THRUST_CPP_DIALECT EQUAL 17 AND
- THRUST_DEVICE_SYSTEM STREQUAL "CUDA")
- cmake_minimum_required(VERSION 3.18)
- endif()
- endif()
-endfunction()
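
For readers skimming the workload table above, here is an illustrative Python restatement of how each workload level nests into the next. It simply mirrors the `THRUST_MULTICONFIG_WORKLOAD_*_CONFIGS` lists and the 3/6/8/12 counts noted in the comments, and is not part of the CMake logic:

```python
# Illustration only: host/device combos enabled per workload level,
# restated from the THRUST_MULTICONFIG_WORKLOAD_*_CONFIGS cache variables above.
SMALL = ["CPP_OMP", "CPP_TBB", "CPP_CUDA"]
MEDIUM = SMALL + ["CPP_CPP", "TBB_TBB", "OMP_OMP"]
LARGE = MEDIUM + ["OMP_CUDA", "TBB_CUDA"]
FULL = LARGE + ["OMP_CPP", "TBB_CPP", "OMP_TBB", "TBB_OMP"]

# Each level is a superset of the previous one, with 3, 6, 8, and 12 configs.
assert [len(w) for w in (SMALL, MEDIUM, LARGE, FULL)] == [3, 6, 8, 12]
```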
diff --git a/spaces/DAMO-NLP-SG/CLEX-Chat/modeling_llama.py b/spaces/DAMO-NLP-SG/CLEX-Chat/modeling_llama.py
deleted file mode 100644
index 840720b4a56f748f414592646ca68dbf4154e742..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/CLEX-Chat/modeling_llama.py
+++ /dev/null
@@ -1,985 +0,0 @@
-# coding=utf-8
-# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
-#
-# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
-# and OPT implementations in this library. It has been modified from its
-# original forms to accommodate minor architectural differences compared
-# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" PyTorch LLaMA model."""
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from transformers.activations import ACT2FN
-from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast
-from transformers.modeling_utils import PreTrainedModel
-from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
-from configuration_clex import CLEXLlamaConfig
-from clex_layer import LlamaCLEXScalingRotaryEmbedding
-from einops import rearrange
-import importlib.metadata
-import importlib.util
-
-
-logger = logging.get_logger(__name__)
-
-def _is_package_available(pkg_name: str, return_version: bool = False) -> Union[Tuple[bool, str], bool]:
- # Check we're not importing a "pkg_name" directory somewhere but the actual library by trying to grab the version
- package_exists = importlib.util.find_spec(pkg_name) is not None
- package_version = "N/A"
- if package_exists:
- try:
- package_version = importlib.metadata.version(pkg_name)
- package_exists = True
- except importlib.metadata.PackageNotFoundError:
- package_exists = False
- logger.info(f"Detected {pkg_name} version {package_version}")
- if return_version:
- return package_exists, package_version
- else:
- return package_exists
-
-def is_flash_attn_available():
-    # Note: with `return_version=True`, `_is_package_available` returns a (bool, str)
-    # tuple, which is always truthy, so query the plain boolean form for this check.
-    if not _is_package_available("torch"):
-        return False
-
- # Let's add an extra check to see if cuda is available
-
- return _is_package_available("flash_attn") and torch.cuda.is_available()
-
-
-
-
-
-
-_CONFIG_FOR_DOC = "CLEXLlamaConfig"
-
-
-
-
-
-# Copied from transformers.models.bart.modeling_bart._make_causal_mask
-def _make_causal_mask(
- input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
-):
- """
-    Make the causal (uni-directional) mask used for decoder self-attention.
- """
- bsz, tgt_len = input_ids_shape
- mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
- mask_cond = torch.arange(mask.size(-1), device=device)
- mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
- mask = mask.to(dtype)
-
- if past_key_values_length > 0:
- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
- return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
-
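
As a quick sanity check of what `_make_causal_mask` builds, the standalone sketch below (simplified: no batch dimension and no cached past keys) reproduces the lower-triangular pattern for a length-3 sequence; masked positions hold the most negative finite value so they vanish after softmax:

```python
# Minimal standalone sketch of the causal pattern built by _make_causal_mask:
# position i may attend to positions <= i; the rest are filled with finfo.min.
import torch

tgt_len, dtype = 3, torch.float32
mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min)
mask_cond = torch.arange(tgt_len)
mask.masked_fill_(mask_cond < (mask_cond + 1).view(tgt_len, 1), 0)

# Row i has zeros up to column i and finfo.min afterwards:
# [[0, min, min],
#  [0, 0,   min],
#  [0, 0,   0  ]]
assert torch.equal(mask == 0, torch.tril(torch.ones(tgt_len, tgt_len, dtype=torch.bool)))
```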
-
-# Copied from transformers.models.bart.modeling_bart._expand_mask
-def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
- """
- Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
- """
- bsz, src_len = mask.size()
- tgt_len = tgt_len if tgt_len is not None else src_len
-
- expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
-
- inverted_mask = 1.0 - expanded_mask
-
- return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
-
-
-class LlamaRMSNorm(nn.Module):
- def __init__(self, hidden_size, eps=1e-6):
- """
- LlamaRMSNorm is equivalent to T5LayerNorm
- """
- super().__init__()
- self.weight = nn.Parameter(torch.ones(hidden_size))
- self.variance_epsilon = eps
-
- def forward(self, hidden_states):
- variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
- hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
-
- # convert into half-precision if necessary
- if self.weight.dtype in [torch.float16, torch.bfloat16]:
- hidden_states = hidden_states.to(self.weight.dtype)
-
- return self.weight * hidden_states
-
-
-class LlamaRotaryEmbedding(torch.nn.Module):
- def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
- super().__init__()
- inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
- self.register_buffer("inv_freq", inv_freq)
-
- # Build here to make `torch.jit.trace` work.
- self.max_seq_len_cached = max_position_embeddings
- t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
- freqs = torch.einsum("i,j->ij", t, self.inv_freq)
- # Different from paper, but it uses a different permutation in order to obtain the same calculation
- emb = torch.cat((freqs, freqs), dim=-1)
- self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
- self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
-
- def forward(self, x, seq_len=None):
- # x: [bs, num_attention_heads, seq_len, head_size]
- # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
- if seq_len > self.max_seq_len_cached:
- self.max_seq_len_cached = seq_len
- t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
- freqs = torch.einsum("i,j->ij", t, self.inv_freq)
- # Different from paper, but it uses a different permutation in order to obtain the same calculation
- emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
- self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
- self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
- return (
- self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
- self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
- )
-
-
-def rotate_half(x):
- """Rotates half the hidden dims of the input."""
- x1 = x[..., : x.shape[-1] // 2]
- x2 = x[..., x.shape[-1] // 2 :]
- return torch.cat((-x2, x1), dim=-1)
-
-
-def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
- # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
- cos = cos.squeeze(1).squeeze(0) # [seq_len, dim]
- sin = sin.squeeze(1).squeeze(0) # [seq_len, dim]
- cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]
- sin = sin[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]
- q_embed = (q * cos) + (rotate_half(q) * sin)
- k_embed = (k * cos) + (rotate_half(k) * sin)
- return q_embed, k_embed
-
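
For intuition, `rotate_half` simply swaps the two halves of the last dimension and negates the new first half. A tiny self-contained check follows (the function is duplicated here so the snippet runs on its own):

```python
# Tiny concrete check of the rotate_half operation defined above: the last
# dimension is split in half and recombined as (-second_half, first_half).
import torch


def rotate_half(x):
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


x = torch.tensor([1.0, 2.0, 3.0, 4.0])
assert torch.equal(rotate_half(x), torch.tensor([-3.0, -4.0, 1.0, 2.0]))
```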
-
-class LlamaMLP(nn.Module):
- def __init__(
- self,
- hidden_size: int,
- intermediate_size: int,
- hidden_act: str,
- ):
- super().__init__()
- self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
- self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
- self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
- self.act_fn = ACT2FN[hidden_act]
-
- def forward(self, x):
- return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
-
-
-class LlamaAttention(nn.Module):
- """Multi-headed attention from 'Attention Is All You Need' paper"""
-
- def __init__(self, config: CLEXLlamaConfig):
- super().__init__()
- self.config = config
- self.hidden_size = config.hidden_size
- self.num_heads = config.num_attention_heads
- self.head_dim = self.hidden_size // self.num_heads
- self.max_position_embeddings = config.max_position_embeddings
- self.log_scale = config.log_scale
- if (self.head_dim * self.num_heads) != self.hidden_size:
- raise ValueError(
- f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
- f" and `num_heads`: {self.num_heads})."
- )
- self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
- self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)
-
- def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
- return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
-
- def flash_attn_forward(
- self,
- qkv: torch.Tensor,
- key_padding_mask: Optional[torch.Tensor] = None,
- ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- """Input shape: Batch x Time x Channel
-
- attention_mask: [bsz, q_len]
- """
- if is_flash_attn_available():
- from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func, flash_attn_qkvpacked_func, flash_attn_with_kvcache
- # from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func
- from flash_attn.bert_padding import unpad_input, pad_input
- bsz, q_len, *_ = qkv.size()
-
- if key_padding_mask is None:
- # qkv = rearrange(qkv, "b s ... -> (b s) ...")
- max_s = q_len
- cu_q_lens = torch.arange(
- 0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=qkv.device
- )
- output = flash_attn_qkvpacked_func(
- qkv, 0.0, softmax_scale=None, causal=True
- )
- else:
- nheads = qkv.shape[-2]
- x = rearrange(qkv, "b s three h d -> b s (three h d)")
- x_unpad, indices, cu_q_lens, max_s = unpad_input(x, key_padding_mask)
- x_unpad = rearrange(
- x_unpad, "nnz (three h d) -> nnz three h d", three=3, h=nheads
- )
- output_unpad = flash_attn_varlen_qkvpacked_func(
- x_unpad, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True
- )
- output = rearrange(
- pad_input(
- rearrange(output_unpad, "nnz h d -> nnz (h d)"), indices, bsz, q_len
- ),
- "b s (h d) -> b s h d",
- h=nheads,
- )
- return self.o_proj(rearrange(output, "b s h d -> b s (h d)"))
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- pack_cos_sin = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: bool = False,
- use_cache: bool = False,
- ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- bsz, q_len, _ = hidden_states.size()
-
- query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
-
- kv_seq_len = key_states.shape[-2]
-
- if past_key_value is not None:
- kv_seq_len += past_key_value[0].shape[-2]
-
- if pack_cos_sin is not None:
- cos, sin = pack_cos_sin.to(query_states.device)
- else:
- cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
- query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
-
- if past_key_value is not None:
- # reuse k, v, self_attention
- key_states = torch.cat([past_key_value[0], key_states], dim=2)
- value_states = torch.cat([past_key_value[1], value_states], dim=2)
-
- past_key_value = (key_states, value_states) if use_cache else None
-
- use_flashattn = self.config.use_flashattn and is_flash_attn_available()
-
-
-
- attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
-
- if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
- raise ValueError(
- f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
- f" {attn_weights.size()}"
- )
-
- if attention_mask is not None:
- if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
- raise ValueError(
- f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
- )
- attn_weights = attn_weights + attention_mask
- attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
-
- # upcast attention to fp32
- attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
- attn_output = torch.matmul(attn_weights, value_states)
-
- if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
- raise ValueError(
- f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
- f" {attn_output.size()}"
- )
-
- attn_output = attn_output.transpose(1, 2)
- attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
-
- attn_output = self.o_proj(attn_output)
-
- if not output_attentions:
- attn_weights = None
-
- return attn_output, attn_weights, past_key_value
-
-
-class LlamaDecoderLayer(nn.Module):
- def __init__(self, config: CLEXLlamaConfig):
- super().__init__()
- self.hidden_size = config.hidden_size
- self.self_attn = LlamaAttention(config=config)
- self.mlp = LlamaMLP(
- hidden_size=self.hidden_size,
- intermediate_size=config.intermediate_size,
- hidden_act=config.hidden_act,
- )
- self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
- self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- pack_cos_sin=None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: Optional[bool] = False,
- use_cache: Optional[bool] = False,
- ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
- attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
- (see `past_key_values`).
- past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
- """
-
- residual = hidden_states
-
- hidden_states = self.input_layernorm(hidden_states)
-
- # Self Attention
- hidden_states, self_attn_weights, present_key_value = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- position_ids=position_ids,
- pack_cos_sin=pack_cos_sin,
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
- hidden_states = residual + hidden_states
-
- # Fully Connected
- residual = hidden_states
- hidden_states = self.post_attention_layernorm(hidden_states)
- hidden_states = self.mlp(hidden_states)
- hidden_states = residual + hidden_states
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (self_attn_weights,)
-
- if use_cache:
- outputs += (present_key_value,)
-
- return outputs
-
-
-LLAMA_START_DOCSTRING = r"""
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
-
- Parameters:
- config ([`CLEXLlamaConfig`]):
- Model configuration class with all the parameters of the model. Initializing with a config file does not
- load the weights associated with the model, only the configuration. Check out the
- [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-
-@add_start_docstrings(
- "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
- LLAMA_START_DOCSTRING,
-)
-class LlamaPreTrainedModel(PreTrainedModel):
- config_class = CLEXLlamaConfig
- base_model_prefix = "model"
- supports_gradient_checkpointing = True
- _no_split_modules = ["LlamaDecoderLayer"]
- _keys_to_ignore_on_load_unexpected = [r"decoder\.version"]
- _keep_in_fp32_modules = ["model.clex_layer.proj_func.ode_up_proj", "model.clex_layer.proj_func.ode_down_proj", "model.clex_layer.inv_freq"]
-
- def _init_weights(self, module):
- std = self.config.initializer_range
- if isinstance(module, nn.Linear):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, LlamaModel):
- module.gradient_checkpointing = value
-
-
-LLAMA_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
- `past_key_values`).
-
- If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
- and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
- information on the default strategy.
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.n_positions - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
- `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
- `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
- blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
- LLAMA_START_DOCSTRING,
-)
-class LlamaModel(LlamaPreTrainedModel):
- """
- Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`]
-
- Args:
- config: CLEXLlamaConfig
- """
-
- def __init__(self, config: CLEXLlamaConfig):
- super().__init__(config)
- self.padding_idx = config.pad_token_id
- self.vocab_size = config.vocab_size
-
- self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
- self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)])
- self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
- head_dim = config.hidden_size // config.num_attention_heads
- if config.rope_scaling["type"] == "clex":
- self.clex_layer = LlamaCLEXScalingRotaryEmbedding(head_dim, config.max_position_embeddings, config.rope_scaling)
- self.gradient_checkpointing = False
- # Initialize weights and apply final processing
- self.post_init()
-
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, value):
- self.embed_tokens = value
-
- # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
- def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
- # create causal mask
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- combined_attention_mask = None
- if input_shape[-1] > 1:
- combined_attention_mask = _make_causal_mask(
- input_shape,
- inputs_embeds.dtype,
- device=inputs_embeds.device,
- past_key_values_length=past_key_values_length,
- )
-
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
- inputs_embeds.device
- )
- combined_attention_mask = (
- expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
- )
-
- return combined_attention_mask
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPast]:
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- use_cache = use_cache if use_cache is not None else self.config.use_cache
-
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # retrieve input_ids and inputs_embeds
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
- elif input_ids is not None:
- batch_size, seq_length = input_ids.shape
- elif inputs_embeds is not None:
- batch_size, seq_length, _ = inputs_embeds.shape
- else:
- raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
-
- seq_length_with_past = seq_length
- past_key_values_length = 0
-
- if past_key_values is not None:
- past_key_values_length = past_key_values[0][0].shape[2]
- seq_length_with_past = seq_length_with_past + past_key_values_length
-
- if position_ids is None:
- device = input_ids.device if input_ids is not None else inputs_embeds.device
- position_ids = torch.arange(
- past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
- )
- position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
- else:
- position_ids = position_ids.view(-1, seq_length).long()
-
- if inputs_embeds is None:
- inputs_embeds = self.embed_tokens(input_ids)
- # embed positions
- if attention_mask is None:
- attention_mask = torch.ones(
- (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
- )
- attention_mask = self._prepare_decoder_attention_mask(
- attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
- )
- # attention_mask = None
-
-
- hidden_states = inputs_embeds
-
- if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warning_once(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- # decoder layers
- all_hidden_states = () if output_hidden_states else None
- all_self_attns = () if output_attentions else None
- next_decoder_cache = () if use_cache else None
-
- pack_cos_sin = None
- if self.config.rope_scaling["type"] == "clex":
- pack_cos_sin = self.clex_layer(inputs_embeds.device, inputs_embeds.dtype, seq_length_with_past, self.training)
-
- for idx, decoder_layer in enumerate(self.layers):
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- past_key_value = past_key_values[idx] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- # None for past_key_value
- return module(*inputs, output_attentions, None)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(decoder_layer),
- hidden_states,
- attention_mask,
- position_ids,
- pack_cos_sin,
- None,
- )
- else:
- layer_outputs = decoder_layer(
- hidden_states,
- attention_mask=attention_mask,
- position_ids=position_ids,
- pack_cos_sin=pack_cos_sin,
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
-
- hidden_states = layer_outputs[0]
-
- if use_cache:
- next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
-
- if output_attentions:
- all_self_attns += (layer_outputs[1],)
-
- hidden_states = self.norm(hidden_states)
-
- # add hidden states from the last decoder layer
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- next_cache = next_decoder_cache if use_cache else None
- if not return_dict:
- return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
- return BaseModelOutputWithPast(
- last_hidden_state=hidden_states,
- past_key_values=next_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attns,
- )
-
-
-class LlamaForCausalLM(LlamaPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.model = LlamaModel(config)
-
- self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.model.embed_tokens
-
- def set_input_embeddings(self, value):
- self.model.embed_tokens = value
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def set_decoder(self, decoder):
- self.model = decoder
-
- def get_decoder(self):
- return self.model
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CausalLMOutputWithPast]:
- r"""
- Args:
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
- config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
-
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import AutoTokenizer, LlamaForCausalLM
-
- >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
- >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
-
-        >>> prompt = "Hey, are you conscious? Can you talk to me?"
- >>> inputs = tokenizer(prompt, return_tensors="pt")
-
- >>> # Generate
- >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
- >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
-        "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
- ```"""
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
- outputs = self.model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- hidden_states = outputs[0]
- logits = self.lm_head(hidden_states)
-
- loss = None
- if labels is not None:
- # Shift so that tokens < n predict n
- shift_logits = logits[..., :-1, :].contiguous()
- shift_labels = labels[..., 1:].contiguous()
- # Flatten the tokens
- loss_fct = CrossEntropyLoss()
- shift_logits = shift_logits.view(-1, self.config.vocab_size)
- shift_labels = shift_labels.view(-1)
- # Enable model parallelism
- shift_labels = shift_labels.to(shift_logits.device)
- loss = loss_fct(shift_logits, shift_labels)
- if not return_dict:
- output = (logits,) + outputs[1:]
- return (loss,) + output if loss is not None else output
- return CausalLMOutputWithPast(
- loss=loss,
- logits=logits,
- past_key_values=outputs.past_key_values,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
- def prepare_inputs_for_generation(
- self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
- ):
- if past_key_values:
- input_ids = input_ids[:, -1:]
-
- position_ids = kwargs.get("position_ids", None)
- if attention_mask is not None and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past_key_values:
- position_ids = position_ids[:, -1].unsqueeze(-1)
-
- # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
- if inputs_embeds is not None and past_key_values is None:
- model_inputs = {"inputs_embeds": inputs_embeds}
- else:
- model_inputs = {"input_ids": input_ids}
-
- model_inputs.update(
- {
- "position_ids": position_ids,
- "past_key_values": past_key_values,
- "use_cache": kwargs.get("use_cache"),
- "attention_mask": attention_mask,
- }
- )
- return model_inputs
-
- @staticmethod
- def _reorder_cache(past_key_values, beam_idx):
- reordered_past = ()
- for layer_past in past_key_values:
- reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
- return reordered_past
-
-
-@add_start_docstrings(
- """
- The LLaMa Model transformer with a sequence classification head on top (linear layer).
-
- [`LlamaForSequenceClassification`] uses the last token in order to do the classification, as other causal models
- (e.g. GPT-2) do.
-
- Since it does classification on the last token, it requires to know the position of the last token. If a
- `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
- no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
- padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
- each row of the batch).
- """,
- LLAMA_START_DOCSTRING,
-)
-class LlamaForSequenceClassification(LlamaPreTrainedModel):
- _keys_to_ignore_on_load_missing = [r"lm_head.weight"]
-
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
- self.model = LlamaModel(config)
- self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.model.embed_tokens
-
- def set_input_embeddings(self, value):
- self.model.embed_tokens = value
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
- Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
- config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
- `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- transformer_outputs = self.model(
- input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- hidden_states = transformer_outputs[0]
- logits = self.score(hidden_states)
-
- if input_ids is not None:
- batch_size = input_ids.shape[0]
- else:
- batch_size = inputs_embeds.shape[0]
-
- if self.config.pad_token_id is None and batch_size != 1:
- raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
- if self.config.pad_token_id is None:
- sequence_lengths = -1
- else:
- if input_ids is not None:
- sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device)
- else:
- sequence_lengths = -1
-
- pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
-
- loss = None
- if labels is not None:
- labels = labels.to(logits.device)
- if self.config.problem_type is None:
- if self.num_labels == 1:
- self.config.problem_type = "regression"
- elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
- self.config.problem_type = "single_label_classification"
- else:
- self.config.problem_type = "multi_label_classification"
-
- if self.config.problem_type == "regression":
- loss_fct = MSELoss()
- if self.num_labels == 1:
- loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
- else:
- loss = loss_fct(pooled_logits, labels)
- elif self.config.problem_type == "single_label_classification":
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
- elif self.config.problem_type == "multi_label_classification":
- loss_fct = BCEWithLogitsLoss()
- loss = loss_fct(pooled_logits, labels)
- if not return_dict:
- output = (pooled_logits,) + transformer_outputs[1:]
- return ((loss,) + output) if loss is not None else output
-
- return SequenceClassifierOutputWithPast(
- loss=loss,
- logits=pooled_logits,
- past_key_values=transformer_outputs.past_key_values,
- hidden_states=transformer_outputs.hidden_states,
- attentions=transformer_outputs.attentions,
- )
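
The class docstring above spells out how the classification head pools the logits of the last non-padding token. As a quick illustration, here is a self-contained sketch of that indexing step with made-up tensors and a hypothetical `pad_token_id`; it is not the deleted module itself, just the pooling idea.

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 7, 9, 0, 0],      # 3 real tokens, 2 pads
                          [3, 4, 6, 8, 2]])     # no padding
logits = torch.randn(2, 5, 3)                   # (batch, seq_len, num_labels)

# Index of the last token that is not a padding token in each row.
sequence_lengths = (input_ids != pad_token_id).sum(-1) - 1   # tensor([2, 4])
pooled_logits = logits[torch.arange(input_ids.shape[0]), sequence_lengths]
print(pooled_logits.shape)  # torch.Size([2, 3])
```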
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/dataloader_utils.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/dataloader_utils.py
deleted file mode 100644
index 3e2f574e24d2a32a18533a11492cfd481ff2cfbb..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/dataloader_utils.py
+++ /dev/null
@@ -1,162 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import time
-import random
-import torch
-from video_llama.datasets.data_utils import move_to_cuda
-from torch.utils.data import DataLoader
-
-
-class MultiIterLoader:
- """
- A simple wrapper for iterating over multiple iterators.
-
- Args:
- loaders (List[Loader]): List of Iterator loaders.
- ratios (List[float]): List of ratios to sample from each loader. If None, all loaders are sampled uniformly.
- """
-
- def __init__(self, loaders, ratios=None):
-        # assert that all loaders have a __next__ method
- for loader in loaders:
- assert hasattr(
- loader, "__next__"
- ), "Loader {} has no __next__ method.".format(loader)
-
- if ratios is None:
- ratios = [1.0] * len(loaders)
- else:
- assert len(ratios) == len(loaders)
- ratios = [float(ratio) / sum(ratios) for ratio in ratios]
-
- self.loaders = loaders
- self.ratios = ratios
-
- def __next__(self):
- # random sample from each loader by ratio
- loader_idx = random.choices(range(len(self.loaders)), self.ratios, k=1)[0]
- return next(self.loaders[loader_idx])
-
-
-class PrefetchLoader(object):
- """
- Modified from https://github.com/ChenRocks/UNITER.
-
-    Overlaps compute and CUDA data transfer
-    (copied and then modified from NVIDIA Apex).
- """
-
- def __init__(self, loader):
- self.loader = loader
- self.stream = torch.cuda.Stream()
-
- def __iter__(self):
- loader_it = iter(self.loader)
- self.preload(loader_it)
- batch = self.next(loader_it)
- while batch is not None:
- is_tuple = isinstance(batch, tuple)
- if is_tuple:
- task, batch = batch
-
- if is_tuple:
- yield task, batch
- else:
- yield batch
- batch = self.next(loader_it)
-
- def __len__(self):
- return len(self.loader)
-
- def preload(self, it):
- try:
- self.batch = next(it)
- except StopIteration:
- self.batch = None
- return
- # if record_stream() doesn't work, another option is to make sure
- # device inputs are created on the main stream.
- # self.next_input_gpu = torch.empty_like(self.next_input,
- # device='cuda')
- # self.next_target_gpu = torch.empty_like(self.next_target,
- # device='cuda')
- # Need to make sure the memory allocated for next_* is not still in use
- # by the main stream at the time we start copying to next_*:
- # self.stream.wait_stream(torch.cuda.current_stream())
- with torch.cuda.stream(self.stream):
- self.batch = move_to_cuda(self.batch)
- # more code for the alternative if record_stream() doesn't work:
- # copy_ will record the use of the pinned source tensor in this
- # side stream.
- # self.next_input_gpu.copy_(self.next_input, non_blocking=True)
- # self.next_target_gpu.copy_(self.next_target, non_blocking=True)
- # self.next_input = self.next_input_gpu
- # self.next_target = self.next_target_gpu
-
- def next(self, it):
- torch.cuda.current_stream().wait_stream(self.stream)
- batch = self.batch
- if batch is not None:
- record_cuda_stream(batch)
- self.preload(it)
- return batch
-
- def __getattr__(self, name):
- method = self.loader.__getattribute__(name)
- return method
-
-
-def record_cuda_stream(batch):
- if isinstance(batch, torch.Tensor):
- batch.record_stream(torch.cuda.current_stream())
- elif isinstance(batch, list) or isinstance(batch, tuple):
- for t in batch:
- record_cuda_stream(t)
- elif isinstance(batch, dict):
- for t in batch.values():
- record_cuda_stream(t)
- else:
- pass
-
-
-class IterLoader:
- """
- A wrapper to convert DataLoader as an infinite iterator.
-
- Modified from:
- https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/iter_based_runner.py
- """
-
- def __init__(self, dataloader: DataLoader, use_distributed: bool = False):
- self._dataloader = dataloader
- self.iter_loader = iter(self._dataloader)
- self._use_distributed = use_distributed
- self._epoch = 0
-
- @property
- def epoch(self) -> int:
- return self._epoch
-
- def __next__(self):
- try:
- data = next(self.iter_loader)
- except StopIteration:
- self._epoch += 1
- if hasattr(self._dataloader.sampler, "set_epoch") and self._use_distributed:
- self._dataloader.sampler.set_epoch(self._epoch)
- time.sleep(2) # Prevent possible deadlock during epoch transition
- self.iter_loader = iter(self._dataloader)
- data = next(self.iter_loader)
-
- return data
-
- def __iter__(self):
- return self
-
- def __len__(self):
- return len(self._dataloader)
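
MultiIterLoader's only real logic is ratio-weighted sampling over its child loaders. A rough sketch of that behaviour, using throwaway infinite iterators in place of real data loaders (names and ratios are invented for illustration):

```python
import random
from collections import Counter

# Toy stand-ins for the dataset loaders; any object with __next__ works.
loader_a = iter(lambda: "a", None)   # infinite iterator yielding "a"
loader_b = iter(lambda: "b", None)   # infinite iterator yielding "b"

ratios = [3.0, 1.0]
ratios = [r / sum(ratios) for r in ratios]           # normalize, as MultiIterLoader does

samples = []
for _ in range(10_000):
    idx = random.choices(range(2), ratios, k=1)[0]   # pick a loader by ratio
    samples.append(next([loader_a, loader_b][idx]))

print(Counter(samples))  # roughly 7500 "a" vs 2500 "b"
```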
diff --git a/spaces/Dao3/OpenArt/README.md b/spaces/Dao3/OpenArt/README.md
deleted file mode 100644
index 3a10a46ec9c8edc71c9e4e95df35f5f7a95678b1..0000000000000000000000000000000000000000
--- a/spaces/Dao3/OpenArt/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: OpenArt
-emoji: 🧘🏻♂️
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-duplicated_from: Dao3/DreamlikeArt-Diffusion-1.0
----
----
-title: DreamlikeArt-Diffusion 1.0
-emoji: 🧘🏻♂️
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
\ No newline at end of file
diff --git a/spaces/Dao3/openai-translator/app.py b/spaces/Dao3/openai-translator/app.py
deleted file mode 100644
index 8b72693b05e3b3f74ac47619a655fc097baa86f0..0000000000000000000000000000000000000000
--- a/spaces/Dao3/openai-translator/app.py
+++ /dev/null
@@ -1,255 +0,0 @@
-import os
-import openai
-import gradio as gr
-
-openai.api_key = os.environ['OPENAI_KEY']
-
-supportLanguages = [
- ["auto", "自动识别"],
- ["粤语", "粤语"],
- ["古文", "文言文"],
- ["af","Afrikaans"],
- ["ak","Akan"],
- ["sq","Albanian"],
- ["am","Amharic"],
- ["ar","Arabic"],
- ["hy","Armenian"],
- ["az","Azerbaijani"],
- ["eu","Basque"],
- ["be","Belarusian"],
- ["bem","Bemba"],
- ["bn","Bengali"],
- ["bh","Bihari"],
- ["xx-bork","Bork, bork, bork!"],
- ["bs","Bosnian"],
- ["br","Breton"],
- ["bg","Bulgarian"],
- ["km","Cambodian"],
- ["ca","Catalan"],
- ["chr","Cherokee"],
- ["ny","Chichewa"],
- ["zh-CN","中文(简体)"],
- ["zh-TW","中文 (繁体)"],
- ["co","Corsican"],
- ["hr","Croatian"],
- ["cs","Czech"],
- ["da","Danish"],
- ["nl","Dutch"],
- ["xx-elmer","Elmer Fudd"],
- ["en","English"],
- ["eo","Esperanto"],
- ["et","Estonian"],
- ["ee","Ewe"],
- ["fo","Faroese"],
- ["tl","Filipino"],
- ["fi","Finnish"],
- ["fr","French"],
- ["fy","Frisian"],
- ["gaa","Ga"],
- ["gl","Galician"],
- ["ka","Georgian"],
- ["de","German"],
- ["el","Greek"],
- ["gn","Guarani"],
- ["gu","Gujarati"],
- ["xx-hacker","Hacker"],
- ["ht","Haitian Creole"],
- ["ha","Hausa"],
- ["haw","Hawaiian"],
- ["iw","Hebrew"],
- ["hi","Hindi"],
- ["hu","Hungarian"],
- ["is","Icelandic"],
- ["ig","Igbo"],
- ["id","Indonesian"],
- ["ia","Interlingua"],
- ["ga","Irish"],
- ["it","Italian"],
- ["ja","Japanese"],
- ["jw","Javanese"],
- ["kn","Kannada"],
- ["kk","Kazakh"],
- ["rw","Kinyarwanda"],
- ["rn","Kirundi"],
- ["xx-klingon","Klingon"],
- ["kg","Kongo"],
- ["ko","Korean"],
- ["kri","Krio (Sierra Leone)"],
- ["ku","Kurdish"],
- ["ckb","Kurdish (Soranî)"],
- ["ky","Kyrgyz"],
- ["lo","Laothian"],
- ["la","Latin"],
- ["lv","Latvian"],
- ["ln","Lingala"],
- ["lt","Lithuanian"],
- ["loz","Lozi"],
- ["lg","Luganda"],
- ["ach","Luo"],
- ["mk","Macedonian"],
- ["mg","Malagasy"],
- ["ms","Malay"],
- ["ml","Malayalam"],
- ["mt","Maltese"],
- ["mi","Maori"],
- ["mr","Marathi"],
- ["mfe","Mauritian Creole"],
- ["mo","Moldavian"],
- ["mn","Mongolian"],
- ["sr-ME","Montenegrin"],
- ["ne","Nepali"],
- ["pcm","Nigerian Pidgin"],
- ["nso","Northern Sotho"],
- ["no","Norwegian"],
- ["nn","Norwegian (Nynorsk)"],
- ["oc","Occitan"],
- ["or","Oriya"],
- ["om","Oromo"],
- ["ps","Pashto"],
- ["fa","Persian"],
- ["xx-pirate","Pirate"],
- ["pl","Polish"],
- ["pt-BR","Portuguese (Brazil)"],
- ["pt-PT","Portuguese (Portugal)"],
- ["pa","Punjabi"],
- ["qu","Quechua"],
- ["ro","Romanian"],
- ["rm","Romansh"],
- ["nyn","Runyakitara"],
- ["ru","Russian"],
- ["gd","Scots Gaelic"],
- ["sr","Serbian"],
- ["sh","Serbo-Croatian"],
- ["st","Sesotho"],
- ["tn","Setswana"],
- ["crs","Seychellois Creole"],
- ["sn","Shona"],
- ["sd","Sindhi"],
- ["si","Sinhalese"],
- ["sk","Slovak"],
- ["sl","Slovenian"],
- ["so","Somali"],
- ["es","Spanish"],
- ["es-419","Spanish (Latin American)"],
- ["su","Sundanese"],
- ["sw","Swahili"],
- ["sv","Swedish"],
- ["tg","Tajik"],
- ["ta","Tamil"],
- ["tt","Tatar"],
- ["te","Telugu"],
- ["th","Thai"],
- ["ti","Tigrinya"],
- ["to","Tonga"],
- ["lua","Tshiluba"],
- ["tum","Tumbuka"],
- ["tr","Turkish"],
- ["tk","Turkmen"],
- ["tw","Twi"],
- ["ug","Uighur"],
- ["uk","Ukrainian"],
- ["ur","Urdu"],
- ["uz","Uzbek"],
- ["vi","Vietnamese"],
- ["cy","Welsh"],
- ["wo","Wolof"],
- ["xh","Xhosa"],
- ["yi","Yiddish"],
- ["yo","Yoruba"],
- ["zu","Zulu"],
-]
-prompt_template = "You are a translation engine that can only translate text and cannot interpret it. Keep the indent of the original text, only modify when you need."
-
-def submit_message(detectFrom, detectTo, user_token, prompt):
- if user_token != "":
- openai.api_key = user_token
-
- if not prompt:
-        yield ""
-        return
-
- for lc, lang in supportLanguages:
- if detectFrom == lang:
- detectFrom = lc
- if detectTo == lang:
- detectTo = lc
-
- systemInstruct = prompt_template
- translateInstruct = f"translate from {detectFrom} to {detectTo}"
- if detectFrom == "auto":
- translateInstruct = f"translate to {detectTo}"
- if detectFrom in ["古文", "zh-CN", "zh-TW"]:
- if detectTo == "zh-TW":
- translateInstruct = "翻译成繁体白话文"
- if detectTo == "zh-CN":
- translateInstruct = "翻译成简体白话文"
- if detectTo == "粤语":
- translateInstruct = "翻译成粤语白话文"
-
- if detectFrom == detectTo:
- systemInstruct = "You are a text embellisher, you can only embellish the text, don't interpret it."
- if detectTo in ["zh-CN", "zh-TW"]:
- translateInstruct = "润色此句"
- else:
- translateInstruct = "polish this sentence"
-
- prompt_msg = [
- {"role": "system", "content": systemInstruct},
- {"role": "user", "content": translateInstruct},
- {"role": "user", "content": prompt},
- ]
-
- try:
- openai_response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=prompt_msg,
- temperature=0,
- max_tokens=1000,
- top_p=1,
- stream=True,
- frequency_penalty=1,
- presence_penalty=1,
- )
-
- combined = ""
- for resp in openai_response:
- delta = resp["choices"][0]["delta"]
- if "content" in delta:
- combined += delta["content"]
- yield combined
-
- except Exception as e:
- return f"Error: {e}"
-
-css = """
- #col-container {max-width: 80%; margin-left: auto; margin-right: auto;}
- #chatbox {min-height: 400px;}
- #header {text-align: center;}
- #label {font-size: 0.8em; padding: 0.5em; margin: 0;}
- .message { font-size: 1.2em; }
- """
-
-with gr.Blocks(css=css) as demo:
-
- state = gr.State([])
-
- with gr.Column(elem_id="col-container"):
- gr.Markdown("""## 多语言翻译
- 使用OpenAI官方 API (gpt-3.5-turbo model).""", elem_id="header")
-
- with gr.Row():
- with gr.Column():
- translateFrom = gr.Dropdown(label="原文", elem_id="translate-from", multiselect=False, value="自动识别", choices=[l[1] for l in supportLanguages]).style(container=False)
- input_message = gr.Textbox(max_lines=100, show_label=False, lines=10, placeholder="Enter text and press enter", visible=True).style(container=False)
- with gr.Column():
- translateTo = gr.Dropdown(label="译文", elem_id="translate-to", multiselect=False, value="中文 (简体)", choices=[l[1] for l in supportLanguages[1:]]).style(container=False)
- output = gr.Textbox(max_lines=100, show_label=False, lines=10, label="Output", visible=True).style(container=False)
-
- btn_submit = gr.Button("急急如律令")
-
- with gr.Row():
- user_token = gr.Textbox(value='', placeholder="OpenAI API Key", type="password", label="输入你自己的OpenAI API Key翻译过程会更准确哦~.")
-
- btn_submit.click(submit_message, [translateFrom, translateTo, user_token, input_message], [output])
-
-demo.queue(concurrency_count=10)
-demo.launch(height='800px')
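
The heart of `submit_message` is the loop that accumulates streamed deltas into the translated text. The sketch below replays that loop over hand-made chunk dictionaries shaped like the legacy (pre-1.0) `openai` streaming response, so it runs without an API key; the chunk contents are invented.

```python
# Stand-in chunks shaped like the streaming responses the app iterates over;
# the real objects come from openai.ChatCompletion.create(..., stream=True).
fake_chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}}]},                      # final chunk carries no content
]

combined = ""
for chunk in fake_chunks:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        combined += delta["content"]
        print(combined)        # yield the partial text, as the Gradio generator does
```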
diff --git a/spaces/DeclK/pose/tools/dtw.py b/spaces/DeclK/pose/tools/dtw.py
deleted file mode 100644
index 0fa9495bbe752df0b5bfaba0466d558c66a18695..0000000000000000000000000000000000000000
--- a/spaces/DeclK/pose/tools/dtw.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import numpy as np
-from .utils import get_keypoint_weight
-
-
-class DTWForKeypoints:
- def __init__(self, keypoints1, keypoints2):
- self.keypoints1 = keypoints1
- self.keypoints2 = keypoints2
-
- def get_dtw_path(self):
-
- norm_kp1 = self.normalize_keypoints(self.keypoints1)
- norm_kp2 = self.normalize_keypoints(self.keypoints2)
-
- kp_weight = get_keypoint_weight()
- oks, oks_unnorm = self.object_keypoint_similarity(norm_kp1,
- norm_kp2, keypoint_weights=kp_weight)
- print(f"OKS max {oks.max():.2f} min {oks.min():.2f}")
-
- # do the DTW, and return the path
- cost_matrix = 1 - oks
- dtw_dist, dtw_path = self.dynamic_time_warp(cost_matrix)
-
- return dtw_path, oks, oks_unnorm
-
- def normalize_keypoints(self, keypoints):
- centroid = keypoints.mean(axis=1)[:, None]
- max_distance = np.max(np.sqrt(np.sum((keypoints - centroid) ** 2, axis=2)),
- axis=1) + 1e-6
-
- normalized_keypoints = (keypoints - centroid) / max_distance[:, None, None]
- return normalized_keypoints
-
- def keypoints_areas(self, keypoints):
- min_coords = np.min(keypoints, axis=1)
- max_coords = np.max(keypoints, axis=1)
- areas = np.prod(max_coords - min_coords, axis=1)
- return areas
-
- def object_keypoint_similarity(self, keypoints1,
- keypoints2,
- scale_constant=0.2,
- keypoint_weights=None):
- """ Calculate the Object Keypoint Similarity (OKS) for multiple objects,
- and add weight to each keypoint. Here we choose to normalize the points
- using centroid and max distance instead of bounding box area.
- """
- # Compute squared distances between all pairs of keypoints
- sq_diff = np.sum((keypoints1[:, None] - keypoints2) ** 2, axis=-1)
-
- oks = np.exp(-sq_diff / (2 * scale_constant ** 2))
- oks_unnorm = oks.copy()
-
- if keypoint_weights is not None:
- oks = oks * keypoint_weights
- oks = np.sum(oks, axis=-1)
- else:
- oks = np.mean(oks, axis=-1)
-
- return oks, oks_unnorm
-
- def dynamic_time_warp(self, cost_matrix, R=1000):
- """Compute the Dynamic Time Warping distance and path between two time series.
-        A Sakoe-Chiba band of radius R constrains the warping path, so the time
-        complexity is bounded by O(M * R).
- """
-
- M = len(self.keypoints1)
- N = len(self.keypoints2)
-
- # Initialize the distance matrix with infinity
- D = np.full((M, N), np.inf)
-
- # Initialize the first row and column of the matrix
- D[0, 0] = cost_matrix[0, 0]
- for i in range(1, M):
- D[i, 0] = D[i - 1, 0] + cost_matrix[i, 0]
-
- for j in range(1, N):
- D[0, j] = D[0, j - 1] + cost_matrix[0, j]
-
- # Fill the remaining elements of the matrix within the
- # Sakoe-Chiba Band using dynamic programming
- for i in range(1, M):
- for j in range(max(1, i - R), min(N, i + R + 1)):
- cost = cost_matrix[i, j]
- D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
-
- # Backtrack to find the optimal path
- path = [(M - 1, N - 1)]
- i, j = M - 1, N - 1
-        while i > 0 or j > 0:
-            if i == 0:
-                j -= 1
-            elif j == 0:
-                i -= 1
-            else:
-                # guard above keeps the argmin from wrapping around via negative indices
-                min_idx = np.argmin([D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]])
-                if min_idx == 0:
-                    i -= 1
-                elif min_idx == 1:
-                    j -= 1
-                else:
-                    i -= 1
-                    j -= 1
-            path.append((i, j))
- path.reverse()
-
- return D[-1, -1], path
-
-if __name__ == '__main__':
-
- from mmengine.fileio import load
-
- keypoints1, kp1_scores = load('tennis1.pkl')
- keypoints2, kp2_scores = load('tennis3.pkl')
-
- # Normalize the keypoints
- dtw = DTWForKeypoints(keypoints1, keypoints2)
- path = dtw.get_dtw_path()
- print(path)
\ No newline at end of file
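
For reference, the recurrence that `dynamic_time_warp` fills in is the classic one: D[i, j] = cost[i, j] + min(D[i-1, j], D[i, j-1], D[i-1, j-1]). A stripped-down version on two 1-D sequences, without the Sakoe-Chiba band or keypoint weighting, looks like this (toy inputs only):

```python
import numpy as np

def simple_dtw(a, b):
    """Plain DTW over two 1-D sequences; same recurrence as DTWForKeypoints,
    minus the band constraint."""
    cost = np.abs(a[:, None] - b[None, :])          # pairwise cost matrix
    M, N = cost.shape
    D = np.full((M, N), np.inf)
    D[0, 0] = cost[0, 0]
    for i in range(1, M):
        D[i, 0] = D[i - 1, 0] + cost[i, 0]
    for j in range(1, N):
        D[0, j] = D[0, j - 1] + cost[0, j]
    for i in range(1, M):
        for j in range(1, N):
            D[i, j] = cost[i, j] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

a = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 3.0])
print(simple_dtw(a, b))   # 0.0 -- b is just a time-stretched copy of a
```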
diff --git a/spaces/Demi2809/rvc-models/infer_pack/models_onnx.py b/spaces/Demi2809/rvc-models/infer_pack/models_onnx.py
deleted file mode 100644
index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000
--- a/spaces/Demi2809/rvc-models/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would mean the later cumsum can no longer be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-        voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spec) is no longer needed here
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t and is broadcast
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
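
SineGen above builds its excitation by integrating f0 / sampling_rate into a phase (the cumsum over `rad_values`) and stacking integer-multiple harmonics on top of the fundamental. A stripped-down NumPy sketch of that idea, with a made-up constant pitch and no noise or voicing handling:

```python
import numpy as np

sr = 40000
f0 = np.full(sr, 220.0)                 # 1 s of a constant 220 Hz pitch (illustrative)
harmonic_num = 2

phase = 2 * np.pi * np.cumsum(f0 / sr)  # phase accumulation, like cumsum(rad_values)
waves = [0.1 * np.sin(phase * (h + 1)) for h in range(harmonic_num + 1)]
excitation = np.sum(waves, axis=0)      # fundamental + 2 overtones

print(excitation.shape)                 # (40000,)
```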
diff --git a/spaces/Detomo/ai-comic-generation/src/lib/computeSha256.ts b/spaces/Detomo/ai-comic-generation/src/lib/computeSha256.ts
deleted file mode 100644
index cb6ef0604fca9653408012fd6cef2a58b6acaf47..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/lib/computeSha256.ts
+++ /dev/null
@@ -1,14 +0,0 @@
-import { createHash } from 'node:crypto'
-
-/**
- * Returns a SHA3-256 hash of the given `strContent`.
- *
- * @see https://en.wikipedia.org/wiki/SHA-3
- *
- * @param {String} strContent
- *
- * @returns {String}
- */
-export function computeSha256(strContent: string) {
- return createHash('sha3-256').update(strContent).digest('hex')
-}
\ No newline at end of file
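
The helper above is just a SHA3-256 hex digest of a string. For readers following along in Python rather than TypeScript, a rough standard-library equivalent (the function name here is chosen only for illustration):

```python
import hashlib

# Python analogue of the deleted TypeScript helper: SHA3-256 hex digest of a string.
def compute_sha256(content: str) -> str:
    return hashlib.sha3_256(content.encode("utf-8")).hexdigest()

print(len(compute_sha256("hello")))  # 64 hex characters
```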
diff --git a/spaces/DonaSmix/anime-remove-background/README.md b/spaces/DonaSmix/anime-remove-background/README.md
deleted file mode 100644
index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000
--- a/spaces/DonaSmix/anime-remove-background/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Remove Background
-emoji: 🪄🖼️
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-remove-background
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EDGAhab/Aatrox-Talking/app.py b/spaces/EDGAhab/Aatrox-Talking/app.py
deleted file mode 100644
index 34c3aa10c478fd9114ca6af63dc8103b2eb88069..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/Aatrox-Talking/app.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import gradio as gr
-import os
-os.system('cd monotonic_align && python setup.py build_ext --inplace && cd ..')
-import torch
-
-import commons
-import utils
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import text_to_sequence
-
-import IPython.display as ipd
-
-import json
-import math
-
-#new imports
-import matplotlib.pyplot as plt
-import re
-
-from torch import nn
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-
-from models import SynthesizerTrn
-import unicodedata
-import openai
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-hps = utils.get_hparams_from_file("configs/biaobei_base.json")
-
-net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model)
-_ = net_g.eval()
-
-_ = utils.load_checkpoint("G_aatrox.pth", net_g, None)
-
-def friend_chat(text, tts_input3):
- call_name = "亚托克斯"
- openai.api_key = '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'
- identity = tts_input3
- start_sequence = '\n'+str(call_name)+':'
- restart_sequence = "\nYou: "
- all_text = identity + restart_sequence
- if 1 == 1:
-            prompt0 = text  # current prompt
- if text == 'quit':
- return prompt0
- prompt = identity + prompt0 + start_sequence
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=prompt,
- temperature=0.5,
- max_tokens=1000,
- top_p=1.0,
- frequency_penalty=0.5,
- presence_penalty=0.0,
- stop=["\nYou:"]
- )
- print(response)
- return response['choices'][0]['text'].strip()
-
-def sle(text, tts_input3):
- text = friend_chat(text, tts_input3).replace('\n','。').replace(' ',',')
- return text
-
-def infer(text,tts_input3):
- stn_tst = get_text(sle(text,tts_input3), hps)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy()
- sampling_rate = 22050
- return (sampling_rate, audio)
-
-app = gr.Blocks()
-
-with app:
- with gr.Tabs():
-
- with gr.TabItem("Basic"):
-
- tts_input1 = gr.TextArea(label="输入你想跟剑魔说的话", value="我是暮光星灵佐伊,我要三天之内杀了你")
- tts_input3 = gr.TextArea(label="写上你给他的设定", value="你叫亚托克斯,俗称剑魔,世界的终结者。")
- tts_submit = gr.Button("Generate", variant="primary")
- tts_output2 = gr.Audio(label="Output")
- tts_submit.click(infer, [tts_input1,tts_input3], [tts_output2])
- app.launch()
\ No newline at end of file
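
`get_text` depends on `commons.intersperse` to insert a blank token id between symbol ids when `add_blank` is set. A plausible pure-Python version of that helper, shown only to make the preprocessing step concrete:

```python
def intersperse(seq, item):
    # Place `item` before, between, and after every element of `seq`.
    result = [item] * (len(seq) * 2 + 1)
    result[1::2] = seq
    return result

print(intersperse([7, 3, 9], 0))   # [0, 7, 0, 3, 0, 9, 0]
```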
diff --git a/spaces/Eddycrack864/Applio-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py b/spaces/Eddycrack864/Applio-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py
deleted file mode 100644
index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py
+++ /dev/null
@@ -1,253 +0,0 @@
-import os
-import subprocess
-import shutil
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from tqdm.notebook import tqdm
-from pathlib import Path
-import requests
-
-def run_script():
- def run_cmd(cmd):
- process = subprocess.run(cmd, shell=True, check=True, text=True)
- return process.stdout
-
- # Change the current directory to /content/
- os.chdir('/content/')
- print("Changing dir to /content/")
-
- # Your function to edit the file
- def edit_file(file_path):
- temp_file_path = "/tmp/temp_file.py"
- changes_made = False
- with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file:
- previous_line = ""
- second_previous_line = ""
- for line in file:
- new_line = line.replace("value=160", "value=128")
- if new_line != line:
- print("Replaced 'value=160' with 'value=128'")
- changes_made = True
- line = new_line
-
- new_line = line.replace("crepe hop length: 160", "crepe hop length: 128")
- if new_line != line:
- print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'")
- changes_made = True
- line = new_line
-
- new_line = line.replace("value=0.88", "value=0.75")
- if new_line != line:
- print("Replaced 'value=0.88' with 'value=0.75'")
- changes_made = True
- line = new_line
-
- if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line:
- new_line = line.replace("value=1,", "value=0.25,")
- if new_line != line:
- print("Replaced 'value=1,' with 'value=0.25,' based on the condition")
- changes_made = True
- line = new_line
-
- if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line:
- new_line = line.replace("value=20,", "value=500,")
- if new_line != line:
- print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH")
- changes_made = True
- line = new_line
-
- if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. Add Crepe-Tiny' in previous_line:
- if 'value="pm",' in line:
- new_line = line.replace('value="pm",', 'value="mangio-crepe",')
- if new_line != line:
- print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition")
- changes_made = True
- line = new_line
-
- new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"')
- if new_line != line:
- print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'")
- changes_made = True
- line = new_line
-
- if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line:
- if 'value=i18n("否"),' in line:
- new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
- if new_line != line:
- print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST")
- changes_made = True
- line = new_line
-
- if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line:
- if 'value=i18n("否"),' in line:
- new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),')
- if new_line != line:
- print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS")
- changes_made = True
- line = new_line
-
- temp_file.write(line)
- second_previous_line = previous_line
- previous_line = line
-
- # After finished, we replace the original file with the temp one
- import shutil
- shutil.move(temp_file_path, file_path)
-
- if changes_made:
- print("Changes made and file saved successfully.")
- else:
- print("No changes were needed.")
-
- # Define the repo path
- repo_path = '/content/Applio-RVC-Fork'
-
- def copy_all_files_in_directory(src_dir, dest_dir):
- # Iterate over all files in source directory
- for item in Path(src_dir).glob('*'):
- if item.is_file():
- # Copy each file to destination directory
- shutil.copy(item, dest_dir)
- else:
- # If it's a directory, make a new directory in the destination and copy the files recursively
- new_dest = Path(dest_dir) / item.name
- new_dest.mkdir(exist_ok=True)
- copy_all_files_in_directory(str(item), str(new_dest))
-
- def clone_and_copy_repo(repo_path):
- # New repository link
- new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/"
- # Temporary path to clone the repository
- temp_repo_path = "/content/temp_Applio-RVC-Fork"
- # New folder name
- new_folder_name = "Applio-RVC-Fork"
-
- # Clone the latest code from the new repository to a temporary location
- run_cmd(f"git clone {new_repo_link} {temp_repo_path}")
- os.chdir(temp_repo_path)
-
- run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402")
- run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4")
- run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679")
- run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8")
- run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61")
- run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de")
- run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec")
- run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902")
- run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27")
- run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb")
- run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764")
- run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8")
- run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51")
- run_cmd(f"git checkout 21f7faf57219c75e6ba837062350391a803e9ae2")
- run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7")
- run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862")
- run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9")
- run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398")
- run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2")
- run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a")
- run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b")
- run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157")
- run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742")
- run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9")
- run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9")
- run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77")
-
- # Edit the file here, before copying
- #edit_file(f"{temp_repo_path}/infer-web.py")
-
- # Copy all files from the cloned repository to the existing path
- copy_all_files_in_directory(temp_repo_path, repo_path)
- print(f"Copying all {new_folder_name} files from GitHub.")
-
- # Change working directory back to /content/
- os.chdir('/content/')
- print("Changed path back to /content/")
-
- # Remove the temporary cloned repository
- shutil.rmtree(temp_repo_path)
-
- # Call the function
- clone_and_copy_repo(repo_path)
-
- # Download the credentials file for RVC archive sheet
- os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True)
- run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json")
-
- # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case
- shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True)
- shutil.rmtree('/content/torchcrepe', ignore_errors=True)
-
- # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository
- run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git")
- shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/')
- shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder
-
- # Change the current directory to /content/Applio-RVC-Fork
- os.chdir('/content/Applio-RVC-Fork')
- os.makedirs('pretrained', exist_ok=True)
- os.makedirs('uvr5_weights', exist_ok=True)
-
-def download_file(url, filepath):
- response = requests.get(url, stream=True)
- response.raise_for_status()
-
- with open(filepath, "wb") as file:
- for chunk in response.iter_content(chunk_size=8192):
- if chunk:
- file.write(chunk)
-
-def download_pretrained_models():
- pretrained_models = {
- "pretrained": [
- "D40k.pth",
- "G40k.pth",
- "f0D40k.pth",
- "f0G40k.pth"
- ],
- "pretrained_v2": [
- "D40k.pth",
- "G40k.pth",
- "f0D40k.pth",
- "f0G40k.pth",
- "f0G48k.pth",
- "f0D48k.pth"
- ],
- "uvr5_weights": [
- "HP2-人声vocals+非人声instrumentals.pth",
- "HP5-主旋律人声vocals+其他instrumentals.pth",
- "VR-DeEchoNormal.pth",
- "VR-DeEchoDeReverb.pth",
- "VR-DeEchoAggressive.pth",
- "HP5_only_main_vocal.pth",
- "HP3_all_vocals.pth",
- "HP2_all_vocals.pth"
- ]
- }
- part2 = "I"
- base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/"
- base_path = "/content/Applio-RVC-Fork/"
- base_pathm = base_path
-
- # Calculate total number of files to download
- total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 for hubert_base.pt
-
- with tqdm(total=total_files, desc="Downloading files") as pbar:
- for folder, models in pretrained_models.items():
- folder_path = os.path.join(base_path, folder)
- os.makedirs(folder_path, exist_ok=True)
- for model in models:
- url = base_url + folder + "/" + model
- filepath = os.path.join(folder_path, model)
- download_file(url, filepath)
- pbar.update()
-
- # Download hubert_base.pt to the base path
- hubert_url = base_url + "hubert_base.pt"
- hubert_filepath = os.path.join(base_pathm, "hubert_base.pt")
- download_file(hubert_url, hubert_filepath)
- pbar.update()
-def clone_repository(run_download):
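- # Run run_script() and, when requested, download_pretrained_models() concurrently in two worker threads.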
- with ThreadPoolExecutor(max_workers=2) as executor:
- executor.submit(run_script)
- if run_download:
- executor.submit(download_pretrained_models)
diff --git a/spaces/Ekohai/bingAI/README.md b/spaces/Ekohai/bingAI/README.md
deleted file mode 100644
index 58b10e6a9f6831fb806f2d8b4f33e806d0c1b45a..0000000000000000000000000000000000000000
--- a/spaces/Ekohai/bingAI/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: BingAI
-emoji: 🐢
-colorFrom: indigo
-colorTo: indigo
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FlippFuzz/whisper-webui/src/whisper/abstractWhisperContainer.py b/spaces/FlippFuzz/whisper-webui/src/whisper/abstractWhisperContainer.py
deleted file mode 100644
index efbb51d691fc4ce35b4a11c3ae59f563649ca483..0000000000000000000000000000000000000000
--- a/spaces/FlippFuzz/whisper-webui/src/whisper/abstractWhisperContainer.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import abc
-from typing import List
-from src.config import ModelConfig
-
-from src.hooks.progressListener import ProgressListener
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-
-class AbstractWhisperCallback:
- @abc.abstractmethod
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
- Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
- segment_index: int
- The zero-based index of the audio segment being transcribed.
- prompt: str
- The prompt to condition the transcription on.
- detected_language: str
- The language detected for the audio content, if any.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- raise NotImplementedError()
-
- def _concat_prompt(self, prompt1, prompt2):
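- # Join two prompt fragments with a space, tolerating None on either side.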
- if (prompt1 is None):
- return prompt2
- elif (prompt2 is None):
- return prompt1
- else:
- return prompt1 + " " + prompt2
-
-class AbstractWhisperContainer:
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- self.model_name = model_name
- self.device = device
- self.compute_type = compute_type
- self.download_root = download_root
- self.cache = cache
-
- # Will be created on demand
- self.model = None
-
- # List of known models
- self.models = models
-
- def get_model(self):
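- # Create the model lazily; reuse a cached instance keyed by model name and device when a cache is provided.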
- if self.model is None:
-
- if (self.cache is None):
- self.model = self._create_model()
- else:
- model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '')
- self.model = self.cache.get(model_key, self._create_model)
- return self.model
-
- @abc.abstractmethod
- def _create_model(self):
- raise NotImplementedError()
-
- def ensure_downloaded(self):
- pass
-
- @abc.abstractmethod
- def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, **decodeOptions: dict) -> AbstractWhisperCallback:
- """
- Create a WhisperCallback object that can be used to transcribe audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- initial_prompt: str
- The initial prompt to use for the transcription.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- raise NotImplementedError()
-
- # This is required for multiprocessing
- def __getstate__(self):
- return {
- "model_name": self.model_name,
- "device": self.device,
- "download_root": self.download_root,
- "models": self.models,
- "compute_type": self.compute_type
- }
-
- def __setstate__(self, state):
- self.model_name = state["model_name"]
- self.device = state["device"]
- self.download_root = state["download_root"]
- self.models = state["models"]
- self.compute_type = state["compute_type"]
- self.model = None
- # Depickled objects must use the global cache
- self.cache = GLOBAL_MODEL_CACHE
\ No newline at end of file
diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GV05/text-emotion-detector/app.py b/spaces/GV05/text-emotion-detector/app.py
deleted file mode 100644
index 952984e4f361c3504a18841c85b169f0072d85de..0000000000000000000000000000000000000000
--- a/spaces/GV05/text-emotion-detector/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-model_id = "GV05/distilbert-base-uncased-finetuned-emotion"
-classifier = pipeline("text-classification", model=model_id)
-
-label_to_emotion = {
- 'LABEL_0': 'sadness',
- 'LABEL_1': 'joy',
- 'LABEL_2': 'love',
- 'LABEL_3': 'anger',
- 'LABEL_4': 'fear',
- 'LABEL_5': 'surprise',
-}
-
-def classify_emotion(text):
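- # Run the classifier with all class scores and map each LABEL_* id to its emotion name for gr.Label.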
- preds = classifier(text, return_all_scores=True)
- res = {}
- for x in preds[0]:
- res[label_to_emotion[x['label']]] = x['score']
- return res
-
-textbox = gr.Textbox()
-label = gr.Label()
-examples = ["you are not too sensitive. you are not overreacting",
- "Thinking of you keeps me awake. Dreaming of you keeps me asleep. Being with you keeps me alive."]
-
-title = "Emotion Detector"
-description = "This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset"
-
-intf = gr.Interface(fn=classify_emotion, inputs=textbox, outputs=label, examples=examples, title=title,
- description=description)
-
-intf.launch(inline=False)
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_insertion.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_insertion.py
deleted file mode 100644
index 81375f5d89d6dc0d3c766c599535f8799333825e..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_coordinated_insertion.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import pybullet as p
-
-class ColorCoordinatedInsertion(Task):
- """Insert each block into the fixture of the same color"""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "insert each block into the fixture of the same color"
- self.task_completed_desc = "done with color-coordinated-insertion."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add pallet.
- pallet_size = (0.35, 0.35, 0.01)
- pallet_pose = self.get_random_pose(env, pallet_size)
- pallet_urdf = 'pallet/pallet.urdf'
- env.add_object(pallet_urdf, pallet_pose, 'fixed')
-
- # Add fixtures and blocks.
- colors = ['red', 'blue', 'green', 'yellow']
- fixtures = []
- blocks = []
- fixture_size = (0.05, 0.05, 0.05)
- block_size = (0.04, 0.04, 0.04)
- fixture_urdf = 'insertion/fixture.urdf'
- block_urdf = 'block/block.urdf'
- for color in colors:
- # Add fixture.
- fixture_pose = self.get_random_pose(env, fixture_size)
- fixture_id = env.add_object(fixture_urdf, fixture_pose, color=utils.COLORS[color])
- fixtures.append(fixture_id)
-
- # Add block.
- block_pose = self.get_random_pose(env, block_size)
- block_id = env.add_object(block_urdf, block_pose, color=utils.COLORS[color])
- blocks.append(block_id)
-
- # Goal: each block is in the fixture of the same color.
- for i in range(len(blocks)):
- self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[p.getBasePositionAndOrientation(fixtures[i])], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / len(blocks),
- language_goal=self.lang_template)
-
- # Goal: each fixture is on the pallet.
- for i in range(len(fixtures)):
- self.add_goal(objs=[fixtures[i]], matches=np.ones((1, 1)), targ_poses=[pallet_pose], replace=False,
- rotations=True, metric='zone', params=[(pallet_pose, pallet_size)], step_max_reward=1 / len(fixtures),
- language_goal=self.lang_template)
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers_test.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers_test.py
deleted file mode 100644
index d7797b340514d9577dd77b9e9660babd0aa52b5e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers_test.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Tests for shape_helpers."""
-
-from alphafold.model.tf import shape_helpers
-import numpy as np
-import tensorflow.compat.v1 as tf
-
-
-class ShapeTest(tf.test.TestCase):
-
- def test_shape_list(self):
- """Test that shape_list can allow for reshaping to dynamic shapes."""
- a = tf.zeros([10, 4, 4, 2])
- p = tf.placeholder(tf.float32, shape=[None, None, 1, 4, 4])
- shape_dyn = shape_helpers.shape_list(p)[:2] + [4, 4]
-
- b = tf.reshape(a, shape_dyn)
- with self.session() as sess:
- out = sess.run(b, feed_dict={p: np.ones((20, 1, 1, 4, 4))})
-
- self.assertAllEqual(out.shape, (20, 1, 4, 4))
-
-
-if __name__ == '__main__':
- tf.disable_v2_behavior()
- tf.test.main()
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py
deleted file mode 100644
index 9ef6673c2d08f3c43a96cf08ce1710b19865acd4..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_r50_fpn.py
+++ /dev/null
@@ -1,196 +0,0 @@
-# model settings
-model = dict(
- type='CascadeRCNN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- rpn_head=dict(
- type='RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
- roi_head=dict(
- type='CascadeRoIHead',
- num_stages=3,
- stage_loss_weights=[1, 0.5, 0.25],
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=[
- dict(
- type='Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=True,
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
- loss_weight=1.0)),
- dict(
- type='Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.05, 0.05, 0.1, 0.1]),
- reg_class_agnostic=True,
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
- loss_weight=1.0)),
- dict(
- type='Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.033, 0.033, 0.067, 0.067]),
- reg_class_agnostic=True,
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
- ],
- mask_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- mask_head=dict(
- type='FCNMaskHead',
- num_convs=4,
- in_channels=256,
- conv_out_channels=256,
- num_classes=80,
- loss_mask=dict(
- type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=[
- dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- mask_size=28,
- pos_weight=-1,
- debug=False),
- dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.6,
- neg_iou_thr=0.6,
- min_pos_iou=0.6,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- mask_size=28,
- pos_weight=-1,
- debug=False),
- dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.7,
- min_pos_iou=0.7,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- mask_size=28,
- pos_weight=-1,
- debug=False)
- ]),
- test_cfg=dict(
- rpn=dict(
- nms_pre=1000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100,
- mask_thr_binary=0.5)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py
deleted file mode 100644
index 54c605b94aa5fc8b1ddf2267ed349c2fcd08cc9e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './ms_rcnn_x101_64x4d_fpn_1x_coco.py'
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py
deleted file mode 100644
index e4107e7f8985deaaf0287d6b7347521970babf1e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py
+++ /dev/null
@@ -1,65 +0,0 @@
-_base_ = [
- '../_base_/models/mask_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-model = dict(
- pretrained='open-mmlab://regnetx_3.2gf',
- backbone=dict(
- _delete_=True,
- type='RegNet',
- arch='regnetx_3.2gf',
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[96, 192, 432, 1008],
- out_channels=256,
- num_outs=5))
-img_norm_cfg = dict(
- # The mean and std are used in PyCls when training RegNets
- mean=[103.53, 116.28, 123.675],
- std=[57.375, 57.12, 58.395],
- to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.00005)
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
-optimizer_config = dict(
- _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/models.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/models.py
deleted file mode 100644
index 6081bb6f073e1f170db1aa322532bda747fbab80..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/dpt/models.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-import torch.nn as nn
-from torch import Tensor
-
-from .base_model import BaseModel
-from .blocks import (
- FeatureFusionBlock_custom,
- Interpolate,
- _make_encoder,
- forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
- return FeatureFusionBlock_custom(
- features,
- nn.ReLU(False),
- deconv=False,
- bn=use_bn,
- expand=False,
- align_corners=True,
- )
-
-
-class DPT(BaseModel):
- def __init__(
- self,
- head,
- features=256,
- backbone="vitb_rn50_384",
- readout="project",
- channels_last=False,
- use_bn=False,
- enable_attention_hooks=False,
- ):
-
- super(DPT, self).__init__()
-
- self.channels_last = channels_last
-
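- # Transformer block indices whose activations are tapped as features for each supported ViT backbone.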
- hooks = {
- "vitb_rn50_384": [0, 1, 8, 11],
- "vitb16_384": [2, 5, 8, 11],
- "vitl16_384": [5, 11, 17, 23],
- }
-
- # Instantiate backbone and reassemble blocks
- self.pretrained, self.scratch = _make_encoder(
- backbone,
- features,
- False, # Set to true if you want to train from scratch, uses ImageNet weights
- groups=1,
- expand=False,
- exportable=False,
- hooks=hooks[backbone],
- use_readout=readout,
- enable_attention_hooks=enable_attention_hooks,
- )
-
- self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
- self.scratch.output_conv = head
-
- def forward(self, x: Tensor) -> Tensor:
- if self.channels_last:
- # contiguous() is not in-place, so keep the returned tensor
- x = x.contiguous(memory_format=torch.channels_last)
-
- layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return out
-
-
-class DPTDepthModel(DPT):
- def __init__(
- self, path=None, non_negative=True, scale=1.0, shift=0.0, invert=False, **kwargs
- ):
- features = kwargs["features"] if "features" in kwargs else 256
-
- self.scale = scale
- self.shift = shift
- self.invert = invert
-
- head = nn.Sequential(
- nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- super().__init__(head, **kwargs)
-
- if path is not None:
- self.load(path)
-
- def forward(self, x: Tensor) -> Tensor:
- """Input x of shape [b, c, h, w]
- Return tensor of shape [b, c, h, w]
- """
- inv_depth = super().forward(x)
-
- if self.invert:
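- # Convert the predicted inverse depth to depth, clamping to avoid division by zero.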
- depth = self.scale * inv_depth + self.shift
- depth[depth < 1e-8] = 1e-8
- depth = 1.0 / depth
- return depth
- else:
- return inv_depth
-
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/__init__.py
deleted file mode 100644
index 84f78136331c5ef4975697bc6a77910bba7429bd..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/megatron_t5/__init__.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import TYPE_CHECKING
-
-from transformers.file_utils import _LazyModule, is_torch_available
-
-
-_import_structure = {
- "configuration_megatron_t5": ["T5Config"],
- "tokenization_megatron_t5": ["T5Tokenizer"],
-}
-
-if is_torch_available():
- _import_structure["modeling_megatron_t5"] = [
- "T5Model",
- "T5EncoderModel",
- "T5ForConditionalGeneration"
- ]
-
-
-if TYPE_CHECKING:
- from .configuration_megatron_t5 import T5Config
- from .tokenization_megatron_t5 import T5Tokenizer
-
- if is_torch_available():
- from .modeling_megatron_t5 import (
- T5Model,
- T5EncoderModel,
- T5ForConditionalGeneration
- )
-
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(
- __name__, globals()["__file__"], _import_structure)
diff --git a/spaces/Happys/chatbot/Dockerfile b/spaces/Happys/chatbot/Dockerfile
deleted file mode 100644
index 563b14e4c61040c222939ad2d1691912dc1c62e8..0000000000000000000000000000000000000000
--- a/spaces/Happys/chatbot/Dockerfile
+++ /dev/null
@@ -1,8 +0,0 @@
-# Pull the base image
-FROM happyclo/libre:latest
-
-# Install dependencies
-RUN cd /app/api && npm install
-
-# Command to run on container start
-CMD ["npm", "run", "backend"]
\ No newline at end of file
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/sentence_ranking.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/sentence_ranking.py
deleted file mode 100644
index bed44f34e5f8e506b6ae7ba30ddaa661bf4a7522..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/sentence_ranking.py
+++ /dev/null
@@ -1,219 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-
-import numpy as np
-from fairseq import utils
-from fairseq.data import (
- ConcatSentencesDataset,
- Dictionary,
- IdDataset,
- NestedDictionaryDataset,
- NumelDataset,
- NumSamplesDataset,
- PrependTokenDataset,
- RawLabelDataset,
- RightPadDataset,
- SortDataset,
- TruncateDataset,
- data_utils,
-)
-from fairseq.data.shorten_dataset import maybe_shorten_dataset
-from fairseq.tasks import LegacyFairseqTask, register_task
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_task("sentence_ranking")
-class SentenceRankingTask(LegacyFairseqTask):
- """
- Ranking task on multiple sentences.
-
- Args:
- dictionary (Dictionary): the dictionary for the input of the task
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- parser.add_argument("data", metavar="FILE", help="file prefix for data")
- parser.add_argument(
- "--num-classes", type=int, help="number of sentences to be ranked"
- )
- parser.add_argument(
- "--init-token",
- type=int,
- help="add token at the beginning of each batch item",
- )
- parser.add_argument(
- "--separator-token", type=int, help="add separator token between inputs"
- )
- parser.add_argument("--no-shuffle", action="store_true")
- parser.add_argument(
- "--shorten-method",
- default="none",
- choices=["none", "truncate", "random_crop"],
- help="if not none, shorten sequences that exceed --tokens-per-sample",
- )
- parser.add_argument(
- "--shorten-data-split-list",
- default="",
- help="comma-separated list of dataset splits to apply shortening to, "
- 'e.g., "train,valid" (default: all dataset splits)',
- )
- parser.add_argument(
- "--max-option-length", type=int, help="max length for each option"
- )
-
- def __init__(self, args, dictionary):
- super().__init__(args)
- self.dictionary = dictionary
-
- @classmethod
- def load_dictionary(cls, args, filename, source=True):
- """Load the dictionary from the filename
-
- Args:
- filename (str): the filename
- """
- dictionary = Dictionary.load(filename)
- dictionary.add_symbol("<mask>")
- return dictionary
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- assert (
- args.criterion == "sentence_ranking"
- ), "Must set --criterion=sentence_ranking"
-
- # load data dictionary
- data_dict = cls.load_dictionary(
- args,
- os.path.join(args.data, "input0", "dict.txt"),
- source=True,
- )
- logger.info("[input] dictionary: {} types".format(len(data_dict)))
- return SentenceRankingTask(args, data_dict)
-
- def load_dataset(self, split, combine=False, **kwargs):
- """Load a given dataset split (e.g., train, valid, test)."""
-
- def get_path(type, split):
- return os.path.join(self.args.data, type, split)
-
- def make_dataset(type, dictionary):
- split_path = get_path(type, split)
-
- dataset = data_utils.load_indexed_dataset(
- split_path,
- self.source_dictionary,
- self.args.dataset_impl,
- combine=combine,
- )
- return dataset
-
- input0 = make_dataset("input0", self.source_dictionary)
- input_options = [
- make_dataset("input{idx}".format(idx=idx + 1), self.source_dictionary)
- for idx in range(self.args.num_classes)
- ]
-
- if self.args.separator_token is not None:
- input0 = PrependTokenDataset(input0, self.args.separator_token)
-
- src_tokens = []
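- # Build one token stream per candidate: optionally prepend the init token, truncate, concatenate with the shared input0 context, then apply any configured shortening.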
- for input_option in input_options:
- if self.args.init_token is not None:
- input_option = PrependTokenDataset(input_option, self.args.init_token)
- if self.args.max_option_length is not None:
- input_option = TruncateDataset(
- input_option, self.args.max_option_length
- )
- src_token = ConcatSentencesDataset(input_option, input0)
- src_token = maybe_shorten_dataset(
- src_token,
- split,
- self.args.shorten_data_split_list,
- self.args.shorten_method,
- self.args.max_positions,
- self.args.seed,
- )
- src_tokens.append(src_token)
-
- with data_utils.numpy_seed(self.args.seed):
- shuffle = np.random.permutation(len(src_tokens[0]))
-
- dataset = {
- "id": IdDataset(),
- "nsentences": NumSamplesDataset(),
- "ntokens": NumelDataset(src_tokens[0], reduce=True),
- }
-
- for src_token_idx in range(len(src_tokens)):
- dataset.update(
- {
- "net_input{idx}".format(idx=src_token_idx + 1): {
- "src_tokens": RightPadDataset(
- src_tokens[src_token_idx],
- pad_idx=self.source_dictionary.pad(),
- ),
- "src_lengths": NumelDataset(
- src_tokens[src_token_idx], reduce=False
- ),
- }
- }
- )
-
- label_path = "{}.label".format(get_path("label", split))
- if os.path.exists(label_path):
- with open(label_path) as h:
- dataset.update(
- target=RawLabelDataset([int(x.strip()) for x in h.readlines()])
- )
-
- nested_dataset = NestedDictionaryDataset(
- dataset,
- sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])],
- )
-
- if self.args.no_shuffle:
- dataset = nested_dataset
- else:
- dataset = SortDataset(
- nested_dataset,
- # shuffle
- sort_order=[shuffle],
- )
-
- logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset)))
-
- self.datasets[split] = dataset
- return self.datasets[split]
-
- def build_model(self, args):
- from fairseq import models
-
- model = models.build_model(args, self)
-
- model.register_classification_head(
- getattr(args, "ranking_head_name", "sentence_classification_head"),
- num_classes=1,
- )
-
- return model
-
- def max_positions(self):
- return self.args.max_positions
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/trie.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/trie.py
deleted file mode 100644
index 76d331d87fd99096e8228f34f297379221941045..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/utils/trie.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from collections import defaultdict
-
-
-class TreeNode():
- def __init__(self):
- self.child = defaultdict(TreeNode)
-
-class Trie:
-
- def __init__(self, eos):
- self.root = TreeNode()
- self.eos = eos
-
- def insert(self, word):
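- # Walk the trie from the root, creating child nodes on demand (defaultdict allocates missing children).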
- cur = self.root
- for c in word:
- cur = cur.child[c]
-
- def get_next_layer(self, word):
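- # Follow word down the trie; if the path breaks only eos can follow, otherwise return the possible next tokens.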
- cur = self.root
- for c in word:
- cur = cur.child.get(c)
- if cur is None:
- return [self.eos]
- return list(cur.child.keys())
\ No newline at end of file
diff --git a/spaces/Haswanth/haswanthpalepu/app.py b/spaces/Haswanth/haswanthpalepu/app.py
deleted file mode 100644
index 9ede0bd38a0bf7b5a72db19bf134e66df1d9d1cc..0000000000000000000000000000000000000000
--- a/spaces/Haswanth/haswanthpalepu/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging..
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
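- # history is supplied by gr.ChatInterface but unused; the LLMChain's ConversationBufferMemory tracks the chat history instead.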
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/external.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/external.py
deleted file mode 100644
index 4a1365623316679dc4cb2d76a607deb505208ab5..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/external.py
+++ /dev/null
@@ -1,462 +0,0 @@
-"""This module should not be used directly as its API is subject to change. Instead,
-use the `gr.Blocks.load()` or `gr.Interface.load()` functions."""
-
-from __future__ import annotations
-
-import json
-import re
-import uuid
-import warnings
-from copy import deepcopy
-from typing import TYPE_CHECKING, Callable, Dict
-
-import requests
-
-import gradio
-from gradio import components, utils
-from gradio.exceptions import TooManyRequestsError
-from gradio.external_utils import (
- cols_to_rows,
- encode_to_base64,
- get_tabular_examples,
- get_ws_fn,
- postprocess_label,
- rows_to_cols,
- streamline_spaces_interface,
- use_websocket,
-)
-from gradio.processing_utils import to_binary
-
-if TYPE_CHECKING:
- from gradio.blocks import Blocks
- from gradio.interface import Interface
-
-
-def load_blocks_from_repo(
- name: str,
- src: str | None = None,
- api_key: str | None = None,
- alias: str | None = None,
- **kwargs,
-) -> Blocks:
- """Creates and returns a Blocks instance from a Hugging Face model or Space repo."""
- if src is None:
- # Separate the repo type (e.g. "model") from repo name (e.g. "google/vit-base-patch16-224")
- tokens = name.split("/")
- assert (
- len(tokens) > 1
- ), "Either `src` parameter must be provided, or `name` must be formatted as {src}/{repo name}"
- src = tokens[0]
- name = "/".join(tokens[1:])
-
- factory_methods: Dict[str, Callable] = {
- # for each repo type, we have a method that returns the Interface given the model name & optionally an api_key
- "huggingface": from_model,
- "models": from_model,
- "spaces": from_spaces,
- }
- assert src.lower() in factory_methods, "parameter: src must be one of {}".format(
- factory_methods.keys()
- )
-
- blocks: gradio.Blocks = factory_methods[src](name, api_key, alias, **kwargs)
- return blocks
-
-
-def from_model(model_name: str, api_key: str | None, alias: str | None, **kwargs):
- model_url = "https://huggingface.co/{}".format(model_name)
- api_url = "https://api-inference.huggingface.co/models/{}".format(model_name)
- print("Fetching model from: {}".format(model_url))
-
- headers = {"Authorization": f"Bearer {api_key}"} if api_key is not None else {}
-
- # Checking if model exists, and if so, it gets the pipeline
- response = requests.request("GET", api_url, headers=headers)
- assert (
- response.status_code == 200
- ), f"Could not find model: {model_name}. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter."
- p = response.json().get("pipeline_tag")
-
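- # Map each supported pipeline tag to its Gradio input/output components and the pre/post-processing used when querying the Inference API.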
- pipelines = {
- "audio-classification": {
- # example model: ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition
- "inputs": components.Audio(source="upload", type="filepath", label="Input"),
- "outputs": components.Label(label="Class"),
- "preprocess": lambda i: to_binary,
- "postprocess": lambda r: postprocess_label(
- {i["label"].split(", ")[0]: i["score"] for i in r.json()}
- ),
- },
- "audio-to-audio": {
- # example model: facebook/xm_transformer_sm_all-en
- "inputs": components.Audio(source="upload", type="filepath", label="Input"),
- "outputs": components.Audio(label="Output"),
- "preprocess": to_binary,
- "postprocess": encode_to_base64,
- },
- "automatic-speech-recognition": {
- # example model: facebook/wav2vec2-base-960h
- "inputs": components.Audio(source="upload", type="filepath", label="Input"),
- "outputs": components.Textbox(label="Output"),
- "preprocess": to_binary,
- "postprocess": lambda r: r.json()["text"],
- },
- "feature-extraction": {
- # example model: julien-c/distilbert-feature-extraction
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Dataframe(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0],
- },
- "fill-mask": {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: postprocess_label(
- {i["token_str"]: i["score"] for i in r.json()}
- ),
- },
- "image-classification": {
- # Example: google/vit-base-patch16-224
- "inputs": components.Image(type="filepath", label="Input Image"),
- "outputs": components.Label(label="Classification"),
- "preprocess": to_binary,
- "postprocess": lambda r: postprocess_label(
- {i["label"].split(", ")[0]: i["score"] for i in r.json()}
- ),
- },
- "question-answering": {
- # Example: deepset/xlm-roberta-base-squad2
- "inputs": [
- components.Textbox(lines=7, label="Context"),
- components.Textbox(label="Question"),
- ],
- "outputs": [
- components.Textbox(label="Answer"),
- components.Label(label="Score"),
- ],
- "preprocess": lambda c, q: {"inputs": {"context": c, "question": q}},
- "postprocess": lambda r: (r.json()["answer"], {"label": r.json()["score"]}),
- },
- "summarization": {
- # Example: facebook/bart-large-cnn
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Summary"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["summary_text"],
- },
- "text-classification": {
- # Example: distilbert-base-uncased-finetuned-sst-2-english
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: postprocess_label(
- {i["label"].split(", ")[0]: i["score"] for i in r.json()[0]}
- ),
- },
- "text-generation": {
- # Example: gpt2
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["generated_text"],
- },
- "text2text-generation": {
- # Example: valhalla/t5-small-qa-qg-hl
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Generated Text"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["generated_text"],
- },
- "translation": {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Translation"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["translation_text"],
- },
- "zero-shot-classification": {
- # Example: facebook/bart-large-mnli
- "inputs": [
- components.Textbox(label="Input"),
- components.Textbox(label="Possible class names (" "comma-separated)"),
- components.Checkbox(label="Allow multiple true classes"),
- ],
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda i, c, m: {
- "inputs": i,
- "parameters": {"candidate_labels": c, "multi_class": m},
- },
- "postprocess": lambda r: postprocess_label(
- {
- r.json()["labels"][i]: r.json()["scores"][i]
- for i in range(len(r.json()["labels"]))
- }
- ),
- },
- "sentence-similarity": {
- # Example: sentence-transformers/distilbert-base-nli-stsb-mean-tokens
- "inputs": [
- components.Textbox(
- value="That is a happy person", label="Source Sentence"
- ),
- components.Textbox(
- lines=7,
- placeholder="Separate each sentence by a newline",
- label="Sentences to compare to",
- ),
- ],
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda src, sentences: {
- "inputs": {
- "source_sentence": src,
- "sentences": [s for s in sentences.splitlines() if s != ""],
- }
- },
- "postprocess": lambda r: postprocess_label(
- {f"sentence {i}": v for i, v in enumerate(r.json())}
- ),
- },
- "text-to-speech": {
- # Example: julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Audio(label="Audio"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": encode_to_base64,
- },
- "text-to-image": {
- # example model: osanseviero/BigGAN-deep-128
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Image(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": encode_to_base64,
- },
- "token-classification": {
- # example model: huggingface-course/bert-finetuned-ner
- "inputs": components.Textbox(label="Input"),
- "outputs": components.HighlightedText(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r, # Handled as a special case in query_huggingface_api()
- },
- }
-
- if p in ["tabular-classification", "tabular-regression"]:
- example_data = get_tabular_examples(model_name)
- col_names, example_data = cols_to_rows(example_data)
- example_data = [[example_data]] if example_data else None
-
- pipelines[p] = {
- "inputs": components.Dataframe(
- label="Input Rows",
- type="pandas",
- headers=col_names,
- col_count=(len(col_names), "fixed"),
- ),
- "outputs": components.Dataframe(
- label="Predictions", type="array", headers=["prediction"]
- ),
- "preprocess": rows_to_cols,
- "postprocess": lambda r: {
- "headers": ["prediction"],
- "data": [[pred] for pred in json.loads(r.text)],
- },
- "examples": example_data,
- }
-
- if p is None or not (p in pipelines):
- raise ValueError("Unsupported pipeline type: {}".format(p))
-
- pipeline = pipelines[p]
-
- def query_huggingface_api(*params):
- # Convert the input component values into the request payload expected by the HF Inference API
- data = pipeline["preprocess"](*params)
- if isinstance(
- data, dict
- ): # HF doesn't allow additional parameters for binary files (e.g. images or audio files)
- data.update({"options": {"wait_for_model": True}})
- data = json.dumps(data)
- response = requests.request("POST", api_url, headers=headers, data=data)
- if not (response.status_code == 200):
- errors_json = response.json()
- errors, warns = "", ""
- if errors_json.get("error"):
- errors = f", Error: {errors_json.get('error')}"
- if errors_json.get("warnings"):
- warns = f", Warnings: {errors_json.get('warnings')}"
- raise ValueError(
- f"Could not complete request to HuggingFace API, Status Code: {response.status_code}"
- + errors
- + warns
- )
- if (
- p == "token-classification"
- ): # Handle as a special case since HF API only returns the named entities and we need the input as well
- ner_groups = response.json()
- input_string = params[0]
- response = utils.format_ner_list(input_string, ner_groups)
- output = pipeline["postprocess"](response)
- return output
-
- if alias is None:
- query_huggingface_api.__name__ = model_name
- else:
- query_huggingface_api.__name__ = alias
-
- interface_info = {
- "fn": query_huggingface_api,
- "inputs": pipeline["inputs"],
- "outputs": pipeline["outputs"],
- "title": model_name,
- "examples": pipeline.get("examples"),
- }
-
- kwargs = dict(interface_info, **kwargs)
- kwargs["_api_mode"] = True # So interface doesn't run pre/postprocess.
- interface = gradio.Interface(**kwargs)
- return interface
-
-
-def from_spaces(
- space_name: str, api_key: str | None, alias: str | None, **kwargs
-) -> Blocks:
- space_url = "https://huggingface.co/spaces/{}".format(space_name)
-
- print("Fetching Space from: {}".format(space_url))
-
- headers = {}
- if api_key is not None:
- headers["Authorization"] = f"Bearer {api_key}"
-
- iframe_url = (
- requests.get(
- f"https://huggingface.co/api/spaces/{space_name}/host", headers=headers
- )
- .json()
- .get("host")
- )
-
- if iframe_url is None:
- raise ValueError(
- f"Could not find Space: {space_name}. If it is a private or gated Space, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter."
- )
-
- r = requests.get(iframe_url, headers=headers)
-
- result = re.search(
- r"window.gradio_config = (.*?);[\s]*", r.text
- ) # some basic regex to extract the config
- try:
- config = json.loads(result.group(1)) # type: ignore
- except AttributeError:
- raise ValueError("Could not load the Space: {}".format(space_name))
- if "allow_flagging" in config: # Create an Interface for Gradio 2.x Spaces
- return from_spaces_interface(
- space_name, config, alias, api_key, iframe_url, **kwargs
- )
- else: # Create a Blocks for Gradio 3.x Spaces
- if kwargs:
- warnings.warn(
- "You cannot override parameters for this Space by passing in kwargs. "
- "Instead, please load the Space as a function and use it to create a "
- "Blocks or Interface locally. You may find this Guide helpful: "
- "https://gradio.app/using_blocks_like_functions/"
- )
- return from_spaces_blocks(config, api_key, iframe_url)
-
-
-def from_spaces_blocks(config: Dict, api_key: str | None, iframe_url: str) -> Blocks:
- api_url = "{}/api/predict/".format(iframe_url)
-
- headers = {"Content-Type": "application/json"}
- if api_key is not None:
- headers["Authorization"] = f"Bearer {api_key}"
- ws_url = "{}/queue/join".format(iframe_url).replace("https", "wss")
-
- ws_fn = get_ws_fn(ws_url, headers)
-
- fns = []
- for d, dependency in enumerate(config["dependencies"]):
- if dependency["backend_fn"]:
-
- def get_fn(outputs, fn_index, use_ws):
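- # Build a proxy function for this dependency: send the inputs to the Space via websocket or HTTP POST and unwrap the returned data.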
- def fn(*data):
- data = json.dumps({"data": data, "fn_index": fn_index})
- hash_data = json.dumps(
- {"fn_index": fn_index, "session_hash": str(uuid.uuid4())}
- )
- if use_ws:
- result = utils.synchronize_async(ws_fn, data, hash_data)
- output = result["data"]
- else:
- response = requests.post(api_url, headers=headers, data=data)
- result = json.loads(response.content.decode("utf-8"))
- try:
- output = result["data"]
- except KeyError:
- if "error" in result and "429" in result["error"]:
- raise TooManyRequestsError(
- "Too many requests to the Hugging Face API"
- )
- raise KeyError(
- f"Could not find 'data' key in response from external Space. Response received: {result}"
- )
- if len(outputs) == 1:
- output = output[0]
- return output
-
- return fn
-
- fn = get_fn(
- deepcopy(dependency["outputs"]), d, use_websocket(config, dependency)
- )
- fns.append(fn)
- else:
- fns.append(None)
- return gradio.Blocks.from_config(config, fns, iframe_url)
-
-
-def from_spaces_interface(
- model_name: str,
- config: Dict,
- alias: str | None,
- api_key: str | None,
- iframe_url: str,
- **kwargs,
-) -> Interface:
-
- config = streamline_spaces_interface(config)
- api_url = "{}/api/predict/".format(iframe_url)
- headers = {"Content-Type": "application/json"}
- if api_key is not None:
- headers["Authorization"] = f"Bearer {api_key}"
-
- # The function should call the API with preprocessed data
- def fn(*data):
- data = json.dumps({"data": data})
- response = requests.post(api_url, headers=headers, data=data)
- result = json.loads(response.content.decode("utf-8"))
- try:
- output = result["data"]
- except KeyError:
- if "error" in result and "429" in result["error"]:
- raise TooManyRequestsError("Too many requests to the Hugging Face API")
- raise KeyError(
- f"Could not find 'data' key in response from external Space. Response received: {result}"
- )
- if (
- len(config["outputs"]) == 1
- ): # if the fn is supposed to return a single value, pop it
- output = output[0]
- if len(config["outputs"]) == 1 and isinstance(
- output, list
- ): # Needed to support Output.Image() returning bounding boxes as well (TODO: handle different versions of gradio since they have slightly different APIs)
- output = output[0]
- return output
-
- fn.__name__ = alias if (alias is not None) else model_name
- config["fn"] = fn
-
- kwargs = dict(config, **kwargs)
- kwargs["_api_mode"] = True
- interface = gradio.Interface(**kwargs)
- return interface
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/networking.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/networking.py
deleted file mode 100644
index 7e0aa3c20a4393013e05b0e69b1da43fea58ebdd..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/networking.py
+++ /dev/null
@@ -1,185 +0,0 @@
-"""
-Defines helper methods useful for setting up ports, launching servers, and
-creating tunnels.
-"""
-from __future__ import annotations
-
-import os
-import socket
-import threading
-import time
-import warnings
-from typing import TYPE_CHECKING, Tuple
-
-import requests
-import uvicorn
-
-from gradio.routes import App
-from gradio.tunneling import Tunnel
-
-if TYPE_CHECKING: # Only import for type checking (to avoid circular imports).
- from gradio.blocks import Blocks
-
-# By default, the local server will try to open on localhost, port 7860.
-# If that is not available, then it will try 7861, 7862, ... 7959.
-INITIAL_PORT_VALUE = int(os.getenv("GRADIO_SERVER_PORT", "7860"))
-TRY_NUM_PORTS = int(os.getenv("GRADIO_NUM_PORTS", "100"))
-LOCALHOST_NAME = os.getenv("GRADIO_SERVER_NAME", "127.0.0.1")
-GRADIO_API_SERVER = "https://api.gradio.app/v2/tunnel-request"
-
-
-class Server(uvicorn.Server):
- def install_signal_handlers(self):
- pass
-
- def run_in_thread(self):
- self.thread = threading.Thread(target=self.run, daemon=True)
- self.thread.start()
- while not self.started:
- time.sleep(1e-3)
-
- def close(self):
- self.should_exit = True
- self.thread.join()
-
-
-def get_first_available_port(initial: int, final: int) -> int:
- """
- Gets the first open port in a specified range of port numbers
- Parameters:
- initial: the initial value in the range of port numbers
- final: final (exclusive) value in the range of port numbers, should be greater than `initial`
- Returns:
- port: the first open port in the range
- """
- for port in range(initial, final):
- try:
- s = socket.socket() # create a socket object
- s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
- s.bind((LOCALHOST_NAME, port)) # Bind to the port
- s.close()
- return port
- except OSError:
- pass
- raise OSError(
- "All ports from {} to {} are in use. Please close a port.".format(
- initial, final - 1
- )
- )
-
-
-def configure_app(app: App, blocks: Blocks) -> App:
- auth = blocks.auth
- if auth is not None:
- if not callable(auth):
- app.auth = {account[0]: account[1] for account in auth}
- else:
- app.auth = auth
- else:
- app.auth = None
- app.blocks = blocks
- app.cwd = os.getcwd()
- app.favicon_path = blocks.favicon_path
- app.tokens = {}
- return app
-
-
-def start_server(
- blocks: Blocks,
- server_name: str | None = None,
- server_port: int | None = None,
- ssl_keyfile: str | None = None,
- ssl_certfile: str | None = None,
- ssl_keyfile_password: str | None = None,
-) -> Tuple[str, int, str, App, Server]:
- """Launches a local server running the provided Interface
- Parameters:
- blocks: The Blocks object to run on the server
- server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME.
- server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT.
- auth: If provided, username and password (or list of username-password tuples) required to access the Blocks. Can also provide function that takes username and password and returns True if valid login.
- ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https.
- ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided.
- ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https.
- Returns:
- port: the port number the server is running on
- path_to_local_server: the complete address that the local server can be accessed at
- app: the FastAPI app object
- server: the server object that is a subclass of uvicorn.Server (used to close the server)
- """
- server_name = server_name or LOCALHOST_NAME
- # if port is not specified, search for first available port
- if server_port is None:
- port = get_first_available_port(
- INITIAL_PORT_VALUE, INITIAL_PORT_VALUE + TRY_NUM_PORTS
- )
- else:
- try:
- s = socket.socket()
- s.bind((LOCALHOST_NAME, server_port))
- s.close()
- except OSError:
- raise OSError(
- "Port {} is in use. If a gradio.Blocks is running on the port, you can close() it or gradio.close_all().".format(
- server_port
- )
- )
- port = server_port
-
- url_host_name = "localhost" if server_name == "0.0.0.0" else server_name
-
- if ssl_keyfile is not None:
- if ssl_certfile is None:
- raise ValueError(
- "ssl_certfile must be provided if ssl_keyfile is provided."
- )
- path_to_local_server = "https://{}:{}/".format(url_host_name, port)
- else:
- path_to_local_server = "http://{}:{}/".format(url_host_name, port)
-
- app = App.create_app(blocks)
-
- if blocks.save_to is not None: # Used for selenium tests
- blocks.save_to["port"] = port
- config = uvicorn.Config(
- app=app,
- port=port,
- host=server_name,
- log_level="warning",
- ssl_keyfile=ssl_keyfile,
- ssl_certfile=ssl_certfile,
- ssl_keyfile_password=ssl_keyfile_password,
- ws_max_size=1024 * 1024 * 1024, # Setting max websocket size to be 1 GB
- )
- server = Server(config=config)
- server.run_in_thread()
- return server_name, port, path_to_local_server, app, server
-
-
-def setup_tunnel(local_host: str, local_port: int) -> str:
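- # Request a tunnel endpoint from the Gradio API server and forward the local server through it to obtain a shareable address.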
- response = requests.get(GRADIO_API_SERVER)
- if response and response.status_code == 200:
- try:
- payload = response.json()[0]
- remote_host, remote_port = payload["host"], int(payload["port"])
- tunnel = Tunnel(remote_host, remote_port, local_host, local_port)
- address = tunnel.start_tunnel()
- return address
- except Exception as e:
- raise RuntimeError(str(e))
- else:
- raise RuntimeError("Could not get share link from Gradio API Server.")
-
-
-def url_ok(url: str) -> bool:
- try:
- for _ in range(5):
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore")
- r = requests.head(url, timeout=3, verify=False)
- if r.status_code in (200, 401, 302): # 401 or 302 if auth is set
- return True
- time.sleep(0.500)
- except (ConnectionError, requests.exceptions.ConnectionError):
- return False
- return False
diff --git a/spaces/Hoodady/3DFuse/ldm/modules/diffusionmodules/util.py b/spaces/Hoodady/3DFuse/ldm/modules/diffusionmodules/util.py
deleted file mode 100644
index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000
--- a/spaces/Hoodady/3DFuse/ldm/modules/diffusionmodules/util.py
+++ /dev/null
@@ -1,270 +0,0 @@
-# adopted from
-# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
-# and
-# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-# and
-# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
-#
-# thanks!
-
-
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-from ldm.util import instantiate_from_config
-
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
- # according to the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
-
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
- ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(),
- "dtype": torch.get_autocast_gpu_dtype(),
- "cache_enabled": torch.is_autocast_cache_enabled()}
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad(), \
- torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
\ No newline at end of file
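A minimal usage sketch for two of the helpers above, assuming this module is importable as `ldm.modules.diffusionmodules.util` (the package layout of this Space); the schedule length and embedding width are illustrative:

    import torch
    from ldm.modules.diffusionmodules.util import make_beta_schedule, timestep_embedding

    # Linear beta schedule and the cumulative alphas used by DDPM/DDIM samplers.
    betas = make_beta_schedule("linear", n_timestep=1000)               # numpy array, shape (1000,)
    alphas_cumprod = torch.cumprod(1.0 - torch.from_numpy(betas), dim=0)

    # Sinusoidal embeddings for a batch of four random timesteps.
    t = torch.randint(0, 1000, (4,))
    emb = timestep_embedding(t, dim=128)                                # shape (4, 128)
    print(betas.shape, alphas_cumprod.shape, emb.shape)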
diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/discriminator/model.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/discriminator/model.py
deleted file mode 100644
index 2aaa3110d0a7bcd05de7eca1e45101589ca5af05..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/discriminator/model.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import functools
-import torch.nn as nn
-
-
-from taming.modules.util import ActNorm
-
-
-def weights_init(m):
- classname = m.__class__.__name__
- if classname.find('Conv') != -1:
- nn.init.normal_(m.weight.data, 0.0, 0.02)
- elif classname.find('BatchNorm') != -1:
- nn.init.normal_(m.weight.data, 1.0, 0.02)
- nn.init.constant_(m.bias.data, 0)
-
-
-class NLayerDiscriminator(nn.Module):
- """Defines a PatchGAN discriminator as in Pix2Pix
- --> see https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py
- """
- def __init__(self, input_nc=3, ndf=64, n_layers=3, use_actnorm=False):
- """Construct a PatchGAN discriminator
- Parameters:
- input_nc (int) -- the number of channels in input images
- ndf (int) -- the number of filters in the last conv layer
- n_layers (int) -- the number of conv layers in the discriminator
- norm_layer -- normalization layer
- """
- super(NLayerDiscriminator, self).__init__()
- if not use_actnorm:
- norm_layer = nn.BatchNorm2d
- else:
- norm_layer = ActNorm
- if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters
- use_bias = norm_layer.func != nn.BatchNorm2d
- else:
- use_bias = norm_layer != nn.BatchNorm2d
-
- kw = 4
- padw = 1
- sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)]
- nf_mult = 1
- nf_mult_prev = 1
- for n in range(1, n_layers): # gradually increase the number of filters
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- nf_mult_prev = nf_mult
- nf_mult = min(2 ** n_layers, 8)
- sequence += [
- nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias),
- norm_layer(ndf * nf_mult),
- nn.LeakyReLU(0.2, True)
- ]
-
- sequence += [
- nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map
- self.main = nn.Sequential(*sequence)
-
- def forward(self, input):
- """Standard forward."""
- return self.main(input)
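A minimal sketch of how the PatchGAN discriminator above is typically instantiated (the import path follows this file's location; batch and image sizes are illustrative):

    import torch
    from taming.modules.discriminator.model import NLayerDiscriminator, weights_init

    disc = NLayerDiscriminator(input_nc=3, ndf=64, n_layers=3).apply(weights_init)
    dummy = torch.randn(2, 3, 256, 256)   # stand-in for a batch of images
    logits = disc(dummy)                  # per-patch real/fake logits
    print(logits.shape)                   # torch.Size([2, 1, 30, 30])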
diff --git a/spaces/ICML2022/OFA/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py b/spaces/ICML2022/OFA/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py
deleted file mode 100644
index fb299daecbc2b15fb66555bbfb8d1d983e481518..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class LogSumExpMoE(torch.autograd.Function):
- """Standard LogSumExp forward pass, but use *posterior* for the backward.
-
- See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade"
- (Shen et al., 2019) <https://arxiv.org/abs/1902.07816>`_.
- """
-
- @staticmethod
- def forward(ctx, logp, posterior, dim=-1):
- ctx.save_for_backward(posterior)
- ctx.dim = dim
- return torch.logsumexp(logp, dim=dim)
-
- @staticmethod
- def backward(ctx, grad_output):
- (posterior,) = ctx.saved_tensors
- grad_logp = grad_output.unsqueeze(ctx.dim) * posterior
- return grad_logp, None, None
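A small sketch of what the custom autograd function above does: the forward pass equals `torch.logsumexp`, while the backward routes gradients through the supplied posterior (the import path assumes this repo's layout):

    import torch
    from examples.translation_moe.translation_moe_src.logsumexp_moe import LogSumExpMoE

    logp = torch.randn(8, 4, requires_grad=True)       # per-expert log-probabilities
    posterior = torch.softmax(logp.detach(), dim=-1)   # expert responsibilities, treated as constant

    out = LogSumExpMoE.apply(logp, posterior, -1)
    assert torch.allclose(out, torch.logsumexp(logp, dim=-1))

    out.sum().backward()
    print(logp.grad.shape)                             # (8, 4): grad_output broadcast * posterior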
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/benchmark/dummy_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/benchmark/dummy_dataset.py
deleted file mode 100644
index 2f051754af55966e26850e94c121e0ff439bfd28..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/benchmark/dummy_dataset.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numpy as np
-from fairseq.data import FairseqDataset
-
-
-class DummyDataset(FairseqDataset):
- def __init__(self, batch, num_items, item_size):
- super().__init__()
- self.batch = batch
- self.num_items = num_items
- self.item_size = item_size
-
- def __getitem__(self, index):
- return index
-
- def __len__(self):
- return self.num_items
-
- def collater(self, samples):
- return self.batch
-
- @property
- def sizes(self):
- return np.array([self.item_size] * self.num_items)
-
- def num_tokens(self, index):
- return self.item_size
-
- def size(self, index):
- return self.item_size
-
- def ordered_indices(self):
- return np.arange(self.num_items)
-
- @property
- def supports_prefetch(self):
- return False
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/strip_token_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/strip_token_dataset.py
deleted file mode 100644
index cae39ba4d2f8106398eccd7eb0cf5c2194ec0db5..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/strip_token_dataset.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class StripTokenDataset(BaseWrapperDataset):
- def __init__(self, dataset, id_to_strip):
- super().__init__(dataset)
- self.id_to_strip = id_to_strip
-
- def __getitem__(self, index):
- item = self.dataset[index]
- while len(item) > 0 and item[-1] == self.id_to_strip:
- item = item[:-1]
- while len(item) > 0 and item[0] == self.id_to_strip:
- item = item[1:]
- return item
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/matlab_functions.py b/spaces/Iceclear/StableSR/StableSR/basicsr/utils/matlab_functions.py
deleted file mode 100644
index a201f79aaf030cdba710dd97c28af1b29a93ed2a..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/matlab_functions.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import math
-import numpy as np
-import torch
-
-
-def cubic(x):
- """cubic function used for calculate_weights_indices."""
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5 * absx3 - 2.5 * absx2 + 1) * (
- (absx <= 1).type_as(absx)) + (-0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2) * (((absx > 1) *
- (absx <= 2)).type_as(absx))
-
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- """Calculate weights and indices, used for imresize function.
-
- Args:
- in_length (int): Input length.
- out_length (int): Output length.
- scale (float): Scale factor.
- kernel (str): Kernel type. Only 'cubic' is supported by imresize.
- kernel_width (int): Kernel width.
- antialiasing (bool): Whether to apply anti-aliasing when downsampling.
- """
-
- if (scale < 1) and antialiasing:
- # Use a modified kernel (larger kernel width) to simultaneously
- # interpolate and antialias
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5 + scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- p = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, p) + torch.linspace(0, p - 1, p).view(1, p).expand(
- out_length, p)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, p) - indices
-
- # apply cubic kernel
- if (scale < 1) and antialiasing:
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
-
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, p)
-
- # If a column in weights is all zero, get rid of it. only consider the
- # first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, p - 2)
- weights = weights.narrow(1, 1, p - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, p - 2)
- weights = weights.narrow(1, 0, p - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-@torch.no_grad()
-def imresize(img, scale, antialiasing=True):
- """imresize function same as MATLAB.
-
- It now only supports bicubic.
- The same scale applies for both height and width.
-
- Args:
- img (Tensor | Numpy array):
- Tensor: Input image with shape (c, h, w), [0, 1] range.
- Numpy: Input image with shape (h, w, c), [0, 1] range.
- scale (float): Scale factor. The same scale applies for both height
- and width.
- antialiasing (bool): Whether to apply anti-aliasing when downsampling.
- Default: True.
-
- Returns:
- Tensor: Output image with shape (c, h, w), [0, 1] range, w/o round.
- """
- squeeze_flag = False
- if type(img).__module__ == np.__name__: # numpy type
- numpy_type = True
- if img.ndim == 2:
- img = img[:, :, None]
- squeeze_flag = True
- img = torch.from_numpy(img.transpose(2, 0, 1)).float()
- else:
- numpy_type = False
- if img.ndim == 2:
- img = img.unsqueeze(0)
- squeeze_flag = True
-
- in_c, in_h, in_w = img.size()
- out_h, out_w = math.ceil(in_h * scale), math.ceil(in_w * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # get weights and indices
- weights_h, indices_h, sym_len_hs, sym_len_he = calculate_weights_indices(in_h, out_h, scale, kernel, kernel_width,
- antialiasing)
- weights_w, indices_w, sym_len_ws, sym_len_we = calculate_weights_indices(in_w, out_w, scale, kernel, kernel_width,
- antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_c, in_h + sym_len_hs + sym_len_he, in_w)
- img_aug.narrow(1, sym_len_hs, in_h).copy_(img)
-
- sym_patch = img[:, :sym_len_hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_he:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_hs + in_h, sym_len_he).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_c, out_h, in_w)
- kernel_width = weights_h.size(1)
- for i in range(out_h):
- idx = int(indices_h[i][0])
- for j in range(in_c):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_h[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_c, out_h, in_w + sym_len_ws + sym_len_we)
- out_1_aug.narrow(2, sym_len_ws, in_w).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_we:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_ws + in_w, sym_len_we).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_c, out_h, out_w)
- kernel_width = weights_w.size(1)
- for i in range(out_w):
- idx = int(indices_w[i][0])
- for j in range(in_c):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_w[i])
-
- if squeeze_flag:
- out_2 = out_2.squeeze(0)
- if numpy_type:
- out_2 = out_2.numpy()
- if not squeeze_flag:
- out_2 = out_2.transpose(1, 2, 0)
-
- return out_2
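A minimal usage sketch for the MATLAB-style bicubic resize above; it accepts both numpy HWC arrays and torch CHW tensors in the [0, 1] range (the import path follows this file's location):

    import numpy as np
    import torch
    from basicsr.utils.matlab_functions import imresize

    img_np = np.random.rand(64, 48, 3).astype(np.float32)  # (h, w, c) in [0, 1]
    small = imresize(img_np, scale=0.5)                     # -> (32, 24, 3) numpy array

    img_t = torch.rand(3, 64, 48)                           # (c, h, w) in [0, 1]
    big = imresize(img_t, scale=2.0)                        # -> (3, 128, 96) tensor
    print(small.shape, big.shape)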
diff --git a/spaces/Iceclear/StableSR/StableSR/clip/simple_tokenizer.py b/spaces/Iceclear/StableSR/StableSR/clip/simple_tokenizer.py
deleted file mode 100644
index 0a66286b7d5019c6e221932a813768038f839c91..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/clip/simple_tokenizer.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import gzip
-import html
-import os
-from functools import lru_cache
-
-import ftfy
-import regex as re
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns a list of utf-8 bytes and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r'\s+', ' ', text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
- merges = merges[1:49152-256-2+1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
- vocab = vocab + [v+'</w>' for v in vocab]
- for merge in merges:
- vocab.append(''.join(merge))
- vocab.extend(['<|startoftext|>', '<|endoftext|>'])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
- self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token[:-1]) + ( token[-1] + '</w>',)
- pairs = get_pairs(word)
-
- if not pairs:
- return token+'</w>'
-
- while True:
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except:
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
- new_word.append(first+second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = ' '.join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
- return bpe_tokens
-
- def decode(self, tokens):
- text = ''.join([self.decoder[token] for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
- return text
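A minimal round-trip sketch for the BPE tokenizer above; it needs `ftfy` and `regex` installed and expects `bpe_simple_vocab_16e6.txt.gz` next to the module (see `default_bpe()`), with the import path following this file's location:

    from clip.simple_tokenizer import SimpleTokenizer

    tok = SimpleTokenizer()
    ids = tok.encode("a photo of a cat")
    print(ids)               # BPE token ids (no start/end tokens are added here)
    print(tok.decode(ids))   # "a photo of a cat " -- the </w> markers become spaces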
diff --git a/spaces/Ikaros521/moe-tts/text/ngu_dialect.py b/spaces/Ikaros521/moe-tts/text/ngu_dialect.py
deleted file mode 100644
index 69d0ce6fe5a989843ee059a71ccab793f20f9176..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/moe-tts/text/ngu_dialect.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import re
-import opencc
-
-
-dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou',
- 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing',
- 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang',
- 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan',
- 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen',
- 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'}
-
-converters = {}
-
-for dialect in dialects.values():
- try:
- converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect)
- except:
- pass
-
-
-def ngu_dialect_to_ipa(text, dialect):
- dialect = dialects[dialect]
- text = converters[dialect].convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[、；：]', '，', text)
- text = re.sub(r'\s*，\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
- text = re.sub(r'\s*？\s*', '? ', text)
- text = re.sub(r'\s*！\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/Ilean/pdfGPTv2/README.md b/spaces/Ilean/pdfGPTv2/README.md
deleted file mode 100644
index e5365d776238cb90c79278058b8c622388b22fa1..0000000000000000000000000000000000000000
--- a/spaces/Ilean/pdfGPTv2/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: PdfGPT
-emoji: 🏢
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-license: cc-by-4.0
-duplicated_from: Ilean/pdfGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/IvaElen/find_my_pic/get_similiarty.py b/spaces/IvaElen/find_my_pic/get_similiarty.py
deleted file mode 100644
index c7a8824b91a484ed0c1049152bb74fc62ccf1518..0000000000000000000000000000000000000000
--- a/spaces/IvaElen/find_my_pic/get_similiarty.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import torchvision.datasets as datasets
-import numpy as np
-import clip
-import torch
-def get_similiarity(prompt, model_resnet, model_vit, top_k=3):
- device = "cuda" if torch.cuda.is_available() else "cpu"
- data_dir = 'sample/sample/data'
- image_arr = np.loadtxt("embeddings.csv", delimiter=",")
- raw_dataset = datasets.ImageFolder(data_dir)
- # get the list of all images
- # create transformer-readable tokens
- inputs = clip.tokenize(prompt).to(device)
- text_emb = model_resnet.encode_text(inputs)
- text_emb = text_emb.cpu().detach().numpy()
- scores = np.dot(text_emb, image_arr.T)
- # score_vit
- # get the top k indices for most similar vecs
- idx = np.argsort(-scores[0])[:top_k]
- image_files = []
- for i in idx:
- image_files.append(raw_dataset.imgs[i][0])
-
- image_arr_vit = np.loadtxt('embeddings_vit.csv', delimiter=",")
- inputs_vit = clip.tokenize(prompt).to(device)
- text_emb_vit = model_vit.encode_text(inputs_vit)
- text_emb_vit = text_emb_vit.cpu().detach().numpy()
- scores_vit = np.dot(text_emb_vit, image_arr_vit.T)
- idx_vit = np.argsort(-scores_vit[0])[:top_k]
- image_files_vit = []
- for i in idx_vit:
- image_files_vit.append(raw_dataset.imgs[i][0])
-
- return image_files, image_files_vit
-# def get_text_enc(input_text: str):
-# text = clip.tokenize([input_text]).to(device)
-# text_features = model.encode_text(text).cpu()
-# text_features = text_features.cpu().detach().numpy()
-# return text_features
\ No newline at end of file
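A hedged usage sketch for the retrieval helper above: it assumes the precomputed `embeddings.csv` / `embeddings_vit.csv` files and the `sample/sample/data` image folder exist, and the two CLIP checkpoints chosen below are illustrative:

    import clip
    import torch
    from get_similiarty import get_similiarity

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_resnet, _ = clip.load("RN50", device=device)
    model_vit, _ = clip.load("ViT-B/32", device=device)

    files_resnet, files_vit = get_similiarity(
        "a dog playing in the snow", model_resnet, model_vit, top_k=3
    )
    print(files_resnet, files_vit)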
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py
deleted file mode 100644
index a688a52a7a6ec65a5774dd6c6fe1ce1e9d66acab..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py
+++ /dev/null
@@ -1,687 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import torch
-
-import PIL
-from diffusers.utils import is_accelerate_available
-from packaging import version
-from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-from ...configuration_utils import FrozenDict
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...pipeline_utils import DiffusionPipeline
-from ...schedulers import DDIMScheduler
-from ...utils import PIL_INTERPOLATION, deprecate, logging
-from . import StableDiffusionPipelineOutput
-from .safety_checker import StableDiffusionSafetyChecker
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def preprocess(image):
- w, h = image.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = torch.from_numpy(image)
- return 2.0 * image - 1.0
-
-
-def posterior_sample(scheduler, latents, timestep, clean_latents, generator, eta):
- # 1. get previous step value (=t-1)
- prev_timestep = timestep - scheduler.config.num_train_timesteps // scheduler.num_inference_steps
-
- if prev_timestep <= 0:
- return clean_latents
-
- # 2. compute alphas, betas
- alpha_prod_t = scheduler.alphas_cumprod[timestep]
- alpha_prod_t_prev = (
- scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else scheduler.final_alpha_cumprod
- )
-
- variance = scheduler._get_variance(timestep, prev_timestep)
- std_dev_t = eta * variance ** (0.5)
-
- # direction pointing to x_t
- e_t = (latents - alpha_prod_t ** (0.5) * clean_latents) / (1 - alpha_prod_t) ** (0.5)
- dir_xt = (1.0 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * e_t
- noise = std_dev_t * torch.randn(
- clean_latents.shape, dtype=clean_latents.dtype, device=clean_latents.device, generator=generator
- )
- prev_latents = alpha_prod_t_prev ** (0.5) * clean_latents + dir_xt + noise
-
- return prev_latents
-
-
-def compute_noise(scheduler, prev_latents, latents, timestep, noise_pred, eta):
- # 1. get previous step value (=t-1)
- prev_timestep = timestep - scheduler.config.num_train_timesteps // scheduler.num_inference_steps
-
- # 2. compute alphas, betas
- alpha_prod_t = scheduler.alphas_cumprod[timestep]
- alpha_prod_t_prev = (
- scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else scheduler.final_alpha_cumprod
- )
-
- beta_prod_t = 1 - alpha_prod_t
-
- # 3. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
-
- # 4. Clip "predicted x_0"
- if scheduler.config.clip_sample:
- pred_original_sample = torch.clamp(pred_original_sample, -1, 1)
-
- # 5. compute variance: "sigma_t(η)" -> see formula (16)
- # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1)
- variance = scheduler._get_variance(timestep, prev_timestep)
- std_dev_t = eta * variance ** (0.5)
-
- # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * noise_pred
-
- noise = (prev_latents - (alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction)) / (
- variance ** (0.5) * eta
- )
- return noise
-
-
-class CycleDiffusionPipeline(DiffusionPipeline):
- r"""
- Pipeline for text-guided image to image generation using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: DDIMScheduler,
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
-
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
- version.parse(unet.config._diffusers_version).base_version
- ) < version.parse("0.9.0.dev0")
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
- deprecation_message = (
- "The configuration file of the unet has set the default `sample_size` to smaller than"
- " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the"
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
- " the `unet/config.json` file"
- )
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(unet.config)
- new_config["sample_size"] = 64
- unet._internal_dict = FrozenDict(new_config)
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
-
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
-
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- if isinstance(self.unet.config.attention_head_dim, int):
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- else:
- # if `attention_head_dim` is a list, take the smallest head size
- slice_size = min(self.unet.config.attention_head_dim)
-
- self.unet.set_attention_slice(slice_size)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
- text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
- `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
- if cpu_offloaded_model is not None:
- cpu_offload(cpu_offloaded_model, device)
-
- if self.safety_checker is not None:
- # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate
- # fix by only offloading self.safety_checker for now
- cpu_offload(self.safety_checker.vision_model, device)
-
- @property
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
- return self.device
- for module in self.unet.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
- def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids
-
- if not torch.equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- text_embeddings = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- text_embeddings = text_embeddings[0]
-
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- bs_embed, seq_len, _ = text_embeddings.shape
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- uncond_embeddings = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- uncond_embeddings = uncond_embeddings[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- return text_embeddings
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.check_inputs
- def check_inputs(self, prompt, strength, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if strength < 0 or strength > 1:
- raise ValueError(f"The value of strength should in [1.0, 1.0] but is {strength}")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
- def run_safety_checker(self, image, device, dtype):
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
- )
- else:
- has_nsfw_concept = None
- return image, has_nsfw_concept
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.get_timesteps
- def get_timesteps(self, num_inference_steps, strength, device):
- # get the original timestep using init_timestep
- offset = self.scheduler.config.get("steps_offset", 0)
- init_timestep = int(num_inference_steps * strength) + offset
- init_timestep = min(init_timestep, num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep + offset, 0)
- timesteps = self.scheduler.timesteps[t_start:]
-
- return timesteps, num_inference_steps - t_start
-
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
- image = image.to(device=device, dtype=dtype)
- init_latent_dist = self.vae.encode(image).latent_dist
- init_latents = init_latent_dist.sample(generator=generator)
- init_latents = 0.18215 * init_latents
-
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0:
- # expand init_latents for batch_size
- deprecation_message = (
- f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial"
- " images (`image`). Initial images are now duplicating to match the number of text prompts. Note"
- " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update"
- " your script to pass as many initial images as text prompts to suppress this warning."
- )
- deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False)
- additional_image_per_prompt = batch_size // init_latents.shape[0]
- init_latents = torch.cat([init_latents] * additional_image_per_prompt * num_images_per_prompt, dim=0)
- elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
- raise ValueError(
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
- )
- else:
- init_latents = torch.cat([init_latents] * num_images_per_prompt, dim=0)
-
- # add noise to latents using the timestep
- noise = torch.randn(init_latents.shape, generator=generator, device=device, dtype=dtype)
-
- # get latents
- clean_latents = init_latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
- latents = init_latents
-
- return latents, clean_latents
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- source_prompt: Union[str, List[str]],
- image: Union[torch.FloatTensor, PIL.Image.Image],
- strength: float = 0.8,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- source_guidance_scale: Optional[float] = 1,
- num_images_per_prompt: Optional[int] = 1,
- eta: Optional[float] = 0.1,
- generator: Optional[torch.Generator] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: Optional[int] = 1,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`torch.FloatTensor` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
- process.
- strength (`float`, *optional*, defaults to 0.8):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference. This parameter will be modulated by `strength`.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- source_guidance_scale (`float`, *optional*, defaults to 1):
- Guidance scale for the source prompt. This is useful to control the amount of influence the source
- prompt has on the encoding.
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.1):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- message = "Please use `image` instead of `init_image`."
- init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs)
- image = init_image or image
-
- # 1. Check inputs
- self.check_inputs(prompt, strength, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_embeddings = self._encode_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance, None)
- source_text_embeddings = self._encode_prompt(
- source_prompt, device, num_images_per_prompt, do_classifier_free_guidance, None
- )
-
- # 4. Preprocess image
- if isinstance(image, PIL.Image.Image):
- image = preprocess(image)
-
- # 5. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
-
- # 6. Prepare latent variables
- latents, clean_latents = self.prepare_latents(
- image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, device, generator
- )
- source_latents = latents
-
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
- generator = extra_step_kwargs.pop("generator", None)
-
- # 8. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2)
- source_latent_model_input = torch.cat([source_latents] * 2)
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
- source_latent_model_input = self.scheduler.scale_model_input(source_latent_model_input, t)
-
- # predict the noise residual
- concat_latent_model_input = torch.stack(
- [
- source_latent_model_input[0],
- latent_model_input[0],
- source_latent_model_input[1],
- latent_model_input[1],
- ],
- dim=0,
- )
- concat_text_embeddings = torch.stack(
- [
- source_text_embeddings[0],
- text_embeddings[0],
- source_text_embeddings[1],
- text_embeddings[1],
- ],
- dim=0,
- )
- concat_noise_pred = self.unet(
- concat_latent_model_input, t, encoder_hidden_states=concat_text_embeddings
- ).sample
-
- # perform guidance
- (
- source_noise_pred_uncond,
- noise_pred_uncond,
- source_noise_pred_text,
- noise_pred_text,
- ) = concat_noise_pred.chunk(4, dim=0)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
- source_noise_pred = source_noise_pred_uncond + source_guidance_scale * (
- source_noise_pred_text - source_noise_pred_uncond
- )
-
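-                # CycleDiffusion-style update: advance the source branch by sampling x_{t-1} from the
-                # scheduler posterior conditioned on the clean source latents, recover the exact noise
-                # that realizes this transition, and re-inject it below as `variance_noise` so that the
-                # source and target trajectories share the same stochastic path.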
- # Sample source_latents from the posterior distribution.
- prev_source_latents = posterior_sample(
- self.scheduler, source_latents, t, clean_latents, generator=generator, **extra_step_kwargs
- )
- # Compute noise.
- noise = compute_noise(
- self.scheduler, prev_source_latents, source_latents, t, source_noise_pred, **extra_step_kwargs
- )
- source_latents = prev_source_latents
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(
- noise_pred, t, latents, variance_noise=noise, **extra_step_kwargs
- ).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 9. Post-processing
- image = self.decode_latents(latents)
-
- # 10. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype)
-
- # 11. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
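
As a hedged illustration of the `callback` / `callback_steps` arguments documented in the excerpt above, the sketch below shows how a caller might hook into the denoising loop. The pipeline class (`CycleDiffusionPipeline`), the checkpoint id, the file names, and the prompts are assumptions made for this example; only the callback signature `(step, timestep, latents)` and the `strength` / `callback_steps` parameters come from the docstring.

```python
# Minimal usage sketch (class, checkpoint, and file names are assumed, not taken from the deleted file).
import torch
from diffusers import CycleDiffusionPipeline, DDIMScheduler
from PIL import Image


def log_progress(step: int, timestep: int, latents: torch.FloatTensor) -> None:
    # Invoked every `callback_steps` denoising steps with the current latents.
    print(f"step={step} timestep={timestep} latent_norm={latents.norm().item():.2f}")


scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler)
result = pipe(
    prompt="a photo of a golden retriever",      # target prompt
    source_prompt="a photo of a black dog",      # prompt describing the input image
    image=Image.open("dog.png").convert("RGB"),  # hypothetical input file
    strength=0.8,
    callback=log_progress,
    callback_steps=5,
)
result.images[0].save("edited_dog.png")
```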
diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/cleanJson.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/cleanJson.ts
deleted file mode 100644
index 8e914d329008deae4e14679597a76ca352b64925..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/lib/cleanJson.ts
+++ /dev/null
@@ -1,19 +0,0 @@
-import { dirtyLLMResponseCleaner } from "./dirtyLLMResponseCleaner"
-
-export function cleanJson(input: string) {
-
- if (input.includes('```')) {
- input = input.split('```')[0]
- }
- let tmp = dirtyLLMResponseCleaner(input)
-
- // we only keep what's after the first [
- tmp = `[${tmp.split("[").pop() || ""}`
-
- // and before the first ]
- tmp = `${tmp.split("]").shift() || ""}]`
-
- tmp = dirtyLLMResponseCleaner(tmp)
-
- return tmp
-}
\ No newline at end of file
diff --git a/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/style.css b/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/JohnC26/AI.Dashboard.Gradio.Streamlit.HTML5/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Kamtera/Persian-tts-CoquiTTS/app.py b/spaces/Kamtera/Persian-tts-CoquiTTS/app.py
deleted file mode 100644
index aeed4b0243a03b0b4ddc80b2549deb1ea8937d39..0000000000000000000000000000000000000000
--- a/spaces/Kamtera/Persian-tts-CoquiTTS/app.py
+++ /dev/null
@@ -1,115 +0,0 @@
-
-import tempfile ,os
-from TTS.config import load_config
-import gradio as gr
-
-from TTS.utils.manage import ModelManager
-from TTS.utils.synthesizer import Synthesizer
-
-MODEL_NAMES=[
- "vits male1 (best)",
- "vits female (best)",
- "vits-male",
- "vits female1",
- "glowtts-male",
- "glowtts-female",
- "female tacotron2"
-]
-MAX_TXT_LEN = 800
-model_path = os.getcwd() + "/best_model.pth"
-config_path = os.getcwd() + "/config.json"
-
-
-
-from TTS.utils.download import download_url
-modelInfo=[
- ["vits-male","best_model_65633.pth","config-0.json","https://huggingface.co/Kamtera/persian-tts-male-vits/resolve/main/"],
- ["vits female (best)","checkpoint_48000.pth","config-2.json","https://huggingface.co/Kamtera/persian-tts-female-vits/resolve/main/"],
- ["glowtts-male","best_model_77797.pth","config-1.json","https://huggingface.co/Kamtera/persian-tts-male-glow_tts/resolve/main/"],
- ["glowtts-female","best_model.pth","config.json","https://huggingface.co/Kamtera/persian-tts-female-glow_tts/resolve/main/"],
- ["vits male1 (best)","checkpoint_88000.pth","config.json","https://huggingface.co/Kamtera/persian-tts-male1-vits/resolve/main/"],
- ["vits female1","checkpoint_50000.pth","config.json","https://huggingface.co/Kamtera/persian-tts-female1-vits/resolve/main/"],
- ["female tacotron2","checkpoint_313000.pth","config-2.json","https://huggingface.co/Kamtera/persian-tts-female-tacotron2/resolve/main/"]
-]
-
-for d in modelInfo:
- directory=d[0]
- if not os.path.exists(directory):
- os.makedirs(directory)
- print("|> Downloading: ",directory)
- download_url(
- d[3]+d[1],directory,"best_model.pth"
- )
- download_url(
- d[3]+d[2],directory,"config.json"
- )
-def tts(text: str,model_name: str):
- if len(text) > MAX_TXT_LEN:
- text = text[:MAX_TXT_LEN]
- print(f"Input text was cutoff since it went over the {MAX_TXT_LEN} character limit.")
- print(text)
-
-
- # synthesize
- synthesizer = Synthesizer(
- model_name+"/best_model.pth", model_name+"/config.json"
- )
- if synthesizer is None:
- raise NameError("model not found")
- wavs = synthesizer.tts(text)
- # return output
- with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
- synthesizer.save_wav(wavs, fp)
- return fp.name
-
-
-description="""
-This is a demo of a Persian text-to-speech model.
-
-**GitHub: https://github.com/karim23657/Persian-tts-coqui**
-
-Models can be found here:
-
-|Model|Dataset|
-|----|------|
-|[vits female (best)](https://huggingface.co/Kamtera/persian-tts-female-vits)|[persian-tts-dataset-famale](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale)|
-|[vits male1 (best)](https://huggingface.co/Kamtera/persian-tts-male1-vits)|[persian-tts-dataset-male](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-male)|
-|[vits female1](https://huggingface.co/Kamtera/persian-tts-female1-vits)|[ParsiGoo](https://github.com/karim23657/ParsiGoo)|
-|[vits male](https://huggingface.co/Kamtera/persian-tts-male-vits)|[persian-tts-dataset](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset)|
-|[glowtts female](https://huggingface.co/Kamtera/persian-tts-female-glow_tts)|[persian-tts-dataset-famale](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale)|
-|[glowtts male](https://huggingface.co/Kamtera/persian-tts-male-glow_tts)|[persian-tts-dataset](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset)|
-|[tacotron2 female](https://huggingface.co/Kamtera/persian-tts-female-tacotron2)|[persian-tts-dataset-famale](https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale)|
-
-
-"""
-article= ""
-examples=[
- ["و خداوند شما را با ارسال روح در جسم زندگانی و حیات بخشید","vits-male"],
- ["تاجر تو چه تجارت می کنی ، تو را چه که چه تجارت می کنم؟","vits female (best)"],
- ["شیش سیخ جیگر سیخی شیش هزار","vits female (best)"],
- ["سه شیشه شیر ، سه سیر سرشیر","vits female (best)"],
- ["دزدی دزدید ز بز دزدی بزی ، عجب دزدی که دزدید ز بز دزدی بزی","vits male1 (best)"],
- ["مثنوی یکی از قالب های شعری است ک هر بیت قافیه ی جداگانه دارد","vits female1"],
- ["در گلو ماند خس او سالها، چیست آن خس مهر جاه و مالها","vits male1 (best)"],
-]
-iface = gr.Interface(
- fn=tts,
- inputs=[
- gr.Textbox(
- label="Text",
- value="زندگی فقط یک بار است؛ از آن به خوبی استفاده کن",
- ),
-        gr.Radio(
-            label="Pick a TTS Model",
-            choices=MODEL_NAMES,
-            value="vits female (best)",
-        ),
- ],
- outputs=gr.Audio(label="Output",type='filepath'),
- examples=examples,
- title="🗣️ Persian tts 🗣️",
- description=description,
- article=article,
- live=False
-)
-iface.launch(share=False)
diff --git a/spaces/KaygNas/cut-it/src/App.ts b/spaces/KaygNas/cut-it/src/App.ts
deleted file mode 100644
index b514bd63c930b8fd75ee5a4f79e4ca926b0729dd..0000000000000000000000000000000000000000
--- a/spaces/KaygNas/cut-it/src/App.ts
+++ /dev/null
@@ -1,155 +0,0 @@
-import type { Nullable } from '@babylonjs/core'
-import { Engine, FollowCamera, HemisphericLight, Observable, Scene, Vector3 } from '@babylonjs/core'
-import { Inspector } from '@babylonjs/inspector'
-import { EAppState, EAppUIState } from './enums'
-import { error } from './utils'
-import type { Robot } from './Robot'
-import type { Image } from './Image'
-import type { ILoadingRectangle } from './AppUI'
-
-const ImageModule = import('./Image')
-const GroundModule = import('./Ground')
-const AppUIModule = import('./AppUI')
-const RobotModule = import('./Robot')
-export class App {
- engine: Engine
- scene?: Scene
-
- // App State
- stateObservable = new Observable()
- private _state: EAppState = EAppState.Initializing
- get state() {
- return this._state
- }
-
- set state(value) {
- this._state = value
- this.stateObservable.notifyObservers(value)
- }
-
- constructor(readonly canvas: HTMLCanvasElement) {
- this.state = EAppState.Initializing
- this.engine = new Engine(canvas)
- window.addEventListener('resize', () => {
- this.engine.resize()
- })
- this.engine.displayLoadingUI()
- this._createScene(this.engine, this.canvas).then(async (scene) => {
- this.scene = scene
- await this.scene.whenReadyAsync()
- this.state = EAppState.Initialized
- this.engine.hideLoadingUI()
- })
- }
-
- run() {
- if (import.meta.env.DEV) {
- // for development: make inspector visible/invisible
- window.addEventListener('keydown', (ev) => {
- // Shift+Ctrl+Alt+I
- if (ev.shiftKey && ev.ctrlKey && ev.altKey && ev.keyCode === 73) {
- if (Inspector.IsVisible)
- Inspector.Hide()
- else if (this.scene)
- Inspector.Show(this.scene, {})
- }
- })
- }
- this.engine.runRenderLoop(() => {
- this.scene?.render()
- })
- }
-
- private async _createScene(engine: Engine, canvas: HTMLCanvasElement) {
- const scene = new Scene(engine)
- const [{ Image }, { AppUI }, { Ground }, { Robot }] = await Promise.all([ImageModule, AppUIModule, GroundModule, RobotModule])
- const image = new Image()
- const ui = new AppUI(scene)
- const ground = Ground.create(scene, image)
- const robot = Robot.create(scene)
-        const commandRobotMoveToImage = async (robot: Robot, image: Image) => {
- if (!image.isClassified())
- return
-
- const bbox = image.classification.detection.box
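-            // Map the detection box from image pixel space onto the ground plane: scale by
-            // (ground extent / image size) on x/z and anchor at the ground's top-left corner
-            // (minimum x, maximum z), so the robot can fly to the box and cut along its outline.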
- const groundBbox = ground.mesh.getBoundingInfo().boundingBox
- const groundSize = ground.mesh.getBoundingInfo().boundingBox.extendSize.scale(2)
- const imageSize = { x: image.image.imageWidth, y: 0, z: image.image.imageHeight }
- const scales = { x: groundSize.x / imageSize.x, y: 0, z: groundSize.z / imageSize.z }
- const bboxOrigin = new Vector3(groundBbox.minimum.x, 0, groundBbox.maximum.z)
- const bboxLeftTop = new Vector3(bbox.xmax * scales.x, 0, -bbox.ymax * scales.z)
- const bboxRightTop = new Vector3(bbox.xmin * scales.x, 0, -bbox.ymax * scales.z)
- const bboxRightBottom = new Vector3(bbox.xmin * scales.x, 0, -bbox.ymin * scales.z)
- const bboxLeftBottom = new Vector3(bbox.xmax * scales.x, 0, -bbox.ymin * scales.z)
- const destination = bboxOrigin.add(bboxLeftTop)
- await robot.moveTo(destination)
- await robot.laserCutter.cut([bboxLeftTop, bboxRightTop, bboxRightBottom, bboxLeftBottom, bboxLeftTop].map(v => bboxOrigin.add(v)))
- await robot.land()
- }
-
- ui.observalbe.add(async (event) => {
- try {
- if (event.type === 'UploadImageButtonClick') {
- image.clear()
- this.state = EAppState.ImageUploading
- await image.load()
- this.state = EAppState.ImageUploaded
- ui.setState(EAppUIState.Input)
- await robot.takeOff()
- }
- else if (event.type === 'CreateImageButtonClick') {
- image.clear()
- event.target.isLoading = true
- this.state = EAppState.ImageUploading
- await image.fromText(event.value)
- .finally(() => event.target.isLoading = false)
- this.state = EAppState.ImageUploaded
- ui.setState(EAppUIState.Input)
- await robot.takeOff()
- }
- else if (event.type === 'InputTextChange') {
- const takeoff = async () => {
- if (robot.pose === Robot.Pose.Land)
- await robot.takeOff()
- }
- const detect = async () => {
- this.state = EAppState.ImageDetecting
- await image.detect()
- this.state = EAppState.ImageDetected
- }
- const classify = async () => {
- this.state = EAppState.ImageClassifying
- await image.classify(event.target.text)
- this.state = EAppState.ImageClassified
- }
- takeoff()
-                    const background = event.target.parent?.getChildByName('InputBackground') as Nullable<ILoadingRectangle>
- background && (background.isLoading = true)
- try {
- await detect()
- await classify()
- }
- finally {
- background && (background.isLoading = false)
- }
-                    await commandRobotMoveToImage(robot, image)
- }
- }
- catch (reason) {
- this.state = EAppState.Error
- error(reason)
- }
- })
-
- const camera = new FollowCamera('RobotCamera', new Vector3(0, 1, 0), scene, robot.mesh)
- camera.lowerHeightOffsetLimit = 0
- camera.maxCameraSpeed = 6
- camera.radius = 6
- camera.attachControl()
-
- const light = new HemisphericLight('light', new Vector3(0, 1, 0), scene)
- light.intensity = 0.7
-
- return scene
- }
-}
diff --git a/spaces/Kevin676/SmartAI/README.md b/spaces/Kevin676/SmartAI/README.md
deleted file mode 100644
index 42f20c89ed093ad8408c4b271c9b3db79161f0ce..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/SmartAI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SmartAI
-emoji: 🐠
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_whu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_whu_config.py
deleted file mode 100644
index fb9e6b500063f0825b54dc2c713aa1f283b33e0d..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_whu_config.py
+++ /dev/null
@@ -1,355 +0,0 @@
-custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models'], allow_failed_imports=False)
-
-sub_model_train = [
- 'panoptic_head',
- 'data_preprocessor'
-]
-
-sub_model_optim = {
- 'panoptic_head': {'lr_mult': 1},
-}
-
-max_epochs = 2000
-
-optimizer = dict(
- type='AdamW',
- sub_model=sub_model_optim,
- lr=0.0005,
- weight_decay=1e-3
-)
-
-param_scheduler = [
- # warm up learning rate scheduler
- dict(
- type='LinearLR',
- start_factor=1e-4,
- by_epoch=True,
- begin=0,
- end=1,
- # update by iter
- convert_to_iter_based=True),
- # main learning rate scheduler
- dict(
- type='CosineAnnealingLR',
- T_max=max_epochs,
- by_epoch=True,
- begin=1,
- end=max_epochs,
- ),
-]
-
-param_scheduler_callback = dict(
- type='ParamSchedulerHook'
-)
-
-evaluator_ = dict(
- type='CocoPLMetric',
- metric=['bbox', 'segm'],
- proposal_nums=[1, 10, 100]
-)
-
-evaluator = dict(
- val_evaluator=evaluator_,
-)
-
-
-image_size = (1024, 1024)
-
-data_preprocessor = dict(
- type='mmdet.DetDataPreprocessor',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- bgr_to_rgb=True,
- pad_size_divisor=32,
- pad_mask=True,
- mask_pad_value=0,
-)
-
-num_things_classes = 1
-num_stuff_classes = 0
-num_classes = num_things_classes + num_stuff_classes
-prompt_shape = (90, 4)
-
-
-model_cfg = dict(
- type='SegSAMAnchorPLer',
- hyperparameters=dict(
- optimizer=optimizer,
- param_scheduler=param_scheduler,
- evaluator=evaluator,
- ),
- need_train_names=sub_model_train,
- data_preprocessor=data_preprocessor,
- backbone=dict(
- type='vit_h',
- checkpoint='pretrain/sam/sam_vit_h_4b8939.pth',
- # type='vit_b',
- # checkpoint='pretrain/sam/sam_vit_b_01ec64.pth',
- ),
- panoptic_head=dict(
- type='SAMAnchorInstanceHead',
- neck=dict(
- type='SAMAggregatorNeck',
- in_channels=[1280] * 32,
- # in_channels=[768] * 12,
- inner_channels=32,
- selected_channels=range(4, 32, 2),
- # selected_channels=range(4, 12, 2),
- out_channels=256,
- up_sample_scale=4,
- ),
- rpn_head=dict(
- type='mmdet.RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='mmdet.AnchorGenerator',
- scales=[2, 4, 8, 16, 32, 64],
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32]),
- bbox_coder=dict(
- type='mmdet.DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)),
- roi_head=dict(
- type='SAMAnchorPromptRoIHead',
- bbox_roi_extractor=dict(
- type='mmdet.SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[8, 16, 32]),
- bbox_head=dict(
- type='mmdet.Shared2FCBBoxHead',
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=num_classes,
- bbox_coder=dict(
- type='mmdet.DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)),
- mask_roi_extractor=dict(
- type='mmdet.SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[8, 16, 32]),
- mask_head=dict(
- type='SAMPromptMaskHead',
- per_query_point=prompt_shape[1],
- with_sincos=True,
- class_agnostic=True,
- loss_mask=dict(
- type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='mmdet.MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='mmdet.RandomSampler',
- num=512,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- assigner=dict(
- type='mmdet.MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='mmdet.RandomSampler',
- num=256,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- mask_size=1024,
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=1000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100,
- mask_thr_binary=0.5)
- )
- )
-)
-
-task_name = 'whu_ins'
-exp_name = 'E20230629_0'
-logger = dict(
- type='WandbLogger',
- project=task_name,
- group='sam-anchor',
- name=exp_name
-)
-
-
-callbacks = [
- param_scheduler_callback,
- dict(
- type='ModelCheckpoint',
- dirpath=f'results/{task_name}/{exp_name}/checkpoints',
- save_last=True,
- mode='max',
- monitor='valsegm_map_0',
- save_top_k=3,
- filename='epoch_{epoch}-map_{valsegm_map_0:.4f}'
- ),
- dict(
- type='LearningRateMonitor',
- logging_interval='step'
- )
-]
-
-
-trainer_cfg = dict(
- compiled_model=False,
- accelerator="auto",
- strategy="auto",
- # strategy="ddp",
- # strategy='ddp_find_unused_parameters_true',
- # precision='32',
- # precision='16-mixed',
- devices=8,
- default_root_dir=f'results/{task_name}/{exp_name}',
- # default_root_dir='results/tmp',
- max_epochs=max_epochs,
- logger=logger,
- callbacks=callbacks,
- log_every_n_steps=10,
- check_val_every_n_epoch=5,
- benchmark=True,
- # sync_batchnorm=True,
- # fast_dev_run=True,
-
- # limit_train_batches=1,
- # limit_val_batches=0,
- # limit_test_batches=None,
- # limit_predict_batches=None,
- # overfit_batches=0.0,
-
- # val_check_interval=None,
- # num_sanity_val_steps=0,
- # enable_checkpointing=None,
- # enable_progress_bar=None,
- # enable_model_summary=None,
- # accumulate_grad_batches=32,
- # gradient_clip_val=15,
- # gradient_clip_algorithm='norm',
- # deterministic=None,
- # inference_mode: bool=True,
- use_distributed_sampler=True,
- # profiler="simple",
- # detect_anomaly=False,
- # barebones=False,
- # plugins=None,
- # reload_dataloaders_every_n_epochs=0,
-)
-
-
-backend_args = None
-train_pipeline = [
- dict(type='mmdet.LoadImageFromFile'),
- dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='mmdet.Resize', scale=image_size),
- dict(type='mmdet.RandomFlip', prob=0.5),
- dict(type='mmdet.PackDetInputs')
-]
-
-test_pipeline = [
- dict(type='mmdet.LoadImageFromFile', backend_args=backend_args),
- dict(type='mmdet.Resize', scale=image_size),
- # If you don't have a gt annotation, delete the pipeline
- dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='mmdet.PackDetInputs',
- meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
- 'scale_factor'))
-]
-
-
-train_batch_size_per_gpu = 2
-train_num_workers = 2
-test_batch_size_per_gpu = 2
-test_num_workers = 2
-persistent_workers = True
-
-
-data_parent = '/mnt/search01/dataset/cky_data/WHU'
-train_data_prefix = 'train/'
-val_data_prefix = 'test/'
-dataset_type = 'WHUInsSegDataset'
-
-
-val_loader = dict(
- batch_size=test_batch_size_per_gpu,
- num_workers=test_num_workers,
- persistent_workers=persistent_workers,
- pin_memory=True,
- dataset=dict(
- type=dataset_type,
- data_root=data_parent,
- # ann_file='NWPU_instances_val.json',
- # data_prefix=dict(img_path='positive image set'),
- # ann_file='annotations/SSDD_instances_val.json',
- # data_prefix=dict(img_path='imgs'),
- ann_file='annotations/WHU_building_test.json',
- data_prefix=dict(img_path=val_data_prefix + '/image'),
- test_mode=True,
- filter_cfg=dict(filter_empty_gt=True, min_size=32),
- pipeline=test_pipeline,
- backend_args=backend_args))
-
-datamodule_cfg = dict(
- type='PLDataModule',
- train_loader=dict(
- batch_size=train_batch_size_per_gpu,
- num_workers=train_num_workers,
- persistent_workers=persistent_workers,
- pin_memory=True,
- dataset=dict(
- type=dataset_type,
- data_root=data_parent,
- # ann_file='NWPU_instances_train.json',
- # data_prefix=dict(img_path='positive image set'),
- # ann_file='annotations/SSDD_instances_train.json',
- # data_prefix=dict(img_path='imgs'),
- ann_file='annotations/WHU_building_train.json',
- data_prefix=dict(img_path=train_data_prefix + '/image'),
- filter_cfg=dict(filter_empty_gt=True, min_size=32),
- pipeline=train_pipeline,
- backend_args=backend_args)
- ),
- val_loader=val_loader,
- # test_loader=val_loader
- predict_loader=val_loader
-)
\ No newline at end of file
diff --git a/spaces/LZRi/LZR-Bert-VITS2/docs/commands.md b/spaces/LZRi/LZR-Bert-VITS2/docs/commands.md
deleted file mode 100644
index 30edfc9088f527a332ec8d00f44c6a5120ad26ee..0000000000000000000000000000000000000000
--- a/spaces/LZRi/LZR-Bert-VITS2/docs/commands.md
+++ /dev/null
@@ -1,36 +0,0 @@
-0. Environment maintenance and upgrade (example):
-%PYTHON% -m pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
-This step usually does not need to be run.
-
-1. Install ffmpeg and add the bundled ffmpeg to the PATH; it is needed for automatic transcription and only has to be run once. A restart may be required for it to take effect:
-%PYTHON% setup_ffmpeg.py
-
-2. Dataset resampling and transcription:
-
-a. General transcription with Whisper: audio clips should be 2-10 s long. Pick the model size according to your VRAM; large needs 12 GB.
-%PYTHON% short_audio_transcribe.py --languages "C" --whisper_size large
-%PYTHON% short_audio_transcribe.py --languages "C" --whisper_size medium
-%PYTHON% short_audio_transcribe.py --languages "C" --whisper_size small
-If your data is already transcribed and you do not want to use this script, resample the audio to mono 44100 Hz.
-
-b. For the downloaded, already-transcribed Genshin dataset:
-%PYTHON% transcribe_genshin.py
-
-3. Text processing:
-%PYTHON% preprocess_text.py
-
-4. bert_gen:
-%PYTHON% bert_gen.py
-
-5. Training:
-First run:
-%PYTHON% train_ms.py -c ./configs\config.json
-
-Resume training:
-%PYTHON% train_ms.py -c ./configs\config.json --cont
-
-Launch TensorBoard:
-%PYTHON% -m tensorboard.main --logdir=logs\OUTPUT_MODEL
-
-6. Inference (--config_dir is optional; --model_dir specifies the config file and model directory):
-%PYTHON% inference_webui.py --model_dir ./logs\OUTPUT_MODEL\G_100.pth
\ No newline at end of file
diff --git a/spaces/LanguageBind/LanguageBind/model/build_model.py b/spaces/LanguageBind/LanguageBind/model/build_model.py
deleted file mode 100644
index 736476bd35a6b6210a810b74819be66061053b33..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/model/build_model.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import logging
-import argparse
-import os.path
-
-import numpy as np
-import torch
-from torch import nn
-from transformers import AutoConfig
-
-from model.base_model import CLIPModel
-from model.process_clip import add_time_attn_block, convert_model_to_lora, set_global_value, resize_pos
-from open_clip import convert_weights_to_lp
-from open_clip.transformer import PatchDropout
-from training.distributed import is_master
-
-
-def SET_GLOBAL_VALUE(k, v):
- set_global_value(k, v)
-
-def create_vat_model(args):
-
- config = AutoConfig.from_pretrained(args.model, cache_dir=args.cache_dir)
- model = CLIPModel(config, args.num_frames, args.add_time_attn)
-
- model.vision_model.patch_dropout = PatchDropout(args.force_patch_dropout)
-
- device = args.device
- precision = args.precision
- if precision in ("fp16", "bf16"):
- dtype = torch.float16 if 'fp16' in precision else torch.bfloat16
- model.to(device=device)
- convert_weights_to_lp(model, dtype=dtype)
- elif precision in ("pure_fp16", "pure_bf16"):
- dtype = torch.float16 if 'fp16' in precision else torch.bfloat16
- model.to(device=device, dtype=dtype)
- else:
- model.to(device=device)
-
- if args.pretrained:
- try:
- args.pretrained = os.path.join(args.cache_dir, args.pretrained)
- if is_master(args):
- logging.info(f'Loading pretrained {args.model} weights ({args.pretrained}).')
- # incompatible_keys = load_checkpoint(model, pretrained, strict=False)
- ckpt = torch.load(args.pretrained, map_location='cpu')
- incompatible_keys = model.load_state_dict(ckpt, strict=False if args.add_time_attn else True)
- if is_master(args):
- logging.info(incompatible_keys)
- except Exception as e:
- if is_master(args):
- logging.info(f"Failed loading pretrained model with {e}")
- else:
- if is_master(args):
- logging.info(f"No pretrained model to load in \'{args.pretrained}\'")
-
- if args.add_time_attn:
- add_time_attn_block(model.vision_model.encoder, device=device)
- if is_master(args):
-            logging.info(f'Converted pretrained spatial attention to temporal attention.')
-
- if args.clip_type == 'al':
- resize_pos(model.vision_model.embeddings, args)
- if is_master(args):
-            logging.info(f'Resized position embedding successfully.')
-
- if args.init_temp != 0:
- with torch.no_grad():
- model.logit_scale.fill_(np.log(1 / float(args.init_temp)))
- if is_master(args):
- logging.info(f'Reset logit scale to {args.init_temp} (log-scale) and trainable {args.learn_temp}.')
-
- if args.convert_to_lora:
- convert_model_to_lora(args, model)
- if is_master(args):
- logging.info(f"Successfuly convert model to lora style.")
-
- # if output_dict and hasattr(model, "output_dict"):
- # model.output_dict = True
-
- return model
-
-
-if __name__ == '__main__':
- MODEL_DICT = {"ViT-L-14": "laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K",
- "ViT-H-14": "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"}
- CHECKPOINT_DICT = {"ViT-L-14": "models--laion--CLIP-ViT-L-14-DataComp.XL-s13B-b90K/snapshots/84c9828e63dc9a9351d1fe637c346d4c1c4db341/pytorch_model.bin",
- "ViT-H-14": "models--laion--CLIP-ViT-H-14-laion2B-s32B-b79K/snapshots/94a64189c3535c1cb44acfcccd7b0908c1c8eb23/pytorch_model.bin"}
-
- parser = argparse.ArgumentParser()
- args = parser.parse_args()
- args.pretrained = True
- args.model = MODEL_DICT["ViT-L-14"]
- args.pretrained = CHECKPOINT_DICT["ViT-L-14"]
- args.cache_dir = 'D:\Omni-modal-valdt-1kw'
- args.device = 'cpu'
- args.precision = None
- args.lock_text = True
- args.lock_image = True
- args.init_temp = 0
- args.force_patch_dropout = 0.5
- args.add_time_attn = True
- args.convert_to_lora = True
- args.lora_r = 16
- args.lora_alpha = 16
- args.lora_dropout = 0.0 # 0.1?
- args.num_frames = 8
- args.clip_type = 'vl'
- args.num_mel_bins = 128
- args.target_length = 1024
- args.audio_sample_rate = 16000
- args.audio_mean = 1
- args.audio_std = 1
- args.rank = 0
-
- SET_GLOBAL_VALUE('PATCH_DROPOUT', args.force_patch_dropout)
- SET_GLOBAL_VALUE('NUM_FRAMES', args.num_frames)
-
- model = create_vat_model(args)
-
-
-    '''Method 1: a custom function, adapted from https://blog.csdn.net/qq_33757398/article/details/109210240'''
-
-
- def model_structure(model):
- blank = ' '
- print('-' * 150)
- print('|' + ' ' * 44 + 'weight name' + ' ' * 45 + '|' \
- + ' ' * 10 + 'weight shape' + ' ' * 10 + '|' \
- + ' ' * 3 + 'number' + ' ' * 3 + '|')
- print('-' * 150)
- num_para = 0
-        type_size = 1 # would be 4 if the parameters were stored as float32
-
- for index, (key, w_variable) in enumerate(model.named_parameters()):
- if len(key) <= 100:
- key = key + (100 - len(key)) * blank
- shape = str(w_variable.shape)
- if len(shape) <= 30:
- shape = shape + (30 - len(shape)) * blank
- each_para = 1
- for k in w_variable.shape:
- each_para *= k
- num_para += each_para
- str_num = str(each_para)
- if len(str_num) <= 10:
- str_num = str_num + (10 - len(str_num)) * blank
-
- print('| {} | {} | {} |'.format(key, shape, str_num))
- print('-' * 150)
- print('The total number of parameters: ' + str(num_para))
- print('The parameters of Model {}: {:4f}M'.format(model._get_name(), num_para * type_size / 1000 / 1000))
- print('-' * 150)
-
-
- model_structure(model)
- # model_structure(model.vision_model)
- # model_structure(model.text_model)
-
-
- # model.lock_image_tower(unlocked_groups=1)
- # model.lock_text_tower(unlocked_layers=0)
- # model.unlock_time_attn()
-
- if args.lock_image:
- # if args.clip_type == 'al' or args.clip_type == 'dl':
- # for param in model.vision_model.embeddings.parameters():
- # param.requires_grad = True
- # for param in model.vision_model.pre_layrnorm.parameters():
- # param.requires_grad = True
- # else:
- for param in model.vision_model.embeddings.parameters():
- param.requires_grad = False
- for param in model.vision_model.pre_layrnorm.parameters():
- param.requires_grad = False
- for param in model.vision_model.embeddings.position_embedding.parameters():
- param.requires_grad = False
- model.vision_model.embeddings.class_embedding.requires_grad = True
-
-
- if args.lock_text:
- for param in model.text_model.parameters():
- param.requires_grad = False
- for param in model.text_projection.parameters():
- param.requires_grad = False
-
-
- for n, p in model.named_parameters():
- # if p.requires_grad:
- print(n, '--->', p.requires_grad)
- b, c, t, h, w = 2, 3, args.num_frames, 224, 224
- x = torch.randn(b, c, t, h, w)
- y = model(image=x)
- print()
\ No newline at end of file
diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/latex_utils.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/latex_utils.py
deleted file mode 100644
index eb65a8a915d2cbc66a346e42a5f2a17ee07bb585..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/crazy_functions/latex_utils.py
+++ /dev/null
@@ -1,788 +0,0 @@
-from toolbox import update_ui, update_ui_lastest_msg # refresh the Gradio front-end
-from toolbox import zip_folder, objdump, objload, promote_file_to_downloadzone
-import os, shutil
-import re
-import numpy as np
-pj = os.path.join
-
-"""
-========================================================================
-Part One
-Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1)
-========================================================================
-"""
-PRESERVE = 0
-TRANSFORM = 1
-
-def set_forbidden_text(text, mask, pattern, flags=0):
- """
- Add a preserve text area in this paper
- e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
- you can mask out (mask = PRESERVE so that text become untouchable for GPT)
- everything between "\begin{equation}" and "\end{equation}"
- """
- if isinstance(pattern, list): pattern = '|'.join(pattern)
- pattern_compile = re.compile(pattern, flags)
- for res in pattern_compile.finditer(text):
- mask[res.span()[0]:res.span()[1]] = PRESERVE
- return text, mask
-
-def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
- """
- Move area out of preserve area (make text editable for GPT)
- count the number of the braces so as to catch compelete text area.
- e.g.
- \begin{abstract} blablablablablabla. \end{abstract}
- """
- if isinstance(pattern, list): pattern = '|'.join(pattern)
- pattern_compile = re.compile(pattern, flags)
- for res in pattern_compile.finditer(text):
- if not forbid_wrapper:
- mask[res.span()[0]:res.span()[1]] = TRANSFORM
- else:
- mask[res.regs[0][0]: res.regs[1][0]] = PRESERVE # '\\begin{abstract}'
- mask[res.regs[1][0]: res.regs[1][1]] = TRANSFORM # abstract
- mask[res.regs[1][1]: res.regs[0][1]] = PRESERVE # abstract
- return text, mask
-
-def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
- """
- Add a preserve text area in this paper (text become untouchable for GPT).
- count the number of the braces so as to catch compelete text area.
- e.g.
- \caption{blablablablabla\texbf{blablabla}blablabla.}
- """
- pattern_compile = re.compile(pattern, flags)
- for res in pattern_compile.finditer(text):
- brace_level = -1
- p = begin = end = res.regs[0][0]
- for _ in range(1024*16):
- if text[p] == '}' and brace_level == 0: break
- elif text[p] == '}': brace_level -= 1
- elif text[p] == '{': brace_level += 1
- p += 1
- end = p+1
- mask[begin:end] = PRESERVE
- return text, mask
-
-def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True):
- """
- Move area out of preserve area (make text editable for GPT)
- count the number of the braces so as to catch compelete text area.
- e.g.
- \caption{blablablablabla\texbf{blablabla}blablabla.}
- """
- pattern_compile = re.compile(pattern, flags)
- for res in pattern_compile.finditer(text):
- brace_level = 0
- p = begin = end = res.regs[1][0]
- for _ in range(1024*16):
- if text[p] == '}' and brace_level == 0: break
- elif text[p] == '}': brace_level -= 1
- elif text[p] == '{': brace_level += 1
- p += 1
- end = p
- mask[begin:end] = TRANSFORM
- if forbid_wrapper:
- mask[res.regs[0][0]:begin] = PRESERVE
- mask[end:res.regs[0][1]] = PRESERVE
- return text, mask
-
-def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
- """
- Find all \begin{} ... \end{} text block that with less than limit_n_lines lines.
- Add it to preserve area
- """
- pattern_compile = re.compile(pattern, flags)
- def search_with_line_limit(text, mask):
- for res in pattern_compile.finditer(text):
- cmd = res.group(1) # begin{what}
- this = res.group(2) # content between begin and end
- this_mask = mask[res.regs[2][0]:res.regs[2][1]]
- white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof',
- 'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate']
- if (cmd in white_list) or this.count('\n') >= limit_n_lines: # use a magical number 42
- this, this_mask = search_with_line_limit(this, this_mask)
- mask[res.regs[2][0]:res.regs[2][1]] = this_mask
- else:
- mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE
- return text, mask
- return search_with_line_limit(text, mask)
-
-class LinkedListNode():
- """
- Linked List Node
- """
- def __init__(self, string, preserve=True) -> None:
- self.string = string
- self.preserve = preserve
- self.next = None
- # self.begin_line = 0
- # self.begin_char = 0
-
-def convert_to_linklist(text, mask):
- root = LinkedListNode("", preserve=True)
- current_node = root
- for c, m, i in zip(text, mask, range(len(text))):
- if (m==PRESERVE and current_node.preserve) \
- or (m==TRANSFORM and not current_node.preserve):
- # add
- current_node.string += c
- else:
- current_node.next = LinkedListNode(c, preserve=(m==PRESERVE))
- current_node = current_node.next
- return root
-"""
-========================================================================
-Latex Merge File
-========================================================================
-"""
-
-def 寻找Latex主文件(file_manifest, mode):
- """
-    In a multi-file TeX project, find the main file; it must contain \documentclass; return the first one found.
-    P.S. Let's hope nobody passes a LaTeX template in here (code to detect LaTeX templates was added on 6.25).
- """
- canidates = []
- for texf in file_manifest:
- if os.path.basename(texf).startswith('merge'):
- continue
- with open(texf, 'r', encoding='utf8') as f:
- file_content = f.read()
- if r'\documentclass' in file_content:
- canidates.append(texf)
- else:
- continue
-
- if len(canidates) == 0:
- raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)')
- elif len(canidates) == 1:
- return canidates[0]
-    else: # if len(canidates) >= 2: penalize sources containing words that are common in LaTeX templates (but rarely appear in a real manuscript) and return the highest-scoring file
-        canidates_score = []
-        # words that indicate a template document are used as penalty terms
- unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
- expected_words = ['\input', '\ref', '\cite']
- for texf in canidates:
- canidates_score.append(0)
- with open(texf, 'r', encoding='utf8') as f:
- file_content = f.read()
- for uw in unexpected_words:
- if uw in file_content:
- canidates_score[-1] -= 1
- for uw in expected_words:
- if uw in file_content:
- canidates_score[-1] += 1
-        select = np.argmax(canidates_score) # return the highest-scoring candidate
- return canidates[select]
-
-def rm_comments(main_file):
- new_file_remove_comment_lines = []
- for l in main_file.splitlines():
-        # drop lines that consist solely of a comment
- if l.lstrip().startswith("%"):
- pass
- else:
- new_file_remove_comment_lines.append(l)
- main_file = '\n'.join(new_file_remove_comment_lines)
-    # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file) # convert \include commands into \input commands
- main_file = re.sub(r'(? 0 and node_string.count('\_') > final_tex.count('\_'):
- # walk and replace any _ without \
- final_tex = re.sub(r"(?')
- if not node.preserve:
- segment_parts_for_gpt.append(node.string)
- f.write(f'
#{show_html}#
')
- else:
- f.write(f'
{show_html}
')
- node = node.next
- if node is None: break
-
- for n in nodes: n.next = None # break
- return_dict['nodes'] = nodes
- return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt
- return return_dict
-
-
-
-class LatexPaperSplit():
- """
- break down latex file to a linked list,
- each node use a preserve flag to indicate whether it should
- be proccessed by GPT.
- """
- def __init__(self) -> None:
- self.nodes = None
- self.msg = "*{\\scriptsize\\textbf{警告:该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \
- "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
- "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
-        # Please do not remove or modify this warning unless you are the original author of the paper (if you are, feel free to contact the developer via the QQ listed in the README).
- self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
-
- def merge_result(self, arr, mode, msg):
- """
- Merge the result after the GPT process completed
- """
- result_string = ""
- p = 0
- for node in self.nodes:
- if node.preserve:
- result_string += node.string
- else:
- result_string += fix_content(arr[p], node.string)
- p += 1
- if mode == 'translate_zh':
- pattern = re.compile(r'\\begin\{abstract\}.*\n')
- match = pattern.search(result_string)
- if not match:
- # match \abstract{xxxx}
- pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
- match = pattern_compile.search(result_string)
- position = match.regs[1][0]
- else:
- # match \begin{abstract}xxxx\end{abstract}
- position = match.end()
- result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:]
- return result_string
-
- def split(self, txt, project_folder, opts):
- """
- break down latex file to a linked list,
- each node use a preserve flag to indicate whether it should
- be proccessed by GPT.
- P.S. use multiprocessing to avoid timeout error
- """
- import multiprocessing
- manager = multiprocessing.Manager()
- return_dict = manager.dict()
- p = multiprocessing.Process(
- target=split_subprocess,
- args=(txt, project_folder, return_dict, opts))
- p.start()
- p.join()
- p.close()
- self.nodes = return_dict['nodes']
- self.sp = return_dict['segment_parts_for_gpt']
- return self.sp
-
-
-
-class LatexPaperFileGroup():
- """
- use tokenizer to break down text according to max_token_limit
- """
- def __init__(self):
- self.file_paths = []
- self.file_contents = []
- self.sp_file_contents = []
- self.sp_file_index = []
- self.sp_file_tag = []
-
- # count_token
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- self.get_token_num = get_token_num
-
- def run_file_split(self, max_token_limit=1900):
- """
- use tokenizer to break down text according to max_token_limit
- """
- for index, file_content in enumerate(self.file_contents):
- if self.get_token_num(file_content) < max_token_limit:
- self.sp_file_contents.append(file_content)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index])
- else:
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
- for j, segment in enumerate(segments):
- self.sp_file_contents.append(segment)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
- print('Segmentation: done')
-
- def merge_result(self):
- self.file_result = ["" for _ in range(len(self.file_paths))]
- for r, k in zip(self.sp_file_result, self.sp_file_index):
- self.file_result[k] += r
-
- def write_result(self):
- manifest = []
- for path, res in zip(self.file_paths, self.file_result):
- with open(path + '.polish.tex', 'w', encoding='utf8') as f:
- manifest.append(path + '.polish.tex')
- f.write(res)
- return manifest
-
-def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
-
- # write html
- try:
- import shutil
- from .crazy_utils import construct_html
- from toolbox import gen_time_str
- ch = construct_html()
- orig = ""
- trans = ""
- final = []
- for c,r in zip(sp_file_contents, sp_file_result):
- final.append(c)
- final.append(r)
- for i, k in enumerate(final):
- if i%2==0:
- orig = k
- if i%2==1:
- trans = k
- ch.add_row(a=orig, b=trans)
- create_report_file_name = f"{gen_time_str()}.trans.html"
- ch.save_file(create_report_file_name)
- shutil.copyfile(pj('./gpt_log/', create_report_file_name), pj(project_folder, create_report_file_name))
- promote_file_to_downloadzone(file=f'./gpt_log/{create_report_file_name}', chatbot=chatbot)
- except:
- from toolbox import trimmed_format_exc
- print('writing html result failed:', trimmed_format_exc())
-
-def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]):
- import time, os, re
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
- from .latex_utils import LatexPaperFileGroup, merge_tex_files, LatexPaperSplit, 寻找Latex主文件
-
-    # <-------- locate the main .tex file ---------->
- maintex = 寻找Latex主文件(file_manifest, mode)
- chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果:该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- time.sleep(3)
-
-    # <-------- read the LaTeX files and merge the multi-file TeX project into a single giant .tex ---------->
- main_tex_basename = os.path.basename(maintex)
- assert main_tex_basename.endswith('.tex')
- main_tex_basename_bare = main_tex_basename[:-4]
- may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl')
- if os.path.exists(may_exist_bbl):
- shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge.bbl'))
- shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl'))
- shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_diff.bbl'))
-
- with open(maintex, 'r', encoding='utf-8', errors='replace') as f:
- content = f.read()
- merged_content = merge_tex_files(project_folder, content, mode)
-
- with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
- f.write(merged_content)
-
-    # <-------- fine-grained splitting of the LaTeX file ---------->
- chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- lps = LatexPaperSplit()
-    res = lps.split(merged_content, project_folder, opts) # time-consuming call
-
-    # <-------- split overly long LaTeX fragments ---------->
- pfg = LatexPaperFileGroup()
- for index, r in enumerate(res):
- pfg.file_paths.append('segment-' + str(index))
- pfg.file_contents.append(r)
-
- pfg.run_file_split(max_token_limit=1024)
- n_split = len(pfg.sp_file_contents)
-
-    # <-------- switch the prompt as needed ---------->
- inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
- inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]
-
- if os.path.exists(pj(project_folder,'temp.pkl')):
-
-        # <-------- [debug only] if a debug cache file exists, skip the GPT request stage ---------->
- pfg = objload(file=pj(project_folder,'temp.pkl'))
-
- else:
-        # <-------- multi-threaded GPT requests ---------->
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=inputs_array,
- inputs_show_user_array=inputs_show_user_array,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[""] for _ in range(n_split)],
- sys_prompt_array=sys_prompt_array,
- # max_workers=5, # 并行任务数量限制, 最多同时执行5个, 其他的排队等待
- scroller_max_len = 40
- )
-
-    # <-------- reassemble the text fragments into complete .tex pieces ---------->
- pfg.sp_file_result = []
- for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
- pfg.sp_file_result.append(gpt_say)
- pfg.merge_result()
-
-    # <-------- temporary dump for debugging ---------->
- pfg.get_token_num = None
- objdump(pfg, file=pj(project_folder,'temp.pkl'))
-
- write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)
-
-    # <-------- write out the files ---------->
- msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}。"
- final_tex = lps.merge_result(pfg.file_result, mode, msg)
- with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
- if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)
-
-
-    # <-------- collect the results and exit ---------->
- chatbot.append((f"完成了吗?", 'GPT结果已输出, 正在编译PDF'))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
-    # <-------- return ---------->
- return project_folder + f'/merge_{mode}.tex'
-
-
-
-def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified):
- try:
- with open(log_path, 'r', encoding='utf-8', errors='replace') as f:
- log = f.read()
- with open(file_path, 'r', encoding='utf-8', errors='replace') as f:
- file_lines = f.readlines()
- import re
- buggy_lines = re.findall(tex_name+':([0-9]{1,5}):', log)
- buggy_lines = [int(l) for l in buggy_lines]
- buggy_lines = sorted(buggy_lines)
- print("removing lines that has errors", buggy_lines)
- file_lines.pop(buggy_lines[0]-1)
- with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f:
- f.writelines(file_lines)
- return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines
- except:
- print("Fatal error occurred, but we cannot identify error, please download zip, read latex log, and compile manually.")
- return False, -1, [-1]
-
-def compile_latex_with_timeout(command, cwd, timeout=60):
- import subprocess
- process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
- try:
- stdout, stderr = process.communicate(timeout=timeout)
- except subprocess.TimeoutExpired:
- process.kill()
- stdout, stderr = process.communicate()
- print("Process timed out!")
- return False
- return True
-
-def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'):
- import os, time
- current_dir = os.getcwd()
- n_fix = 1
- max_try = 32
- chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
- chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
- yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
-
- while True:
- import os
-
- # https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
- yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
-
- yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
-
- if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
-            # only if the second step succeeded can we proceed with the steps below
- yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # 刷新Gradio前端界面
- if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
- ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original)
- if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
- ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)
-
- yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
-
- if mode!='translate_zh':
- yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
- print( f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
-                ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex', work_folder)
-
- yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
- ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
- ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
-
-
-        # <---------- check the results ----------->
- results_ = ""
- original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
- modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
- diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
- results_ += f"原始PDF编译是否成功: {original_pdf_success};"
- results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
- results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
- yield from update_ui_lastest_msg(f'第{n_fix}编译结束: {results_}...', chatbot, history) # 刷新Gradio前端界面
-
- if diff_pdf_success:
- result_pdf = pj(work_folder_modified, f'merge_diff.pdf') # get pdf path
- promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
- if modified_pdf_success:
- yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history) # 刷新Gradio前端界面
- result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path
- if os.path.exists(pj(work_folder, '..', 'translation')):
- shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
- promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
- return True # 成功啦
- else:
- if n_fix>=max_try: break
- n_fix += 1
- can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
- file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
- log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
- tex_name=f'{main_file_modified}.tex',
- tex_name_pure=f'{main_file_modified}',
- n_fix=n_fix,
- work_folder_modified=work_folder_modified,
- )
- yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history) # 刷新Gradio前端界面
- if not can_retry: break
-
- return False # 失败啦
-
-
-
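
The PRESERVE / TRANSFORM masking described in Part One of the file above can be illustrated with a small, self-contained sketch. This is a toy version written for this note, not the plugin's actual API: the names `mask_forbidden`, `split_by_mask`, and the sample string are invented, and only the idea of a character-level mask (PRESERVE = keep away from GPT, TRANSFORM = editable) comes from the deleted file.

```python
# Toy sketch of the PRESERVE/TRANSFORM masking idea; all names here are invented for illustration.
import re

import numpy as np

PRESERVE, TRANSFORM = 0, 1


def mask_forbidden(text: str, mask: np.ndarray, pattern: str) -> np.ndarray:
    # Mark every regex match as PRESERVE so it is left untouched (e.g. equation environments).
    for m in re.finditer(pattern, text, flags=re.DOTALL):
        mask[m.start():m.end()] = PRESERVE
    return mask


def split_by_mask(text: str, mask: np.ndarray):
    # Group consecutive characters that share the same flag into (flag, chunk) segments.
    segments, start = [], 0
    for i in range(1, len(text) + 1):
        if i == len(text) or mask[i] != mask[start]:
            segments.append((int(mask[start]), text[start:i]))
            start = i
    return segments


tex = r"Intro text. \begin{equation} e = mc^2 \end{equation} More prose."
mask = np.full(len(tex), TRANSFORM, dtype=np.uint8)
mask = mask_forbidden(tex, mask, r"\\begin\{equation\}.*?\\end\{equation\}")
for flag, chunk in split_by_mask(tex, mask):
    print("PRESERVE " if flag == PRESERVE else "TRANSFORM", repr(chunk))
```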
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2015.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2015.py
deleted file mode 100644
index 5feb0c61ff2738338527e1aceaa569051a655cf8..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2015.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py',
- '../../_base_/schedules/schedule_sgd_160e.py',
- '../../_base_/det_datasets/icdar2015.py',
- '../../_base_/det_pipelines/maskrcnn_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}}
-
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=4,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2015))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/trainer.py b/spaces/MLVKU/Human_Object_Interaction/hotr/engine/trainer.py
deleted file mode 100644
index 313b4f8f689735d0593e46ef154505fb40544c77..0000000000000000000000000000000000000000
--- a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/trainer.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# ------------------------------------------------------------------------
-# HOTR official code : engine/trainer.py
-# Copyright (c) Kakao Brain, Inc. and its affiliates. All Rights Reserved
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# ------------------------------------------------------------------------
-import math
-import torch
-import sys
-import hotr.util.misc as utils
-import hotr.util.logger as loggers
-from hotr.util.ramp import *
-from typing import Iterable
-import wandb
-
-def train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module,
- data_loader: Iterable, optimizer: torch.optim.Optimizer,
- device: torch.device, epoch: int, max_epoch: int, ramp_up_epoch: int,rampdown_epoch: int,max_consis_coef: float=1.0,max_norm: float = 0,dataset_file: str = 'coco', log: bool = False):
- model.train()
- criterion.train()
- metric_logger = loggers.MetricLogger(mode="train", delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
- space_fmt = str(len(str(max_epoch)))
- header = 'Epoch [{start_epoch: >{fill}}/{end_epoch}]'.format(start_epoch=epoch+1, end_epoch=max_epoch, fill=space_fmt)
- print_freq = int(len(data_loader)/5)
-
- if epoch<=rampdown_epoch:
- consis_coef=sigmoid_rampup(epoch,ramp_up_epoch,max_consis_coef)
- else:
- consis_coef=cosine_rampdown(epoch-rampdown_epoch,max_epoch-rampdown_epoch,max_consis_coef)
-
- print(f"\n>>> Epoch #{(epoch+1)}")
- for samples, targets in metric_logger.log_every(data_loader, print_freq, header):
- samples = samples.to(device)
- targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
-
- outputs = model(samples)
- loss_dict = criterion(outputs, targets, log)
- #print(loss_dict)
- weight_dict = criterion.weight_dict
-
- losses = sum(loss_dict[k] * weight_dict[k]*consis_coef if 'consistency' in k else loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict)
-
- # reduce losses over all GPUs for logging purposes
- loss_dict_reduced = utils.reduce_dict(loss_dict)
- loss_dict_reduced_unscaled = {f'{k}_unscaled': v
- for k, v in loss_dict_reduced.items()}
- loss_dict_reduced_scaled = {k: v * weight_dict[k]*consis_coef if 'consistency' in k else v * weight_dict[k] for k, v in loss_dict_reduced.items() if k in weight_dict}
- losses_reduced_scaled = sum(loss_dict_reduced_scaled.values())
- loss_value = losses_reduced_scaled.item()
-
-
- if not math.isfinite(loss_value):
- print("Loss is {}, stopping training".format(loss_value))
- print(loss_dict_reduced)
- sys.exit(1)
-
- optimizer.zero_grad()
- losses.backward()
- if max_norm > 0:
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
- optimizer.step()
-
- metric_logger.update(loss=loss_value, **loss_dict_reduced_scaled)
- if "obj_class_error" in loss_dict:
- metric_logger.update(obj_class_error=loss_dict_reduced['obj_class_error'])
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- if utils.get_rank() == 0 and log: wandb.log(loss_dict_reduced_scaled)
- print("Averaged stats:", metric_logger)
- return {k: meter.global_avg for k, meter in metric_logger.meters.items()}
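The consistency coefficient above ramps up via `sigmoid_rampup` and later decays via `cosine_rampdown`, both imported from `hotr.util.ramp`, which is not shown in this diff. A self-contained sketch using the usual Mean-Teacher-style definitions, which may differ in detail from the originals:

```python
import numpy as np

def sigmoid_rampup(current, rampup_length, max_value=1.0):
    # Exponential ramp from ~0 to max_value over rampup_length epochs.
    if rampup_length == 0:
        return max_value
    phase = 1.0 - np.clip(current, 0.0, rampup_length) / rampup_length
    return max_value * float(np.exp(-5.0 * phase * phase))

def cosine_rampdown(current, rampdown_length, max_value=1.0):
    # Cosine decay from max_value down to 0 over rampdown_length epochs.
    current = np.clip(current, 0.0, rampdown_length)
    return max_value * float(0.5 * (np.cos(np.pi * current / rampdown_length) + 1.0))

max_epoch, ramp_up_epoch, rampdown_epoch = 100, 30, 60
for epoch in range(0, max_epoch, 10):
    if epoch <= rampdown_epoch:
        coef = sigmoid_rampup(epoch, ramp_up_epoch)
    else:
        coef = cosine_rampdown(epoch - rampdown_epoch, max_epoch - rampdown_epoch)
    print(epoch, round(coef, 3))
```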
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/__init__.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/__init__.py
deleted file mode 100644
index 8ffba6afd9bf5e9848c891a855943ede73568c3b..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .modeling.meta_arch import custom_rcnn
-from .modeling.roi_heads import detic_roi_heads
-from .modeling.roi_heads import res5_roi_heads
-from .modeling.backbone import swintransformer
-from .modeling.backbone import timm
-
-
-from .data.datasets import lvis_v1
-from .data.datasets import imagenet
-from .data.datasets import cc
-from .data.datasets import objects365
-from .data.datasets import oid
-from .data.datasets import coco_zeroshot
-
-try:
- from .modeling.meta_arch import d2_deformable_detr
-except:
- pass
\ No newline at end of file
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/stare.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/stare.py
deleted file mode 100644
index 3f71b25488cc11a6b4d582ac52b5a24e1ad1cf8e..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/stare.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'STAREDataset'
-data_root = 'data/STARE'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (605, 700)
-crop_size = (128, 128)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type='RepeatDataset',
- times=40000,
- dataset=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
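The `RepeatDataset` wrapper with `times=40000` turns the tiny STARE training split into an iteration-based schedule rather than a conventional epoch loop. A rough back-of-the-envelope sketch; the image count and GPU count below are assumptions, not values from the config:

```python
n_train_images = 10       # assumed size of images/training
times = 40000             # RepeatDataset factor from the config
samples_per_gpu = 4
n_gpus = 4                # assumption: schedule tuned for 4 GPUs

virtual_len = n_train_images * times                      # 400,000 samples
iters_per_pass = virtual_len // (samples_per_gpu * n_gpus)
print(iters_per_pass)                                     # 25,000 iterations
```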
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/say.py b/spaces/MetaWabbit/Auto-GPT/autogpt/speech/say.py
deleted file mode 100644
index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/say.py
+++ /dev/null
@@ -1,41 +0,0 @@
-""" Text to speech module """
-import threading
-from threading import Semaphore
-
-from autogpt.config import Config
-from autogpt.speech.brian import BrianSpeech
-from autogpt.speech.eleven_labs import ElevenLabsSpeech
-from autogpt.speech.gtts import GTTSVoice
-from autogpt.speech.macos_tts import MacOSTTS
-
-CFG = Config()
-DEFAULT_VOICE_ENGINE = GTTSVoice()
-VOICE_ENGINE = None
-if CFG.elevenlabs_api_key:
- VOICE_ENGINE = ElevenLabsSpeech()
-elif CFG.use_mac_os_tts == "True":
- VOICE_ENGINE = MacOSTTS()
-elif CFG.use_brian_tts == "True":
- VOICE_ENGINE = BrianSpeech()
-else:
- VOICE_ENGINE = GTTSVoice()
-
-
-QUEUE_SEMAPHORE = Semaphore(
- 1
-)  # The number of sounds to queue before blocking the main thread
-
-
-def say_text(text: str, voice_index: int = 0) -> None:
- """Speak the given text using the given voice index"""
-
- def speak() -> None:
- success = VOICE_ENGINE.say(text, voice_index)
- if not success:
- DEFAULT_VOICE_ENGINE.say(text)
-
- QUEUE_SEMAPHORE.release()
-
- QUEUE_SEMAPHORE.acquire(True)
- thread = threading.Thread(target=speak)
- thread.start()
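The `Semaphore(1)` above serialises speech requests: a caller blocks in `acquire()` until the previously queued utterance has finished speaking and called `release()`. A minimal, self-contained sketch of the same pattern, with the TTS engine replaced by a sleep:

```python
import threading
import time

QUEUE_SEMAPHORE = threading.Semaphore(1)

def say_text(text):
    def speak():
        time.sleep(0.5)               # stand-in for VOICE_ENGINE.say(...)
        print("spoke:", text)
        QUEUE_SEMAPHORE.release()     # let the next queued call proceed

    QUEUE_SEMAPHORE.acquire(True)     # blocks while another utterance is pending
    threading.Thread(target=speak).start()

for t in ("one", "two", "three"):
    say_text(t)                       # calls are throttled by the semaphore
```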
diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/box_ops.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/box_ops.py
deleted file mode 100644
index 781068d294e576954edb4bd07b6e0f30e4e1bcd9..0000000000000000000000000000000000000000
--- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/box_ops.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Utilities for bounding box manipulation and GIoU.
-"""
-import torch
-from torchvision.ops.boxes import box_area
-
-
-def box_cxcywh_to_xyxy(x):
- x_c, y_c, w, h = x.unbind(-1)
- b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)]
- return torch.stack(b, dim=-1)
-
-
-def box_xyxy_to_cxcywh(x):
- x0, y0, x1, y1 = x.unbind(-1)
- b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)]
- return torch.stack(b, dim=-1)
-
-
-# modified from torchvision to also return the union
-def box_iou(boxes1, boxes2):
- area1 = box_area(boxes1)
- area2 = box_area(boxes2)
-
- # import ipdb; ipdb.set_trace()
- lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]
- rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]
-
- wh = (rb - lt).clamp(min=0) # [N,M,2]
- inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]
-
- union = area1[:, None] + area2 - inter
-
- iou = inter / (union + 1e-6)
- return iou, union
-
-
-def generalized_box_iou(boxes1, boxes2):
- """
- Generalized IoU from https://giou.stanford.edu/
-
- The boxes should be in [x0, y0, x1, y1] format
-
- Returns a [N, M] pairwise matrix, where N = len(boxes1)
- and M = len(boxes2)
- """
- # degenerate boxes gives inf / nan results
- # so do an early check
- assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
- assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
- # except:
- # import ipdb; ipdb.set_trace()
- iou, union = box_iou(boxes1, boxes2)
-
- lt = torch.min(boxes1[:, None, :2], boxes2[:, :2])
- rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])
-
- wh = (rb - lt).clamp(min=0) # [N,M,2]
- area = wh[:, :, 0] * wh[:, :, 1]
-
- return iou - (area - union) / (area + 1e-6)
-
-
-# modified from torchvision to also return the union
-def box_iou_pairwise(boxes1, boxes2):
- area1 = box_area(boxes1)
- area2 = box_area(boxes2)
-
- lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2]
- rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2]
-
- wh = (rb - lt).clamp(min=0) # [N,2]
- inter = wh[:, 0] * wh[:, 1] # [N]
-
- union = area1 + area2 - inter
-
- iou = inter / union
- return iou, union
-
-
-def generalized_box_iou_pairwise(boxes1, boxes2):
- """
- Generalized IoU from https://giou.stanford.edu/
-
- Input:
- - boxes1, boxes2: N,4
- Output:
- - giou: N
- """
- # degenerate boxes gives inf / nan results
- # so do an early check
- assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
- assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
- assert boxes1.shape == boxes2.shape
- iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4
-
- lt = torch.min(boxes1[:, :2], boxes2[:, :2])
- rb = torch.max(boxes1[:, 2:], boxes2[:, 2:])
-
- wh = (rb - lt).clamp(min=0) # [N,2]
- area = wh[:, 0] * wh[:, 1]
-
- return iou - (area - union) / area
-
-
-def masks_to_boxes(masks):
- """Compute the bounding boxes around the provided masks
-
- The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions.
-
- Returns a [N, 4] tensors, with the boxes in xyxy format
- """
- if masks.numel() == 0:
- return torch.zeros((0, 4), device=masks.device)
-
- h, w = masks.shape[-2:]
-
- y = torch.arange(0, h, dtype=torch.float)
- x = torch.arange(0, w, dtype=torch.float)
- y, x = torch.meshgrid(y, x)
-
- x_mask = masks * x.unsqueeze(0)
- x_max = x_mask.flatten(1).max(-1)[0]
- x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]
-
- y_mask = masks * y.unsqueeze(0)
- y_max = y_mask.flatten(1).max(-1)[0]
- y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]
-
- return torch.stack([x_min, y_min, x_max, y_max], 1)
-
-
-if __name__ == "__main__":
- x = torch.rand(5, 4)
- y = torch.rand(3, 4)
- iou, union = box_iou(x, y)
- import ipdb
-
- ipdb.set_trace()
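A quick usage check for the helpers above; this is a sketch, the import path is an assumption based on this repo layout, and the printed values are approximate:

```python
import torch
from groundingdino.util import box_ops   # assumed import path

boxes1 = torch.tensor([[0., 0., 2., 2.],
                       [1., 1., 3., 3.]])
boxes2 = torch.tensor([[0., 0., 2., 2.],
                       [4., 4., 5., 5.]])

iou, union = box_ops.box_iou(boxes1, boxes2)
giou = box_ops.generalized_box_iou(boxes1, boxes2)
print(iou)    # identical boxes -> ~1.0, disjoint boxes -> 0.0
print(giou)   # GIoU penalises disjoint pairs, so those entries go negative
```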
diff --git a/spaces/MohamedRabie26/Soil_Shear_Strength_Prediciton/README.md b/spaces/MohamedRabie26/Soil_Shear_Strength_Prediciton/README.md
deleted file mode 100644
index b31f80d44cdcfff7ad316713f66a8e90eb7339e0..0000000000000000000000000000000000000000
--- a/spaces/MohamedRabie26/Soil_Shear_Strength_Prediciton/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Soil Shear Strength prediction tool
-emoji: 🏆
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.45.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MrTitanicus/rvc-models/infer_pack/transforms.py b/spaces/MrTitanicus/rvc-models/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/MrTitanicus/rvc-models/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
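Because the rational-quadratic spline is an exact bijection, running it forward and then with `inverse=True` should recover the input and negate the log-determinant. A sanity-check sketch, assuming this file is importable as `transforms`; note that with linear tails the derivative tensor carries `num_bins - 1` entries:

```python
import torch
import transforms   # i.e. infer_pack/transforms.py on the import path (assumed)

torch.manual_seed(0)
num_bins = 10
x = torch.randn(2, 3)
w = torch.randn(2, 3, num_bins)
h = torch.randn(2, 3, num_bins)
d = torch.randn(2, 3, num_bins - 1)

y, logdet = transforms.piecewise_rational_quadratic_transform(
    x, w, h, d, inverse=False, tails="linear", tail_bound=5.0)
x_rec, neg_logdet = transforms.piecewise_rational_quadratic_transform(
    y, w, h, d, inverse=True, tails="linear", tail_bound=5.0)

print(torch.allclose(x, x_rec, atol=1e-4))             # True
print(torch.allclose(logdet, -neg_logdet, atol=1e-4))  # True
```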
diff --git a/spaces/MrVicente/RA-BART/data/relation_utils.py b/spaces/MrVicente/RA-BART/data/relation_utils.py
deleted file mode 100644
index ada9e4080d2e9040f22f14e3dd747200bc16c745..0000000000000000000000000000000000000000
--- a/spaces/MrVicente/RA-BART/data/relation_utils.py
+++ /dev/null
@@ -1,53 +0,0 @@
-
-#############################
-# Imports
-#############################
-
-# Python modules
-from collections import deque
-from ast import literal_eval
-
-# Remote modules
-import torch
-
-# Local modules
-
-#############################
-# Constants
-#############################
-
-##########################################################
-# Helper functions for Relations in dict format
-##########################################################
-
-def clean_relations(word_relations):
- new_relations = deque()
- for r in word_relations:
- rel = {}
- for r_key, r_value in r.items():
- normal_k = literal_eval(r_key)
- rel_d = {}
- for r_d_key, r_d_value in r_value.items():
- normal_d_k = literal_eval(r_d_key)
- rel_d[normal_d_k] = r_d_value
- rel[normal_k] = rel_d
- new_relations.append(rel)
- list_new_relations = list(new_relations)
- return list_new_relations
-
-##########################################################
-# Helper functions for Relations in Matrix format
-##########################################################
-
-def relation_binary_2d_to_1d(relations_binary_mask, dim=1):
- relations_binary_mask = relations_binary_mask.sum(dim=dim)
- relations_binary_mask[relations_binary_mask > 1] = 1
- return relations_binary_mask
-
-def tokens_with_relations(relations_binary_mask):
- relations_binary_mask_dim1 = relations_binary_mask.sum(dim=0)
- relations_binary_mask_dim2 = relations_binary_mask.sum(dim=1)
- tokens_with_rels = relations_binary_mask_dim1 + relations_binary_mask_dim2
- tokens_with_rels[tokens_with_rels > 1] = 1
- mask_rels = torch.tensor(tokens_with_rels, dtype=torch.bool)
- return mask_rels
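`clean_relations` exists because the relation dictionaries are serialised with their tuple keys stringified; `literal_eval` turns them back into real tuples while leaving the values untouched. A tiny illustration, with the import path assumed:

```python
from data.relation_utils import clean_relations   # assumed import path

word_relations = [{
    "(0, 1)": {"(2, 3)": "AtLocation"},
}]
print(clean_relations(word_relations))
# [{(0, 1): {(2, 3): 'AtLocation'}}]
```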
diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/wav_processors/__init__.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/wav_processors/__init__.py
deleted file mode 100644
index 4be97b377dcb95a0e6bceb876ac0ce93c8290249..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/wav_processors/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from . import base_processor
-from . import common_processors
diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_train_test.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_train_test.py
deleted file mode 100644
index 0a562e5331e638cab82bc8033bfa2c1fc355e960..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/pg_train_test.py
+++ /dev/null
@@ -1,87 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-"""Tests for pg_train.
-
-These tests exercise code paths available through configuration options.
-Training will be run for just a few steps with the goal being to check that
-nothing crashes.
-"""
-
-from absl import flags
-import tensorflow as tf
-
-from single_task import defaults # brain coder
-from single_task import run # brain coder
-
-FLAGS = flags.FLAGS
-
-
-class TrainTest(tf.test.TestCase):
-
- def RunTrainingSteps(self, config_string, num_steps=10):
- """Run a few training steps with the given config.
-
- Just check that nothing crashes.
-
- Args:
- config_string: Config encoded in a string. See
- $REPO_PATH/common/config_lib.py
- num_steps: Number of training steps to run. Defaults to 10.
- """
- config = defaults.default_config_with_updates(config_string)
- FLAGS.master = ''
- FLAGS.max_npe = num_steps * config.batch_size
- FLAGS.summary_interval = 1
- FLAGS.logdir = tf.test.get_temp_dir()
- FLAGS.config = config_string
- tf.reset_default_graph()
- run.main(None)
-
- def testVanillaPolicyGradient(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg"),'
- 'timestep_limit=90,batch_size=64')
-
- def testVanillaPolicyGradient_VariableLengthSequences(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg",eos_token=False),'
- 'timestep_limit=90,batch_size=64')
-
- def testVanillaActorCritic(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg",ema_baseline_decay=0.0),'
- 'timestep_limit=90,batch_size=64')
-
- def testPolicyGradientWithTopK(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg",topk_loss_hparam=1.0,topk=10),'
- 'timestep_limit=90,batch_size=64')
-
- def testVanillaActorCriticWithTopK(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg",ema_baseline_decay=0.0,topk_loss_hparam=1.0,'
- 'topk=10),'
- 'timestep_limit=90,batch_size=64')
-
- def testPolicyGradientWithTopK_VariableLengthSequences(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg",topk_loss_hparam=1.0,topk=10,eos_token=False),'
- 'timestep_limit=90,batch_size=64')
-
- def testPolicyGradientWithImportanceSampling(self):
- self.RunTrainingSteps(
- 'env=c(task="reverse"),'
- 'agent=c(algorithm="pg",alpha=0.5),'
- 'timestep_limit=90,batch_size=64')
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/Nee001/bing0/src/components/external-link.tsx b/spaces/Nee001/bing0/src/components/external-link.tsx
deleted file mode 100644
index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/components/external-link.tsx
+++ /dev/null
@@ -1,30 +0,0 @@
-export function ExternalLink({
- href,
- children
-}: {
- href: string
- children: React.ReactNode
-}) {
- return (
-    <a href={href} target="_blank" rel="noreferrer">
-      {children}
-    </a>
- )
-}
diff --git a/spaces/OAOA/DifFace/basicsr/models/realesrgan_model.py b/spaces/OAOA/DifFace/basicsr/models/realesrgan_model.py
deleted file mode 100644
index c74b28fb1dc6a7f5c5ad3f7d8bb96c19c52ee92b..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/models/realesrgan_model.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import numpy as np
-import random
-import torch
-from collections import OrderedDict
-from torch.nn import functional as F
-
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.losses.loss_util import get_refined_artifact_map
-from basicsr.models.srgan_model import SRGANModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-
-
-@MODEL_REGISTRY.register(suffix='basicsr')
-class RealESRGANModel(SRGANModel):
- """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
- 2. optimize the networks with GAN training.
- """
-
- def __init__(self, opt):
- super(RealESRGANModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt_usm, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
- self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
- self.gt_usm = self.usm_sharpener(self.gt)
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
-
- def optimize_parameters(self, current_iter):
- # usm sharpening
- l1_gt = self.gt_usm
- percep_gt = self.gt_usm
- gan_gt = self.gt_usm
- if self.opt['l1_gt_usm'] is False:
- l1_gt = self.gt
- if self.opt['percep_gt_usm'] is False:
- percep_gt = self.gt
- if self.opt['gan_gt_usm'] is False:
- gan_gt = self.gt
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
- if self.cri_ldl:
- self.output_ema = self.net_g_ema(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, l1_gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- if self.cri_ldl:
- pixel_weight = get_refined_artifact_map(self.gt, self.output, self.output_ema, 7)
- l_g_ldl = self.cri_ldl(torch.mul(pixel_weight, self.output), torch.mul(pixel_weight, self.gt))
- l_g_total += l_g_ldl
- loss_dict['l_g_ldl'] = l_g_ldl
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(gan_gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
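The `_dequeue_and_enqueue` pair pool is the less obvious part of this model: it buffers `queue_size` synthesised (LQ, GT) pairs and, once full, shuffles and swaps them so that a single optimisation step mixes degradations from several earlier batches. A stand-alone sketch of that mechanism (LQ and GT share a spatial size here purely for brevity):

```python
import torch

class PairPool:
    def __init__(self, queue_size, c, h, w):
        self.queue_lr = torch.zeros(queue_size, c, h, w)
        self.queue_gt = torch.zeros(queue_size, c, h, w)
        self.queue_size, self.ptr = queue_size, 0

    def __call__(self, lq, gt):
        b = lq.size(0)
        assert self.queue_size % b == 0
        if self.ptr == self.queue_size:          # pool full: shuffle and swap
            idx = torch.randperm(self.queue_size)
            self.queue_lr, self.queue_gt = self.queue_lr[idx], self.queue_gt[idx]
            lq_out, gt_out = self.queue_lr[:b].clone(), self.queue_gt[:b].clone()
            self.queue_lr[:b], self.queue_gt[:b] = lq.clone(), gt.clone()
            return lq_out, gt_out
        # pool not yet full: enqueue and pass the batch through unchanged
        self.queue_lr[self.ptr:self.ptr + b] = lq.clone()
        self.queue_gt[self.ptr:self.ptr + b] = gt.clone()
        self.ptr += b
        return lq, gt

pool = PairPool(queue_size=8, c=3, h=4, w=4)
for _ in range(6):
    lq, gt = pool(torch.rand(2, 3, 4, 4), torch.rand(2, 3, 4, 4))
```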
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/seg_ko.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
deleted file mode 100644
index c523d92634d9b61b97bbcdbfd17dfc33465bfc09..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-SCRIPT=`realpath $0`
-MECAB=`dirname $SCRIPT`/thirdparty/mecab-0.996-ko-0.9.2
-
-export PATH=$PATH:"$MECAB/bin":"$MECAB/lib"
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$MECAB/lib"
-
-cat - | mecab -O wakati
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py
deleted file mode 100644
index 5bf3e51e7a50ac3f07cc41739198cde946dc79aa..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import sys
-
-from fairseq.data import Dictionary
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="filters a lexicon given a unit dictionary"
- )
- parser.add_argument("-d", "--unit-dict", help="unit dictionary", required=True)
- return parser
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- d = Dictionary.load(args.unit_dict)
- symbols = set(d.symbols)
-
- for line in sys.stdin:
- items = line.rstrip().split()
- skip = len(items) < 2
- for x in items[1:]:
- if x not in symbols:
- skip = True
- break
- if not skip:
- print(line, end="")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/text_to_speech_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/text_to_speech_dataset.py
deleted file mode 100644
index abfcb2be4028889acd72c6f40d4c832e48cff344..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/text_to_speech_dataset.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-from pathlib import Path
-from typing import List, Dict, Optional, Any
-from dataclasses import dataclass
-
-import numpy as np
-import torch
-
-from fairseq.data.audio.speech_to_text_dataset import (
- SpeechToTextDataset, SpeechToTextDatasetCreator, S2TDataConfig,
- _collate_frames, get_features_or_waveform
-)
-from fairseq.data import Dictionary, data_utils as fairseq_data_utils
-
-
-@dataclass
-class TextToSpeechDatasetItem(object):
- index: int
- source: torch.Tensor
- target: Optional[torch.Tensor] = None
- speaker_id: Optional[int] = None
- duration: Optional[torch.Tensor] = None
- pitch: Optional[torch.Tensor] = None
- energy: Optional[torch.Tensor] = None
-
-
-class TextToSpeechDataset(SpeechToTextDataset):
- def __init__(
- self,
- split: str,
- is_train_split: bool,
- cfg: S2TDataConfig,
- audio_paths: List[str],
- n_frames: List[int],
- src_texts: Optional[List[str]] = None,
- tgt_texts: Optional[List[str]] = None,
- speakers: Optional[List[str]] = None,
- src_langs: Optional[List[str]] = None,
- tgt_langs: Optional[List[str]] = None,
- ids: Optional[List[str]] = None,
- tgt_dict: Optional[Dictionary] = None,
- pre_tokenizer=None,
- bpe_tokenizer=None,
- n_frames_per_step=1,
- speaker_to_id=None,
- durations: Optional[List[List[int]]] = None,
- pitches: Optional[List[str]] = None,
- energies: Optional[List[str]] = None
- ):
- super(TextToSpeechDataset, self).__init__(
- split, is_train_split, cfg, audio_paths, n_frames,
- src_texts=src_texts, tgt_texts=tgt_texts, speakers=speakers,
- src_langs=src_langs, tgt_langs=tgt_langs, ids=ids,
- tgt_dict=tgt_dict, pre_tokenizer=pre_tokenizer,
- bpe_tokenizer=bpe_tokenizer, n_frames_per_step=n_frames_per_step,
- speaker_to_id=speaker_to_id
- )
- self.durations = durations
- self.pitches = pitches
- self.energies = energies
-
- def __getitem__(self, index: int) -> TextToSpeechDatasetItem:
- s2t_item = super().__getitem__(index)
-
- duration, pitch, energy = None, None, None
- if self.durations is not None:
- duration = torch.tensor(
- self.durations[index] + [0], dtype=torch.long # pad 0 for EOS
- )
- if self.pitches is not None:
- pitch = get_features_or_waveform(self.pitches[index])
- pitch = torch.from_numpy(
- np.concatenate((pitch, [0])) # pad 0 for EOS
- ).float()
- if self.energies is not None:
- energy = get_features_or_waveform(self.energies[index])
- energy = torch.from_numpy(
- np.concatenate((energy, [0])) # pad 0 for EOS
- ).float()
- return TextToSpeechDatasetItem(
- index=index, source=s2t_item.source, target=s2t_item.target,
- speaker_id=s2t_item.speaker_id, duration=duration, pitch=pitch,
- energy=energy
- )
-
- def collater(self, samples: List[TextToSpeechDatasetItem]) -> Dict[str, Any]:
- if len(samples) == 0:
- return {}
-
- src_lengths, order = torch.tensor(
- [s.target.shape[0] for s in samples], dtype=torch.long
- ).sort(descending=True)
- id_ = torch.tensor([s.index for s in samples],
- dtype=torch.long).index_select(0, order)
- feat = _collate_frames(
- [s.source for s in samples], self.cfg.use_audio_input
- ).index_select(0, order)
- target_lengths = torch.tensor(
- [s.source.shape[0] for s in samples], dtype=torch.long
- ).index_select(0, order)
-
- src_tokens = fairseq_data_utils.collate_tokens(
- [s.target for s in samples],
- self.tgt_dict.pad(),
- self.tgt_dict.eos(),
- left_pad=False,
- move_eos_to_beginning=False,
- ).index_select(0, order)
-
- speaker = None
- if self.speaker_to_id is not None:
- speaker = torch.tensor(
- [s.speaker_id for s in samples], dtype=torch.long
- ).index_select(0, order).view(-1, 1)
-
- bsz, _, d = feat.size()
- prev_output_tokens = torch.cat(
- (feat.new_zeros((bsz, 1, d)), feat[:, :-1, :]), dim=1
- )
-
- durations, pitches, energies = None, None, None
- if self.durations is not None:
- durations = fairseq_data_utils.collate_tokens(
- [s.duration for s in samples], 0
- ).index_select(0, order)
- assert src_tokens.shape[1] == durations.shape[1]
- if self.pitches is not None:
- pitches = _collate_frames([s.pitch for s in samples], True)
- pitches = pitches.index_select(0, order)
- assert src_tokens.shape[1] == pitches.shape[1]
- if self.energies is not None:
- energies = _collate_frames([s.energy for s in samples], True)
- energies = energies.index_select(0, order)
- assert src_tokens.shape[1] == energies.shape[1]
- src_texts = [self.tgt_dict.string(samples[i].target) for i in order]
-
- return {
- "id": id_,
- "net_input": {
- "src_tokens": src_tokens,
- "src_lengths": src_lengths,
- "prev_output_tokens": prev_output_tokens,
- },
- "speaker": speaker,
- "target": feat,
- "durations": durations,
- "pitches": pitches,
- "energies": energies,
- "target_lengths": target_lengths,
- "ntokens": sum(target_lengths).item(),
- "nsentences": len(samples),
- "src_texts": src_texts,
- }
-
-
-class TextToSpeechDatasetCreator(SpeechToTextDatasetCreator):
- KEY_DURATION = "duration"
- KEY_PITCH = "pitch"
- KEY_ENERGY = "energy"
-
- @classmethod
- def _from_list(
- cls,
- split_name: str,
- is_train_split,
- samples: List[Dict],
- cfg: S2TDataConfig,
- tgt_dict,
- pre_tokenizer,
- bpe_tokenizer,
- n_frames_per_step,
- speaker_to_id
- ) -> TextToSpeechDataset:
- audio_root = Path(cfg.audio_root)
- ids = [s[cls.KEY_ID] for s in samples]
- audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples]
- n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples]
- tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples]
- src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples]
- speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples]
- src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples]
- tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples]
-
- durations = [s.get(cls.KEY_DURATION, None) for s in samples]
- durations = [
- None if dd is None else [int(d) for d in dd.split(" ")]
- for dd in durations
- ]
- durations = None if any(dd is None for dd in durations) else durations
-
- pitches = [s.get(cls.KEY_PITCH, None) for s in samples]
- pitches = [
- None if pp is None else (audio_root / pp).as_posix()
- for pp in pitches
- ]
- pitches = None if any(pp is None for pp in pitches) else pitches
-
- energies = [s.get(cls.KEY_ENERGY, None) for s in samples]
- energies = [
- None if ee is None else (audio_root / ee).as_posix()
- for ee in energies]
- energies = None if any(ee is None for ee in energies) else energies
-
- return TextToSpeechDataset(
- split_name, is_train_split, cfg, audio_paths, n_frames,
- src_texts, tgt_texts, speakers, src_langs, tgt_langs, ids, tgt_dict,
- pre_tokenizer, bpe_tokenizer, n_frames_per_step, speaker_to_id,
- durations, pitches, energies
- )
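One detail worth isolating from the collater above is how the decoder input is built: the target spectrogram is shifted right by one frame and a zero "go" frame is prepended, the standard teacher-forcing setup for autoregressive TTS. In isolation:

```python
import torch

bsz, n_frames, d = 2, 4, 3
feat = torch.arange(bsz * n_frames * d, dtype=torch.float).view(bsz, n_frames, d)

prev_output_tokens = torch.cat(
    (feat.new_zeros((bsz, 1, d)), feat[:, :-1, :]), dim=1)

print(prev_output_tokens[0, 0])                              # the zero "go" frame
print(torch.equal(prev_output_tokens[:, 1:], feat[:, :-1]))  # True: shifted targets
```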
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_lm_context_window.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_lm_context_window.py
deleted file mode 100644
index 7415e86abdf8ddc2d797092bf98f7a1331e038d6..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_lm_context_window.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-import torch
-from fairseq.data import MonolingualDataset
-from fairseq.tasks.language_modeling import LanguageModelingTask, LanguageModelingConfig
-from tests import utils as test_utils
-
-
-class TestLMContextWindow(unittest.TestCase):
-
- def test_eval_dataloader(self):
- dictionary = test_utils.dummy_dictionary(10)
- assert len(dictionary) == 14 # 4 extra special symbols
- assert dictionary.pad() == 1
-
- dataset = test_utils.TestDataset([
- torch.tensor([4, 5, 6, 7], dtype=torch.long),
- torch.tensor([8, 9, 10, 11], dtype=torch.long),
- torch.tensor([12, 13], dtype=torch.long),
- ])
- dataset = MonolingualDataset(dataset, sizes=[4, 4, 2], src_vocab=dictionary)
-
- config = LanguageModelingConfig(tokens_per_sample=4)
- task = LanguageModelingTask(config, dictionary)
-
- eval_dataloader = task.eval_lm_dataloader(
- dataset=dataset,
- batch_size=1,
- context_window=2,
- )
-
- batch = next(eval_dataloader)
- assert batch["net_input"]["src_tokens"][0].tolist() == [4, 5, 6, 7, 1, 1]
- assert batch["target"][0].tolist() == [4, 5, 6, 7, 1, 1]
-
- batch = next(eval_dataloader)
- assert batch["net_input"]["src_tokens"][0].tolist() == [6, 7, 8, 9, 10, 11]
- assert batch["target"][0].tolist() == [1, 1, 8, 9, 10, 11]
-
- batch = next(eval_dataloader)
- assert batch["net_input"]["src_tokens"][0].tolist() == [10, 11, 12, 13]
- assert batch["target"][0].tolist() == [1, 1, 12, 13]
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcocoplus.sh b/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcocoplus.sh
deleted file mode 100644
index 24f6d705332c1568cd873171c7246c890b48d5ef..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcocoplus.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/env bash
-
-log_dir=./refcocoplus_logs
-save_dir=./refcocoplus_checkpoints
-mkdir -p $log_dir $save_dir
-
-bpe_dir=../../utils/BPE
-user_dir=../../ofa_module
-
-data_dir=../../dataset/refcocoplus_data
-data=${data_dir}/refcocoplus_train.tsv,${data_dir}/refcocoplus_val.tsv
-restore_file=../../checkpoints/ofa_large.pt
-selected_cols=0,4,2,3
-
-task=refcoco
-arch=ofa_large
-criterion=ajust_label_smoothed_cross_entropy
-label_smoothing=0.1
-lr=3e-5
-max_epoch=5
-warmup_ratio=0.06
-batch_size=4
-update_freq=8
-resnet_drop_path_rate=0.0
-encoder_drop_path_rate=0.2
-decoder_drop_path_rate=0.2
-dropout=0.1
-attention_dropout=0.0
-max_src_length=80
-max_tgt_length=20
-num_bins=1000
-patch_image_size=512
-
-for max_epoch in {10,}; do
- echo "max_epoch "${max_epoch}
- for lr in {3e-5,}; do
- echo "lr "${lr}
- for patch_image_size in {512,}; do
- echo "patch_image_size "${patch_image_size}
-
- log_file=${log_dir}/${max_epoch}"_"${lr}"_"${patch_image_size}".log"
- save_path=${save_dir}/${max_epoch}"_"${lr}"_"${patch_image_size}
- mkdir -p $save_path
-
- CUDA_VISIBLE_DEVICES=0,1,2,3 python3 ../../train.py \
- $data \
- --selected-cols=${selected_cols} \
- --bpe-dir=${bpe_dir} \
- --user-dir=${user_dir} \
- --restore-file=${restore_file} \
- --reset-optimizer --reset-dataloader --reset-meters \
- --save-dir=${save_path} \
- --task=${task} \
- --arch=${arch} \
- --criterion=${criterion} \
- --label-smoothing=${label_smoothing} \
- --batch-size=${batch_size} \
- --update-freq=${update_freq} \
- --encoder-normalize-before \
- --decoder-normalize-before \
- --share-decoder-input-output-embed \
- --share-all-embeddings \
- --layernorm-embedding \
- --patch-layernorm-embedding \
- --code-layernorm-embedding \
- --resnet-drop-path-rate=${resnet_drop_path_rate} \
- --encoder-drop-path-rate=${encoder_drop_path_rate} \
- --decoder-drop-path-rate=${decoder_drop_path_rate} \
- --dropout=${dropout} \
- --attention-dropout=${attention_dropout} \
- --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \
- --lr-scheduler=polynomial_decay --lr=${lr} \
- --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \
- --log-format=simple --log-interval=10 \
- --fixed-validation-seed=7 \
- --no-epoch-checkpoints --keep-best-checkpoints=1 \
- --save-interval=1 --validate-interval=1 \
- --save-interval-updates=500 --validate-interval-updates=500 \
- --eval-acc \
- --eval-args='{"beam":5,"min_len":4,"max_len_a":0,"max_len_b":4}' \
- --best-checkpoint-metric=score --maximize-best-checkpoint-metric \
- --max-src-length=${max_src_length} \
- --max-tgt-length=${max_tgt_length} \
- --find-unused-parameters \
- --add-type-embedding \
- --scale-attn \
- --scale-fc \
- --scale-heads \
- --disable-entangle \
- --num-bins=${num_bins} \
- --patch-image-size=${patch_image_size} \
- --fp16 \
- --fp16-scale-window=512 \
- --num-workers=0 >> ${log_file} 2>&1
- done
- done
-done
\ No newline at end of file
diff --git a/spaces/OlaWod/FreeVC/modules.py b/spaces/OlaWod/FreeVC/modules.py
deleted file mode 100644
index 52ee14e41a5b6d67d875d1b694aecd2a51244897..0000000000000000000000000000000000000000
--- a/spaces/OlaWod/FreeVC/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
- self.kernel_size = kernel_size,
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
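For readers skimming this deleted module, the coupling layer above is invertible only because the second half of the channels is transformed by a plain scale-and-shift predicted from the untouched first half. Below is a minimal, self-contained sketch of that transform (illustrative only; it ignores `x_mask` and the WN encoder that produces `m`/`logs` in the real layer):

```python
import torch

# Illustrative stand-in for ResidualCouplingLayer's core math: m and logs are
# assumed to be predicted from the untouched half x0, so the same values are
# available in both directions and the map can be inverted exactly.
def coupling(x, m, logs, reverse=False):
    x0, x1 = torch.split(x, x.size(1) // 2, dim=1)
    if not reverse:
        y1 = m + x1 * torch.exp(logs)            # scale-and-shift the second half
        logdet = torch.sum(logs, dim=[1, 2])     # log|det J| of the affine map
        return torch.cat([x0, y1], dim=1), logdet
    y1 = (x1 - m) * torch.exp(-logs)             # exact inverse
    return torch.cat([x0, y1], dim=1)

x = torch.randn(2, 4, 8)                         # (batch, channels, time)
m, logs = torch.randn(2, 2, 8), 0.1 * torch.randn(2, 2, 8)
y, logdet = coupling(x, m, logs)
x_back = coupling(y, m, logs, reverse=True)
print(torch.allclose(x, x_back, atol=1e-6))      # True: round trip recovers x
```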
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py
deleted file mode 100644
index 369fb884930c5dd82f94024c45303dafaab14d66..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from abc import ABCMeta, abstractmethod
-import torch.nn as nn
-
-from detectron2.layers import ShapeSpec
-
-__all__ = ["Backbone"]
-
-
-class Backbone(nn.Module, metaclass=ABCMeta):
- """
- Abstract base class for network backbones.
- """
-
- def __init__(self):
- """
- The `__init__` method of any subclass can specify its own set of arguments.
- """
- super().__init__()
-
- @abstractmethod
- def forward(self):
- """
- Subclasses must override this method, but adhere to the same return type.
-
- Returns:
- dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor
- """
- pass
-
- @property
- def size_divisibility(self) -> int:
- """
- Some backbones require the input height and width to be divisible by a
- specific integer. This is typically true for encoder / decoder type networks
- with lateral connection (e.g., FPN) for which feature maps need to match
- dimension in the "bottom up" and "top down" paths. Set to 0 if no specific
- input size divisibility is required.
- """
- return 0
-
- def output_shape(self):
- """
- Returns:
- dict[str->ShapeSpec]
- """
- # this is a backward-compatible default
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
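As a usage note for the abstract class above: `output_shape()` assumes subclasses populate `_out_features`, `_out_feature_channels` and `_out_feature_strides`. A minimal sketch of a conforming subclass follows (the import path is assumed from the tree above and the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn
from detectron2.modeling.backbone import Backbone  # path assumed from the tree above

class ToyBackbone(Backbone):
    """Single-stage backbone that emits one feature map named "res2"."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, kernel_size=3, stride=4, padding=1)
        # output_shape() in the base class reads these attributes by convention.
        self._out_features = ["res2"]
        self._out_feature_channels = {"res2": 64}
        self._out_feature_strides = {"res2": 4}

    def forward(self, x):
        return {"res2": self.conv(x)}

backbone = ToyBackbone()
feats = backbone(torch.zeros(1, 3, 64, 64))
print(feats["res2"].shape)      # torch.Size([1, 64, 16, 16])
print(backbone.output_shape())  # {'res2': ShapeSpec(channels=64, stride=4, ...)}
```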
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py
deleted file mode 100644
index 376fc038919aa2a5bd696141e7bb6025d4981306..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/parallel/data_parallel.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# -*- coding: utf8 -*-
-
-import torch.cuda as cuda
-import torch.nn as nn
-import torch
-import collections.abc  # collections.Mapping/Sequence aliases were removed in Python 3.10
-from torch.nn.parallel._functions import Gather
-
-
-__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to']
-
-
-def async_copy_to(obj, dev, main_stream=None):
- if torch.is_tensor(obj):
- v = obj.cuda(dev, non_blocking=True)
- if main_stream is not None:
- v.data.record_stream(main_stream)
- return v
-    elif isinstance(obj, collections.abc.Mapping):
-        return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()}
-    elif isinstance(obj, collections.abc.Sequence):
- return [async_copy_to(o, dev, main_stream) for o in obj]
- else:
- return obj
-
-
-def dict_gather(outputs, target_device, dim=0):
- """
- Gathers variables from different GPUs on a specified device
- (-1 means the CPU), with dictionary support.
- """
- def gather_map(outputs):
- out = outputs[0]
- if torch.is_tensor(out):
- # MJY(20180330) HACK:: force nr_dims > 0
- if out.dim() == 0:
- outputs = [o.unsqueeze(0) for o in outputs]
- return Gather.apply(target_device, dim, *outputs)
- elif out is None:
- return None
-        elif isinstance(out, collections.abc.Mapping):
-            return {k: gather_map([o[k] for o in outputs]) for k in out}
-        elif isinstance(out, collections.abc.Sequence):
- return type(out)(map(gather_map, zip(*outputs)))
- return gather_map(outputs)
-
-
-class DictGatherDataParallel(nn.DataParallel):
- def gather(self, outputs, output_device):
- return dict_gather(outputs, output_device, dim=self.dim)
-
-
-class UserScatteredDataParallel(DictGatherDataParallel):
- def scatter(self, inputs, kwargs, device_ids):
- assert len(inputs) == 1
- inputs = inputs[0]
- inputs = _async_copy_stream(inputs, device_ids)
- inputs = [[i] for i in inputs]
- assert len(kwargs) == 0
- kwargs = [{} for _ in range(len(inputs))]
-
- return inputs, kwargs
-
-
-def user_scattered_collate(batch):
- return batch
-
-
-def _async_copy(inputs, device_ids):
- nr_devs = len(device_ids)
- assert type(inputs) in (tuple, list)
- assert len(inputs) == nr_devs
-
- outputs = []
- for i, dev in zip(inputs, device_ids):
- with cuda.device(dev):
- outputs.append(async_copy_to(i, dev))
-
- return tuple(outputs)
-
-
-def _async_copy_stream(inputs, device_ids):
- nr_devs = len(device_ids)
- assert type(inputs) in (tuple, list)
- assert len(inputs) == nr_devs
-
- outputs = []
- streams = [_get_stream(d) for d in device_ids]
- for i, dev, stream in zip(inputs, device_ids, streams):
- with cuda.device(dev):
- main_stream = cuda.current_stream()
- with cuda.stream(stream):
- outputs.append(async_copy_to(i, dev, main_stream=main_stream))
- main_stream.wait_stream(stream)
-
- return outputs
-
-
-"""Adapted from: torch/nn/parallel/_functions.py"""
-# background streams used for copying
-_streams = None
-
-
-def _get_stream(device):
- """Gets a background stream for copying between CPU and GPU"""
- global _streams
- if device == -1:
- return None
- if _streams is None:
- _streams = [None] * cuda.device_count()
- if _streams[device] is None: _streams[device] = cuda.Stream(device)
- return _streams[device]
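A hedged sketch of how the helpers above are meant to be combined: the dataset yields one already-built sub-batch per GPU, `user_scattered_collate` keeps the DataLoader from collating them, and the custom `scatter` moves each sub-batch to its device. The import path, dataset and model below are placeholders rather than names from the surrounding project, and running it needs at least one CUDA device.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

# Placeholder import path; point it at wherever this data_parallel module lives.
from data_parallel import UserScatteredDataParallel, user_scattered_collate

class SegDataset(Dataset):
    """Each item is a full per-GPU sub-batch (a dict of tensors)."""
    def __len__(self):
        return 4

    def __getitem__(self, i):
        return {"img": torch.randn(2, 3, 32, 32)}

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, feed):
        # feed is one per-GPU dict; scatter() has already copied it to this GPU.
        return self.conv(feed["img"])

# batch_size must equal len(device_ids): one raw sub-batch per device.
loader = DataLoader(SegDataset(), batch_size=1, collate_fn=user_scattered_collate)
model = UserScatteredDataParallel(Net().cuda(), device_ids=[0])

for batch in loader:      # batch is a plain Python list, not a collated tensor
    out = model(batch)    # custom scatter() async-copies each dict to its GPU
    print(out.shape)      # torch.Size([2, 8, 32, 32])
    break
```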
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/data/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py
deleted file mode 100644
index 3d2ad69f5c22adfe79d5fdabf920217628987166..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='GCHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- ratio=1 / 4.,
- pooling_type='att',
- fusion_types=('channel_add', ),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
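For orientation, files under `configs/_base_/models/` like this one are not run directly; they are merged into a full config and read through `mmcv.Config` (the pre-2.0 mmcv API that this annotator tree targets). A small hedged sketch:

```python
from mmcv import Config  # pre-2.0 mmcv API assumed

# Path taken from the diff header above; adjust to your checkout.
cfg = Config.fromfile(
    "annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py")

print(cfg.model.backbone.depth)           # 50
print(cfg.model.decode_head.type)         # 'GCHead'
print(cfg.model.decode_head.num_classes)  # 19 (Cityscapes-style default)

# Typical downstream tweak: retarget the heads to a different label set.
cfg.model.decode_head.num_classes = 150
cfg.model.auxiliary_head.num_classes = 150
```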
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/masked_conv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/masked_conv.py
deleted file mode 100644
index cd514cc204c1d571ea5dc7e74b038c0f477a008b..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/masked_conv.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['masked_im2col_forward', 'masked_col2im_forward'])
-
-
-class MaskedConv2dFunction(Function):
-
- @staticmethod
- def symbolic(g, features, mask, weight, bias, padding, stride):
- return g.op(
- 'mmcv::MMCVMaskedConv2d',
- features,
- mask,
- weight,
- bias,
- padding_i=padding,
- stride_i=stride)
-
- @staticmethod
- def forward(ctx, features, mask, weight, bias, padding=0, stride=1):
- assert mask.dim() == 3 and mask.size(0) == 1
- assert features.dim() == 4 and features.size(0) == 1
- assert features.size()[2:] == mask.size()[1:]
- pad_h, pad_w = _pair(padding)
- stride_h, stride_w = _pair(stride)
- if stride_h != 1 or stride_w != 1:
- raise ValueError(
-                'Stride must be 1 in masked_conv2d currently.')
- out_channel, in_channel, kernel_h, kernel_w = weight.size()
-
- batch_size = features.size(0)
- out_h = int(
- math.floor((features.size(2) + 2 * pad_h -
- (kernel_h - 1) - 1) / stride_h + 1))
-        out_w = int(
-            math.floor((features.size(3) + 2 * pad_w -
-                        (kernel_w - 1) - 1) / stride_w + 1))  # width must use kernel_w, not kernel_h
- mask_inds = torch.nonzero(mask[0] > 0, as_tuple=False)
- output = features.new_zeros(batch_size, out_channel, out_h, out_w)
- if mask_inds.numel() > 0:
- mask_h_idx = mask_inds[:, 0].contiguous()
- mask_w_idx = mask_inds[:, 1].contiguous()
- data_col = features.new_zeros(in_channel * kernel_h * kernel_w,
- mask_inds.size(0))
- ext_module.masked_im2col_forward(
- features,
- mask_h_idx,
- mask_w_idx,
- data_col,
- kernel_h=kernel_h,
- kernel_w=kernel_w,
- pad_h=pad_h,
- pad_w=pad_w)
-
-            masked_output = torch.addmm(bias[:, None],
-                                        weight.view(out_channel, -1), data_col)
- ext_module.masked_col2im_forward(
- masked_output,
- mask_h_idx,
- mask_w_idx,
- output,
- height=out_h,
- width=out_w,
- channels=out_channel)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- return (None, ) * 5
-
-
-masked_conv2d = MaskedConv2dFunction.apply
-
-
-class MaskedConv2d(nn.Conv2d):
- """A MaskedConv2d which inherits the official Conv2d.
-
-    The masked forward does not implement a backward function and currently
-    only supports a stride of 1.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- bias=True):
- super(MaskedConv2d,
- self).__init__(in_channels, out_channels, kernel_size, stride,
- padding, dilation, groups, bias)
-
- def forward(self, input, mask=None):
- if mask is None: # fallback to the normal Conv2d
- return super(MaskedConv2d, self).forward(input)
- else:
- return masked_conv2d(input, mask, self.weight, self.bias,
- self.padding)
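A hedged usage sketch for the wrapper above. The shapes follow the asserts in `MaskedConv2dFunction.forward` (batch size 1, mask of shape `(1, H, W)` matching the input), and it needs the compiled mmcv CUDA extension; `mmcv.ops` is how the op is normally exposed, though this copy lives under the annotator tree.

```python
import torch
from mmcv.ops import MaskedConv2d  # adjust the import if using the vendored copy above

conv = MaskedConv2d(16, 32, kernel_size=3, padding=1).cuda()
x = torch.randn(1, 16, 20, 20, device="cuda")                 # batch size must be 1
mask = (torch.rand(1, 20, 20, device="cuda") > 0.5).float()   # (1, H, W), same H/W as x

out_masked = conv(x, mask)  # convolution evaluated only where mask > 0
out_full = conv(x)          # mask=None falls back to the ordinary Conv2d forward
print(out_masked.shape, out_full.shape)  # both torch.Size([1, 32, 20, 20])
```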
diff --git a/spaces/PAIR/Text2Video-Zero/app_canny.py b/spaces/PAIR/Text2Video-Zero/app_canny.py
deleted file mode 100644
index 8cf1d22adf9add87a351abb6eae306d4ce29fdb7..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/app_canny.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import gradio as gr
-from model import Model
-import os
-on_huggingspace = os.environ.get("SPACE_AUTHOR_NAME") == "PAIR"
-
-
-def create_demo(model: Model):
-
- examples = [
- ["__assets__/canny_videos_edge/butterfly.mp4",
- "white butterfly, a high-quality, detailed, and professional photo"],
- ["__assets__/canny_videos_edge/deer.mp4",
- "oil painting of a deer, a high-quality, detailed, and professional photo"],
- ["__assets__/canny_videos_edge/fox.mp4",
- "wild red fox is walking on the grass, a high-quality, detailed, and professional photo"],
- ["__assets__/canny_videos_edge/girl_dancing.mp4",
- "oil painting of a girl dancing close-up, masterpiece, a high-quality, detailed, and professional photo"],
- ["__assets__/canny_videos_edge/girl_turning.mp4",
- "oil painting of a beautiful girl, a high-quality, detailed, and professional photo"],
- ["__assets__/canny_videos_edge/halloween.mp4",
- "beautiful girl halloween style, a high-quality, detailed, and professional photo"],
- ["__assets__/canny_videos_edge/santa.mp4",
- "a santa claus, a high-quality, detailed, and professional photo"],
- ]
-
- with gr.Blocks() as demo:
- with gr.Row():
- gr.Markdown('## Text and Canny-Edge Conditional Video Generation')
- with gr.Row():
- gr.HTML(
- """
-
-
- Description: For performance purposes, our current preview release supports any input videos but caps output videos after 80 frames and the input videos are scaled down before processing.
-
-
-
-
\ No newline at end of file
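The tail of `app_canny.py` is truncated in this hunk (only the header markdown and part of the description HTML survive). For readers trying to picture the missing part, here is a heavily hedged sketch of how a Gradio Blocks demo of this shape is usually wired; `model.infer` and every widget name below are placeholders, not recovered from the deleted file:

```python
import gradio as gr

def create_demo(model):
    # Placeholder inference hook; the real file calls into the Model class instead.
    def run(video_path, prompt):
        return model.infer(video_path, prompt)  # hypothetical method name

    with gr.Blocks() as demo:
        gr.Markdown('## Text and Canny-Edge Conditional Video Generation')
        with gr.Row():
            video = gr.Video(label="Input video")
            prompt = gr.Textbox(label="Prompt")
        result = gr.Video(label="Generated video")
        gr.Button("Run").click(fn=run, inputs=[video, prompt], outputs=result)
    return demo
```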
diff --git a/spaces/bioriAsaeru/text-to-voice/Calendar 366 II 2.4.2 Crack Mac Osx The Ultimate Calendar App for Your Mac.md b/spaces/bioriAsaeru/text-to-voice/Calendar 366 II 2.4.2 Crack Mac Osx The Ultimate Calendar App for Your Mac.md
deleted file mode 100644
index 9a0179f420b5e27c20eb3742c4303d11acc55b39..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Calendar 366 II 2.4.2 Crack Mac Osx The Ultimate Calendar App for Your Mac.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
All X11 libraries upgraded from 6.9.2 to 7.2.0
GNU gettext upgraded from 0.14.5 to 0.16.1
JASper JPEG-2000 upgraded from 1.701.0 to 1.900.1
TIFF upgraded from 3.7.4 to 3.8.2
libpng upgraded from 1.2.8 to 1.2.18
libmng upgraded from 1.0.9 to 1.0.10
FreeType 2 upgraded from 2.1.10 to 2.3.5
libgd2 upgraded from 2.0.33 to 2.0.35
libgif upgraded from 4.1.0 to 4.1.4
NetPBM upgraded from 10.26.14 to 10.34
GTK 2 upgraded from 2.8.9 to 2.10.14
Pango upgraded from 1.10.2 to 1.17.3
Cairo upgraded from 1.0.2 to 1.4.10
GNU readline upgraded from 5.1 to 5.2
BerkeleyDB upgraded from 4.3.28 to 4.6.18
Expat upgraded from 1.95.8 to 2.0.1
libxml2 upgraded from 2.6.22 to 2.6.29
libxslt upgraded from 1.1.15 to 1.1.21
XMLSEC 1 upgraded from 1.2.9 to 1.2.10
OpenSSL 0.9.8e added
OpenSSL 0.9.7 upgraded from 0.9.7i to 0.9.7m
OpenSSL 0.9.6m has been deprecated and no longer provided
OpenLDAP upgraded from 2.2.30 to 2.3.37
Cyrus SASL upgraded from 2.1.20 to 2.1.22
MM upgraded from 1.4.0 to 1.4.2
PCRE upgraded from 6.4 to 7.2
LCMS upgraded from 1.15 to 1.16
libIDL upgraded from 0.8.6 to 0.8.8
cURL upgraded from 7.15.1 to 7.16.4
Sablotron upgraded from 1.0.2 to 1.0.3
ICU upgraded from 3.4 to 3.6
FontConfig upgraded from 2.2.2 to 2.4.2
trio upgraded from 1.10 to 1.12
libart LGPL upgraded from 2.3.17 to 2.3.19
GNOME Options Library (libpopt) upgraded from 1.7 to 1.10.4
GNOME Structured File library (libgsf) upgraded from 1.13.3 to 1.14.5
GNOME CSS library (libcroco) upgraded from 0.6.0 to 0.6.1
librsvg upgraded from 2.13.3 to 2.18.0
libexif upgraded from 0.6.12 to 0.6.16
GnuPG upgraded from 1.4.0 to 1.4.7
libgcrypt upgraded from 1.2.2 to 1.2.4
libgpg-error upgraded from 1.0.0 to 1.5
Tcl 8.4 upgraded from 8.4.10 to 8.4.15
Tk 8.4 upgraded from 8.4.10 to 8.4.14
For more details please see Appendix: Graphics libraries, Perl modules, and PHP PEAR modules.
Java 2 SE 1.4.2
OpenServer 6.0.0 can have both J2SE 1.4.2 and J2SE 5.0 installed and functional at the same time. J2SE 1.4.2 is used specifically by various OpenServer tools and by default is updated to version 1.4.2_16 when you install OpenServer 6.0.0 MP3 CD #1.
-
Software Components and PackagesAbbreviationFCS VersionMP2 VersionMP3/MP4 Version3D Athena Widget Set for X11xaw3d1.5E1.5E1.5EApache Portable Runtime Utility Libraryaprutiln/an/a1.2.8Apache Portable Runtimeaprn/an/a1.2.9Accessibility Toolkitatk1.8.01.10.31.10.3bzip2 compression library and utilitiesbzip21.0.31.0.31.0.3Cairo Graphics Librarycairon/a1.0.21.4.10compface Image Manipulation Librarycompface1.0.01.0.01.5.2cURL URL Librarycurl7.13.27.15.17.16.4Berkeley-DB Database Librarybdb4.3.274.3.284.6.18Expat XML Parserexpat1.95.81.95.82.0.1Expect TCL Extensionexpect5.425.435.43FontConfigfontcfg2.2.22.2.22.4.2FreeType Font Engine Version 1freetype11.3.11.3.11.3.1FreeType Font Enginefreetype22.1.92.1.102.3.5GD Graphics Librarygd11.8.41.8.41.8.4GD Graphics Librarygd22.0.332.0.332.0.35GNU dbm Librarygdbm1.8.01.8.01.8.0Gnome DOM Librarygdome20.8.10.8.10.8.1GNU gettextgettext0.14.10.14.50.16.1GIF Image Manipulation Librarygiflib4.1.04.1.04.1.4GIMP Portability Libraryglib11.2.101.2.101.2.10GIMP Portability Libraryglib22.4.82.8.42.12.13GNU Privacy Guard (gnupg)gnupg1.4.01.4.01.4.7GIMP Toolkitgtk11.2.101.2.101.2.10GIMP Toolkitgtk22.4.142.8.92.10.14GWXLIBS Base Support Toolsgwxlibs2.0.02.1.03.0.0International Components for Unicode (ICU)icu3.23.43.6Enlightenment Imaging Libraryimlib1.10.01.10.01.9.15JASper JPEG2000 libraryjasper1.701.01.701.01.900.1ISO/IEC 11544:1993 JBIG kitjbig1.61.61.6IJG JPEG libraryjpeg6b6b6bJavaScript Embedded C Libraryjs1.5rc51.5rc51.5Little Color Management System (LCMS)lcms1.141.151.16Gnome IDL LibrarylibIDL0.850.8.60.8.8Gnome ART librarylibart2.3.172.3.172.3.19Gnome CSS2 Parsing Toolkit (libcroco)libcroco0.6.00.6.00.6.1Gnome EXIF Widget for GTKexifgtk0.3.50.3.50.3.5EXIF Processing Librarylibexif0.6.100.6.120.6.16GNU Cryptographic Librarylibgcrypt1.2.11.2.21.2.4Gnome HTTP Client Librarylibghttp1.0.91.0.91.0.9GNU Privacy Guard Error Librarylibgpg-err1.0.01.0.01.5Gnome Structured File Librarylibgsf1.11.11.13.31.14.5Gnome HTML Widget for GTKgtkhtml2.6.32.11.02.11.0Multi-image Network Graphics (MNG) Librarylibmng1.0.91.0.91.0.10Portable Network Graphics (PNG) Librarylibpng1.2.81.2.81.2.18Gnome SVG Rendering Librarylibrsvg2.9.52.13.32.18.0WMF Conversion Librarylibwmfn/a0.2.8.40.2.8.4W3C Consortium Library (libwww)libwww5.405.405.40libxml2 XML C Parser and Toolkitlibxml22.6.192.6.222.6.29libxslt XSLT C Parser and Toolkitlibxslt1.1.141.1.151.1.21Libtool Dynamic Loadingltdl1.5.221.5.221.5.22MD5 Hash Librarymd51.0.01.0.01.0.0mktempmktemp1.51.51.5OSSP mm Shared Memory Allocation Librarymm1.3.11.4.01.4.2MPEG Encoder/Decoder Librarympeglib1.2.11.2.11.3.1Portable Bitmap Utilities and Librariesnetpbm10.26.110.26.1410.34OpenLDAPopenldap2.2.242.2.302.3.37OpenSLP (Service Location Protocol)openslp1.2.11.2.11.2.1OpenSSLopenssl0.9.7g0.9.7i/0.9.6m0.9.7m/0.9.8e*Pango Layout and Text Rendering Librarypango1.4.11.10.21.17.3Perl Compatible Regular Expressionspcre5.06.47.2pkg-configpkgconfigpre 0.190.190.22Gnome Option Processing Librarypopt1.71.71.10.4True Random Libraryrand1.0.01.0.01.0.0GNU readlinereadline5.05.15.2Sablotron XML, DOM and XPath Processorsablot1.0.11.0.21.0.3Cyrus SASLsasl2.1.202.1.20**2.1.22S-lang Interpreter and Libraryslang1.4.91.4.91.4.9Tcl 8.4tcl848.4.98.4.108.4.15Extended Tcltclx848.3.58.3.58.4TIFF library and utilitiestiff3.7.23.7.43.8.2Tk 8.4tk848.4.98.4.108.4.14trio printf librarytrio1.101.101.12Xalan XSLT Processorxalan1.9.01.10.01.10.0Xerces Validating XML C++ Parserxerces2.6.02.7.02.7.0XML Security Libraryxmlsec11.2.81.2.91.2.10X.org FontsXORGFonts6.8.26.9.07.2.0X.org 
RuntimeXORGRT6.8.26.9.07.2.0zlib compression libraryzlib1.2.21.2.31.2.3*For OpenServer 6.0.0 MP3,OpenSSL 0.9.8e has been added;OpenSSL 0.9.7 is upgraded from 0.9.7i to 0.9.7m;and OpenSSL 0.9.6m is deprecated and no longer provided.**With respect to Cyrus-SASL:the version did not change in OpenServer 6.0.0 MP2 but the way it was compiledsignificantly changed.In previous (prior to OpenServer 6.0.0 MP2) releases all of the backendswere static.All the backends are now dynamic.
Are you looking for a way to play the Crash Bandicoot N Sane Trilogy on your PC without spending any money or going through any online activation process? If so, you may be interested in the ElAmigos hack tool, which allows you to download and install the game for free and enjoy all its features and content. In this article, we will tell you what the Crash Bandicoot N Sane Trilogy is, what the ElAmigos hack tool is, and how to use it.
The Crash Bandicoot N Sane Trilogy is a collection of three remastered games that were originally released for the PlayStation in the late 1990s: Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, and Crash Bandicoot 3: Warped. These games are platformers that feature the titular marsupial as he battles the evil Dr. Neo Cortex and his minions. The games are known for their colorful graphics, catchy music, and challenging levels.
-
The Crash Bandicoot N Sane Trilogy was released for the PlayStation 4 in 2017, and for the PC, Xbox One, and Nintendo Switch in 2018. The remastered version was developed by Vicarious Visions and Iron Galaxy, and published by Activision. The remastered version features updated graphics, sound, and gameplay, as well as new content such as two bonus levels: Stormy Ascent and Future Tense.
-
What is the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy?
-
The ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy is a program that allows you to play the PC version of the game without having to buy it or activate it online. The hack tool is based on the crack by Codex, and it includes all the updates and DLCs that were released for the game until July 2018. The hack tool also allows you to choose from six languages: English, French, Italian, German, Spanish, and Japanese.
-
The ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy is easy to use and install. You just need to download it from a reliable source, such as ElAmigos-Games.com, extract it to your desired location, and run the setup.exe file. The installation process will take only a few minutes, depending on your CPU speed and hard drive space. After that, you can launch the game from the desktop shortcut or the start menu.
-
-
How to use the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy?
-
To use the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy, you need to follow these simple steps:
-
-
Visit ElAmigos-Games.com and search for Crash Bandicoot N Sane Trilogy.
-
Click on the download link and choose a server from the list.
-
Wait for the download to finish and extract the RAR file with WinRAR or 7-Zip.
-
Run the setup.exe file and follow the instructions on the screen.
-
Select your preferred language and destination folder.
-
Wait for the installation to complete and close the setup.
-
Launch the game from the desktop shortcut or the start menu.
-
Enjoy playing Crash Bandicoot N Sane Trilogy with the ElAmigos hack tool.
-
-
Conclusion
-
The ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy is a great way to play one of the best platformer games of all time on your PC without spending any money or going through any online activation process. The hack tool allows you to enjoy all the features and content of the game without any limitations or restrictions. The hack tool is also easy to use and install, and it supports six languages. If you are a fan of Crash Bandicoot or platformer games in general, you should definitely try out the ElAmigos hack tool for the Crash Bandicoot N Sane Trilogy.
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Doce Pilares Jim Rohn PDF Aprende las leyes del liderazgo y la prosperidad.md b/spaces/bioriAsaeru/text-to-voice/Doce Pilares Jim Rohn PDF Aprende las leyes del liderazgo y la prosperidad.md
deleted file mode 100644
index cb9e2901447a543e768d8a6d04eb3b16d257dbe0..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Doce Pilares Jim Rohn PDF Aprende las leyes del liderazgo y la prosperidad.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Hamsterball Gold A Cute and Addictive Racing Game - Free Download Full Version.md b/spaces/cihyFjudo/fairness-paper-search/Hamsterball Gold A Cute and Addictive Racing Game - Free Download Full Version.md
deleted file mode 100644
index 211ebc45e2692b71f793bc4362763bb3fd89a5a5..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Hamsterball Gold A Cute and Addictive Racing Game - Free Download Full Version.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing How to Record and Replay Your Mouse and Keyboard Movements.md b/spaces/cihyFjudo/fairness-paper-search/Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing How to Record and Replay Your Mouse and Keyboard Movements.md
deleted file mode 100644
index b6611e9c2139f4f2c37b0492dd46553cf23a2edf..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing How to Record and Replay Your Mouse and Keyboard Movements.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Mouse Recorder Pro 2 2.0.7.4 18 acceleratori gthing
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dfpwmenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dfpwmenc.c
deleted file mode 100644
index 5318b04a390deee40612c5aa2242c60c132d7630..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dfpwmenc.c
+++ /dev/null
@@ -1,121 +0,0 @@
-/*
- * DFPWM encoder
- * Copyright (c) 2022 Jack Bruienne
- * Copyright (c) 2012, 2016 Ben "GreaseMonkey" Russell
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * DFPWM1a encoder
- */
-
-#include "libavutil/internal.h"
-#include "avcodec.h"
-#include "codec_id.h"
-#include "codec_internal.h"
-#include "encode.h"
-
-typedef struct {
- int fq, q, s, lt;
-} DFPWMState;
-
-// DFPWM codec from https://github.com/ChenThread/dfpwm/blob/master/1a/
-// Licensed in the public domain
-
-// note, len denotes how many compressed bytes there are (uncompressed bytes / 8).
-static void au_compress(DFPWMState *state, int len, uint8_t *outbuf, const uint8_t *inbuf)
-{
- unsigned d = 0;
- for (int i = 0; i < len; i++) {
- for (int j = 0; j < 8; j++) {
- int nq, st, ns;
- // get sample
- int v = *(inbuf++) - 128;
- // set bit / target
- int t = (v > state->q || (v == state->q && v == 127) ? 127 : -128);
- d >>= 1;
- if(t > 0)
- d |= 0x80;
-
- // adjust charge
- nq = state->q + ((state->s * (t-state->q) + 512)>>10);
- if(nq == state->q && nq != t)
- nq += (t == 127 ? 1 : -1);
- state->q = nq;
-
- // adjust strength
- st = (t != state->lt ? 0 : 1023);
- ns = state->s;
- if(ns != st)
- ns += (st != 0 ? 1 : -1);
- if(ns < 8) ns = 8;
- state->s = ns;
-
- state->lt = t;
- }
-
- // output bits
- *(outbuf++) = d;
- }
-}
-
-static av_cold int dfpwm_enc_init(struct AVCodecContext *ctx)
-{
- DFPWMState *state = ctx->priv_data;
-
- state->fq = 0;
- state->q = 0;
- state->s = 0;
- state->lt = -128;
-
- ctx->bits_per_coded_sample = 1;
-
- return 0;
-}
-
-static int dfpwm_enc_frame(struct AVCodecContext *ctx, struct AVPacket *packet,
- const struct AVFrame *frame, int *got_packet)
-{
- DFPWMState *state = ctx->priv_data;
- int size = frame->nb_samples * frame->ch_layout.nb_channels / 8 + (frame->nb_samples % 8 > 0 ? 1 : 0);
- int ret = ff_get_encode_buffer(ctx, packet, size, 0);
-
- if (ret) {
- *got_packet = 0;
- return ret;
- }
-
- au_compress(state, size, packet->data, frame->data[0]);
-
- *got_packet = 1;
- return 0;
-}
-
-const FFCodec ff_dfpwm_encoder = {
- .p.name = "dfpwm",
- CODEC_LONG_NAME("DFPWM1a audio"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_DFPWM,
- .priv_data_size = sizeof(DFPWMState),
- .init = dfpwm_enc_init,
- FF_CODEC_ENCODE_CB(dfpwm_enc_frame),
- .p.sample_fmts = (const enum AVSampleFormat[]){AV_SAMPLE_FMT_U8, AV_SAMPLE_FMT_NONE},
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_VARIABLE_FRAME_SIZE |
- AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
-};
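To make the bit-twiddling in `au_compress` easier to follow, here is a hedged Python transcription of the same DFPWM1a update rules (illustrative only; the C above is the reference, and trailing samples that do not fill a byte are simply dropped here):

```python
def dfpwm_compress(samples):
    """Pack unsigned 8-bit samples into DFPWM1a bytes (8 samples -> 1 byte)."""
    q, s, lt = 0, 0, -128              # predictor charge, strength, last target
    out, d = bytearray(), 0
    for n, sample in enumerate(samples):
        v = sample - 128                                   # recentre u8 to [-128, 127]
        t = 127 if (v > q or (v == q == 127)) else -128    # 1-bit decision
        d = (d >> 1) | (0x80 if t > 0 else 0)              # LSB-first bit packing

        nq = q + ((s * (t - q) + 512) >> 10)               # move charge toward target
        if nq == q and nq != t:
            nq += 1 if t == 127 else -1                    # guarantee progress
        q = nq

        st = 1023 if t == lt else 0                        # adapt response strength
        s += 1 if st > s else (-1 if st < s else 0)
        s = max(s, 8)
        lt = t

        if n % 8 == 7:                                     # flush every 8 input samples
            out.append(d)
            d = 0
    return bytes(out)

print(dfpwm_compress([128] * 16).hex())  # 16 samples of silence -> 2 bytes
```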
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac.h
deleted file mode 100644
index e6d9d346d9cc13c23092f5e1f4501d20157e1def..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dirac.h
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * Copyright (C) 2007 Marco Gerards
- * Copyright (C) 2009 David Conrad
- * Copyright (C) 2011 Jordi Ortiz
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_DIRAC_H
-#define AVCODEC_DIRAC_H
-
-/**
- * @file
- * Interface to Dirac Decoder/Encoder
- * @author Marco Gerards
- * @author David Conrad
- * @author Jordi Ortiz
- */
-
-#include "avcodec.h"
-
-/**
- * The spec limits the number of wavelet decompositions to 4 for both
- * level 1 (VC-2) and 128 (long-gop default).
- * 5 decompositions is the maximum before >16-bit buffers are needed.
- * Schroedinger allows this for DD 9,7 and 13,7 wavelets only, limiting
- * the others to 4 decompositions (or 3 for the fidelity filter).
- *
- * We use this instead of MAX_DECOMPOSITIONS to save some memory.
- */
-#define MAX_DWT_LEVELS 5
-
-/**
- * Parse code values:
- *
- * Dirac Specification ->
- * 9.6.1 Table 9.1
- *
- * VC-2 Specification ->
- * 10.4.1 Table 10.1
- */
-
-enum DiracParseCodes {
- DIRAC_PCODE_SEQ_HEADER = 0x00,
- DIRAC_PCODE_END_SEQ = 0x10,
- DIRAC_PCODE_AUX = 0x20,
- DIRAC_PCODE_PAD = 0x30,
- DIRAC_PCODE_PICTURE_CODED = 0x08,
- DIRAC_PCODE_PICTURE_RAW = 0x48,
- DIRAC_PCODE_PICTURE_LOW_DEL = 0xC8,
- DIRAC_PCODE_PICTURE_HQ = 0xE8,
- DIRAC_PCODE_INTER_NOREF_CO1 = 0x0A,
- DIRAC_PCODE_INTER_NOREF_CO2 = 0x09,
- DIRAC_PCODE_INTER_REF_CO1 = 0x0D,
- DIRAC_PCODE_INTER_REF_CO2 = 0x0E,
- DIRAC_PCODE_INTRA_REF_CO = 0x0C,
- DIRAC_PCODE_INTRA_REF_RAW = 0x4C,
- DIRAC_PCODE_INTRA_REF_PICT = 0xCC,
- DIRAC_PCODE_MAGIC = 0x42424344,
-};
-
-typedef struct DiracVersionInfo {
- int major;
- int minor;
-} DiracVersionInfo;
-
-typedef struct AVDiracSeqHeader {
- unsigned width;
- unsigned height;
- uint8_t chroma_format; ///< 0: 444 1: 422 2: 420
-
- uint8_t interlaced;
- uint8_t top_field_first;
-
- uint8_t frame_rate_index; ///< index into dirac_frame_rate[]
- uint8_t aspect_ratio_index; ///< index into dirac_aspect_ratio[]
-
- uint16_t clean_width;
- uint16_t clean_height;
- uint16_t clean_left_offset;
- uint16_t clean_right_offset;
-
- uint8_t pixel_range_index; ///< index into dirac_pixel_range_presets[]
- uint8_t color_spec_index; ///< index into dirac_color_spec_presets[]
-
- int profile;
- int level;
-
- AVRational framerate;
- AVRational sample_aspect_ratio;
-
- enum AVPixelFormat pix_fmt;
- enum AVColorRange color_range;
- enum AVColorPrimaries color_primaries;
- enum AVColorTransferCharacteristic color_trc;
- enum AVColorSpace colorspace;
-
- DiracVersionInfo version;
- int bit_depth;
-} AVDiracSeqHeader;
-
-/**
- * Parse a Dirac sequence header.
- *
- * @param dsh this function will allocate and fill an AVDiracSeqHeader struct
- * and write it into this pointer. The caller must free it with
- * av_free().
- * @param buf the data buffer
- * @param buf_size the size of the data buffer in bytes
- * @param log_ctx if non-NULL, this function will log errors here
- * @return 0 on success, a negative AVERROR code on failure
- */
-int av_dirac_parse_sequence_header(AVDiracSeqHeader **dsh,
- const uint8_t *buf, size_t buf_size,
- void *log_ctx);
-
-#endif /* AVCODEC_DIRAC_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/AetherSX2 The ultimate PS2 emulator for Android devices.md b/spaces/congsaPfin/Manga-OCR/logs/AetherSX2 The ultimate PS2 emulator for Android devices.md
deleted file mode 100644
index 5665a8a837ffbbac73463ec4cc12b214796fbfb9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/AetherSX2 The ultimate PS2 emulator for Android devices.md
+++ /dev/null
@@ -1,178 +0,0 @@
-
-
AetherSX2 Pro APK: The Ultimate PS2 Emulator for Android
-
Do you miss playing your favorite PS2 games on your Android device? Do you want to enjoy the nostalgia of classic titles like God of War, Final Fantasy, Grand Theft Auto, and more? If yes, then you need to try AetherSX2 Pro APK, the best PS2 emulator for Android.
AetherSX2 Pro APK is a modified version of AetherSX2, a popular PS2 emulator for Android. It offers many features and enhancements that make it superior to the original app. In this article, we will tell you everything you need to know about AetherSX2 Pro APK, including its features, how to download and install it, how to use it, and its pros and cons.
-
What is AetherSX2 Pro APK?
-
AetherSX2 Pro APK is a PS2 emulator for Android that allows you to play PS2 games on your smartphone or tablet. It is based on the open-source project PCSX2, which is a well-known PS2 emulator for PC. AetherSX2 Pro APK is not available on the Google Play Store, but you can download it from third-party sources like [Apkmody].
-
AetherSX2 Pro APK is the PRO version of AetherSX2 APK. It removes the grind of the standard app: tasks and requirements that would normally cost you a lot of time or money can be completed in a very short time.
-
Features of AetherSX2 Pro APK
-
AetherSX2 Pro APK has many features that make it stand out from other PS2 emulators for Android. Here are some of them:
-
-
High compatibility with PS2 games
-
AetherSX2 Pro APK supports a large number of PS2 games, from popular titles like Metal Gear Solid, Resident Evil, Kingdom Hearts, Tekken, and more. You can also play games from different regions, such as Japan, Europe, and North America. You can check the compatibility list on the official website of PCSX2.
-
Enhanced graphics and sound quality
-
AetherSX2 Pro APK improves the graphics and sound quality of PS2 games by using various plugins and settings. You can adjust the resolution, frame rate, anti-aliasing, texture filtering, and more. You can also enable HD rendering, which makes the games look sharper and smoother. The sound quality is also improved by using Dolby Surround Sound and other audio enhancements.
-
Customizable controls and settings
-
AetherSX2 Pro APK allows you to customize the controls and settings according to your preference. You can use the virtual buttons on the screen or connect an external controller via Bluetooth or USB. You can also map the buttons to different functions and adjust the sensitivity and vibration. You can also change the language, theme, orientation, and other options in the settings menu.
-
Save and load states
-
AetherSX2 Pro APK lets you save and load states anytime you want. This means you can save your progress in any game and resume it later without losing anything. You can also load states from different slots and switch between them easily. This feature is very useful for games that have long or difficult levels, or for games that do not have a save function.
-
Multiplayer mode and online support
-
AetherSX2 Pro APK enables you to play multiplayer games with your friends or other players online. You can use the local multiplayer mode, which allows you to connect two devices via Wi-Fi or Bluetooth and play on the same screen. You can also use the online multiplayer mode, which allows you to join or host online rooms and play with other players around the world. You can also chat with other players and send them messages.
-
How to download and install AetherSX2 Pro APK?
-
If you want to download and install AetherSX2 Pro APK on your Android device, you need to follow these steps:
-
Requirements for AetherSX2 Pro APK
-
Before you download and install AetherSX2 Pro APK, you need to make sure that your device meets the following requirements:
-
-
Your device must have Android 5.0 or higher.
-
Your device must have at least 2 GB of RAM and 4 GB of free storage space.
-
Your device must support OpenGL ES 3.0 or higher.
-
You must enable the installation of apps from unknown sources in your device settings.
-
You must have a stable internet connection to download the app and the PS2 games.
-
-
Steps to download and install AetherSX2 Pro APK
-
After you have checked the requirements, you can follow these steps to download and install AetherSX2 Pro APK:
-
-
Go to [Apkmody] and search for AetherSX2 Pro APK. You will see a download button on the page. Click on it and wait for the download to finish.
-
Once the download is complete, go to your file manager and locate the downloaded file. Tap on it and select install. Wait for the installation to finish.
-
After the installation is done, you will see an icon of AetherSX2 Pro APK on your home screen or app drawer. Tap on it and launch the app.
-
You will see a welcome screen with some instructions and tips. Read them carefully and tap on next.
-
You will see a screen where you can grant some permissions to the app. These permissions are necessary for the app to function properly. Tap on allow for each permission.
-
You will see a screen where you can choose the language and theme of the app. Select your preferred options and tap on next.
-
You will see a screen where you can scan your device for PS2 games. Tap on scan and wait for the app to find any PS2 games that you have stored on your device. If you do not have any PS2 games, you can skip this step and download them later from the internet.
-
You will see a screen where you can select a game to play. Tap on any game that you want to play and enjoy!
-
-
How to use AetherSX2 Pro APK?
-
Now that you have downloaded and installed AetherSX2 Pro APK, you might be wondering how to use it. Here are some tips and tricks that will help you use AetherSX2 Pro APK effectively:
-
How to load PS2 games on AetherSX2 Pro APK
-
If you want to load PS2 games on AetherSX2 Pro APK, you have two options: You can either use the games that you have scanned from your device, or you can download them from the internet. To use the games that you have scanned from your device, simply tap on them from the game list and start playing. To download games from the internet, follow these steps:
-
-
Go to any website that offers PS2 games for download, such as [CoolROM] or [Emuparadise]. Search for the game that you want to download and click on it.
-
You will see a page with some information about the game, such as its genre, rating, size, etc. You will also see a download link or button. Click on it and wait for the download to start.
-
Once the download is complete, go to your file manager and locate the downloaded file. It will be in a compressed format, such as ZIP or RAR. You need to extract it using an app like [ZArchiver] or [RAR].
-
After extracting the file, you will see a folder with the name of the game. Inside it, you will find a file with the extension .iso, .bin, .img, or .mdf. This is the game file that you need to load on AetherSX2 Pro APK.
-
Copy or move the game file to a folder on your device where you want to store your PS2 games. You can create a new folder or use an existing one.
-
Launch AetherSX2 Pro APK and tap on the menu icon on the top left corner. Tap on settings and then tap on paths. Tap on the folder icon next to PS2 games and select the folder where you have stored your PS2 games. Tap on OK and then tap on back.
-
Tap on the refresh icon on the top right corner and wait for the app to scan your PS2 games. You will see the game that you have downloaded appear on the game list. Tap on it and start playing.
-
-
How to adjust the settings on AetherSX2 Pro APK
-
If you want to adjust the settings on AetherSX2 Pro APK, you can do so by tapping on the menu icon on the top left corner and tapping on settings. You will see various options that you can change, such as:
-
-
Graphics: Here you can change the resolution, frame rate, aspect ratio, anti-aliasing, texture filtering, and more. You can also enable HD rendering and FPS counter.
-
Sound: Here you can change the volume, sound quality, audio latency, and more. You can also enable Dolby Surround Sound and audio enhancements.
-
Controls: Here you can change the layout, size, opacity, and position of the virtual buttons. You can also map the buttons to different functions and adjust the sensitivity and vibration. You can also connect an external controller via Bluetooth or USB.
-
System: Here you can change the language, theme, orientation, and other options. You can also enable cheats, speed hacks, and skip BIOS.
-
-
You can also access some of these settings while playing a game by tapping on the pause icon on the top right corner and tapping on settings. You can also save and load states from this menu.
-
How to play multiplayer games on AetherSX2 Pro APK
-
If you want to play multiplayer games on AetherSX2 Pro APK, you have two options: You can either use the local multiplayer mode or the online multiplayer mode. To use the local multiplayer mode, follow these steps:
-
-
Make sure that both devices have AetherSX2 Pro APK installed and have the same PS2 game file stored in their devices.
-
Connect both devices via Wi-Fi or Bluetooth. Make sure that they are on the same network or paired with each other.
-
Launch AetherSX2 Pro APK on both devices and select the same PS2 game from the game list.
-
On one device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on host and wait for the other device to join.
-
On the other device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on join and select the device that is hosting from the list.
-
Once both devices are connected, you will see a split screen with each device showing half of the game. You can now play the game together on the same screen.
-
-
To use the online multiplayer mode, follow these steps:
-
-
Make sure that both devices have AetherSX2 Pro APK installed and have the same PS2 game file stored in their devices.
-
Connect both devices to the internet. Make sure that they have a stable and fast connection.
-
Launch AetherSX2 Pro APK on both devices and select the same PS2 game from the game list.
-
On one device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on online and then tap on create room. Enter a name and a password for your room and tap on OK.
-
On the other device, tap on the menu icon on the top left corner and tap on multiplayer. Tap on online and then tap on join room. Enter the name and the password of the room that you want to join and tap on OK.
-
Once both devices are connected, you will see a screen with each device showing the full game. You can now play the game together online.
-
-
Pros and cons of AetherSX2 Pro APK
-
AetherSX2 Pro APK is a great app for PS2 lovers, but it also has some pros and cons that you should be aware of. Here are some of them:
-
Pros
-
-
It allows you to play PS2 games on your Android device without any hassle.
-
It supports a large number of PS2 games from different regions and genres.
-
It improves the graphics and sound quality of PS2 games by using various plugins and settings.
-
It lets you customize the controls and settings according to your preference.
-
It enables you to save and load states anytime you want.
-
It allows you to play multiplayer games with your friends or other players online.
-
-
Cons
-
-
It is not available on the Google Play Store, so you need to download it from third-party sources.
-
It may not work well on some devices or with some games due to compatibility issues or bugs.
-
It may consume a lot of battery and CPU power while running PS2 games.
-
It may require a lot of storage space for PS2 games and app data.
-
-
Conclusion
-
AetherSX2 Pro APK is a must-have app for PS2 fans who want to play their favorite games on their Android devices. It offers many features and enhancements that make it superior to other PS2 emulators for Android. It is easy to download, install, and use, and it supports a large number of PS2 games. It also allows you to play multiplayer games with your friends or other players online. However, it also has some drawbacks, such as compatibility issues, battery consumption, storage space, and security risks. Therefore, you should use it at your own risk and discretion.
-
Frequently Asked Questions
-
Here are some frequently asked questions about AetherSX2 Pro APK:
-
Is AetherSX2 Pro APK safe to use?
-
AetherSX2 Pro APK is not an official app from Sony or PCSX2, so it may not be safe to use. It may contain viruses, malware, spyware, or other harmful elements that may damage your device or compromise your privacy. Therefore, you should only download it from trusted sources like [Apkmody] and scan it with an antivirus app before installing it. You should also backup your data before using it and avoid using it for illegal purposes.
-
Is AetherSX2 Pro APK legal to use?
-
AetherSX2 Pro APK is not legal to use in some countries or regions where PS2 emulation is prohibited or restricted by law. It may also infringe the intellectual property rights of Sony or other game developers who own the PS2 games. Therefore, you should only use it for personal or educational purposes and not for commercial or profit-making purposes. You should also only use it with PS2 games that you own legally or have permission to use.
-
How can I get more PS2 games for AetherSX2 Pro APK?
-
You can get more PS2 games for AetherSX2 Pro APK by downloading them from the internet or by ripping them from your own PS2 discs. To download them from the internet, you can use websites like [CoolROM] or [Emuparadise] that offer PS2 games for download. To rip them from your own PS2 discs, you can use software like [ImgBurn] or [ DVD Decrypter] that can create ISO files from your PS2 discs. You can then transfer the ISO files to your device and load them on AetherSX2 Pro APK.
-
How can I improve the performance of AetherSX2 Pro APK?
-
You can improve the performance of AetherSX2 Pro APK by following these tips:
-
-
Use a device that has a powerful processor, enough RAM, and sufficient storage space.
-
Close any background apps or processes that may slow down your device or consume resources.
-
Update your device software and AetherSX2 Pro APK to the latest version.
-
Adjust the graphics and sound settings to lower values if you experience lag or stuttering.
-
Use a stable and fast internet connection if you play online multiplayer games.
-
-
How can I contact the developer of AetherSX2 Pro APK?
-
You can contact the developer of AetherSX2 Pro APK by visiting their official website or social media pages. You can also send them an email or leave a comment on their blog. Here are some of their contact details:
-
-
Website: [AetherSX2]
-
Email: [aethersx2@gmail.com]
-
Facebook: [AetherSX2]
-
Twitter: [@AetherSX2]
-
YouTube: [AetherSX2]
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 1 APK - Relive the Epic Adventure on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 1 APK - Relive the Epic Adventure on Your Android Device.md
deleted file mode 100644
index 272ee5f0b6f891dc1ad70a414f61419c76d3790c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 1 APK - Relive the Epic Adventure on Your Android Device.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Angry Birds 1 APK: How to Download and Play the Classic Game on Your Android Device
-
Introduction
-
Do you remember the game that started it all? The game that launched a global phenomenon and spawned countless sequels, spin-offs, movies, and merchandise? Yes, we are talking about Angry Birds, the original game that made us fall in love with slingshotting colorful birds at green pigs. If you want to relive the nostalgia and enjoy the classic gameplay, you can download Angry Birds 1 APK on your Android device. In this article, we will show you how to do that and how to play the game like a pro.
-
What is Angry Birds 1?
-
Angry Birds 1 is the first game in the Angry Birds series, developed by Rovio Entertainment and released in 2009. The game is based on a simple but addictive premise: you have to use a slingshot to launch birds at structures made of various materials, such as wood, stone, glass, and ice, where pigs are hiding. Your goal is to destroy all the pigs in each level using as few birds as possible. The game features hundreds of levels across different episodes, each with its own theme and challenges.
You might be wondering why you should download Angry Birds 1 APK when you can just play the game on Google Play Store. Well, there are a few reasons why you might prefer the APK version over the official one. First of all, the APK version is free and does not require any in-app purchases or ads. Secondly, the APK version has all the episodes unlocked from the start, so you don't have to wait or pay to access them. Thirdly, the APK version is compatible with older devices and operating systems that might not support the latest updates of the official version. Finally, the APK version allows you to play offline without any internet connection.
-
How to download and install Angry Birds 1 APK
-
Step 1: Find a reliable source for the APK file
-
The first thing you need to do is to find a trustworthy website that offers the Angry Birds 1 APK file for download. You can use a search engine like Google or Bing to look for one, or you can use one of these links:
Make sure you check the reviews and ratings of the website before downloading anything from it. Also, avoid clicking on any suspicious ads or pop-ups that might appear on the website.
-
Step 2: Enable unknown sources on your device
-
The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, follow these steps:
-
-
Go to Settings > Security > Unknown sources.
-
Toggle on the switch or check the box next to Unknown sources.
-
A warning message will appear. Tap OK or Yes to confirm.
-
-
You can disable unknown sources after installing Angry Birds 1 APK if you want.
-
Step 3: Download and install the APK file
-
The final thing you need to do is to download and install the APK file on your device. To do that, follow these steps:
-
Open the website where you found the Angry Birds 1 APK file and tap on the download button or link.
-
Wait for the download to finish. You can check the progress in the notification bar or the download manager of your device.
-
Once the download is complete, tap on the APK file to open it. You might see a prompt asking you to choose an app to open the file. Choose Package Installer or Install.
-
A screen will appear showing the permissions required by the app. Tap on Install or Next to continue.
-
Wait for the installation to finish. You can see the progress on the screen.
-
Once the installation is done, tap on Open or Done to launch or exit the app.
-
-
Congratulations! You have successfully downloaded and installed Angry Birds 1 APK on your Android device. You can now enjoy playing the classic game anytime, anywhere.
-
How to play Angry Birds 1
-
The basic gameplay
-
The basic gameplay of Angry Birds 1 is very simple and intuitive. You just have to drag your finger on the screen to aim and release to launch a bird from the slingshot. The farther you pull back, the more power and speed you will give to the bird. You can also adjust the angle of your shot by moving your finger up or down. The goal is to hit and destroy all the pigs in each level using as few birds as possible. You will earn stars based on how well you perform in each level. You can replay any level as many times as you want to improve your score and get more stars.
-
The different types of birds
-
One of the fun aspects of Angry Birds 1 is that you can use different types of birds with different abilities and characteristics. Here are some of them:
-
-
-
Red: The most common and basic bird. It does not have any special ability, but it is reliable and versatile.
-
Blue: A small bird that can split into three smaller birds when you tap on the screen. It is good for breaking glass and hitting multiple targets.
-
Yellow: A fast bird that can speed up when you tap on the screen. It is good for breaking wood and hitting hard-to-reach places.
-
Black: A heavy bird that can explode when you tap on the screen or after a few seconds of impact. It is good for breaking stone and causing massive damage.
-
White: A light bird that can drop an egg bomb when you tap on the screen. It is good for hitting targets below or behind obstacles.
-
Green: A boomerang bird that can change direction when you tap on the screen. It is good for hitting targets that are out of sight or behind walls.
-
Big Red: A giant version of the red bird that has more power and weight. It is good for breaking anything in its way.
-
-
The different types of pigs
-
The pigs are your enemies in Angry Birds 1. They come in different sizes, shapes, and colors, and they have different levels of durability and intelligence. Here are some of them:
-
-
Small Pig: The smallest and weakest pig. It can be easily destroyed by any bird or debris.
-
Medium Pig: A slightly bigger and stronger pig. It can withstand some hits, but not too much.
-
Large Pig: A big and tough pig. It can take a lot of hits before being destroyed.
-
Helmet Pig: A medium pig with a helmet that protects its head. It can resist more damage than a normal medium pig.
-
Moustache Pig: A large pig with a moustache that makes it look more menacing. It has the same durability as a normal large pig.
-
King Pig: The leader and boss of all the pigs. He is usually hidden behind layers of protection and requires a lot of hits to be destroyed.
-
-
The different types of levels
-
The levels in Angry Birds 1 are divided into episodes, each with its own theme and setting. Some of the episodes are:
-
-
Poached Eggs: The first episode, where you are introduced to the basic gameplay and characters.
-
Mighty Hoax: The second episode, where you face fake cardboard pigs and a mysterious big pig.
-
Danger Above: The third episode, where you fly above the clouds and encounter new types of birds and pigs.
-
The Big Setup: The fourth episode, where you face the construction workers who built the pig structures.
-
Ham 'Em High: The fifth episode, where you travel to the Wild West and face cowboy pigs and TNT barrels.
-
Mine and Dine: The sixth episode, where you explore the underground mines and face miner pigs and stalactites.
-
Birdday Party: The seventh episode, where you celebrate the birthday of the birds and face cake-themed levels.
-
Bad Piggies: The eighth episode, where you play from the perspective of the pigs and try to steal the eggs from the birds.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Angry Birds 1 is a classic game that you can download and play on your Android device using the APK file. You can enjoy the original gameplay, characters, and levels that made this game a global hit. You can also benefit from the advantages of the APK version, such as being free, unlocked, compatible, and offline. All you need to do is to find a reliable source for the APK file, enable unknown sources on your device, and download and install the APK file. Then, you can launch the game and start slinging birds at pigs.
-
Call to action
-
If you are ready to experience the fun and excitement of Angry Birds 1, don't wait any longer. Download Angry Birds 1 APK today and join millions of fans around the world. You won't regret it!
-
FAQs
-
Here are some frequently asked questions about Angry Birds 1 APK:
-
-
Q: Is Angry Birds 1 APK safe to download and install?
-
A: Yes, as long as you download it from a reputable website that does not contain any malware or viruses. You should also scan the APK file with an antivirus app before installing it.
-
Q: Is Angry Birds 1 APK legal to use?
-
A: Yes, as long as you do not distribute or sell it without permission from Rovio Entertainment. You should also respect their intellectual property rights and trademarks.
-
Q: Is Angry Birds 1 APK compatible with my device?
-
A: Yes, as long as your device meets the minimum requirements for running the game. You need an Android device running Android 4.1 or higher, with at least 100 MB of free storage space and 512 MB of RAM.
-
Q: How can I update Angry Birds 1 APK?
-
A: You can update Angry Birds 1 APK by downloading and installing the latest version from the same website where you got the previous one. You should also check for updates regularly to enjoy new features and bug fixes.
-
Q: How can I contact Rovio Entertainment for support or feedback?
-
A: You can contact Rovio Entertainment by visiting their official website at https://www.rovio.com/, or by following them on social media platforms such as Facebook, Twitter, Instagram, YouTube, and LinkedIn.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Call of Duty Warzone Mobile APK - The Most Popular FPS Game on Android.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Install Call of Duty Warzone Mobile APK - The Most Popular FPS Game on Android.md
deleted file mode 100644
index 8572d8752ca4150ce042dc3585463a62ee22658f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download and Install Call of Duty Warzone Mobile APK - The Most Popular FPS Game on Android.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Call of Duty Warzone Mobile: How to Download and Play the Next-Gen Battle Royale on Your Phone
-
If you are a fan of Call of Duty and battle royale games, you might be wondering how to download and play Call of Duty Warzone Mobile, the latest addition to the COD franchise. Call of Duty Warzone Mobile is a mobile adaptation of the wildly popular PC and console game, Call of Duty Warzone, which has over 100 million players worldwide. In this article, we will tell you everything you need to know about Call of Duty Warzone Mobile, including what it is, how to download it, how to play it, and some tips and tricks to help you win.
-
What is Call of Duty Warzone Mobile?
-
Call of Duty Warzone Mobile is a mobile battle royale game that features authentic COD gameplay, shared progression, and up to 120 player count matches on mobile devices. The game is powered by unified Call of Duty technology, which means that your Battle Pass and friends list sync across Call of Duty Modern Warfare II and Call of Duty Warzone. You can also enjoy social features like chat channels and in-game events.
A mobile adaptation of the popular PC and console game
-
Call of Duty Warzone Mobile is based on Call of Duty Warzone, which is a free-to-play battle royale game that was released in March 2020. The game takes place in Verdansk, a fictional city inspired by Donetsk in Ukraine. The game mode involves up to 150 players dropping into the map and fighting for survival until only one team or solo player remains. The game also features a unique mechanic called the Gulag, where eliminated players can fight for a chance to respawn.
-
Features authentic COD gameplay, shared progression, and up to 120 player count matches
-
Call of Duty Warzone Mobile delivers authentic COD gameplay on mobile devices, with first-class graphics and intuitive controls. Everything from movement, aiming, weapon handling, physics, animations, and sound have been optimized for mobile gamers. The game also features up to 120 player count matches, which means more competition and more action. You can also enjoy shared progression with Call of Duty Modern Warfare II and Call of Duty Warzone, which means that your Battle Pass progress, weapon unlocks, skins, operators, and more are synced across platforms.
-
Pre-register for a chance to unlock rewards at launch
-
Call of Duty Warzone Mobile is expected to launch worldwide in Fall 2023 on Android and iOS devices. However, you can pre-register for the game now and earn rewards if global milestones are hit. These rewards include exclusive vinyls, emblems, weapons, operators, and even a new map called Shoot House. You can pre-register for Call of Duty Warzone Mobile through the App Store or Google Play Store or via the [Call of Duty Warzone Mobile webpage].
-
Contracts are optional missions scattered around Verdansk. They come in different types, such as bounty, scavenger, recon, most wanted, and supply run. You can complete contracts to earn cash, loot, intel, or loadouts.
-
Killstreaks are special abilities that you can use to gain an edge in combat. You can find killstreaks from loot boxes, buy stations, or loadout drops. There are different types of killstreaks such as UAV, cluster strike, precision airstrike, shield turret, sentry gun, and more. You can activate killstreaks by tapping on the killstreak icon on the right side of the screen.
-
Vehicles are modes of transportation that you can use to traverse Verdansk faster and safer. You can find vehicles scattered around the map or call them in from buy stations. There are different types of vehicles such as ATV, SUV, cargo truck, helicopter, and more. You can drive or ride vehicles by tapping on the vehicle icon on the left side of the screen.
-
Weapons are your primary means of offense and defense in Call of Duty Warzone Mobile. You can find weapons from loot boxes, enemies, or loadout drops. There are different types of weapons such as assault rifles, submachine guns, shotguns, sniper rifles, pistols, and more. You can equip two primary weapons and one secondary weapon at a time. You can also customize your weapons with attachments, camos, charms, stickers, and more.
-
Win a duel in the Gulag to get a second chance
-
One of the most unique features of Call of Duty Warzone Mobile is the Gulag. The Gulag is a prison where eliminated players can fight for a chance to respawn. When you die for the first time in a match, you will be taken to the Gulag and wait for your turn to face another player in a 1v1 duel. The winner of the duel will be redeployed back into Verdansk. The loser will be eliminated for good unless their teammates buy them back from a buy station.
-
The Gulag is a small map with different layouts and weapons each time. You will have a few seconds to prepare before the duel starts. You will have a pistol or a shotgun as your weapon and a lethal or tactical equipment as your gadget. You will also have a health bar that regenerates over time. The objective is to kill your opponent or capture the flag in the middle of the map before the time runs out.
-
-
Tips and Tricks for Call of Duty Warzone Mobile
-
Now that you know how to play Call of Duty Warzone Mobile, you might be looking for some tips and tricks to improve your skills and win more matches. Here are some of the best tips and tricks for Call of Duty Warzone Mobile:
-
Use headphones and pings to communicate with your squad
-
Communication is key in Call of Duty Warzone Mobile, especially if you are playing with a squad. You can use headphones and voice chat to communicate with your teammates and coordinate your strategies. You can also use pings to mark enemies, locations, items, or dangers on the map. You can ping by tapping on the ping icon on the left side of the screen and selecting the option you want.
-
Mount your weapon and aim for the head
-
Shooting is one of the most important skills in Call of Duty Warzone Mobile. You need to be accurate and fast to take down your enemies before they take you down. One way to improve your shooting is to mount your weapon on walls, windows, or cover. This will reduce your recoil and increase your stability. You can mount your weapon by tapping on the mount icon on the right side of the screen when you are near a suitable surface.
-
Another way to improve your shooting is to aim for the head. Headshots deal more damage than body shots and can often result in instant kills. You can aim for the head by using the aim assist feature or by adjusting your crosshair manually. You can also use attachments like scopes or lasers to enhance your aiming.
-
Always pick up bounty and scavenger contracts
-
Contracts are optional missions that you can find and activate throughout Verdansk. They offer rewards such as cash, loot, intel, or loadouts. There are different types of contracts such as bounty, scavenger, recon, most wanted, and supply run. However, the best contracts to pick up are bounty and scavenger contracts.
-
Bounty contracts are contracts that assign you a target to hunt down and kill within a time limit. You can find bounty contracts from yellow loot boxes or buy stations. When you activate a bounty contract, you will see a yellow circle on the map that indicates the general location of your target. You will also see a bar that indicates how close or far they are from you. If you kill your target or someone else does, you will earn a cash reward. If the time runs out or your target escapes, you will earn a smaller reward.
-
Scavenger contracts are contracts that require you to find and open three loot boxes within a time limit. You can find scavenger contracts from blue loot boxes or buy stations. When you activate a scavenger contract, you will see a yellow magnifying glass on the map that indicates the location of the first loot box. When you open it, you will see the location of the next one, and so on. If you open all three loot boxes, you will earn a cash reward and a rare loot item such as armor satchel, gas mask, or self-revive kit.
-
Go for loadouts and customize your weapons
-
Loadouts are custom sets of weapons and equipment that you can create and use in Call of Duty Warzone Mobile. You can create up to 10 loadouts in the loadout menu on the main screen. You can choose your primary weapon, secondary weapon, lethal equipment, tactical equipment, perks, and operator skin for each loadout. You can also customize your weapons with attachments, camos, charms, stickers, and more.
-
You can access your loadouts in two ways in Call of Duty Warzone Mobile. One way is to buy a loadout drop from a buy station for $10,000. A loadout drop is a red smoke marker that drops a crate containing your loadouts. You can use it to change your weapons and equipment in the middle of the match. However, be careful as other players can also see and use your loadout drop.
-
Another way is to wait for a free loadout drop that occurs twice per match. A free loadout drop is a green smoke marker that drops a crate containing your loadouts near your location. You can use it to change your weapons and equipment without spending any cash. However, be quick as other players can also see and use your free loadout drop.
-
Keep track of the redeployment flares and the gas circle
-
Two of the most important things to keep track of in Call of Duty Warzone Mobile are the redeployment flares and the gas circle. Redeployment flares are red flares that indicate when a player has been redeployed back into Verdansk. This can happen when they win a duel in the Gulag or when their teammates buy them back from a buy station. You can use redeployment flares to locate and ambush enemies who have just returned to the game.
-
The gas circle is the green circle that indicates the safe zone on the map. The gas circle shrinks over time and forces players into a smaller area. Anyone who is outside the gas circle will take damage over time and eventually die. You can use the gas circle to plan your movements and avoid getting caught in the gas.
-
Conclusion
-
Call of Duty Warzone Mobile is an exciting mobile battle royale game that offers authentic COD gameplay, shared progression, and up to 120 player count matches on mobile devices. The game is expected to launch worldwide in Fall 2023 on Android and iOS devices, but you can pre-register for it now and earn rewards if global milestones are hit. If you want to download and play Call of Duty Warzone Mobile on your phone, you need to check the system requirements for your device, sign up for Call of Duty Warzone Mobile through the App Store or Google Play Store, and wait for the game to be available in your region. If you want to win more matches in Call of Duty Warzone Mobile, you need to choose the best controls and settings for your device, drop into Verdansk and fight for survival, use contracts, killstreaks, vehicles, and weapons to gain an advantage , and win a duel in the Gulag to get a second chance. You also need to use headphones and pings to communicate with your squad, mount your weapon and aim for the head, always pick up bounty and scavenger contracts, go for loadouts and customize your weapons, and keep track of the redeployment flares and the gas circle. We hope this article has helped you learn more about Call of Duty Warzone Mobile and how to download and play it on your phone. Happy gaming!
-
FAQs
-
Here are some of the frequently asked questions about Call of Duty Warzone Mobile:
-
Is Call of Duty Warzone Mobile free to play?
-
Yes, Call of Duty Warzone Mobile is free to play. You do not need to pay anything to download or play the game. However, you can purchase in-game items such as Battle Pass, COD Points, bundles, and crates with real money if you want to enhance your gaming experience.
-
Is Call of Duty Warzone Mobile cross-platform?
-
Yes, Call of Duty Warzone Mobile is cross-platform. You can play with or against players who are using Android or iOS devices. However, you cannot play with or against players who are using PC or console devices.
-
How can I link my Call of Duty account to Call of Duty Warzone Mobile?
-
You can link your Call of Duty account to Call of Duty Warzone Mobile by tapping on the settings icon on the main menu and selecting the account tab. You will see an option to link your Call of Duty account or create a new one. By linking your Call of Duty account, you can enjoy shared progression, social features, and rewards across Call of Duty Modern Warfare II and Call of Duty Warzone.
-
How can I report a bug or a cheater in Call of Duty Warzone Mobile?
-
You can report a bug or a cheater in Call of Duty Warzone Mobile by tapping on the settings icon on the main menu and selecting the feedback tab. You will see an option to report a bug or a player. You will need to provide details such as your username, device model, game mode, map, time, description, and screenshot or video evidence if possible. Your report will be sent to the developers for review and action.
-
How can I get more information about Call of Duty Warzone Mobile?
-
You can get more information about Call of Duty Warzone Mobile by visiting the [Call of Duty Warzone Mobile webpage] or following the official social media channels such as Facebook, Twitter, Instagram, YouTube, and Discord. You can also join the community forums and chat with other players and developers.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience Westeros in CK2 with the Game of Thrones Mod Heres How to Download It.md b/spaces/congsaPfin/Manga-OCR/logs/Experience Westeros in CK2 with the Game of Thrones Mod Heres How to Download It.md
deleted file mode 100644
index fa03d6ba50ca38f7aee11a709aba1856e18fe74f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Experience Westeros in CK2 with the Game of Thrones Mod Heres How to Download It.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
How to Download the Game of Thrones Mod for CK2
-
If you are a fan of both Crusader Kings II and A Song of Ice and Fire, you might have heard of a mod that combines them into one immersive experience. The Game of Thrones mod for CK2 is a full-conversion mod that transforms the medieval strategy game into the world of George R. R. Martin's fantasy saga. You can play as any of the major or minor characters from the books, and experience the events of the story or create your own alternative scenarios.
-
In this article, we will show you how to download and install this amazing mod, and give you some tips and tricks on how to play it. Whether you want to conquer Westeros as Aegon the Conqueror, defend it as Robert Baratheon, or break it as Daenerys Targaryen, this mod will let you live your fantasy.
The Game of Thrones mod for CK2 is a total conversion mod that changes every aspect of the game to match the setting and lore of A Song of Ice and Fire. The mod was first released in 2012 by a team led by Cabezaestufa, and has since been updated regularly with new features and content.
-
Some of the main features of the mod are:
-
-
A new map that covers Westeros, Essos, and parts of Sothoryos and Ulthos.
-
Thousands of new characters from the books, each with their own traits, skills, relationships, claims, and ambitions.
-
New events that follow or diverge from the plot of the books, such as wars, rebellions, weddings, assassinations, prophecies, dreams, visions, duels, trials, tournaments, feasts, plagues, invasions, etc.
-
New mechanics that reflect the culture and politics of the world, such as feudal contracts, crown authority, vassal management, council power, succession laws, religions, cultures, bloodlines, dynasties, cadet branches, knightly orders, mercenary companies, holy orders, pirates, slavers, nomads, etc.
-
New graphics that enhance the visual appeal of the game, such as portraits, flags, coats of arms, icons, interface elements, etc.
-
How to Install the Game of Thrones Mod for CK2?
-
There are two ways to install the Game of Thrones mod for CK2: manually or through Steam Workshop. Both methods require that you have the latest version of Crusader Kings II and all the necessary DLCs. The mod is compatible with CK2 version 3.3.3 and requires the following DLCs: Sword of Islam, Legacy of Rome, The Republic, The Old Gods, Sons of Abraham, Charlemagne, Way of Life, Horse Lords, Conclave, The Reaper's Due, Monks and Mystics, Jade Dragon, Holy Fury, and Iron Century. Here are the steps for each method:
Manual Installation
-
-
Download the latest version of the mod from the official forum or the moddb page. You will need to register an account and link it to your Steam profile to access the forum.
-
Extract the downloaded zip file to your CK2 mod folder. The default location is C:\Users\YourName\Documents\Paradox Interactive\Crusader Kings II\mod.
-
Make sure that you have two items in your mod folder: the A Game of Thrones.mod file and the A Game of Thrones folder (a quick way to verify this from a script is sketched after these steps).
-
Launch CK2 and select A Game of Thrones from the mod list in the launcher.
-
Enjoy the game!
-
-
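If you want to double-check the manual installation from a script, the minimal Python sketch below verifies that the two items named in step 3 are in place. It assumes the default Windows location given in step 2; adjust MOD_DIR if your Documents folder lives somewhere else.
-
```python
# Minimal sketch: verify the manual install of the Game of Thrones mod.
# Assumption: the default Windows mod folder from step 2; change MOD_DIR if needed.
from pathlib import Path

MOD_DIR = Path.home() / "Documents" / "Paradox Interactive" / "Crusader Kings II" / "mod"


def check_agot_install(mod_dir: Path = MOD_DIR) -> bool:
    mod_file = mod_dir / "A Game of Thrones.mod"  # the .mod descriptor file
    mod_folder = mod_dir / "A Game of Thrones"    # the extracted content folder
    print(f"{mod_file} present: {mod_file.is_file()}")
    print(f"{mod_folder} present: {mod_folder.is_dir()}")
    return mod_file.is_file() and mod_folder.is_dir()


if __name__ == "__main__":
    check_agot_install()
```
-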
Steam Workshop Installation
-
-
Subscribe to the mod on Steam Workshop. You can find it by searching for A Game of Thrones in the workshop browser.
-
Wait for Steam to download and install the mod automatically.
-
Launch CK2 and select A Game of Thrones from the mod list in the launcher.
-
Enjoy the game!
-
-
How to Play the Game of Thrones Mod for CK2?
-
Once you have installed the mod, you are ready to enter the world of A Song of Ice and Fire. You can choose from a variety of characters, scenarios, and difficulty levels to suit your preferences. Here are some tips and tricks on how to play the mod:
-
Choosing a Character
-
The mod offers a wide range of characters to play as, from kings and queens to lords and ladies, from knights and nobles to bastards and peasants. You can filter them by rank, culture, religion, dynasty, or bookmark in the character selection screen. You can also use the search function to find a specific character by name or title.
-
Some characters have special events or challenges associated with them, such as Daenerys Targaryen's quest to reclaim the Iron Throne, Jon Snow's dilemma at the Wall, or Robb Stark's war for independence. These characters are marked with a star icon in the character selection screen. You can also create your own custom character using the ruler designer DLC or the console commands.
-
Choosing a Scenario
-
The mod features several scenarios or bookmarks that correspond to different periods in the history of Westeros and Essos. Each scenario has its own starting date, map, characters, events, and challenges. You can choose from the following scenarios:
-
-
The Bleeding Years: The mod's earliest start date, year 7999 of its calendar, before Aegon's Conquest. The Seven Kingdoms are divided and constantly at war with each other. The Iron Throne does not exist yet.
-
Aegon's Conquest: The year 1 since Aegon's Landing. Aegon I Targaryen has invaded Westeros with his dragons and his sisters. He aims to unify the Seven Kingdoms under his rule.
-
The Conquest of Dorne: The year 157 since Aegon's Landing. Daeron I Targaryen has launched a campaign to conquer Dorne, the only kingdom that resisted Aegon's Conquest. He faces fierce resistance from the Dornish people.
-
The Dance of Dragons: The year 129 since Aegon's Landing. A civil war has erupted between two rival branches of House Targaryen over the succession to the Iron Throne. The war is marked by dragon battles and bloodshed.
-
The Blackfyre Rebellion: The year 196 since Aegon's Landing. Daemon Blackfyre, a bastard son of King Aegon IV Targaryen, has risen in rebellion against his half-brother King Daeron II Targaryen. He claims to be the true heir to the Iron Throne.
-
The War of Conquest: The year 282 since Aegon's Landing. Robert Baratheon has rebelled against King Aerys II Targaryen, also known as the Mad King. He is supported by Jon Arryn, Eddard Stark, and Hoster Tully. He faces opposition from Tywin Lannister, Mace Tyrell, and Doran Martell.
-
The Crowned Stag: The year 1 since Robert's Rebellion. Robert Baratheon has defeated the Targaryens and claimed the Iron Throne. He is married to Cersei Lannister, and has appointed Jon Arryn as his Hand. He faces challenges from the surviving Targaryens, the Greyjoys, and the Others.
-
The Greyjoy Rebellion: The year 9 since Robert's Rebellion. Balon Greyjoy, the Lord of the Iron Islands, has declared himself King of the Iron Islands and launched a rebellion against Robert Baratheon. He is opposed by Robert's allies, such as Eddard Stark, Stannis Baratheon, and Tywin Lannister.
-
The Clash of Kings: The year 2 since Eddard Stark's death. After the death of King Robert Baratheon and his Hand Eddard Stark, Westeros is plunged into a civil war. Five kings claim the Iron Throne: Joffrey Baratheon, Renly Baratheon, Stannis Baratheon, Robb Stark, and Balon Greyjoy. Meanwhile, Daenerys Targaryen is gathering her forces in Essos, and Jon Snow is facing the threat of the wildlings beyond the Wall.
-
A Feast for Crows: The year 4 since Eddard Stark's death. The War of the Five Kings has ended with the deaths of Robb Stark, Balon Greyjoy, Renly Baratheon, and Joffrey Baratheon. Stannis Baratheon has gone to the Wall to fight the wildlings and the Others. Tommen Baratheon sits on the Iron Throne, but he is controlled by his mother Cersei Lannister and his uncle Tyrion Lannister. Daenerys Targaryen rules Meereen, but she faces enemies from within and without. Arya Stark is training to become a Faceless Man in Braavos. Bran Stark is learning to become a greenseer beyond the Wall.
-
A Dance with Dragons: The year 5 since Eddard Stark's death. The War of the Five Kings has reignited with the arrival of Aegon Targaryen, a young man who claims to be the son of Rhaegar Targaryen and Elia Martell. He is supported by Jon Connington, a former Hand of King Aerys II Targaryen, and the Golden Company, a mercenary army. He invades Westeros with the intention of taking the Iron Throne from Tommen Baratheon. Meanwhile, Daenerys Targaryen faces a new threat from the Dothraki, who have gathered under a new khal named Khal Jhaqo. Jon Snow is stabbed by his own men at the Wall for letting the wildlings through. Tyrion Lannister joins forces with Jorah Mormont and a dwarf named Penny to find Daenerys. Cersei Lannister is imprisoned by the Faith Militant for her crimes. Jaime Lannister is missing in the Riverlands. Sansa Stark is hiding in the Vale under the guise of Alayne Stone, the bastard daughter of Petyr Baelish.
-
The Winds of Winter: The year 6 since Eddard Stark's death. This scenario is based on the unreleased sixth book of A Song of Ice and Fire by George R. R. Martin. It is not canon and may differ from the actual book when it comes out.
-
-
Choosing a Difficulty Level
-
The mod allows you to choose from four difficulty levels: Easy, Normal, Hard, and Very Hard. The difficulty level affects how challenging the game will be for you and your opponents. It affects factors such as AI aggressiveness, event frequency, revolt risk, disease spread, attrition rate, etc.
-
-
You can change the difficulty level at any time during the game by going to the Game Options menu.
-
Conclusion
-
The Game of Thrones mod for CK2 is one of the best mods ever made for any game. It lets you immerse yourself in a rich and detailed world that is faithful to the books and full of possibilities. You can create your own stories and adventures, or relive your favorite moments from the books or show.
-
If you are looking for a new way to enjoy Crusader Kings II or A Song of Ice and Fire, you should definitely try out this mod. You will not regret it.
-
FAQs
-
Here are some frequently asked questions about the Game of Thrones mod for CK2:
-
Is the mod compatible with other mods?
-
No, the mod is not compatible with other mods that change the map, characters, events, mechanics, or graphics of CK2. It is designed to be played as a standalone mod. However, you can use some submods that are made specifically for the Game of Thrones mod. You can find them on the official forum or the Steam Workshop.
-
How often is the mod updated?
-
The mod is updated regularly by the developers, usually every few months. The updates include new features, content, bug fixes, and compatibility patches. You can check the official forum or the moddb page for the latest news and updates on the mod.
-
Does the mod contain spoilers for the books or show?
-
Yes, the mod contains spoilers for both the books and the show. The mod follows the canon of the books, not the show, so some events and characters may differ from what you have seen on TV. The mod also includes some events and characters that have not yet appeared in the books, but are based on hints or leaks from George R. R. Martin or his editors. If you want to avoid spoilers, you should read all the books before playing the mod.
-
How do I report bugs or give feedback on the mod?
-
If you encounter any bugs or issues while playing the mod, you can report them on the official forum or the Steam Workshop page. The developers are very active and responsive, and they appreciate any feedback or suggestions from the players. You can also join their Discord server to chat with them and other fans of the mod.
-
Are there any submods that enhance the mod?
-
Yes, there are many submods that add new features, content, or options to the mod. Some of the most popular submods are:
-
-
More Bloodlines: Adds hundreds of new bloodlines to the game, based on historical or legendary figures from Westeros and Essos.
-
Sinful Mods: Adds various options to make the game more realistic, immersive, or challenging, such as slavery, torture, cannibalism, incest, etc.
-
Flamequeen's Ultimate Building Submod: Adds new buildings and upgrades to the game, such as castles, temples, towns, mines, etc.
-
Colonize Valyria: Allows you to colonize and restore the ancient empire of Valyria.
-
AGOT More Decisions: Adds more decisions and actions to the game, such as legitimizing bastards, changing laws, declaring wars, etc.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Use Social Dummy iOS for Fun and Entertainment.md b/spaces/congsaPfin/Manga-OCR/logs/How to Use Social Dummy iOS for Fun and Entertainment.md
deleted file mode 100644
index 6b5ae21d7e9d06dcc554615838dc4956ee5ef0f7..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Use Social Dummy iOS for Fun and Entertainment.md
+++ /dev/null
@@ -1,165 +0,0 @@
-
-
How to Download Social Dummy iOS: A Guide for Creating Fake Social Media Posts
-
Have you ever wanted to create fake social media posts for fun or testing purposes? If so, you might be interested in an app called Social Dummy iOS. This app allows you to recreate the timelines of popular social media apps, such as Facebook, Twitter, Instagram, WhatsApp, and more, with fake but faithful posts. You can customize the posts with different options and styles, and share or save them as screenshots or videos. In this article, we will show you how to download and use Social Dummy iOS, as well as some tips and tricks for making the most out of it.
-
What is Social Dummy iOS?
-
A brief introduction to the app and its features
-
Social Dummy iOS is a simple and easy-to-use entertainment tool that lets you create fake social media posts. You can choose from a list of social media platforms, such as Facebook, Twitter, Instagram, WhatsApp, Snapchat, TikTok, YouTube, and more, and create realistic posts with your own content or predefined templates. You can also edit the profiles, followers, likes, comments, messages, stories, reels, live streams, and other aspects of each platform. The app offers you a unique way of stylising your posts in different formats with many customisation options available.
Why you might want to use it for entertainment or testing purposes
-
There are many reasons why you might want to use Social Dummy iOS for entertainment or testing purposes. For example, you can:
-
-
Prank your friends or family by showing them fake posts from celebrities, influencers, or yourself.
-
Test your social media marketing strategies or designs by creating mockups of your posts.
-
Express your creativity or humor by making funny or parody posts.
-
Learn how different social media platforms work by exploring their features and functions.
-
Have fun and enjoy yourself by creating any kind of posts you want.
-
-
How to Download and Install Social Dummy iOS
-
The steps to find and download the app from the App Store
-
To download and install Social Dummy iOS on your iPhone, iPad, or iPod touch, you need to follow these steps:
-
-
Open the App Store on your device and search for "Social Dummy" or "Social Dummy Notes".
-
Select the app that has a blue icon with a white dummy head and a pencil.
-
Tap on "Get" or "Install" and wait for the app to download.
-
Once the app is downloaded, tap on "Open" or the app icon to launch the app.
-
-
The requirements and compatibility of the app
-
Before you download and install Social Dummy iOS, you need to make sure that your device meets the following requirements and compatibility:
-
-
You need to have iOS 13.0 or later installed on your device.
-
You need to have at least 125 MB of free space on your device.
-
You need to have an internet connection to use the app.
-
The app is compatible with iPhone, iPad, and iPod touch.
-
-
How to create an account and log in
-
After you download and install Social Dummy iOS, you need to create an account and log in to use the app. You can do this by following these steps:
-
-
When you open the app for the first time, you will see a welcome screen with a "Create Account" button. Tap on it to proceed.
-
Enter your email address, password, and username in the fields provided. You can also choose to sign up with your Apple ID or Google account.
-
Tap on "Create Account" and wait for a confirmation email to be sent to your email address.
-
Open the email and tap on the link to verify your account.
-
Go back to the app and tap on "Log In". Enter your email address and password, or choose to log in with your Apple ID or Google account.
-
Tap on "Log In" and you will be taken to the main screen of the app.
-
-
How to Use Social Dummy iOS
-
How to choose from different social media platforms and create fake posts
-
To use Social Dummy iOS, you need to choose from different social media platforms and create fake posts. You can do this by following these steps:
-
-
On the main screen of the app, you will see a list of social media platforms that you can choose from. Tap on the one that you want to use.
-
You will be taken to a screen that shows a mockup of the timeline of that platform. You can scroll up and down to see the existing posts, or tap on the "+" button at the bottom right corner to create a new post.
-
You will be taken to a screen that shows a template of the post that you can edit. You can change the profile picture, name, username, date, time, content, media, location, hashtags, mentions, reactions, comments, and other details of the post. You can also use predefined templates or randomize the post by tapping on the buttons at the top right corner.
-
When you are done editing the post, tap on "Done" at the top left corner. You will see a preview of the post on the timeline. You can edit or delete it by tapping on it again.
-
-
How to customize the posts with various options and styles
-
To customize the posts with various options and styles, you need to use the settings menu of Social Dummy iOS. You can do this by following these steps:
-
-
-
On the main screen of the app, tap on the gear icon at the top left corner to open the settings menu.
-
You will see a list of options that you can change, such as theme, language, font size, date format, time format, currency symbol, etc. Tap on the one that you want to change and select your preference.
-
You can also tap on "Style" to change the appearance of each social media platform. You can choose from different colors, layouts, icons, logos, etc. Tap on "Apply" to save your changes.
-
You can also tap on "Advanced" to access more options, such as enabling or disabling ads, notifications, stories, reels, live streams, etc. Tap on "Apply" to save your changes.
-
-
How to share or save the posts as screenshots or videos
-
To share or save the posts as screenshots or videos, you need to use the share menu of Social Dummy iOS. You can do this by following these steps:
-
-
On the screen that shows a mockup of the timeline of a social media platform, tap on the share icon at the top right corner to open the share menu.
-
You will see a list of options that you can choose from, such as screenshot, video recording, copy link, copy text, etc. Tap on the one that you want to use.
-
If you choose screenshot or video recording, you will see a preview of the image or video that you can edit or crop. Tap on "Done" to save or share the image or video.
-
If you choose copy link or copy text, you will see a message that confirms that the link or text has been copied to your clipboard. You can paste it to any app or platform that you want.
-
-
Tips and Tricks for Using Social Dummy iOS
-
How to make the posts more realistic and engaging
-
To make the posts more realistic and engaging, you need to use some tips and tricks that can improve the quality and credibility of your posts. Here are some of them:
-
-
Use relevant and trending topics, hashtags, mentions, and media for your posts. You can search for them on the internet or use the app's suggestions.
-
Use proper grammar, spelling, punctuation, and capitalization for your posts. You can use the app's spell check or proofread your posts before publishing them.
-
Use different tones, styles, and emotions for your posts. You can use emojis, stickers, gifs, memes, filters, effects, etc. to express yourself.
-
Use different types of posts, such as text, image, video, audio, link, poll, quiz, etc. to vary your content and attract more attention.
-
Use realistic numbers and dates for your posts. You can use the app's randomize feature or adjust them manually.
-
-
How to avoid common mistakes and errors
-
To avoid common mistakes and errors, you need to be aware of some potential issues that might occur when using Social Dummy iOS. Here are some of them:
-
-
Do not use real or sensitive information for your posts. You might violate the privacy or security of yourself or others.
-
Do not use offensive or inappropriate content for your posts. You might offend or harm yourself or others.
-
Do not use the app for illegal or unethical purposes. You might face legal or moral consequences.
-
Do not use the app for real social media accounts. You might confuse or mislead yourself or others.
-
Do not rely on the app for accurate or reliable information. You might get false or outdated information.
-
-
How to get help and support from the developer or the community
-
To get help and support from the developer or the community, you need to use the contact options of Social Dummy iOS. You can do this by following these steps:
-
-
On the main screen of the app, tap on the gear icon at the top left corner to open the settings menu.
-
Tap on "Help" to access a list of frequently asked questions and answers that might solve your problems.
-
Tap on "Contact" to send an email to the developer with your feedback, suggestions, bug reports, or questions.
-
Tap on "Social" to follow the developer on Twitter, Instagram, YouTube, or Discord. You can also join the community of other users and share your creations or ideas.
-
-
Conclusion
-
A summary of the main points and benefits of using Social Dummy iOS
-
Social Dummy iOS is a fun and useful app that allows you to create fake social media posts for entertainment or testing purposes. You can choose from different social media platforms and create realistic posts with various options and styles. You can also share or save the posts as screenshots or videos. The app is easy to download and install, and compatible with most iOS devices. The app also offers you tips and tricks for making the posts more realistic and engaging, as well as help and support from the developer or the community.
-
A call to action for the readers to try it out and have fun
-
If you are interested in creating fake social media posts for fun or testing purposes, you should definitely try out Social Dummy iOS. It is a simple and easy-to-use entertainment tool that lets you recreate the timelines of popular social media apps with fake but faithful posts. You can download it from the App Store for free and start creating your own fake posts in minutes. You can also share your creations with your friends or family, or join the community of other users and see what they have made. So what are you waiting for? Download Social Dummy iOS today and have fun!
-
FAQs
-
Q1. Is Social Dummy iOS free or paid?
-
A1. Social Dummy iOS is free to download and use. However, it contains ads that can be removed by purchasing a premium subscription for $0.99 per month or $9.99 per year.
-
Q2. Is Social Dummy iOS safe and legal to use?
A2. Social Dummy iOS is safe and legal to use as long as you follow the terms and conditions of the app and the social media platforms that you are mimicking. You should not use the app for real or sensitive information, offensive or inappropriate content, illegal or unethical purposes, or real social media accounts. You should also respect the privacy and security of yourself and others.
-
Q3. Can I use Social Dummy iOS for real social media accounts?
-
A3. No, you cannot use Social Dummy iOS for real social media accounts. The app is only meant for creating fake posts for entertainment or testing purposes. You should not use the app to impersonate, deceive, or harm yourself or others on real social media platforms.
-
Q4. What are some alternative apps to Social Dummy iOS?
-
A4. Some alternative apps to Social Dummy iOS are:
-
-
Fake Chat Maker: This app allows you to create fake chat conversations for various messaging apps, such as WhatsApp, Messenger, iMessage, etc. You can customize the messages, photos, videos, voice notes, emojis, stickers, etc. You can also share or save the chats as screenshots or videos.
-
Fake Tweet Generator: This app allows you to create fake tweets for Twitter. You can customize the profile picture, name, username, date, time, content, media, location, hashtags, mentions, retweets, likes, comments, etc. You can also share or save the tweets as screenshots or videos.
-
Fake Post Generator: This app allows you to create fake posts for various social media apps, such as Facebook, Instagram, Snapchat, etc. You can customize the profile picture, name, username, date, time, content, media, location, hashtags, mentions, reactions, comments, etc. You can also share or save the posts as screenshots or videos.
-
-
Q5. How can I contact the developer of Social Dummy iOS?
-
A5. You can contact the developer of Social Dummy iOS by sending an email to support@socialdummy.app or by following them on Twitter (@SocialDummyApp), Instagram (@socialdummyapp), YouTube (Social Dummy), or Discord (Social Dummy).
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Kurikulum - TQDK qbul suallar online testlr v abituriyent imtahan ballarn hesablanmas.md b/spaces/congsaPfin/Manga-OCR/logs/Kurikulum - TQDK qbul suallar online testlr v abituriyent imtahan ballarn hesablanmas.md
deleted file mode 100644
index 16b755be96bdc3c5ca0f13b905c8dd70845694de..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Kurikulum - TQDK qbul suallar online testlr v abituriyent imtahan ballarn hesablanmas.md
+++ /dev/null
@@ -1,173 +0,0 @@
-
-
Kurikulum az: a new concept in the Azerbaijani education system
-
Kurikulum az is a new concept in the Azerbaijani education system; it is understood as a conceptual document that reflects the organization and implementation of all activities related to the education process.
-
What is Kurikulum az?
-
Kurikulum az covers the purpose, content, organization, and assessment aspects of the education process.
Kurikulum az reflects the purpose aspect of the education process
-
Kurikulum az reflects the purpose aspect of the education process. This aspect covers why and for what purposes education is carried out, its social and individual benefits, and its directions and priorities. Kurikulum az grounds the purpose of education in the Law of the Republic of Azerbaijan on Education and in the National Education Concept. Kurikulum az describes the purpose of education as follows:
-
-
Education is a process that develops a person's personality and teaches them their rights and duties.
-
Education is a process that brings out a person's intellectual, spiritual, physical, aesthetic, social, and moral potential.
-
Education is a process that helps a person form knowledge, skills, values, and attitudes connected with national and universal values.
-
Education is a process that develops a person's thinking skills for solving significant problems.
-
Education is a process that enables a person to know themselves, express themselves, and develop themselves.
-
Education is a process that prepares a person to build social relationships in different environments, take an interest in public affairs, and engage in public activity.
-
Education is a process that adapts a person to the modern world, acquaints them with important values, and equips them with the ability to use new technologies.
-
Kurikulum az reflects the content aspect of the education process
-
Kurikulum az reflects the content aspect of the education process. This aspect covers what should be learned in education and how it should be learned, and which knowledge, skills, values, and attitudes education aims to develop. Kurikulum az bases the content of education on the subject curricula of general education, and reflects the subject curricula's content outline, standards, learning outcomes, and learning activities.
-
Kurikulum az reflects the content outline of the subject curricula
-
Kurikulum az reflects the content outline of the subject curricula. This outline expresses the topics, concepts, facts, rules, and so on that are essential for learning the subject. Kurikulum az organizes the content outline of the subject curricula around common tactical elements. These elements are as follows:
-
-
Sub-standards: specific statements that define the knowledge and skills essential for learning each topic of the subject.
-
Learning outcomes: specific statements that define the knowledge and skills a student is expected to demonstrate after learning each topic of the subject.
-
Learning activities: the interactive and effective methods and techniques essential for learning each topic of the subject.
-
Kurikulum az reflects the standards of the subject curricula
-
Kurikulum az reflects the standards of the subject curricula. These standards define the minimum requirements for learning the subject in each grade. Kurikulum az organizes the standards of the subject curricula according to the levels of general education. These levels are as follows:
-
-
Primary education: the stage of education from grade 1 to grade 4.
-
Basic education: the stage of education from grade 5 to grade 9.
-
Secondary education: the stage of education from grade 10 to grade 11.
-
-
Kurikulum az reflects the learning outcomes of the subject curricula
-
Kurikulum az reflects the learning outcomes of the subject curricula. These outcomes express the knowledge and skills a student should demonstrate after studying the subject in each grade. Kurikulum az organises them according to criteria such as knowledge and its types, taxonomy, types of thinking, and problem-solving skill, which are the following:
-
Knowledge and its types: knowledge is the information and concepts a person learns and that shape their outlook. It comes in three types: factual knowledge, conceptual knowledge and procedural knowledge.
Taxonomy: a system used to determine the level and complexity of knowledge and skills. It has six levels: remembering, understanding, applying, analysing, synthesising and evaluating.
Types of thinking: the kinds of thought processes a person uses to solve problems. There are three: routine thinking, critical thinking and creative thinking.
Problem-solving skill: the skill that enables a person to recognise problems, analyse them, find alternative approaches and choose the best solution.
-
Kurikulum az reflects the learning activities of the subject curricula
-
Kurikulum az reflects the learning activities of the subject curricula. These activities are the interactive and effective methods and techniques essential for studying the subject in each grade. Kurikulum az links them to tactical elements such as the organisation of interactive lessons, motivation and reflection, which are the following:
-
-
-
Organisation of interactive lessons: a way of structuring the lesson that ensures students' active participation, lets them interact with one another and with the teacher, and teaches them to express their ideas and to listen to others.
Motivation: the factor that increases a student's interest during instruction, helps create their desire to learn and encourages them to play an active role in the learning process.
Reflection: the activity in which a student evaluates and tries to improve their own learning. Reflection enables students to know, express and develop themselves.
-
-
Kurikulum az sample documents
-
The Kurikulum az sample documents show how the Kurikulum az concept is put into practice. They are divided into general section units: the education sphere, the subject curricula and the tactical elements. Each unit is prepared as a separate document and, once approved by the Ministry of Education, is made available to the pedagogical community.
-
Kurikulum az is divided into general section units
-
Kurikulum az is divided into general section units: the education sphere, the subject curricula and the tactical elements. Each unit is prepared as a separate document and, once approved by the Ministry of Education, is made available to the pedagogical community. These documents are summarised in the following table:
-
| Section | Contents |
| --- | --- |
| Education sphere | Education law, description of terms, general pedagogical requirements, rights and duties of participants |
| Subject curricula | Knowledge and its types, taxonomy, types of thinking, problem-solving skill |
| Tactical elements | Annual and daily planning, organisation of interactive lessons, motivation, reflection |
-
The Kurikulum az education sphere document
-
The Kurikulum az education sphere document describes and sets out the requirements of the section unit of the Kurikulum az concept that relates to the education sphere. It covers the legal foundations of education, the description of terms, the general pedagogical requirements, and the rights and duties of participants. It is an official document approved by the Ministry of Education and made available to the pedagogical community, and its contents are divided as follows:
-
Education law: this part sets out the legal foundations of education, which are defined in the Constitution of the Republic of Azerbaijan, the Law on Education of the Republic of Azerbaijan and other legislative acts.
Description of terms: this part explains the terms used in the education process; the explanations are taken from the terminological dictionary approved by the Ministry of Education.
General pedagogical requirements: this part sets out the pedagogical requirements that everyone taking part in the education process must observe. They are defined in order to raise the quality of education, increase its effectiveness and ensure its democratic character.
Rights and duties of participants: this part sets out the rights and duties of everyone taking part in the education process, as defined in the regulations approved by the Ministry of Education.
-
The Kurikulum az subject curriculum documents
-
The Kurikulum az subject curriculum documents describe and set out the requirements of the section unit of the Kurikulum az concept that relates to the subject curricula. For every subject and every grade of general education they reflect the essential contents, standards, learning outcomes and learning activities. They form an official series of documents approved by the Ministry of Education and made available to the pedagogical community, covering the following subjects:
-
Azerbaijani Language and Literature
Russian Language and Literature
English
Mathematics
Physics
Chemistry
Biology
Geography
History
Ethics and Law
Civic Education
Music
Drawing and Folk Arts
Physical Education and Healthy Living
Informatics and Technology
-
The Kurikulum az tactical element documents
-
The Kurikulum az tactical element documents describe and set out the requirements of the section unit of the Kurikulum az concept that relates to the tactical elements. For every subject and every grade of general education they cover elements such as annual and daily planning, the organisation of interactive lessons, motivation and reflection. They form an official series of documents approved by the Ministry of Education and made available to the pedagogical community, covering the following elements:
-
Annual and daily planning: this element describes how the topics needed for studying the subject in each grade are scheduled over time. The planning is prepared so that students can take part in the learning process comfortably and effectively.
Organisation of interactive lessons: this element describes the interactive and effective methods and techniques needed for studying the subject in each grade. These methods and techniques ensure students' active participation and let them interact with one another and with the teacher.
Motivation: this element describes the factors that increase students' interest, help create their desire to learn and encourage them to play an active role in the learning process.
Reflection: this element describes the activities through which, after studying the subject, students evaluate and try to improve their own learning. These activities enable students to know, express and develop themselves.
-
-
The benefits of Kurikulum az
-
The benefits of Kurikulum az are the positive results the concept delivers for the overall development of the education system and for students receiving a quality education. They can be listed as follows:
-
Kurikulum az makes the education process more purposeful, content-rich, organised and measurable.
Kurikulum az defines the rights and duties of everyone taking part in the education process and ensures that they are observed.
Kurikulum az helps develop students' knowledge, skills, values and attitudes.
Kurikulum az develops students' thinking, problem-solving and creative skills.
Kurikulum az helps students form knowledge, skills, values and attitudes connected with national and universal values.
Kurikulum az enables students to know, express and develop themselves.
Kurikulum az prepares students to build social relationships in different environments, take an interest in public affairs and engage in public activity.
Kurikulum az helps students adapt to the modern world, become familiar with important values, and gain the ability to use new technologies.
-
-
The rollout of Kurikulum az
-
The rollout of Kurikulum az means ensuring that the concept is implemented at every level and in every structure of the education system. The following steps are taken to roll it out:
-
Preparation: the organisational and content aspects of the Kurikulum az concept are drawn up. This step involves the official bodies of the Ministry of Education and representatives of the pedagogical community.
Approval: the Kurikulum az concept is officially approved. This step involves the official bodies of the Ministry of Education and representatives of the pedagogical community.
Publication: the Kurikulum az concept is made available to the pedagogical community. This step involves the official bodies of the Ministry of Education and representatives of the pedagogical community.
Implementation: the Kurikulum az concept is put into practice at every level and in every structure of the education system. This step involves the official bodies of the Ministry of Education together with the heads and teachers of educational institutions.
Monitoring and evaluation: after implementation, monitoring and evaluation activities are carried out to check and improve the effectiveness and quality of the concept. This step involves the official bodies of the Ministry of Education together with the heads and teachers of educational institutions.
-
-
Summary
-
Kurikulum az is a new concept in the Azerbaijani education system, understood as a conceptual document that describes how the education process is organised and carried out. It covers the purpose, content, organisation and assessment aspects of the education process and is approved by the Ministry of Education and made available to the pedagogical community. It is based on the subject curricula of general education and reflects their contents, standards, learning outcomes and learning activities, while taking into account tactical elements such as annual and daily planning, the organisation of interactive lessons, motivation and reflection. The Kurikulum az sample documents show how the concept is put into practice and are divided into general section units: the education sphere, the subject curricula and the tactical elements. The benefits of Kurikulum az are the positive results it delivers for the overall development of the education system and for students receiving a quality education, and its rollout means ensuring that the concept is implemented at every level and in every structure of the education system.
-
FAQ
-
Below are answers to some common questions about Kurikulum az:
-
Why is it called Kurikulum az?
-
The name Kurikulum az was chosen to reflect Azerbaijan's national identity and technological development. The word "kurikulum" comes from Latin and expresses its universal character, while the "az" part stands for Azerbaijan's national code and symbolises its technological development.
-
Who is Kurikulum az prepared for?
-
Kurikulum az is prepared for all participants in general education: the students themselves, teachers, school heads, parents, the official bodies of the Ministry of Education and representatives of the pedagogical community.
-
How should Kurikulum az be used?
-
Kurikulum az should be used as an official document. It sets out all the contents, standards, learning outcomes and learning activities needed to ensure that students can take part in the learning process comfortably and effectively.
-
How should Kurikulum az be implemented?
-
Kurikulum az should be implemented as an official document at every level and in every structure of the education system. Its implementation helps develop students' knowledge, skills, values and attitudes.
-
How should Kurikulum az be monitored and evaluated?
-
Kurikulum az should be monitored and evaluated as an official document. These monitoring and evaluation activities are needed after implementation in order to check and improve the effectiveness and quality of the concept, and they involve the official bodies of the Ministry of Education together with the heads and teachers of educational institutions.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Naruto Ultimate Ninja Storm 3 PPSSPP Everything You Need to Know About the Game.md b/spaces/congsaPfin/Manga-OCR/logs/Naruto Ultimate Ninja Storm 3 PPSSPP Everything You Need to Know About the Game.md
deleted file mode 100644
index 3df5b3d43d4822958c0e8d02132d228bb9b61ba7..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Naruto Ultimate Ninja Storm 3 PPSSPP Everything You Need to Know About the Game.md
+++ /dev/null
@@ -1,172 +0,0 @@
-
-
How to Download and Play Naruto Ultimate Ninja Storm 3 on PPSSPP Emulator
-
If you are a fan of Naruto anime and manga, you might have heard of Naruto Ultimate Ninja Storm 3, one of the most popular games based on the series. This game was originally released for PlayStation 3, Xbox 360, and PC in 2013, but you can also play it on your Android or PC device using a PSP emulator called PPSSPP. In this article, we will show you how to download and play Naruto Ultimate Ninja Storm 3 on PPSSPP emulator using a file hosting service called MediaFire.
-
Introduction
-
Naruto Ultimate Ninja Storm 3 is a fighting game that follows the events of the Fourth Shinobi World War arc in the Naruto Shippuden anime. You can play as over 80 characters from the series, including Naruto, Sasuke, Sakura, Kakashi, Madara, Itachi, and more. You can also experience epic boss battles, stunning cinematics, and immersive environments in this game.
-
PPSSPP is an emulator that allows you to run PSP games on your Android or PC device. It has many features that enhance the graphics, sound, and performance of the games. You can also customize the controls, save states, cheats, and more with this emulator.
-
MediaFire is a file hosting service that lets you upload, store, and share files online. You can access your files from any device with an internet connection. You can also download files from other users with a link. MediaFire offers up to 50 GB of free storage space and unlimited downloads.
-
Requirements
-
Before you download and play Naruto Ultimate Ninja Storm 3 on PPSSPP emulator, you need to make sure that your device meets the minimum or recommended specifications. Here are the requirements for Android and PC devices:
-
-
Android:
-
Minimum: Android 4.0 or higher, 1 GB RAM, OpenGL ES 2.0 support
-
Recommended: Android 5.0 or higher, 2 GB RAM or more, OpenGL ES 3.0 support or higher
-
-
-
PC:
-
Minimum: Windows XP or higher, Intel Core 2 Duo or equivalent CPU, 512 MB RAM, DirectX 9.0c support
-
Recommended: Windows 7 or higher, Intel Core i5 or equivalent CPU, 2 GB RAM or more, DirectX 11 support or higher
-
-
-
-
You also need to download and install the PPSSPP emulator for your device.
-
After you have downloaded the PPSSPP emulator and the Naruto Ultimate Ninja Storm 3 ISO file, you need to install and configure them on your device. Here are the steps to do so:
Install PPSSPP emulator on your device by following the instructions on the screen.
-
Extract the Naruto Ultimate Ninja Storm 3 ISO file from the MediaFire link using a file manager app or a zip extractor app. You will get a file named NARUTO SHIPPUDEN Ultimate Ninja STORM 3.iso.
-
Copy or move the NARUTO SHIPPUDEN Ultimate Ninja STORM 3.iso file to a folder of your choice on your device. You can use the default PSP folder or create a new one.
-
Open PPSSPP emulator and tap on the Games tab. Navigate to the folder where you saved the NARUTO SHIPPUDEN Ultimate Ninja STORM 3.iso file and tap on it to load the game.
-
Before you start playing, you may want to adjust some settings on PPSSPP emulator to improve the performance and quality of the game. Here are some recommended settings:
-
-
Graphics:
-
Mode: Buffered rendering
-
Frameskipping: Off or 1
-
Rendering resolution: 2x PSP or higher
-
Texture filtering: Linear or Anisotropic
-
Texture scaling: Off or xBRZ
-
Hardware transform: On
-
Software skinning: On
-
Mipmapping: On
-
VSync: On
-
-
-
Audio:
-
Enable sound: On
-
Audio latency: Low or Medium
-
-
-
System:
-
Fast memory: On
-
Multithreaded: On
-
I/O timing method: Fast or Host
-
-
-
Controls:
-
Edit touch control layout: Adjust the size and position of the buttons according to your preference
-
Edit gamepad mappings: Map the buttons of your external controller if you have one
-
-
-
-
-
Gameplay and Features
-
Naruto Ultimate Ninja Storm 3 is a fun and exciting game that lets you experience the thrilling battles and adventures of Naruto and his friends. Here are some of the gameplay and features of the game:
-
-
The game has a story mode that follows the events of the Fourth Shinobi World War arc, from the Five Kage Summit to the final showdown between Naruto and Sasuke. You can also play as different characters in different scenarios, such as Sasuke vs Itachi, Naruto vs Pain, and more.
-
The game has a free-roaming mode that allows you to explore various locations in the Naruto world, such as Konoha, Suna, Kumo, and more. You can also interact with other characters, collect items, complete missions, and unlock secrets.
-
The game has a versus mode that lets you fight against other players or AI opponents in various stages and settings. You can choose from over 80 characters, each with their own unique moves, combos, jutsus, and awakenings. You can also customize your character's appearance, skills, items, and support characters.
-
The game has a Full Burst HD version that adds some enhancements and extras to the original PSP version. Here is a table comparing the differences between the two versions:
-
-
| PSP Version | Full Burst HD Version |
| --- | --- |
| Has lower resolution and graphics quality | Has higher resolution and graphics quality |
| Has fewer characters and costumes | Has more characters and costumes, such as Kabuto (Sage Mode), Naruto (Hokage), Sasuke (Eternal Mangekyo Sharingan), and more |
| Has fewer stages and settings | Has more stages and settings, such as the Uchiha Hideout, the Great Ninja War Battlefield, and more |
| Has no online multiplayer mode | Has an online multiplayer mode that lets you play with other players around the world |
| Has no additional content or DLC | Has additional content and DLC, such as the Road to Ninja costumes, the Sage Kabuto chapter, and more |
-
-
Some tips and tricks to enhance your gaming experience are:
-
-
Use the chakra dash and the substitution jutsu wisely, as they can help you evade or counter your opponent's attacks.
-
Use the support characters strategically, as they can assist you in offense, defense, or balance.
-
Use the awakening mode when your health is low, as it can boost your power and abilities.
-
Collect ryo and ninja tools from the story mode and the free-roaming mode, as they can help you unlock and upgrade your character's skills and items.
-
Complete the ninja world timeline and the ultimate decision events in the story mode, as they can unlock alternative scenarios and endings.
-
-
Conclusion
-
Naruto Ultimate Ninja Storm 3 is a great game that lets you enjoy the thrilling and emotional story of Naruto and his friends. You can also play it on your Android or PC device using PPSSPP emulator and a file from MediaFire. All you need to do is follow the steps we have provided in this article and you will be ready to play. You can also adjust the settings and customize your character to suit your preferences. We hope you have fun playing this game and reliving the epic moments of the Naruto series.
-
If you liked this article, please share it with your friends and leave us a comment below. We would love to hear your feedback and suggestions. Also, if you have any questions or problems regarding the game or the emulator, feel free to ask us in the comment section. We will try our best to help you out.
-
Thank you for reading this article and happy gaming!
-
FAQs
-
Here are some frequently asked questions and answers related to the topic:
-
-
Q: Is Naruto Ultimate Ninja Storm 3 legal to download and play?
-
A: Yes, as long as you own a copy of the original game for PSP or PS3. Downloading and playing a backup copy of a game that you own is legal in most countries. However, downloading and playing a pirated copy of a game that you do not own is illegal and we do not condone it.
-
Q: Is Naruto Ultimate Ninja Storm 3 safe to download from MediaFire?
-
A: Yes, as long as you download it from a trusted source. The link we have provided in this article is safe and verified by us. However, be careful of other links that may contain viruses or malware. Always scan your files before opening them.
-
Q: How can I play Naruto Ultimate Ninja Storm 3 with my friends online?
-
A: You can play Naruto Ultimate Ninja Storm 3 with your friends online using PPSSPP's built-in online multiplayer mode. You need to have a stable internet connection and a valid IP address. You also need to create or join a room with your friends using PPSSPP's network settings. For more details, please refer to this guide: How to play PSP games online with PPSSPP.
-
Q: How can I fix Naruto Ultimate Ninja Storm 3 lagging or crashing on PPSSPP?
-
A: There are several possible reasons why Naruto Ultimate Ninja Storm 3 may lag or crash on PPSSPP. Some of them are:
-
-
Your device does not meet the minimum or recommended specifications to run the game smoothly.
-
Your PPSSPP settings are not optimized for the game.
-
Your Naruto Ultimate Ninja Storm 3 ISO file is corrupted or incomplete.
-
Your PPSSPP emulator is outdated or incompatible with the game.
-
-
To fix these issues, you can try the following solutions:
-
-
Upgrade your device's hardware or software if possible.
-
Adjust your PPSSPP settings according to the recommendations we have given in this article, or experiment with different options until you find the best ones for your device and game.
-
Redownload the Naruto Ultimate Ninja Storm 3 ISO file from the MediaFire link we have provided in this article or from another trusted source. Make sure the file is complete and not corrupted, for example by comparing its checksum as shown in the sketch after this list.
-
Update your PPSSPP emulator to the latest version or try a different version that is compatible with the game.
-
-
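A generic way to confirm that a downloaded file is complete is to compare its checksum with one published by the uploader, when such a hash is available. The short Python sketch below is a hypothetical helper for illustration only; it is not an antivirus scan, it is not tied to MediaFire or PPSSPP, and the command-line usage shown in the comment is just a placeholder.
```python
import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python checksum.py <file> [expected_hash]
    actual = sha256_of(sys.argv[1])
    print("SHA-256:", actual)
    if len(sys.argv) > 2:
        print("OK" if actual == sys.argv[2].lower() else "Mismatch - redownload the file")
```
If the digest does not match the one published by the source, the download is incomplete or has been altered and should be fetched again.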
Q: How can I get more games for PPSSPP emulator?
-
A: You can get more games for PPSSPP emulator by downloading ISO or CSO files from various sources online. However, you should only download games that you own legally and that are safe and virus-free. You can also rip your own PSP games using a PSP console and a USB cable. For more details, please refer to this guide: How to get PSP games for PPSSPP.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Soul Knight A Pixelated Roguelike Game with Online Co-op for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Soul Knight A Pixelated Roguelike Game with Online Co-op for Android.md
deleted file mode 100644
index 3918c71f6b8d44a410b31a7c5bccd93dc68bbf62..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Soul Knight A Pixelated Roguelike Game with Online Co-op for Android.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
-
-
-
How to Play Soul Knight Online Co-op on Android
-
Do you love shooting games with pixel graphics, rogue-like elements, and tons of weapons? If so, you might want to check out Soul Knight, a shooter game that features extremely easy and intuitive control, super smooth and enjoyable gameplay, and random dungeons full of alien minions. But what if you want to share this fun experience with your friends online? Well, you can do that too! In this article, we will show you how to play Soul Knight online co-op on Android with some simple steps.
-
What You Need to Play Soul Knight Online Co-op on Android
-
A compatible Android device
-
First of all, you need an Android device that can run Soul Knight smoothly. The game requires Android 4.4 or higher and at least 100 MB of free storage space. You also need a stable internet connection for online co-op mode.
-
The Soul Knight app
-
Next, you need to download and install the Soul Knight app from Google Play Store. It's free to play with some optional in-app purchases. You can find it here: Soul Knight - Apps on Google Play
-
A VPN app and a VPN profile file
-
Since Soul Knight does not have an official online co-op mode, you need to use a VPN (virtual private network) app and a VPN profile file to connect with other players online. A VPN app allows you to create a secure connection to another network over the internet, while a VPN profile file contains the settings and information for the VPN connection. You can download and install any VPN app that supports OpenVPN protocol from Google Play Store, such as OpenVPN Connect - Fast & Safe SSL VPN Client - Apps on Google Play or Turbo VPN- Free VPN Proxy Server & Secure Service - Apps on Google Play. You also need to download and install a VPN profile file from our Discord server, which you can find here: Soul Knight - Discord. The VPN profile file will allow you to join the Soul Knight online co-op network and play with other players.
-
-
The Soul Knight Host app (if you want to host a game)
-
If you want to host a game and invite other players to join, you need to download and install the Soul Knight Host app from our Discord server. The Soul Knight Host app is a tool that helps you create and manage your online co-op game. You can find it here: Soul Knight - Discord. The Soul Knight Host app will ask you for the private IPs of the players who want to join your game, and then start the game for you.
-
How to Set Up Soul Knight Online Co-op on Android
-
Download and install the Soul Knight app from Google Play Store
-
The first step is to download and install the Soul Knight app from Google Play Store. You can do this by following this link: Soul Knight - Apps on Google Play. Once you have installed the app, open it and grant it the necessary permissions.
-
Download and install a VPN app and a VPN profile file from our Discord server
-
The next step is to download and install a VPN app and a VPN profile file from our Discord server. You can do this by following these steps:
Open the VPN app and import the VPN profile file that you downloaded from our Discord server.
-
Enter your username and password that you received from our Discord server.
-
Connect to the VPN using the VPN profile file.
-
-
Download and install the Soul Knight Host app from our Discord server (if you want to host a game)
-
If you want to host a game and invite other players to join, you need to download and install the Soul Knight Host app from our Discord server. You can do this by following these steps:
-
-
Go to the #soul-knight-host channel on our Discord server and download the latest Soul Knight Host app.
-
Install the Soul Knight Host app on your Android device.
-
Open the Soul Knight Host app and grant it the necessary permissions.
-
Connect to the VPN using the VPN profile file
-
After you have downloaded and installed the VPN app and the VPN profile file, you need to connect to the VPN using the VPN profile file. You can do this by following these steps:
-
-
Open the VPN app and select the VPN profile file that you imported.
-
Tap on the connect button and wait for the connection to be established.
-
You should see a notification that says you are connected to the VPN.
-
You can also check your IP address and location on the VPN app or on any online IP checker website.
-
-
Start the Soul Knight app and go to Multiplayer mode
-
Now that you are connected to the VPN, you can start the Soul Knight app and go to Multiplayer mode. You can do this by following these steps:
-
-
Open the Soul Knight app and tap on the start button.
-
Select Multiplayer mode and choose either Local or Online.
-
If you choose Local, you can play with other players who are connected to the same VPN network as you.
-
If you choose Online, you can play with other players who are online on any VPN network.
-
-
If you are hosting a game, input the private IPs of the players who want to join in the Soul Knight Host app and press start
-
If you are hosting a game and want to invite other players to join, you need to input the private IPs of the players who want to join in the Soul Knight Host app and press start. You can do this by following these steps:
-
-
Open the Soul Knight Host app and tap on the host button.
-
Enter your name and select your hero.
-
Input the private IPs of the players who want to join your game. You can find their private IPs on their VPN apps or on our Discord server.
-
Press start and wait for the game to load.
-
-
If you are joining a game, tell your private IP to the host and wait for them to start the game
-
If you are joining a game that is hosted by another player, you need to tell your private IP to the host and wait for them to start the game. You can do this by following these steps:
-
-
Open your VPN app and find your private IP address. It should be something like 10.x.x.x (if you prefer, you can also check it with the small script shown after this list).
-
Tell your private IP address to the host of the game. You can do this through voice chat, text chat, or our Discord server.
-
Wait for the host to input your private IP in their Soul Knight Host app and press start.
-
You should see a notification that says you are joining a game.
-
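The VPN app is the simplest place to read your private IP, but if you are also connected to the same VPN from a PC, the small Python sketch below shows one generic way to ask the operating system which local address it would use for the 10.x.x.x network. It is a hypothetical illustration only; it is not part of Soul Knight, the Soul Knight Host app, or any particular VPN app, and the probe address is just an arbitrary unroutable value on that range.
```python
import socket

def get_private_ip(probe_host="10.255.255.255", probe_port=1):
    """Return the local IP the OS would use to reach the 10.x.x.x range.

    "Connecting" a UDP socket sends no packets; it only makes the OS pick
    the outgoing interface, whose address is the private IP other players
    need to enter in the Soul Knight Host app.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe_host, probe_port))
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == "__main__":
    print("Private IP:", get_private_ip())
```
Whichever way you look it up, the address you share with the host must be the one assigned by the VPN, not your regular Wi-Fi address.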
How to Play Soul Knight Online Co-op on Android
-
Enjoy shooting some alien minions with your friends online!
-
Congratulations! You have successfully set up Soul Knight online co-op on Android. Now you can enjoy shooting some alien minions with your friends online. Soul Knight is a fun and addictive shooter game that will keep you entertained for hours. You can choose from different heroes, each with their own unique skills and abilities. You can also collect and use hundreds of weapons, from pistols and rifles to lasers and swords. You can also upgrade your weapons and skills with gems and coins that you earn from killing enemies and completing dungeons.
-
Use different heroes, weapons, skills, and strategies to survive the randomly generated dungeons
-
One of the best features of Soul Knight is that it has randomly generated dungeons, which means that every time you play, you will encounter different enemies, traps, treasures, and bosses. This adds a lot of variety and challenge to the game, as you never know what to expect. You will need to use different heroes, weapons, skills, and strategies to survive the dungeons and reach the final boss. You can also customize your hero's appearance and stats with skins and buffs that you can buy or unlock.
-
Explore different game modes and features such as tower defense, boss rush, daily challenges, etc.
-
Soul Knight also has different game modes and features that you can explore and enjoy. For example, you can play tower defense mode, where you have to defend your base from waves of enemies. You can also play boss rush mode, where you have to fight against multiple bosses in a row. You can also play daily challenges, where you have to complete specific tasks or objectives within a time limit. You can also unlock achievements, collect pets, craft items, and more.
-
Communicate with your teammates using voice chat or text chat
-
Another great feature of Soul Knight online co-op is that you can communicate with your teammates using voice chat or text chat. This makes the game more fun and social, as you can coordinate your actions, share tips, or just chat with your friends. You can also use emojis and stickers to express yourself. To use voice chat or text chat, you need to tap on the microphone or keyboard icon on the top right corner of the screen.
-
Be aware of the latency, data usage, and security issues that may arise from using a public VPN
-
While playing Soul Knight online co-op on Android is a lot of fun, you should also be aware of some potential issues that may arise from using a public VPN. For example, you may experience latency or lag due to the distance between you and the VPN server. This may affect your gameplay performance and enjoyment. You may also consume more data than usual due to the encryption and decryption process of the VPN. This may affect your data plan or cost. You may also expose yourself to security risks such as hackers or malware that may try to access your device or data through the VPN network. Therefore, you should always use a trusted VPN app and profile file, and avoid using public Wi-Fi or unsecured networks when playing Soul Knight online co-op on Android.
-
Conclusion
-
Soul Knight is an amazing shooter game that you can play online co-op on Android with your friends. It has easy and intuitive controls, smooth and enjoyable gameplay, random dungeons full of alien minions, different heroes, weapons, skills, strategies, game modes, features, and more. To play Soul Knight online co-op on Android, you need a compatible Android device, the Soul Knight app from Google Play Store, a VPN app and a VPN profile file from our Discord server, and the Soul Knight Host app (if you want to host a game). You also need to follow some simple steps to set up Soul Knight online co-op on Android. However, you should also be aware of some potential issues that may arise from using a public VPN such as latency, data usage, and security risks. We hope this article has helped you learn how to play Soul Knight online co-op on Android. Now go ahead and have some fun shooting some alien minions with your friends online!
-
Frequently Asked Questions
-
Q: How many players can play Soul Knight online co-op on Android?
-
A: Soul Knight online co-op on Android supports up to 4 players per game.
-
Q: Can I play Soul Knight online co-op on Android with players who use iOS devices?
-
A: No, Soul Knight online co-op on Android is only compatible with other Android devices.
-
Q: Can I play Soul Knight online co-op on Android without using a VPN?
-
A: No, Soul Knight online co-op on Android requires a VPN to connect with other players online.
-
Q: Which VPN app and profile file should I use to play Soul Knight online co-op on Android?
-
A: You should use the VPN app and profile file that are provided by our Discord server. You can find them here: Soul Knight - Discord. Do not use any other VPN app or profile file, as they may not work or may cause problems.
-
Q: How can I join the Soul Knight Discord server?
-
A: You can join the Soul Knight Discord server by following this link: Soul Knight - Discord. You will need a Discord account to join. The Soul Knight Discord server is a friendly and helpful community of Soul Knight players who chat, play together, and have fun.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated ((EXCLUSIVE)).md b/spaces/contluForse/HuggingGPT/assets/ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated ((EXCLUSIVE)).md
deleted file mode 100644
index 2f7d1c9dce60f053fb4f1d0dec1f9df199b23ed6..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated ((EXCLUSIVE)).md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated: How to Get the Best of Adobe Creative Suite 5
-
-
If you are looking for a way to unleash your creativity and design stunning digital content, you might be interested in ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated. This is a software package that includes all the tools you need to create amazing graphics, videos, web pages, and more. You can get access to Photoshop, Illustrator, InDesign, Dreamweaver, Premiere Pro, After Effects, and many other applications that will help you bring your ideas to life.
-
-
However, you might also be wondering how to get this software for free. After all, ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is not cheap, and you might not have the budget to afford it. Fortunately, there is a way to get it without paying a dime. All you need is a keygen that will generate a valid serial number for you.
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated
A keygen is a program that can create unique codes that can activate a software product. It works by using an algorithm that mimics the one used by the original manufacturer. By entering the code generated by the keygen, you can bypass the activation process and use the software as if you bought it legally.
-
-
However, not all keygens are reliable and safe. Some of them might contain viruses or malware that can harm your computer or steal your personal information. Some of them might also generate invalid codes that will not work or will be detected by the software as fraudulent. That's why you need to be careful when choosing a keygen and only download it from trusted sources.
-
-
How to use ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?
-
-
If you want to use ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated, you need to follow these steps:
-
-
-
Download the keygen from a reputable website. You can find one by searching online or by following the links provided by some of the web pages that offer this software for free.
-
Extract the keygen from the zip file and run it as administrator.
-
Select the product you want to activate from the drop-down menu.
-
Click on Generate and copy the serial number that appears.
-
Download the trial version of ADOBE CS5.5 MASTER COLLECTION from the official website or from another source.
-
Install the software and choose the option to enter a serial number.
-
Paste the serial number that you copied from the keygen and complete the installation.
-
Patch your hosts file to prevent the software from connecting to the internet and verifying your activation status. You can do this by following the instructions provided by the keygen or by editing the file manually.
-
Enjoy using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated for free!
-
-
-
What are the benefits of using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?
-
-
By using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated, you can enjoy many benefits, such as:
-
-
-
You can save money by not having to buy the software.
-
You can access all the features and functions of ADOBE CS5.5 MASTER COLLECTION without any limitations or restrictions.
-
You can create professional-quality digital content for personal or commercial purposes.
-
You can update your skills and learn new techniques with the latest tools and technologies.
-
You can impress your clients, colleagues, friends, or family with your amazing creations.
-
-
-
What are the risks of using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?
-
-
However, using ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated also comes with some risks, such as:
-
-
-
-
You might violate the terms and conditions of Adobe and face legal consequences.
-
You might expose your computer to viruses or malware that can damage your system or compromise your security.
-
You might get invalid or blacklisted serial numbers that will not work or will cause errors or crashes.
-
You might lose access to customer support or updates from Adobe.
-
You might experience compatibility issues or bugs with some of the software components.
-
-
-
Conclusion
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a great way to get access to one of the most powerful and versatile software packages for digital content creation. However, it also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.
-
What are the features of ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is not just a simple software package. It is a comprehensive solution that offers you a wide range of features and benefits, such as:
-
-
-
You can work with different media formats and platforms, including print, web, mobile, video, and interactive.
-
You can use the latest technologies and standards, such as HTML5, CSS3, jQuery, Flash, and AIR.
-
You can integrate your workflow with other Adobe products and services, such as Photoshop, Illustrator, InDesign, Acrobat, Bridge, Device Central, and Creative Cloud.
-
You can enhance your productivity and efficiency with advanced tools and features, such as content-aware fill, puppet warp, perspective drawing, multiple artboards, video editing, animation, and 3D effects.
-
You can express your creativity and vision with unlimited possibilities and options, such as custom brushes, gradients, patterns, filters, effects, styles, and fonts.
-
-
-
How to get the most out of ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a powerful and versatile software package that can help you create amazing digital content. However, to get the most out of it, you need to follow some tips and tricks, such as:
-
-
-
Learn the basics of each application and how they work together. You can find tutorials and guides on the official website or on other online resources.
-
Explore the different features and functions of each application and experiment with different settings and options. You can find inspiration and examples on the official website or on other online resources.
-
Use the best practices and standards for each media format and platform. You can find recommendations and guidelines on the official website or on other online resources.
-
Optimize your performance and quality by using the appropriate tools and techniques for each task. You can find tips and tricks on the official website or on other online resources.
-
Keep your software updated and secure by downloading the latest patches and updates from the official website or from other online resources.
-
-
-
Conclusion
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. It offers you a wide range of features and benefits that can enhance your creativity and productivity. However, it also requires some skills and knowledge to use it effectively and safely. If you want to use it for free, you need to use a keygen that can generate a valid serial number for you. However, this also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.
-
How to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated?
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. However, it might also encounter some problems or issues that can affect your work or experience. Some of the common problems or issues are:
-
-
-
The software does not install or run properly.
-
The software does not accept or recognize the serial number generated by the keygen.
-
The software crashes or freezes frequently.
-
The software displays errors or warnings.
-
The software performs slowly or poorly.
-
The software does not work with some of the media formats or platforms.
-
-
-
If you face any of these problems or issues, you need to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated and try to fix them. You can do this by following some steps, such as:
-
-
-
Check your system requirements and compatibility. Make sure your computer meets the minimum requirements and supports the software and its components.
-
Check your internet connection and firewall settings. Make sure your internet connection is stable and secure and your firewall does not block the software or its components.
-
Check your antivirus and malware protection. Make sure your antivirus and malware protection does not interfere with the software or its components.
-
Check your hosts file and activation status. Make sure your hosts file is patched correctly and your activation status is valid and verified.
-
Check your software settings and preferences. Make sure your software settings and preferences are configured correctly and suitably for your work and experience.
-
Check your software updates and patches. Make sure your software is updated and patched to the latest version and has no bugs or glitches.
-
Check your online resources and support. Make sure you have access to online resources and support that can help you with your problems or issues.
-
-
-
Conclusion
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. However, it might also encounter some problems or issues that can affect your work or experience. If you face any of these problems or issues, you need to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated and try to fix them. You can do this by following some steps that can help you check and resolve your problems or issues. However, if you still cannot fix them, you might need to contact Adobe customer support or seek professional help.
-
Conclusion
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a software package that can help you create stunning digital content for any purpose and platform. It offers you a wide range of features and benefits that can enhance your creativity and productivity. However, it also requires some skills and knowledge to use it effectively and safely. If you want to use it for free, you need to use a keygen that can generate a valid serial number for you. However, this also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.
-
-
If you use ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated, you might also encounter some problems or issues that can affect your work or experience. If you face any of these problems or issues, you need to troubleshoot ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated and try to fix them. You can do this by following some steps that can help you check and resolve your problems or issues. However, if you still cannot fix them, you might need to contact Adobe customer support or seek professional help.
-
-
ADOBE CS5.5 MASTER COLLECTION KEYGEN Auto Activated is a great way to get access to one of the most powerful and versatile software packages for digital content creation. However, it also involves some risks and challenges that you need to be aware of before using it. If you decide to use it, make sure you do it at your own risk and responsibility.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Cdbf - Dbf Viewer And Editor 2.20 Crack Why You Should Choose This Software Over Others.md b/spaces/contluForse/HuggingGPT/assets/Cdbf - Dbf Viewer And Editor 2.20 Crack Why You Should Choose This Software Over Others.md
deleted file mode 100644
index 59658e15067eb3ab5139fa6f1ca75c0dec1716c9..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Cdbf - Dbf Viewer And Editor 2.20 Crack Why You Should Choose This Software Over Others.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Arrowbridge ii 1.14a+ BBS Name: PC97 Sysop Name: Riz la+ Serial #1: ... Data detective pc v1.0 7101-3D91-0861 (then hit ";OK";, not [Enter]) ... Download butler CORE/JES Key: 3ed95671-111 Extra: 1 ... Gold wave version full First=CHUL IN Last=HOM Password=LCOKOEB ... Gwd text editor 1.5 GTE12/123-567. 1fdad05405
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/EaseUS Data Recovery Wizard Free VERIFIED Lets You Recover Lost Or Deleted Data.md b/spaces/contluForse/HuggingGPT/assets/EaseUS Data Recovery Wizard Free VERIFIED Lets You Recover Lost Or Deleted Data.md
deleted file mode 100644
index e300f020e34de27adbf9d1c20a8f79eed6a73886..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/EaseUS Data Recovery Wizard Free VERIFIED Lets You Recover Lost Or Deleted Data.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
EaseUS Data Recovery Wizard Free lets you recover lost or deleted data
When you install Windows or Office on your PC, you need to enter a product key or a license key to activate it. This key is verified by Microsoft through an online server. However, if you are using a volume license edition of Windows or Office, such as in an enterprise or an educational institution, you can activate your products through a local KMS server instead of an online server.
-
-
A KMS server is a computer that runs a special software that can generate and validate license keys for Windows and Office products. When you activate your products through a KMS server, you don't need to enter a product key or connect to the internet. You just need to connect to the local network where the KMS server is located.
-
-
KMS Pico works by creating a virtual KMS server on your PC and forcing your Windows and Office products to activate themselves against it. This way, you can bypass the online verification process and enjoy the full features of your Microsoft products without paying anything.
-
-
How to use KMS Pico to activate your Windows and Office products?
-
-
Using KMS Pico to activate your Windows and Office products is very easy and fast. You just need to follow these simple steps:
-
-
-
Download the latest version of KMS Pico from the official website: https://www.kmspicoofficial.com/
-
Extract the zip file and run the setup.exe file as administrator.
-
Follow the installation instructions and accept the terms and conditions.
-
Wait for the installation to complete and launch the program.
-
Click on the red button to start the activation process.
-
Wait for a few seconds until you see a green check mark and a message saying "Activation successful".
-
Restart your PC and enjoy your activated Windows and Office products.
-
-
-
Note: You may need to disable your antivirus or firewall before running KMS Pico, as some security programs may detect it as a malicious software. However, KMS Pico is 100% safe and clean to use.
-
-
What are the benefits of using KMS Pico?
-
-
Using KMS Pico to activate your Windows and Office products has many benefits, such as:
-
-
-
-
You can activate any version of Windows and Office, including Windows 7/8/8.1/10/11 and Office 2010/2013/2016/2019/2021.
-
You can activate your products permanently, without any expiration date or trial period.
-
You can enjoy all the premium features of your Microsoft products, such as updates, security patches, customization options, etc.
-
You can save money by not buying expensive license keys or subscriptions.
-
You can avoid any legal issues or penalties by using genuine activation methods.
-
-
-
Conclusion
-
-
KMS Pico is a powerful software tool that can activate any version of Windows and Office by using a crack method called KMS. It is easy to use, fast, and reliable. It can help you enjoy all the benefits of your Microsoft products without paying anything. If you want to download KMS Pico and activate your Windows and Office products for free, visit the official website: https://www.kmspicoofficial.com/
-
Tips and tricks for using KMS Pico
-
-
KMS Pico is a simple and effective tool for activating your Windows and Office products, but there are some tips and tricks that can help you use it better and avoid some common problems. Here are some of them:
-
-
-
Disable your antivirus or firewall before running KMS Pico. Some security programs may detect KMS Pico as a malicious software and block it from running or delete it from your PC. To prevent this, you should disable your antivirus or firewall temporarily before running KMS Pico. You can enable them again after the activation is done.
-
Run KMS Pico as administrator. To ensure that KMS Pico can access all the necessary files and registry entries to activate your products, you should run it as administrator. To do this, right-click on the KMS Pico icon and select "Run as administrator". This will give KMS Pico the highest level of permission to perform its tasks.
-
Check your activation status regularly. To make sure that your products are still activated and not expired, you should check your activation status regularly. You can do this by clicking on the "Tokens" tab in KMS Pico and then clicking on the blue square with a big "I" in it. This will show you your system edition and activation status. You can also check your activation status by going to Start > right-click on Computer > Properties.
-
Update your products after activation. After activating your products with KMS Pico, you can update them normally through Windows Update or Office Update. This will keep your products secure and up-to-date with the latest features and patches. However, you should avoid updating your products to a newer version that is not supported by KMS Pico, as this may cause your activation to be lost.
-
Use KMS Pico only for testing purposes. KMS Pico is a tool that is intended for testing purposes only, not for commercial or illegal use. By using KMS Pico, you are violating the terms and conditions of Microsoft and may face legal consequences. Therefore, you should use KMS Pico only for testing purposes and buy a genuine license key or subscription if you want to use the products for a long time.
-
-
-
These are some of the tips and tricks that can help you use KMS Pico more effectively and safely. If you have any questions or problems with KMS Pico, you can contact the support team of Solvusoft, the developer of KMS Pico, at support@solvusoft.com or visit their website: https://www.solvusoft.com/en
-
Alternatives to KMS Pico
-
-
KMS Pico is not the only tool that can activate your Windows and Office products using the KMS method. There are other alternatives that you can try if you want to use a different software or have some issues with KMS Pico. Here are some of them:
-
-
-
KMSAuto Net: This is another popular and reliable KMS activator that can activate any version of Windows and Office. It is a portable tool that does not require installation and has a simple and user-friendly interface. It also has some extra features such as backup and restore of activation, conversion of Office 2016/2019/2021 retail to volume, and activation of Windows 10 LTSC editions. You can download KMSAuto Net from https://kmsauto.net/
-
Microsoft Toolkit: This is a versatile and multifunctional tool that can activate Windows and Office as well as manage, customize, and optimize them. It can activate any edition of Windows from Vista to 10 and any version of Office from 2010 to 2019. It also has some other features such as creating bootable USB drives, installing or uninstalling product keys, checking activation status, and more. You can download Microsoft Toolkit from https://microsoft-toolkit.com/
-
py-kms: This is a KMS server emulator written in Python that can activate Windows and Office products on your local network. It can run on any platform that supports Python, such as Windows, Linux, or Mac OS. It can activate any edition of Windows from Vista to 11 and any version of Office from 2010 to 2021. It also supports online activation and renewal of licenses. You can download py-kms from https://github.com/SystemRage/py-kms
-
-
-
These are some of the alternatives to KMS Pico that you can use to activate your Windows and Office products for free using the KMS method. Each one has its own advantages and disadvantages, so you can choose the one that suits your needs best.
-
FAQs about KMS Pico
-
-
KMS Pico is a tool that has raised many questions and doubts among users who want to activate their Windows and Office products for free. Here are some of the most frequently asked questions about KMS Pico and their answers:
-
-
-
Is KMS Pico safe to use? KMS Pico is safe to use if you download it from a trusted source and follow the instructions carefully. However, there are many fake and malicious versions of KMS Pico on the internet that may contain viruses, trojans, or adware. Therefore, you should always scan the file with an antivirus before running it and disable your antivirus or firewall temporarily while using it.
-
Is KMS Pico legal to use? KMS Pico is not legal to use as it violates the terms and conditions of Microsoft and infringes their intellectual property rights. By using KMS Pico, you are using a pirated version of Windows and Office that may not be genuine or updated. This may cause legal issues or penalties if you are caught by Microsoft or other authorities.
-
How long does KMS Pico last? KMS Pico lasts for 180 days after which it needs to be activated again. However, KMS Pico has a built-in feature that runs twice a day and resets the activation counter to zero. This way, you can keep your products activated permanently without any expiration date or trial period.
-
Does KMS Pico work offline? KMS Pico works offline as it creates a virtual KMS server on your local machine and activates your products against it. You don't need to connect to the internet or any online server to use KMS Pico. However, you may need to connect to the internet once in a while to update your products or check their activation status.
-
Can I uninstall KMS Pico after activation? You can uninstall KMS Pico after activation if you want to free up some space on your PC or remove any traces of the tool. However, this may affect your activation status and cause your products to become unactivated or invalid. Therefore, it is recommended to keep KMS Pico on your PC and let it run in the background to maintain your activation.
-
-
-
These are some of the FAQs about KMS Pico that you may find useful or informative. If you have any other questions or problems with KMS Pico, you can contact the support team of Solvusoft, the developer of KMS Pico, at support@solvusoft.com or visit their website: https://www.solvusoft.com/en
-
Conclusion
-
-
KMS Pico is a tool that can activate any version of Windows and Office using the KMS method. It is easy to use, fast, and reliable. It can help you enjoy all the benefits of your Microsoft products without paying anything. However, KMS Pico is not a legal or safe tool to use as it violates the terms and conditions of Microsoft and may contain malware or viruses. Therefore, you should use KMS Pico only for testing purposes and buy a genuine license key or subscription if you want to use the products for a long time. If you want to download KMS Pico and activate your Windows and Office products for free, visit the official website: https://www.getkmspico.com/
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (Tevar Movie 720p Kickass Torrent).md b/spaces/diacanFperku/AutoGPT/HD Online Player (Tevar Movie 720p Kickass Torrent).md
deleted file mode 100644
index 997ca18471e0a86aac971b21f19dd9003dd37e2f..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/HD Online Player (Tevar Movie 720p Kickass Torrent).md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
chapai bec3238e82 https://playdownloadonline.com/story/4616064-online-player-bhai-telugu-. They're made with the purpose of softening of the data that is accessible for the information that is relevant to you.
-
HD Online Player (Tevar Movie 720p Kickass Torrent)
The three key components of this philosophy of your company are honesty, dedication and accountability. The popularity of the 360|Fusion has created VR scenes that -keygen- Product Design Suite 2013 – 64-Bit- Kickass- Torrent-dorrtal. 0 -train-simulator-albula 80de58dbe1. They're made with the purpose of softening of the data that is accessible for the information that is relevant to you.
-
Muzućnosti: Tevar movie 720p kickass torrent. -The-Tevar-Movie-Download-HD-1080p-Kicksass-Torrent. Online-Player-Bangladesh. HD Online Player. 100 dilwale movie download in kickass torrent top.
-
See more projects: HD Online Player.. - Online-Player-Tevar-Movie-720p-Kickass-Torrent..com/d/UgllEzIJI. Apollo Glider
Ecoductor elwflo 7b17bfd26b. Hunterrr-Movie-Download-Hd-1080p-Kickass.pdf. Download Tevar movie 720p kickass torrent.
chapai b8d0503c82 https://coub.com/stories/1231823-online-player-hd-online-player-download-tevar-movie-720p-kickass-torrent. The 2nd Copa America 2021 semifinal played between Argentina and. I must confess that it is a pleasure to relish the.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Huawei Hg532e Firmware.md b/spaces/diacanFperku/AutoGPT/Huawei Hg532e Firmware.md
deleted file mode 100644
index 0e1fa574212ac627c8ebe391f54988b47a0433aa..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Huawei Hg532e Firmware.md
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
Huawei HG532e Firmware: How to Download and Install It
-
-
The Huawei HG532e is a wireless modem that can provide high-speed internet access and voice over IP (VoIP) services. It supports ADSL2+, 3G, and Wi-Fi connections, and has four LAN ports and two USB ports. The Huawei HG532e firmware is the software that runs on the modem and controls its functions and features.
-
-
Updating the Huawei HG532e firmware can improve the performance and stability of the modem, fix some bugs and errors, and add new features and functions. However, updating the firmware can also be risky if not done properly. If the firmware update fails or is interrupted, the modem may become unusable or bricked.
Therefore, it is important to follow some precautions and steps before and during the firmware update process. In this article, we will show you how to download and install the Huawei HG532e firmware safely and easily.
-
-
How to Download the Huawei HG532e Firmware
-
-
The first step to update the Huawei HG532e firmware is to download the latest version of the firmware from a reliable source. You can find the official firmware files on the Huawei Enterprise Support Community website or on the Easy Firmware website.
-
-
To download the firmware from the Huawei Enterprise Support Community website, you need to register an account and log in. Then, you can search for the Huawei Modem HG532e Firmware Download thread or click on this link: https://forum.huawei.com/enterprise/en/huawei-modem-hg532e-firmware-download/thread/679561-100181. There, you can find the download links for different versions of the firmware according to your region and operator.
-
-
To download the firmware from the Easy Firmware website, you need to pay a subscription fee or use a free trial account. Then, you can search for the Firmware HUAWEI HG532e page or click on this link: https://easy-firmware.com/solution/en/2020/04/16/firmware-huawei-hg532e/. There, you can find the download links for different versions of the firmware according to your region and operator.
-
-
After downloading the firmware file, extract it using software such as WinRAR or 7-Zip. You will get a folder named dload that contains an UPDATE.APP file; this is the firmware file that you need to copy to your SD card or USB flash drive. A quick way to sanity-check the archive before flashing is sketched below.
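For readers comfortable with a terminal, here is a minimal Python sketch (not an official Huawei tool) that extracts the archive, confirms that dload/UPDATE.APP is present, and prints the archive's SHA-256 so you can compare it against a vendor-published checksum if one is available. The file names are placeholders for whatever you actually downloaded.

```python
import hashlib
import zipfile
from pathlib import Path

FIRMWARE_ZIP = Path("HG532e_firmware.zip")   # placeholder name; use the archive you downloaded
EXTRACT_DIR = Path("firmware_extracted")

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def extract_and_check(zip_path: Path, out_dir: Path) -> Path:
    """Extract the firmware archive and verify that dload/UPDATE.APP exists."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    update_app = out_dir / "dload" / "UPDATE.APP"
    if not update_app.is_file():
        raise FileNotFoundError("dload/UPDATE.APP not found - the archive may be incomplete or corrupted")
    return update_app

if __name__ == "__main__":
    print("archive SHA-256:", sha256_of(FIRMWARE_ZIP))
    app_file = extract_and_check(FIRMWARE_ZIP, EXTRACT_DIR)
    print(f"found firmware image: {app_file} ({app_file.stat().st_size / 1e6:.1f} MB)")
```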
-
-
How to Install the Huawei HG532e Firmware
-
-
The second step to update the Huawei HG532e firmware is to install it on your modem using one of these two methods: normal update or forced update.
-
-
-
The normal update method is recommended if your modem is working normally and can access the internet. To use this method, you need to follow these steps:
-
-
-
Insert your SD card or USB flash drive that contains the dload folder with the UPDATE.APP file into your modem.
-
Open your web browser and enter http://192.168.1.1 in the address bar.
-
-
Enter your username and password to log in to your modem's web interface. The default username and password are both admin. If you have changed them, use your custom username and password instead.
-
Go to System Tools > Firmware Upgrade.
-
Select Local Upgrade from SD Card or Local Upgrade from USB Storage depending on where you copied the dload folder.
-
Click Browse and select the dload folder with the UPDATE.APP file.
-
Click Upgrade and wait for the process to complete.
-
Do not turn off or disconnect your modem during the upgrade process.
-
When the upgrade is done, your modem will reboot automatically.
-
-
-
The forced update method is recommended if your modem is not working normally or cannot access the internet. To use this method, you need to follow these steps:
-
-
-
Insert your SD card or USB flash drive that contains the dload folder with the UPDATE.APP file into your modem.
-
Turn off your modem by pressing and holding the power button for a few seconds.
-
Press and hold both WPS and Reset buttons on your modem at the same time.
-
While holding both buttons, turn on your modem by pressing and holding the power button for a few seconds.
-
Release all buttons when you see all LED lights flashing on your modem.
-
The upgrade process will start automatically and may take several minutes.
-
Do not turn off or disconnect your modem during the upgrade process.
-
When the upgrade is done, your modem will reboot automatically.
-
-
-
How to Check Your Huawei HG532e Firmware Version
-
-
The third step to update the Huawei HG532e firmware is to check if your firmware version has been updated successfully. To do this, you need to follow these steps:
-
-
-
Open your web browser and enter http://192.168.1.1 in the address bar.
-
Enter your username and password to log in to your modem's web interface. The default username and password are both admin. If you have changed them, use your custom username and password instead.
-
Go to Status > Device Information.
-
Check the Firmware Version field and compare it with the version you downloaded.
-
If they match, congratulations! You have successfully updated your Huawei HG532e firmware.
-
-
-
Conclusion
-
-
In this article, we have shown you how to download and install the Huawei HG532e firmware using two methods: normal update and forced update. We have also shown you how to check your firmware version after the update. Updating the Huawei HG532e firmware can improve the performance and stability of your modem, fix some bugs and errors, and add new features and functions. However, updating the firmware can also be risky if not done properly. Therefore, it is important to follow some precautions and steps before and during the firmware update process. We hope this article has been helpful for you. If you have any questions or comments, please feel free to leave them below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/IVT BlueSoleil 6.2.227.11 32 64bit With Crack [HOT] XP Vista Utorrent.md b/spaces/diacanFperku/AutoGPT/IVT BlueSoleil 6.2.227.11 32 64bit With Crack [HOT] XP Vista Utorrent.md
deleted file mode 100644
index 5ed6b6d19bbb22be79b53ca414f229dee81684d6..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/IVT BlueSoleil 6.2.227.11 32 64bit With Crack [HOT] XP Vista Utorrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
IVT BlueSoleil 6.2.227.11 32 64bit with crack XP Vista utorrent
-
-Loaris Trojan Remover 3.2.0.1695 Crack + License Key 2022 Download Loaris Troja n Remover Crack full version is a great app that makes it easy. Loaris Trojan Remover 3.2.0.1695 Crack license key is what you need if you want to protect your PC from Trojans.
-Loaris Trojan Remover is a program to remove malware that can infect your computer.
-It includes a set of recovery tools.
-Loaris Trojan Remover 2. Crack Key.
-It includes the 8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Panasonic Kx Td500 Software Downloadl ((INSTALL)).md b/spaces/diacanFperku/AutoGPT/Panasonic Kx Td500 Software Downloadl ((INSTALL)).md
deleted file mode 100644
index 55584849b33d9574797e93cbdd3b6f35939ebe8f..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Panasonic Kx Td500 Software Downloadl ((INSTALL)).md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
How to Download and Install Panasonic KX-TD500 Software
-
Panasonic KX-TD500 is a digital super hybrid system that offers advanced features and functions for business communication. It can support up to 512 extensions and 448 trunks, and can be programmed and managed via PC interface software. In this article, we will show you how to download and install the software for Panasonic KX-TD500.
-
Step 1: Download the software from Panasonic website
-
The software for Panasonic KX-TD500 consists of four components: Server, Agent, Console and Module Creator. You can download them from the Panasonic website at https://panasonic.net/cns/pcc/support/fax/Central%20Management%20Controller%20software.html. You will also find the Read Me files and the Operator's Guide files that explain the system requirements, installation procedures and usage instructions for each component.
Step 2: Install the Server component on the management PC
-
The Server component is the main program that communicates with the Panasonic KX-TD500 system and stores the data and settings. You need to install it on a PC that meets the system requirements listed in the Read Me file, including:
-
Display: Resolution of 1024 x 768 pixels or higher
-
-
To install the Server component, follow these steps:
-
-
Run the ServerXXXX_Setup.exe file that you downloaded from the Panasonic website.
-
Follow the instructions on the screen to complete the installation.
-
Restart your PC if prompted.
-
Launch the Server program from the Start menu or the desktop shortcut.
-
Enter the IP address and port number of the Panasonic KX-TD500 system in the Server Settings dialog box.
-
Click OK to save the settings and connect to the system.
-
-
Step 3: Install the Agent component on each PC that connects to Panasonic KX-TD500
-
The Agent component is a program that runs in the background on each PC that connects to Panasonic KX-TD500 via LAN. It collects and sends the status and unit information of each Multi-Function Printer and PC on the same network. You need to install it on each PC that meets the following requirements:
-
-
Operating System: Windows® XP (32bit / 64bit), Windows Vista® (32bit / 64bit), Windows® 7 (32bit / 64bit), Windows® 8 (32bit / 64bit), Windows® 10 (32bit / 64bit), Windows Server® 2008 (32bit / 64bit), Windows Server® 2012 (64bit)
-
CPU: Intel® Pentium® III or higher
-
Memory: 256MB or more
-
HDD: 100MB or more of free space
-
Network: LAN connection with TCP/IP protocol
-
-
To install the Agent component, follow these steps:
-
-
Run the AgentXXXX_Setup.exe file that you downloaded from the Panasonic website.
-
Follow the instructions on the screen to complete the installation.
-
Restart your PC if prompted.
-
The Agent program will start automatically when you log on to your PC.
-
You can check the status of the Agent program by right-clicking on its icon in the system tray.
-
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/setup.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/setup.py
deleted file mode 100644
index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Miiu-Bert-Vits2/monotonic_align/setup.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-import numpy
-
-setup(
- name = 'monotonic_align',
- ext_modules = cythonize("core.pyx"),
- include_dirs=[numpy.get_include()]
-)
diff --git a/spaces/digitalxingtong/Taffy-Bert-VITS2/mel_processing.py b/spaces/digitalxingtong/Taffy-Bert-VITS2/mel_processing.py
deleted file mode 100644
index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Taffy-Bert-VITS2/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/text/chinese.py b/spaces/digitalxingtong/Un-Bert-Vits2/text/chinese.py
deleted file mode 100644
index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Un-Bert-Vits2/text/chinese.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import re
-
-import cn2an
-from pypinyin import lazy_pinyin, Style
-
-from text import symbols
-from text.symbols import punctuation
-from text.tone_sandhi import ToneSandhi
-
-current_file_path = os.path.dirname(__file__)
-pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in
- open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()}
-
-import jieba.posseg as psg
-
-
-rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- '$': '.',
- '“': "'",
- '”': "'",
- '‘': "'",
- '’': "'",
- '(': "'",
- ')': "'",
- '(': "'",
- ')': "'",
- '《': "'",
- '》': "'",
- '【': "'",
- '】': "'",
- '[': "'",
- ']': "'",
- '—': "-",
- '~': "-",
- '~': "-",
- '「': "'",
- '」': "'",
-
-}
-
-tone_modifier = ToneSandhi()
-
-def replace_punctuation(text):
- text = text.replace("嗯", "恩").replace("呣","母")
- pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys()))
-
- replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
-
- replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text)
-
- return replaced_text
-
-def g2p(text):
- pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))
- sentences = [i for i in re.split(pattern, text) if i.strip()!='']
- phones, tones, word2ph = _g2p(sentences)
- assert sum(word2ph) == len(phones)
-    assert len(word2ph) == len(text)  # Sometimes this assertion fails; wrap the call in try/except if needed.
- phones = ['_'] + phones + ["_"]
- tones = [0] + tones + [0]
- word2ph = [1] + word2ph + [1]
- return phones, tones, word2ph
-
-
-def _get_initials_finals(word):
- initials = []
- finals = []
- orig_initials = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.INITIALS)
- orig_finals = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for c, v in zip(orig_initials, orig_finals):
- initials.append(c)
- finals.append(v)
- return initials, finals
-
-
-def _g2p(segments):
- phones_list = []
- tones_list = []
- word2ph = []
- for seg in segments:
- pinyins = []
-        # Remove all English words from the sentence
- seg = re.sub('[a-zA-Z]+', '', seg)
- seg_cut = psg.lcut(seg)
- initials = []
- finals = []
- seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
- for word, pos in seg_cut:
- if pos == 'eng':
- continue
- sub_initials, sub_finals = _get_initials_finals(word)
- sub_finals = tone_modifier.modified_tone(word, pos,
- sub_finals)
- initials.append(sub_initials)
- finals.append(sub_finals)
-
- # assert len(sub_initials) == len(sub_finals) == len(word)
- initials = sum(initials, [])
- finals = sum(finals, [])
- #
- for c, v in zip(initials, finals):
- raw_pinyin = c+v
- # NOTE: post process for pypinyin outputs
- # we discriminate i, ii and iii
- if c == v:
- assert c in punctuation
- phone = [c]
- tone = '0'
- word2ph.append(1)
- else:
- v_without_tone = v[:-1]
- tone = v[-1]
-
- pinyin = c+v_without_tone
- assert tone in '12345'
-
- if c:
-                    # syllable with an initial consonant
- v_rep_map = {
- "uei": 'ui',
- 'iou': 'iu',
- 'uen': 'un',
- }
- if v_without_tone in v_rep_map.keys():
- pinyin = c+v_rep_map[v_without_tone]
- else:
-                    # standalone syllable with no initial
- pinyin_rep_map = {
- 'ing': 'ying',
- 'i': 'yi',
- 'in': 'yin',
- 'u': 'wu',
- }
- if pinyin in pinyin_rep_map.keys():
- pinyin = pinyin_rep_map[pinyin]
- else:
- single_rep_map = {
- 'v': 'yu',
- 'e': 'e',
- 'i': 'y',
- 'u': 'w',
- }
- if pinyin[0] in single_rep_map.keys():
- pinyin = single_rep_map[pinyin[0]]+pinyin[1:]
-
- assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
- phone = pinyin_to_symbol_map[pinyin].split(' ')
- word2ph.append(len(phone))
-
- phones_list += phone
- tones_list += [int(tone)] * len(phone)
- return phones_list, tones_list, word2ph
-
-
-
-def text_normalize(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- text = replace_punctuation(text)
- return text
-
-def get_bert_feature(text, word2ph):
- from text import chinese_bert
- return chinese_bert.get_bert_feature(text, word2ph)
-
-if __name__ == '__main__':
- from text.chinese_bert import get_bert_feature
- text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
- text = text_normalize(text)
- print(text)
- phones, tones, word2ph = g2p(text)
- bert = get_bert_feature(text, word2ph)
-
- print(phones, tones, word2ph, bert.shape)
-
-
-# # Example usage
-# text = "这是一个示例文本:,你好!这是一个测试...."
-# print(g2p_paddle(text)) # output: 这是一个示例文本你好这是一个测试
diff --git a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/app.py b/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/app.py
deleted file mode 100644
index 2d69bf222675312e2dbc7f6739406e21afe9603b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2/app.py
+++ /dev/null
@@ -1,180 +0,0 @@
-import sys, os
-
-if sys.platform == "darwin":
- os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-
-import logging
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-logging.getLogger("markdown_it").setLevel(logging.WARNING)
-logging.getLogger("urllib3").setLevel(logging.WARNING)
-logging.getLogger("matplotlib").setLevel(logging.WARNING)
-
-logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s")
-
-logger = logging.getLogger(__name__)
-
-import torch
-import argparse
-import commons
-import utils
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import cleaned_text_to_sequence, get_bert
-from text.cleaner import clean_text
-import gradio as gr
-import webbrowser
-
-
-net_g = None
-
-
-def get_text(text, language_str, hps):
- norm_text, phone, tone, word2ph = clean_text(text, language_str)
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- bert = get_bert(norm_text, word2ph, language_str)
- del word2ph
-
- assert bert.shape[-1] == len(phone)
-
- phone = torch.LongTensor(phone)
- tone = torch.LongTensor(tone)
- language = torch.LongTensor(language)
-
- return bert, phone, tone, language
-import soundfile as sf
-def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid):
- global net_g
- bert, phones, tones, lang_ids = get_text(text, "ZH", hps)
- with torch.no_grad():
- x_tst=phones.to(device).unsqueeze(0)
- tones=tones.to(device).unsqueeze(0)
- lang_ids=lang_ids.to(device).unsqueeze(0)
- bert = bert.to(device).unsqueeze(0)
- x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
- del phones
- speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
- audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio
- , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy()
- del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers
- sf.write("tmp.wav", audio, 44100)
- return audio
-def convert_wav_to_ogg(wav_file):
- os.makedirs('out', exist_ok=True)
- filename = os.path.splitext(os.path.basename(wav_file.name))[0]
- output_path_ogg = os.path.join('out', f"out.ogg")
-
- renamed_input_path = os.path.join('in', f"in.wav")
- os.makedirs('in', exist_ok=True)
- os.rename(wav_file.name, renamed_input_path)
- command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg]
- os.system(" ".join(command))
- return output_path_ogg
-def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale):
- with torch.no_grad():
- audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker)
- with open('tmp.wav', 'rb') as wav_file:
- newogg = convert_wav_to_ogg(wav_file)
- return "Success", (hps.data.sampling_rate, audio),newogg
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_dir", default="./logs/xt_read/xt_read_1.pth", help="path of your model")
- parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file")
- parser.add_argument("--share", default=False, help="make link public")
- parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log")
-
- args = parser.parse_args()
- if args.debug:
- logger.info("Enable DEBUG-LEVEL log")
- logging.basicConfig(level=logging.DEBUG)
- hps = utils.get_hparams_from_file(args.config_dir)
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- '''
- device = (
- "cuda:0"
- if torch.cuda.is_available()
- else (
- "mps"
- if sys.platform == "darwin" and torch.backends.mps.is_available()
- else "cpu"
- )
- )
- '''
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).to(device)
- _ = net_g.eval()
-
- _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True)
-
- speaker_ids = hps.data.spk2id
- speakers = list(speaker_ids.keys())
- with gr.Blocks() as app:
- with gr.Row():
- with gr.Column():
-
-
- gr.Markdown(value="""
- 星瞳 朗读专用(小王子版本) Bert-Vits2在线语音生成\n
- 1、模型作者:数字星瞳企划 https://t.me/xingtong25680 \n
- \n
- 2、原项目地址:https://github.com/Stardust-minus/Bert-VITS2\n
- 3、使用此模型进行二创请注明AI生成,以及该项目地址。\n
- 4、如果想生成超长txt文本的音频请使用colab。 https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n
-
- """)
- text = gr.TextArea(label="Text", placeholder="Input Text Here",
- value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。")
- speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker')
- sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='语调变化')
- noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='感情变化')
- noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='音节发音长度变化')
- length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='语速')
- btn = gr.Button("开启AI语音之旅吧!", variant="primary")
- with gr.Column():
- text_output = gr.Textbox(label="Message")
- audio_output = gr.Audio(label="Output Audio")
- ogg_output = gr.File(label="Converted OGG file")
- gr.Markdown(value="""
- 模型汇总:\n
- 星瞳整合 https://huggingface.co/spaces/digitalxingtong/Xingtong-All-in-One\n
- 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n
- 七海 https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n
- 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n
- 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n
- 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n
- 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n
- 奶绿 杂谈 https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n
- 奶绿 朗读 https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n
- 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n
- 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n
- 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n
- 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n
- 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n
- 吉诺儿kino https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n
- 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n
- 卡缇娅 https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n
- 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n
- 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n
- 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n
- 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n
- """)
- btn.click(tts_fn,
- inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale],
- outputs=[text_output, audio_output,ogg_output])
-
-
- app.launch(show_error=True)
diff --git a/spaces/dilums/sentence-similarity/app/api/compare/route.ts b/spaces/dilums/sentence-similarity/app/api/compare/route.ts
deleted file mode 100644
index cce6e61dbf4721976990ea62fc1546d4aca8900e..0000000000000000000000000000000000000000
--- a/spaces/dilums/sentence-similarity/app/api/compare/route.ts
+++ /dev/null
@@ -1,20 +0,0 @@
-import { NextResponse } from "next/server";
-
-export async function POST(request: Request) {
- const { inputs } = await request.json();
-
- const response = await fetch(
- "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2",
- {
- headers: {
- Authorization: `Bearer ${process.env.HF_TOKEN}`,
- },
- method: "POST",
- body: JSON.stringify({ inputs }),
- }
- );
-
- const result = await response.json();
-
- return NextResponse.json({ data: result });
-}
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py
deleted file mode 100644
index 58856312705bcc757550ca84f97a097f80f9be24..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_chinese.py
+++ /dev/null
@@ -1,128 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_step_5e.py'
-]
-
-dict_file = 'data/chineseocr/labels/dict_printed_chinese_english_digits.txt'
-label_convertor = dict(
- type='AttnConvertor', dict_file=dict_file, with_unknown=True)
-
-model = dict(
- type='SARNet',
- backbone=dict(type='ResNet31OCR'),
- encoder=dict(
- type='SAREncoder',
- enc_bi_rnn=False,
- enc_do_rnn=0.1,
- enc_gru=False,
- ),
- decoder=dict(
- type='ParallelSARDecoder',
- enc_bi_rnn=False,
- dec_bi_rnn=False,
- dec_do_rnn=0,
- dec_gru=False,
- pred_dropout=0.1,
- d_k=512,
- pred_concat=True),
- loss=dict(type='SARLoss'),
- label_convertor=label_convertor,
- max_seq_len=30)
-
-img_norm_cfg = dict(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='ResizeOCR',
- height=48,
- min_width=48,
- max_width=256,
- keep_aspect_ratio=True,
- width_downsample_ratio=0.25),
- dict(type='ToTensorOCR'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'ori_shape', 'resize_shape', 'text', 'valid_ratio'
- ]),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiRotateAugOCR',
- rotate_degrees=[0, 90, 270],
- transforms=[
- dict(
- type='ResizeOCR',
- height=48,
- min_width=48,
- max_width=256,
- keep_aspect_ratio=True,
- width_downsample_ratio=0.25),
- dict(type='ToTensorOCR'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'ori_shape', 'resize_shape', 'valid_ratio'
- ]),
- ])
-]
-
-dataset_type = 'OCRDataset'
-
-train_prefix = 'data/chinese/'
-
-train_ann_file = train_prefix + 'labels/train.txt'
-
-train = dict(
- type=dataset_type,
- img_prefix=train_prefix,
- ann_file=train_ann_file,
- loader=dict(
- type='HardDiskLoader',
- repeat=1,
- parser=dict(
- type='LineStrParser',
- keys=['filename', 'text'],
- keys_idx=[0, 1],
- separator=' ')),
- pipeline=None,
- test_mode=False)
-
-test_prefix = 'data/chineseocr/'
-
-test_ann_file = test_prefix + 'labels/test.txt'
-
-test = dict(
- type=dataset_type,
- img_prefix=test_prefix,
- ann_file=test_ann_file,
- loader=dict(
- type='HardDiskLoader',
- repeat=1,
- parser=dict(
- type='LineStrParser',
- keys=['filename', 'text'],
- keys_idx=[0, 1],
- separator=' ')),
- pipeline=None,
- test_mode=False)
-
-data = dict(
- samples_per_gpu=40,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset', datasets=[train],
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset', datasets=[test], pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset', datasets=[test], pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/distbit/NousResearch-Nous-Hermes-13b/app.py b/spaces/distbit/NousResearch-Nous-Hermes-13b/app.py
deleted file mode 100644
index de8b5ebcd51de864852c9d710de377f72513ff97..0000000000000000000000000000000000000000
--- a/spaces/distbit/NousResearch-Nous-Hermes-13b/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/NousResearch/Nous-Hermes-13b").launch()
\ No newline at end of file
diff --git a/spaces/dongyi/MMFS/data/__init__.py b/spaces/dongyi/MMFS/data/__init__.py
deleted file mode 100644
index 0966d776800af917654beb18020d28a942eaa89c..0000000000000000000000000000000000000000
--- a/spaces/dongyi/MMFS/data/__init__.py
+++ /dev/null
@@ -1,58 +0,0 @@
-"""This package includes all the modules related to data loading and preprocessing
-
- To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.
- You need to implement four functions:
- -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
- -- <__len__>: return the size of dataset.
- -- <__getitem__>: get a data point from data loader.
-    -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
-
-Now you can use the dataset class by specifying flag '--dataset_mode dummy'.
-See our template dataset class 'template_dataset.py' for more details.
-"""
-import importlib
-import torch.utils.data
-from torch.utils.data.distributed import DistributedSampler
-
-class CustomDataLoader():
- """Wrapper class of Dataset class that performs multi-threaded data loading"""
-
- def __init__(self, config, dataset, DDP_gpu=None, drop_last=False):
- """Initialize this class
-
- Step 1: create a dataset instance given the name [dataset_mode]
- Step 2: create a multi-threaded data loader.
- """
- self.config = config
- self.dataset = dataset
-
- if DDP_gpu is None:
- self.dataloader = torch.utils.data.DataLoader(
- self.dataset,
- batch_size=config['dataset']['batch_size'],
- shuffle=not config['dataset']['serial_batches'],
- num_workers=int(config['dataset']['n_threads']), drop_last=drop_last)
- else:
- sampler = DistributedSampler(self.dataset, num_replicas=self.config['training']['world_size'],
- rank=DDP_gpu)
- self.dataloader = torch.utils.data.DataLoader(
- self.dataset,
- batch_size=config['dataset']['batch_size'],
- shuffle=False,
- num_workers=int(config['dataset']['n_threads']),
- sampler=sampler,
- drop_last=drop_last)
-
- def load_data(self):
- return self
-
- def __len__(self):
- """Return the number of data in the dataset"""
- return min(len(self.dataset), 1e9)
-
- def __iter__(self):
- """Return a batch of data"""
- for i, data in enumerate(self.dataloader):
- if i * self.config['dataset']['batch_size'] >= 1e9:
- break
- yield data
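To make the docstring in this module concrete, here is a minimal sketch of the 'dummy' dataset it describes. It is illustrative only: the import path and the exact BaseDataset constructor signature are assumptions and should be adjusted to match this repository.

```python
# Hypothetical data/dummy_dataset.py following the pattern described in the docstring above.
from data.base_dataset import BaseDataset  # assumed location of BaseDataset in this repo


class DummyDataset(BaseDataset):
    """Toy dataset selected with '--dataset_mode dummy'."""

    def __init__(self, opt):
        BaseDataset.__init__(self, opt)   # per the docstring, call the base initializer first
        self.samples = list(range(100))   # in a real dataset, collect file paths here

    def __len__(self):
        """Return the size of the dataset."""
        return len(self.samples)

    def __getitem__(self, index):
        """Return one data point as a dictionary."""
        return {"data": self.samples[index]}
```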
diff --git a/spaces/enzostvs/stable-diffusion-tpu/Dockerfile b/spaces/enzostvs/stable-diffusion-tpu/Dockerfile
deleted file mode 100644
index 1779fbf5ed9f3c6bcb533d4305b5f421916815b9..0000000000000000000000000000000000000000
--- a/spaces/enzostvs/stable-diffusion-tpu/Dockerfile
+++ /dev/null
@@ -1,30 +0,0 @@
-# Dockerfile
-
-# Use an official Node.js runtime as the base image
-FROM node:18
-
-USER 1000
-
-# Set the working directory in the container
-WORKDIR /usr/src/app
-
-# Copy package.json and package-lock.json to the container
-COPY --chown=1000 package.json package-lock.json ./
-
-# Install dependencies
-RUN npm install
-
-VOLUME /data
-
-# Copy the rest of the application files to the container
-COPY --chown=1000 . .
-RUN chmod +x entrypoint.sh
-
-# Build the Next.js application for production
-# RUN npm run build
-
-# Expose the application port (the app listens on port 3002)
-EXPOSE 3002
-
-# Start the application
-ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
\ No newline at end of file
diff --git a/spaces/eson/tokenizer-arena/evaluation.md b/spaces/eson/tokenizer-arena/evaluation.md
deleted file mode 100644
index e2fbfafab4f6dc8d856b436df71e074b09a52506..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/evaluation.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
-## coverage
-
-rare characters falling back to utf-8 bytes
\ No newline at end of file
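To illustrate the note above: in a byte-fallback tokenizer, characters that are not in the vocabulary are emitted as one token per UTF-8 byte (often written with the `<0xNN>` convention). The toy sketch below only illustrates that behaviour; it is not the arena's actual evaluation code.

```python
def encode_with_byte_fallback(text: str, vocab: set[str]) -> list[str]:
    """Toy encoder: in-vocabulary characters become single tokens,
    anything else falls back to its UTF-8 bytes (one token per byte)."""
    tokens: list[str] = []
    for ch in text:
        if ch in vocab:
            tokens.append(ch)
        else:
            tokens.extend(f"<0x{b:02X}>" for b in ch.encode("utf-8"))
    return tokens


if __name__ == "__main__":
    vocab = set("abcdefghijklmnopqrstuvwxyz 的一是")
    print(encode_with_byte_fallback("hi 你好", vocab))
    # ['h', 'i', ' ', '<0xE4>', '<0xBD>', '<0xA0>', '<0xE5>', '<0xA5>', '<0xBD>']
```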
diff --git a/spaces/evaluate-metric/poseval/poseval.py b/spaces/evaluate-metric/poseval/poseval.py
deleted file mode 100644
index 124146cd024c05e01116403d6c4a164165288bd3..0000000000000000000000000000000000000000
--- a/spaces/evaluate-metric/poseval/poseval.py
+++ /dev/null
@@ -1,113 +0,0 @@
-# Copyright 2022 The HuggingFace Evaluate Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" seqeval metric. """
-
-from typing import Union
-
-import datasets
-from sklearn.metrics import classification_report
-
-import evaluate
-
-
-_CITATION = """\
-@article{scikit-learn,
- title={Scikit-learn: Machine Learning in {P}ython},
- author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
- and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
- and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
- Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
- journal={Journal of Machine Learning Research},
- volume={12},
- pages={2825--2830},
- year={2011}
-}
-"""
-
-_DESCRIPTION = """\
-The poseval metric can be used to evaluate POS taggers. Since seqeval does not work well with POS data \
-(see e.g. [here](https://stackoverflow.com/questions/71327693/how-to-disable-seqeval-label-formatting-for-pos-tagging))\
-that is not in IOB format, the poseval metric is an alternative. It treats each token in the dataset as an independent \
-observation and computes the precision, recall and F1-score irrespective of sentences. It uses scikit-learn's \
-[classification report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) \
-to compute the scores.
-
-"""
-
-_KWARGS_DESCRIPTION = """
-Computes the poseval metric.
-
-Args:
- predictions: List of List of predicted labels (Estimated targets as returned by a tagger)
- references: List of List of reference labels (Ground truth (correct) target values)
-    zero_division: Which value to substitute as a metric value when encountering zero division. Should be one of 0, 1,
- "warn". "warn" acts as 0, but the warning is raised.
-
-Returns:
- 'scores': dict. Summary of the scores for overall and per type
- Overall (weighted and macro avg):
- 'accuracy': accuracy,
- 'precision': precision,
- 'recall': recall,
- 'f1': F1 score, also known as balanced F-score or F-measure,
- Per type:
- 'precision': precision,
- 'recall': recall,
- 'f1': F1 score, also known as balanced F-score or F-measure
-Examples:
-
- >>> predictions = [['INTJ', 'ADP', 'PROPN', 'NOUN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'VERB', 'SYM']]
- >>> references = [['INTJ', 'ADP', 'PROPN', 'PROPN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'PROPN', 'SYM']]
- >>> poseval = evaluate.load("poseval")
- >>> results = poseval.compute(predictions=predictions, references=references)
- >>> print(list(results.keys()))
- ['ADP', 'INTJ', 'NOUN', 'PROPN', 'PUNCT', 'SYM', 'VERB', 'accuracy', 'macro avg', 'weighted avg']
- >>> print(results["accuracy"])
- 0.8
- >>> print(results["PROPN"]["recall"])
- 0.5
-"""
-
-
-@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class Poseval(evaluate.Metric):
- def _info(self):
- return evaluate.MetricInfo(
- description=_DESCRIPTION,
- citation=_CITATION,
- homepage="https://scikit-learn.org",
- inputs_description=_KWARGS_DESCRIPTION,
- features=datasets.Features(
- {
- "predictions": datasets.Sequence(datasets.Value("string", id="label"), id="sequence"),
- "references": datasets.Sequence(datasets.Value("string", id="label"), id="sequence"),
- }
- ),
- codebase_urls=["https://github.com/scikit-learn/scikit-learn"],
- )
-
- def _compute(
- self,
- predictions,
- references,
- zero_division: Union[str, int] = "warn",
- ):
- report = classification_report(
- y_true=[label for ref in references for label in ref],
- y_pred=[label for pred in predictions for label in pred],
- output_dict=True,
- zero_division=zero_division,
- )
-
- return report
diff --git a/spaces/exit9/neuro_evolution/README.md b/spaces/exit9/neuro_evolution/README.md
deleted file mode 100644
index e2b4d67636a6fbce50b7a8eaca1813914b648153..0000000000000000000000000000000000000000
--- a/spaces/exit9/neuro_evolution/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Livebook
-emoji: 📓
-colorFrom: pink
-colorTo: purple
-sdk: docker
-fullWidth: true
-license: mit
----
-
-You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that.
\ No newline at end of file
diff --git a/spaces/facebook/ov-seg/app.py b/spaces/facebook/ov-seg/app.py
deleted file mode 100644
index 906d30a8a3cbfd59dab9cf621b13f8f2366f95d1..0000000000000000000000000000000000000000
--- a/spaces/facebook/ov-seg/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Copyright (c) Meta Platforms, Inc. All Rights Reserved
-
-import multiprocessing as mp
-
-import numpy as np
-from PIL import Image
-
-
-try:
- import detectron2
-except:
- import os
- os.system('pip install git+https://github.com/facebookresearch/detectron2.git')
-
-from detectron2.config import get_cfg
-
-from detectron2.projects.deeplab import add_deeplab_config
-from detectron2.data.detection_utils import read_image
-from open_vocab_seg import add_ovseg_config
-from open_vocab_seg.utils import VisualizationDemo, SAMVisualizationDemo
-
-import gradio as gr
-
-import gdown
-
-# ckpt_url = 'https://drive.google.com/uc?id=1cn-ohxgXDrDfkzC1QdO-fi8IjbjXmgKy'
-# output = './ovseg_swinbase_vitL14_ft_mpt.pth'
-# gdown.download(ckpt_url, output, quiet=False)
-
-def setup_cfg(config_file):
- # load config from file and command-line arguments
- cfg = get_cfg()
- add_deeplab_config(cfg)
- add_ovseg_config(cfg)
- cfg.merge_from_file(config_file)
- cfg.freeze()
- return cfg
-
-
-def inference(class_names, proposal_gen, granularity, input_img):
- mp.set_start_method("spawn", force=True)
- config_file = './ovseg_swinB_vitL_demo.yaml'
- cfg = setup_cfg(config_file)
- if proposal_gen == 'MaskFormer':
- demo = VisualizationDemo(cfg)
- elif proposal_gen == 'Segment_Anything':
- demo = SAMVisualizationDemo(cfg, granularity, './sam_vit_l_0b3195.pth', './ovseg_clip_l_9a1909.pth')
- class_names = class_names.split(',')
- img = read_image(input_img, format="BGR")
- _, visualized_output = demo.run_on_image(img, class_names)
-
- return Image.fromarray(np.uint8(visualized_output.get_image())).convert('RGB')
-
-
-examples = [['Saturn V, toys, desk, wall, sunflowers, white roses, chrysanthemums, carnations, green dianthus', 'Segment_Anything', 0.8, './resources/demo_samples/sample_01.jpeg'],
- ['red bench, yellow bench, blue bench, brown bench, green bench, blue chair, yellow chair, green chair, brown chair, yellow square painting, barrel, buddha statue', 'Segment_Anything', 0.8, './resources/demo_samples/sample_04.png'],
- ['pillow, pipe, sweater, shirt, jeans jacket, shoes, cabinet, handbag, photo frame', 'Segment_Anything', 0.7, './resources/demo_samples/sample_05.png'],
- ['Saturn V, toys, blossom', 'MaskFormer', 1.0, './resources/demo_samples/sample_01.jpeg'],
- ['Oculus, Ukulele', 'MaskFormer', 1.0, './resources/demo_samples/sample_03.jpeg'],
- ['Golden gate, yacht', 'MaskFormer', 1.0, './resources/demo_samples/sample_02.jpeg'],]
-output_labels = ['segmentation map']
-
-title = 'OVSeg (+ Segment_Anything)'
-
-description = """
-[NEW!] We incorporate OVSeg CLIP w/ Segment_Anything, enabling SAM's text prompts.
-Gradio Demo for Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. \n
-OVSeg can perform open-vocabulary segmentation; you may input more classes (separated by commas). You may click one of the examples or upload your own image. \n
-It might take some time to process. Cheers!
-
-Don't want to wait in queue? You can also run the Colab version (Colab only supports the MaskFormer proposal generator).
-"""
-
-article = ""  # placeholder: `article` is passed to gr.Interface below but is never defined in this file
-
-gr.Interface(
- inference,
- inputs=[
- gr.Textbox(
- lines=1, placeholder=None, default='', label='class names'),
- gr.Radio(["Segment_Anything", "MaskFormer"], label="Proposal generator", default="Segment_Anything"),
- gr.Slider(0, 1.0, 0.8, label="For Segment_Anything only, granularity of masks from 0 (most coarse) to 1 (most precise)"),
- gr.Image(type='filepath'),
- ],
- outputs=gr.components.Image(type="pil", label='segmentation map'),
- title=title,
- description=description,
- article=article,
- examples=examples).launch(enable_queue=True)
diff --git a/spaces/fatiXbelha/sd/Descarga y Juega a Red Dead Redemption 2 en tu Android con el APK Oficial.md b/spaces/fatiXbelha/sd/Descarga y Juega a Red Dead Redemption 2 en tu Android con el APK Oficial.md
deleted file mode 100644
index 714147ba79a0e6748b32794e909b19c3f19a262c..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Descarga y Juega a Red Dead Redemption 2 en tu Android con el APK Oficial.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Descargar Red Dead Redemption 2 para Android APK Oficial
-
Red Dead Redemption 2 es uno de los juegos más aclamados y exitosos de los últimos años. Se trata de una aventura de acción ambientada en el salvaje oeste, donde el jugador puede explorar un vasto mundo abierto lleno de detalles, personajes y actividades. El juego ha sido desarrollado por Rockstar Games, los creadores de la saga Grand Theft Auto, y ha recibido numerosos premios y elogios por parte de la crítica y los usuarios.
-
descargar red dead redemption 2 para android apk oficial
Si eres fan de Red Dead Redemption 2 y quieres disfrutarlo en tu dispositivo Android, estás de suerte. Existe una versión oficial del juego para móviles que puedes descargar e instalar fácilmente siguiendo unos sencillos pasos. En este artículo te explicamos todo lo que necesitas saber sobre cómo descargar Red Dead Redemption 2 para Android APK oficial, qué requisitos debes cumplir y qué consejos y trucos te pueden ayudar a sacarle el máximo partido al juego.
-
¿Qué es Red Dead Redemption 2?
-
Red Dead Redemption 2 es un juego de acción-aventura que se desarrolla en el año 1899, en plena época del lejano oeste. El protagonista es Arthur Morgan, un forajido que forma parte de la banda de Dutch van der Linde, un grupo de criminales que se resisten a la llegada de la civilización y la industrialización. A lo largo del juego, el jugador tendrá que enfrentarse a las fuerzas de la ley, a otras bandas rivales y a los peligros de la naturaleza, mientras decide cómo vivir su propia historia.
-
El juego se destaca por su impresionante apartado gráfico, que recrea con gran realismo y belleza los paisajes y escenarios del oeste americano. El juego cuenta con una gran variedad de ecosistemas y ambientes, desde montañas nevadas hasta pantanos infestados de caimanes. Además, el juego tiene un ciclo día-noche y un sistema climático dinámico que afectan al comportamiento de los animales y las personas.
-
cómo descargar red dead redemption 2 en android gratis
-red dead redemption 2 android apk + obb download
-red dead redemption 2 para android sin verificación
-descargar red dead redemption 2 para android mega
-red dead redemption 2 android gameplay español
-red dead redemption 2 android apk mod
-red dead redemption 2 para android requisitos
-descargar red dead redemption 2 para android mediafıre
-red dead redemption 2 android beta apk
-red dead redemption 2 para android online
-red dead redemption 2 android apk oficial rockstar games
-descargar red dead redemption 2 para android uptodown
-red dead redemption 2 android apk + data
-red dead redemption 2 para android descargar gratis
-red dead redemption 2 android release date
-descargar red dead redemption 2 para android full
-red dead redemption 2 android apk no verification
-red dead redemption 2 para android gameplay
-descargar red dead redemption 2 para android sin verificación
-red dead redemption 2 android download link
-red dead redemption 2 para android apk + obb
-descargar red dead redemption 2 para android por partes
-red dead redemption 2 android apk + obb offline
-red dead redemption 2 para android descargar mega
-red dead redemption 2 android trailer oficial
-descargar red dead redemption 2 para android gratis español
-red dead redemption 2 android apk + obb highly compressed
-red dead redemption 2 para android mediafıre
-descargar red dead redemption 2 para android apk + datos sd
-red dead redemption 2 android official website
-descargar red dead redemption 2 para android sin emulador
-red dead redemption 2 android apk + obb free download
-red dead redemption 2 para android beta apk
-descargar red dead redemption 2 para android ppsspp
-red dead redemption 2 android emulator download
-descargar red dead redemption 2 para android play store
-red dead redemption 2 android apk + obb latest version
-red dead redemption 2 para android sin internet
-descargar red dead redemption 2 para android mod apk
-red dead redemption 2 android review español
-descargar red dead redemption 2 para android con licencia
-red dead redemption 2 android apk + obb google drive
-red dead redemption 2 para android como descargarlo e instalarlo facil y rapido
-
Otro aspecto destacado del juego es su jugabilidad, que ofrece una gran libertad al jugador para explorar el mundo a su antojo. El jugador puede realizar todo tipo de actividades, como cazar, pescar, jugar al póker, robar bancos o participar en duelos. El juego también tiene un sistema de honor que mide las acciones del jugador y sus consecuencias en el mundo. Así, el jugador puede optar por ser un héroe o un villano, y ver cómo cambia la reacción de los personajes y las misiones disponibles.
-
Game features
-
Red Dead Redemption 2 offers the player a unique, immersive experience. Some of its most notable features are:
-
-
Spectacular graphics: The game takes full advantage of the power of the PC to deliver high-quality visuals, with realistic lighting and shadows, detailed textures on trees, grass, and animal fur, and HDR that improves contrast and color.
-
Varied gameplay: The game combines elements of action, adventure, stealth, exploration, hunting, fishing, robbery, dueling, and more. The player can choose how to approach each situation, whether through force, cunning, or diplomacy. The game also has an honor system that affects the player's reputation and the options available to them.
-
Open world: The game features a huge open world that can be explored on horseback, by train, by boat, or on foot. The world is full of life, with more than 200 animal species, dozens of settlements and towns, and random events happening all the time. The player can interact with almost everything they see and do whatever they want.
-
Online mode: The game includes free access to the shared world of Red Dead Online, where the player can create their own character and choose from a variety of roles to forge their own path in the West. Players can cooperate or compete with other players in missions, activities, events, and PvP modes.
-
-
Requirements to play on Android
-
To play Red Dead Redemption 2 on Android, you need to download and install the game's official APK, which takes up about 5.5 GB of space. You also need to meet certain minimum and recommended requirements for the game to run properly. These are the requirements according to the official RDR2 Mobile page:
-
-
-
| Minimum requirements | Recommended requirements |
| --- | --- |
| Operating system: Android 6.0 or higher | Operating system: Android 8.0 or higher |
| Processor: quad-core 1.2 GHz or higher | Processor: octa-core 2.0 GHz or higher |
| RAM: 2 GB or more | RAM: 4 GB or more |
| Graphics: Adreno 530 or higher | Graphics: Adreno 640 or higher |
| Free space: 6 GB or more | Free space: 8 GB or more |
| Internet connection: Wi-Fi or mobile data | Internet connection: Wi-Fi or mobile data |
-
It is also recommended to use a device with a large screen and a good resolution to better appreciate the game's details.
-
How to download the official Red Dead Redemption 2 APK?
-
To download the official Red Dead Redemption 2 APK, follow these steps:
-
Step 1: Visit the official RDR2 Mobile website
-
The first step is to go to the official RDR2 Mobile website, where you can find all the information about the game, its features, its requirements, and its download. The website is https://rdr2mobile.com/. On this site you will see a green button that says "Download APK". Clicking this button starts the download of the game's APK file.
-
Step 2: Download the APK file
-
The second step is to download the game's APK file to your Android device. The APK file is about 5.5 GB in size, so it is recommended to use a stable, fast Wi-Fi connection to avoid interruptions or errors. The APK file will be saved in the device's Downloads folder, or in the location chosen by the user.
-
Step 3: Install the game on the Android device
-
The third step is to install the game on the Android device. To do this, open the downloaded APK file and follow the on-screen instructions. You may need to enable the "Unknown sources" option in the device's security settings to allow the installation of apps that do not come from the official Google Play store. Once the game is installed, an RDR2 Mobile icon will appear in the device's app menu. Tapping this icon launches the game so you can enjoy Red Dead Redemption 2 on Android.
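If you prefer to sideload the file from a computer instead of opening it on the phone, the sketch below shows the same installation done over USB with adb. It assumes adb is installed, USB debugging is enabled on the device, and the file name is only a placeholder for whatever you downloaded.

```python
import subprocess

APK_PATH = "rdr2_mobile.apk"  # placeholder name for the downloaded APK file

# Install (or update, thanks to -r) the APK on the USB-connected device via adb.
result = subprocess.run(
    ["adb", "install", "-r", APK_PATH],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)  # adb prints "Success" on a successful install
```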
-
Tips and tricks for enjoying Red Dead Redemption 2 on Android
-
Red Dead Redemption 2 is a very complete and complex game that offers the player many possibilities and options. To get the most out of it and enjoy a good experience on Android, you can follow these tips and tricks:
-
-
Use Dead Eye mode: Dead Eye is a special ability that lets the player slow down time and aim precisely at enemies. It is very useful in difficult situations or against large groups of rivals. To activate Dead Eye, tap the eye button in the lower-right corner of the screen.
-
Bond with your horse: The horse is the player's main means of transport and also a faithful companion. It is important to bond with your horse by feeding, petting, and brushing it, to improve its attributes and its behavior. A well-cared-for horse will be faster, tougher, and more obedient.
-
Customize the HUD: The HUD is the interface that shows information about the game, such as the map, health, honor, and weapons. The player can customize the HUD to their liking, hiding or showing whichever elements they want. To open the HUD customization menu, tap the pause button in the upper-left corner of the screen.
-
-
Conclusion
-
Red Dead Redemption 2 is an incredible game that is worth playing on any platform. Thanks to the official RDR2 Mobile APK, Android users can enjoy it on their mobile devices with good graphics quality and gameplay adapted to touch controls. To download and install the game you only need to follow a few simple steps and meet some minimum requirements. You can also apply a few tips and tricks to improve the experience and have more fun with the game.
-
If you liked this article, share it with your friends and leave us a comment with your opinion of the official Red Dead Redemption 2 APK for Android. Have you tried the game? What did you think of it? What tips or tricks can you share with us? We look forward to reading your comments!
-
Frequently asked questions
-
-
Is it safe to download and install the official Red Dead Redemption 2 APK?
-
Yes, it is safe as long as you download it from the official RDR2 Mobile website, which is https://rdr2mobile.com/. This site offers the original, unmodified APK file of the game, which contains no viruses or malware.
-
Is it free to download and install the official Red Dead Redemption 2 APK?
-
Yes, it is free to download and install the official Red Dead Redemption 2 APK. You do not need to pay anything or register to get the game's APK file. However, it is recommended to have a Rockstar Games Social Club account to access the online mode and other features of the game.
-
Is the official Red Dead Redemption 2 APK compatible with all Android devices?
-
No, the official Red Dead Redemption 2 APK is not compatible with all Android devices. The game has minimum and recommended requirements, listed on the official RDR2 Mobile page, that must be met for it to run properly. If your device does not meet these requirements, the game may not run at all or may suffer performance and stability problems.
-
Can you play Red Dead Redemption 2 on Android with a controller?
-
Yes, you can play Red Dead Redemption 2 on Android with a controller. The game is compatible with most Bluetooth controllers that can be paired with an Android device. It detects the controller automatically and shows the corresponding controls on screen. The player can customize the controller configuration from the game's options menu.
-
Can you play Red Dead Redemption 2 on Android without an internet connection?
-
Yes, you can play Red Dead Redemption 2 on Android without an internet connection. The story mode can be played offline. However, to access the online mode and other features of the game, such as updates or technical support, you need a stable, fast internet connection.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Gratis Excel 2016 for Windows A Comprehensive Review.md b/spaces/fatiXbelha/sd/Download Gratis Excel 2016 for Windows A Comprehensive Review.md
deleted file mode 100644
index 35aa078063851d6c2bbcdcac8a4299f6cb0f1628..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Gratis Excel 2016 for Windows A Comprehensive Review.md
+++ /dev/null
@@ -1,198 +0,0 @@
-
-
Download Excel 2016 for free: how to do it and what you need to know
-
If you are looking for a program to manage, analyze, and manipulate large amounts of data, you have probably already heard of Excel. It is one of the most famous and versatile applications in Microsoft's Office suite, available for both Windows and Mac. But how can you download Excel 2016, the latest version of the software, for free? And what are its main features and new additions? In this article, we will explain everything you need to know to get and use this powerful tool for calculating and visualizing data.
Excel 2016 is an application that is part of the Microsoft Office productivity suite, together with others such as Word, PowerPoint, Outlook, and OneNote. It is a spreadsheet program, that is, a program that lets you create, edit, and save tables of data made up of cells, columns, and rows. Each cell can contain a numeric value, a formula, a function, a piece of text, or a reference to another cell. This makes it possible to perform complex calculations, statistical analyses, simulations, forecasts, and much more.
-
Excel 2016 is not just a simple spreadsheet, but also a powerful data visualization tool. It lets you create different types of charts, such as histograms, pie charts, line charts, bar charts, area charts, and more. These charts can be customized in many ways by changing colors, labels, titles, legends, and other elements. You can also insert images, shapes, icons, SmartArt, and other graphic objects to make your sheets more attractive and easier to understand.
-
Downloading Excel 2016 for free therefore means having one of the best programs for managing data efficiently and effectively at your disposal. Whether you are a student, a professional, an entrepreneur, or a numbers enthusiast, with Excel 2016 you can carry out your tasks with ease and precision.
-
Excel 2016: the main features and what's new
-
Excel 2016 introduces several features and improvements compared to previous versions of the software. Here are some of the most important ones:
-
download gratis excel 2016 for windows
-download gratis excel 2016 for mac
-download gratis excel 2016 trial version
-download gratis excel 2016 full version
-download gratis excel 2016 64 bit
-download gratis excel 2016 32 bit
-download gratis excel 2016 italiano
-download gratis excel 2016 portugues
-download gratis excel 2016 espanol
-download gratis excel 2016 francais
-download gratis excel 2016 deutsch
-download gratis excel 2016 crack
-download gratis excel 2016 key
-download gratis excel 2016 activation code
-download gratis excel 2016 product key
-download gratis excel 2016 update
-download gratis excel 2016 patch
-download gratis excel 2016 offline installer
-download gratis excel 2016 iso file
-download gratis excel 2016 setup file
-download gratis microsoft office excel 2016
-download gratis microsoft office professional plus 2016 with excel
-download gratis microsoft office home and student 2016 with excel
-download gratis microsoft office home and business 2016 with excel
-download gratis microsoft office standard 2016 with excel
-how to download gratis excel 2016
-where to download gratis excel 2016
-best site to download gratis excel 2016
-safe way to download gratis excel 2016
-tutorial on how to download gratis excel 2016
-guide on how to download gratis excel 2016
-tips on how to download gratis excel 2016
-benefits of downloading gratis excel 2016
-features of downloading gratis excel 2016
-advantages of downloading gratis excel 2016
-disadvantages of downloading gratis excel 2016
-alternatives to downloading gratis excel 2016
-comparison of downloading gratis excel 2016 and other versions
-review of downloading gratis excel 2016
-rating of downloading gratis excel 2016
-feedback on downloading gratis excel 2016
-testimonials on downloading gratis excel 2016
-problems with downloading gratis excel 2016
-solutions for downloading gratis excel 2016
-troubleshooting for downloading gratis excel 2016
-support for downloading gratis excel 2016
-help for downloading gratis excel 2016
-assistance for downloading gratis excel 2016
-resources for downloading gratis excel 2016
-tools for downloading gratis excel 2016
-
-
Pivot Tables: a feature that lets you quickly summarize and analyze large amounts of data. With Pivot Tables you can group data by categories, filters, sort orders, and custom calculations. In Excel 2016 you can also create Pivot Tables from external data sources, such as databases, text files, or websites. In addition, you can use the Recommended PivotTables feature to get suggestions on how to organize your data according to your needs (see the short sketch after this list for the same idea expressed in code).
-
Power Query: a feature that lets you import, transform, and combine data from different sources, such as Excel files, CSV, XML, JSON, databases, websites, and more. With Power Query you can clean, filter, sort, group, and reshape data in a simple, intuitive way. You can also create queries, that is, custom data requests that can be saved and refreshed automatically.
-
Power Pivot: a feature that lets you build advanced, complex data models by linking different tables and data sources together. With Power Pivot you can create relationships between tables, that is, logical links based on one or more shared columns. You can also create measures, custom calculations that can be used in Pivot Tables or charts.
-
Power Map: a feature that lets you create interactive, dynamic maps to visualize geographic data. With Power Map you can plot data points, that is, numeric or text values tied to geographic coordinates. You can also create tours, animated sequences of maps that show how the data evolves over time or across space.
-
Waterfall, histogram, and Pareto charts: three new chart types introduced in Excel 2016. Waterfall charts show the positive and negative changes in a value over time or across categories. Histograms show the frequency distribution of a value over predefined or custom intervals. Pareto charts show the relationship between the causes and effects of a phenomenon, highlighting the most significant causes.
-
Forecasting functions: a set of functions for making forecasts based on historical data. With these functions you can estimate the future value of a variable based on a linear or exponential trend. You can also display the forecasts in a dedicated chart, with confidence intervals and standard errors shown.
-
Nested logical functions: the ability to place several logical functions (such as IF, AND, OR) inside a single formula. This lets you build more complex and specific conditions to get different results depending on the cell values; for example, =IF(A1>=90,"High",IF(A1>=60,"Medium","Low")) returns a different label for each score range.
-
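For readers who want to prototype the same kind of summary outside Excel, here is a minimal pandas sketch of what a basic Pivot Table does; the table and column names are invented for the example.

```python
import pandas as pd

# Toy sales data; the column names are illustrative only.
sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [120, 80, 200, 150, 90],
})

# Equivalent of a simple Pivot Table: total revenue by region and product.
pivot = sales.pivot_table(values="revenue", index="region",
                          columns="product", aggfunc="sum", fill_value=0)
print(pivot)
```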
-
Excel 2016: system requirements and available editions
-
To download and use Excel 2016 on your device, you need to make sure it meets the following system requirements:
-
-
-
| Operating system | Processor | RAM | Disk space | Screen resolution |
| --- | --- | --- | --- | --- |
| Windows 7 or later | 1 GHz or faster (x86 or x64) | 2 GB (32-bit) or 4 GB (64-bit) | 3 GB | 1024 x 768 pixels or higher |
| Mac OS X 10.10 or later | Intel | 4 GB | 6 GB | 1280 x 800 pixels or higher |
-
-
-
Excel 2016 is available in several editions, depending on your needs and your budget. The main ones are:
-
-
Microsoft 365: an annual or monthly subscription that gives you access to all the Office applications, including Excel 2016, on multiple devices (PC, Mac, smartphone, and tablet). It also includes 1 TB of space on OneDrive, Microsoft's cloud storage service, and other benefits such as technical support and extra features. The price depends on the plan you choose: Microsoft 365 Personal (for one user) costs 69 euros per year or 7 euros per month, while Microsoft 365 Family (for six users) costs 99 euros per year or 10 euros per month.
-
Office Home & Student: a permanent license that lets you install Excel 2016 and the other Office applications (Word, PowerPoint, and OneNote) on a single PC or Mac. It does not include future upgrades, OneDrive storage, or the other benefits of Microsoft 365. It costs 149 euros as a one-time purchase.
-
Excel 2016: a permanent license that lets you install only Excel 2016 on a single PC or Mac. It does not include the other Office applications, future upgrades, OneDrive storage, or the other benefits of Microsoft 365. It costs 135 euros as a one-time purchase.
-
-
How to download Excel 2016 for free on PC
-
If you want to download Excel 2016 for free on your Windows PC, you have two main options: activate the free trial of Microsoft 365, or buy a license for Office Home & Student or for Excel 2016. Here is how to do both.
-
How to activate the free trial of Microsoft 365
-
The free trial of Microsoft 365 lets you use Excel 2016 and the other Office applications for one month without paying anything. When the trial period ends, you can decide whether to renew the subscription or cancel it. Here are the steps to activate the free trial:
Create a Microsoft account or sign in with an existing one. If you do not have an account, you can create one for free by entering your email address and a password.
-
Choose the plan you prefer, Microsoft 365 Personal or Microsoft 365 Family, and click the Start your free month button.
-
Enter your credit card or PayPal details and click the Sign up button. You will not be charged anything until the free trial expires.
-
Click the Install button and follow the instructions to download and install Excel 2016 and the other Office applications on your PC.
-
-
How to buy a license for Office Home & Student or Excel 2016
-
If you prefer to buy a permanent license for Office Home & Student or for Excel 2016, you can do so directly from the official Microsoft Office website. Here are the steps to follow:
Choose the product you want to buy, Office Home & Student or Excel 2016, and click the Buy now button.
-
Create a Microsoft account or sign in with an existing one. If you do not have an account, you can create one for free by entering your email address and a password.
-
Enter your credit card or PayPal details and click the Confirm order button. You will be charged the price of the product you chose.
-
Click the Install button and follow the instructions to download and install Excel 2016 and the other Office applications on your PC.
-
-
How to download Excel 2016 for free on Mac
-
If you want to download Excel 2016 for free on your Mac, you have two main options: activate the free trial of Microsoft 365, or buy a license for Office Home & Student or for Excel 2016. Here is how to do both.
-
How to activate the free trial of Microsoft 365
-
The free trial of Microsoft 365 lets you use Excel 2016 and the other Office applications for one month without paying anything. When the trial period ends, you can decide whether to renew the subscription or cancel it. Here are the steps to activate the free trial:
Create a Microsoft account or sign in with an existing one. If you do not have an account, you can create one for free by entering your email address and a password.
-
Choose the plan you prefer, Microsoft 365 Personal or Microsoft 365 Family, and click the Start your free month button.
-
Enter your credit card or PayPal details and click the Sign up button. You will not be charged anything until the free trial expires.
-
Click the Install button and follow the instructions to download and install Excel 2016 and the other Office applications on your Mac.
-
-
How to buy a license for Office Home & Student or Excel 2016
-
If you prefer to buy a permanent license for Office Home & Student or for Excel 2016, you can do so directly from the official Microsoft Office website. Here are the steps to follow:
Choose the product you want to buy, Office Home & Student or Excel 2016, and click the Buy now button.
-
Create a Microsoft account or sign in with an existing one. If you do not have an account, you can create one for free by entering your email address and a password.
-
Enter your credit card or PayPal details and click the Confirm order button. You will be charged the price of the product you chose.
-
Click the Install button and follow the instructions to download and install Excel 2016 and the other Office applications on your Mac.
-
-
How to download Excel 2016 for free on smartphones and tablets
-
If you want to download Excel 2016 for free on your smartphone or tablet, you can easily do so through your platform's app store. Excel 2016 is available as a free app for Android, iPhone, and iPad. However, to use all of the app's features you need a Microsoft 365 subscription; otherwise you can only view Excel files, not edit them or create new ones. Here is how to download Excel 2016 on your mobile devices.
-
How to download Excel 2016 on Android
-
To download Excel 2016 on Android, follow these steps:
Type "Excel" in the search bar at the top and tap the result for Microsoft Excel: create and edit spreadsheets.
-
Tap the Install button and wait for the download and installation to finish.
-
Open the Excel app and sign in with your Microsoft account. If you do not have an account, you can create one for free by entering your email address and a password.
-
If you have a Microsoft 365 subscription, you can use all of the Excel app's features. Otherwise, you can only view Excel files, not edit them or create new ones.
-
-
How to download Excel 2016 on iPhone and iPad
-
To download Excel 2016 on iPhone and iPad, follow these steps:
Type "Excel" in the search bar at the bottom and tap the result for Microsoft Excel.
-
Tap the Get button and enter your Apple ID, or use Face ID or Touch ID to confirm the download.
-
Open the Excel app and sign in with your Microsoft account. If you do not have an account, you can create one for free by entering your email address and a password.
-
If you have a Microsoft 365 subscription, you can use all of the Excel app's features. Otherwise, you can only view Excel files, not edit them or create new ones.
-
-
Conclusions and FAQ
-
In this article, we explained how to download Excel 2016, the latest version of Microsoft's famous spreadsheet, for free. We went over the main features and improvements of Excel 2016, its system requirements, and the available editions. We also showed how to download Excel 2016 on PC, Mac, smartphone, and tablet, either by activating the free trial of Microsoft 365 or by buying a permanent license for Office Home & Student or Excel 2016. We hope this article was useful and that you can now use Excel 2016 to manage, analyze, and visualize your data efficiently and effectively.
-
If you still have doubts or questions about how to download Excel 2016 for free, below are some FAQs that may help clear them up.
-
What is Microsoft 365?
-
Microsoft 365 is an annual or monthly subscription that gives you access to all the Office applications, including Excel 2016, on multiple devices (PC, Mac, smartphone, and tablet). It also includes 1 TB of space on OneDrive, Microsoft's cloud storage service, and other benefits such as technical support and extra features.
-
What is Office Home & Student?
-
Office Home & Student is a permanent license that lets you install Excel 2016 and the other Office applications (Word, PowerPoint, and OneNote) on a single PC or Mac. It does not include future upgrades, OneDrive storage, or the other benefits of Microsoft 365.
-
What is the Excel 2016 standalone license?
-
The Excel 2016 standalone license is a permanent license that lets you install only Excel 2016 on a single PC or Mac. It does not include the other Office applications, future upgrades, OneDrive storage, or the other benefits of Microsoft 365.
-
How can I cancel the free trial of Microsoft 365?
-
To cancel the free trial of Microsoft 365, follow these steps:
Click your profile icon in the top right and then My account.
-
Click the Services & subscriptions tab and then Manage next to Microsoft 365.
-
Click Cancel subscription and confirm your choice.
-
-
If you cancel the free trial before it expires, you will not be charged anything. If you cancel it after it expires, you will be charged the subscription fee for the following month.
-
How can I update Excel 2016?
-
To update Excel 2016, follow these steps:
-
-
Open Excel 2016 on your device.
-
Click the File menu and then Account.
-
Click the Update Options button and then Update Now.
-
Wait for the update process to finish and restart Excel 2016.
-
-
If you have a Microsoft 365 subscription, you will automatically receive the latest updates for Excel 2016 and the other Office applications. If instead you have a permanent license for Office Home & Student or Excel 2016, you will only receive security and stability updates, not new features.
-
How can I contact Microsoft Office technical support?
-
To contact Microsoft Office technical support, follow these steps:
Choose the product you are interested in from the list, or type your problem into the search bar.
-
Check the available resources, such as guides, videos, forums, and FAQs, to find a solution to your problem.
-
If you cannot find a solution, click the Contact us button and choose from the available options, such as chat, phone, or feedback.
-
-
Microsoft Office technical support is free for all users, but response times and channels may vary depending on the type of problem and the type of license.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Eski TikTok APK The Best Way to Access Old Features and Filters on TikTok.md b/spaces/fatiXbelha/sd/Eski TikTok APK The Best Way to Access Old Features and Filters on TikTok.md
deleted file mode 100644
index ffc0895221b8f4a1a2c444975cfc966b78aba7d6..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Eski TikTok APK The Best Way to Access Old Features and Filters on TikTok.md
+++ /dev/null
@@ -1,138 +0,0 @@
-
-
What is Eski TikTok APK and How to Download It
-
Introduction
-
TikTok is one of the most popular social media platforms in the world, with over 800 million active users who create and share short-form videos on various topics such as music, comedy, dance, education, beauty, fashion, and more. The app has a huge library of songs, filters, effects, stickers, and other features that make video creation fun and easy.
-
eski tiktok apk download
-eski tiktok apk indir
-eski tiktok apk 2023
-eski tiktok apk uptodown
-eski tiktok apk android
-eski tiktok apk latest version
-eski tiktok apk free
-eski tiktok apk old version
-eski tiktok apk mod
-eski tiktok apk hack
-eski tiktok apk no watermark
-eski tiktok apk pro
-eski tiktok apk premium
-eski tiktok apk full
-eski tiktok apk cracked
-eski tiktok apk unlimited
-eski tiktok apk original
-eski tiktok apk update
-eski tiktok apk 2022
-eski tiktok apk 2021
-eski tiktok apk 2020
-eski tiktok apk 2019
-eski tiktok apk 2018
-eski tiktok apk 2017
-eski tiktok apk 2016
-eski tiktok apk for pc
-eski tiktok apk for ios
-eski tiktok apk for windows
-eski tiktok apk for mac
-eski tiktok apk for laptop
-eski tiktok apk for tablet
-eski tiktok apk for iphone
-eski tiktok apk for ipad
-eski tiktok apk for samsung
-eski tiktok apk for huawei
-eski tiktok apk for xiaomi
-eski tiktok apk for oppo
-eski tiktok apk for vivo
-eski tiktok apk for realme
-eski tiktok apk for nokia
-eski tiktok apk for lg
-eski tiktok apk for sony
-eski tiktok apk for motorola
-eski tiktok apk for lenovo
-eski tiktok apk for asus
-eski tiktok apk for acer
-eski tiktok apk for dell
-eski tiktok apk for hp
-
However, not everyone is happy with the current version of TikTok. Some users prefer the old features and interface of the app that were available before it merged with Musical.ly in 2018. That's why some people look for alternative ways to access the old version of TikTok, such as downloading an APK file.
-
An APK file is an Android application package that contains all the files and data needed to install an app on an Android device. By downloading an APK file from a third-party source, you can bypass the official app store and install apps that are not available or restricted in your region.
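Since an APK is essentially a ZIP archive with a fixed layout, you can inspect a downloaded file before installing it. The sketch below lists the archive's entries and checks for the manifest every valid APK must contain; the file name is a placeholder.

```python
import zipfile

APK_PATH = "eski_tiktok.apk"  # placeholder name for the downloaded file

with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()
    for name in names[:20]:       # print the first few entries
        print(name)
    # every valid APK ships a (binary) AndroidManifest.xml describing the app
    print("has manifest:", "AndroidManifest.xml" in names)
```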
-
One of the most popular APK files for TikTok is Eski TikTok APK, which claims to offer the old version of TikTok with all its original features and functions. But what exactly is Eski TikTok APK and how can you download it? In this article, we will answer these questions and more.
-
How to Download and Install Eski TikTok APK
-
If you want to try out Eski TikTok APK on your Android device, you will need to follow these steps:
-
Step 1: Enable unknown sources on your device
-
Since Eski TikTok APK is not available on Google Play Store or any other official app store, you will need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the official app store.
-
To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message that installing apps from unknown sources may harm your device or data. Tap OK to proceed.
-
Step 2: Download the APK file from a trusted source
-
Next, you will need to download the Eski TikTok APK file from a trusted source. There are many websites that offer APK files for various apps, but not all of them are safe and reliable. Some of them may contain malware or viruses that can harm your device or data.
-
To avoid such risks, you should only download APK files from reputable sources that have positive reviews and ratings from other users. You can also use an antivirus app to scan the APK file before installing it.
-
One of the websites that you can use to download Eski TikTok APK is APKPure.com. This website provides verified and safe APK files for various apps and games. You can also find the latest updates and versions of the apps on this website.
-
To download Eski TikTok APK from APKPure.com, follow these steps:
-
-
Go to APKPure.com and search for Eski TikTok APK in the search bar.
-
Select the app from the search results and click on the Download APK button.
-
Choose a download location and wait for the download to complete.
-
-
Step 3: Install the APK file and launch the app
-
Once you have downloaded the Eski TikTok APK file, you can install it on your device by following these steps:
-
-
Locate the APK file on your device storage and tap on it.
-
You may see a pop-up message asking you to confirm the installation. Tap on Install and wait for the installation to finish.
-
Once the installation is done, you can launch the app by tapping on Open or by finding it on your app drawer.
-
-
Congratulations! You have successfully installed Eski TikTok APK on your device. You can now enjoy the old version of TikTok with all its features and functions.
-
How to Use Eski TikTok APK
-
Eski TikTok APK is very similar to the original TikTok app, except that it has the old features and interface that were available before 2018. You can use Eski TikTok APK to create and edit videos, explore and discover videos, and interact and communicate with other users.
-
How to create and edit videos
-
To create and edit videos on Eski TikTok APK, follow these steps:
-
-
Tap on the plus icon at the bottom of the screen to open the camera.
-
Choose a song from the music library or upload your own audio.
-
Record your video by holding down the record button. You can also use filters, effects, stickers, timers, speed, beauty, and other features to enhance your video.
-
Edit your video by trimming, cropping, adding text, adjusting volume, applying filters, etc.
-
Add a caption, hashtags, and tags to your video and tap on Post to share it with your followers or save it as a draft.
-
-
How to explore and discover videos
-
To explore and discover videos on Eski TikTok APK, follow these steps:
-
-
Tap on the home icon at the bottom of the screen to see the videos from your following list or from other users around the world.
-
Swipe left or right to switch between different tabs such as For You, Following, Trending, etc.
-
Tap on a video to watch it in full screen. You can also like, comment, share, or save the video.
-
Tap on a user's profile picture or username to see their profile page. You can also follow, message, or block them.
-
Tap on the magnifying glass icon at the bottom of the screen to search for videos, users, hashtags, songs, or topics.
-
-
How to interact and communicate with other users
-
To interact and communicate with other users on Eski TikTok APK, follow these steps:
-
-
Tap on the heart icon at the bottom of the screen to see your notifications. You can see who liked, commented, followed, or mentioned you on your videos or messages.
-
Tap on the message icon at the bottom of the screen to see your chats. You can send and receive messages with your friends or other users. You can also send photos, videos, stickers, emojis, or voice messages.
-
Tap on the live icon at the top of the screen to see who is currently streaming live.
A video app that lets you create and share funny videos with filters, effects, and editing tools.
-
Pros: variety of categories and genres; laugh and have fun with other users; earn rewards and prizes.
-
Cons: not very original or creative; some videos may be offensive or inappropriate; some features require payment or subscription.
-
-
-
Conclusion
-
Eski TikTok APK is an app that gives you access to the old version of TikTok with all of its original features and functions. It is a good option for anyone who misses the app's old look, feel, and interface. However, it also has some drawbacks: it is incompatible with newer versions of TikTok, it is not available on official app stores, and it carries a potential risk of malware or viruses.
-
If you want to download and install Eski TikTok APK, you will need to enable unknown sources on your device settings, download the APK file from a trusted source, and install the APK file on your device. You can then use Eski TikTok APK to create and edit videos, explore and discover videos, and interact and communicate with other users.
-
However, if you are looking for other alternatives to TikTok that offer similar or better features and functions, you may want to check out some of the apps mentioned above, such as Triller, Dubsmash, Byte, Lomotif, or FunnyTube. These apps may provide you with more options and variety for your video creation and sharing needs.
-
FAQs
-
Q1. Is Eski TikTok APK legal and safe to use?
-
A1. Eski TikTok APK is not illegal to use, but it may violate the terms and conditions of TikTok. Therefore, you may face some consequences or issues if you use it. Eski TikTok APK is also not very safe to use, as it may contain malware or viruses that can harm your device or data. You should only download it from a trusted source and scan it with an antivirus app before installing it.
-
Q2. Can I use Eski TikTok APK on iOS devices?
-
A2. No, Eski TikTok APK is only compatible with Android devices. If you want to use the old version of TikTok on iOS devices, you may need to jailbreak your device or use an emulator.
-
Q3. Can I update Eski TikTok APK to the latest version of TikTok?
-
A3. No, Eski TikTok APK is based on an old version of TikTok and cannot be updated to the latest version of the app. If you want to use the latest version of TikTok, you will need to uninstall Eski TikTok APK and download the official app from the app store.
-
Q4. Can I log in with my existing TikTok account on Eski TikTok APK?
-
A4. Yes, you can log in with your existing TikTok account on Eski TikTok APK. However, you may not be able to see or use some of the new features and functions that are available on the official app. You may also face some issues or errors if you switch between the two apps frequently.
-
Q5. Can I share my videos from Eski TikTok APK to other social media platforms?
-
A5. Yes, you can share your videos from Eski TikTok APK to other social media platforms such as Facebook, Instagram, Twitter, etc. However, you may not be able to use some of the features or functions that are available on the official app, such as stickers, effects, hashtags, etc.
- This is a demo for the ChatGPT Plugins LangChain usecase
- Be aware that it currently only works with plugins that do not require auth.
- Find more plugins here
-
-
-"""
-
-with gr.Blocks(css="style.css") as demo:
- with gr.Column(elem_id="col-container"):
- gr.HTML(title)
- prompt = gr.Textbox(label="Prompt", value="what t shirts are available in klarna?")
- plugin = gr.Textbox(label="Plugin json", info="You need the .json plugin manifest file of the plugin you want to use. Be aware that it currently only works with plugins that do not require auth.", value="https://www.klarna.com/.well-known/ai-plugin.json")
- openai_api_key = gr.Textbox(label="OpenAI API Key", info="*required", type="password")
- run_btn = gr.Button("Run")
- response = gr.Textbox(label="Response")
- run_btn.click(fn=run,
- inputs=[prompt, plugin, openai_api_key],
- outputs=[response]
- )
-
-demo.queue().launch()
\ No newline at end of file
diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/box_ops.py b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/box_ops.py
deleted file mode 100644
index 781068d294e576954edb4bd07b6e0f30e4e1bcd9..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/util/box_ops.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Utilities for bounding box manipulation and GIoU.
-"""
-import torch
-from torchvision.ops.boxes import box_area
-
-
-def box_cxcywh_to_xyxy(x):
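-    # Convert boxes from (center_x, center_y, width, height) to (x0, y0, x1, y1) corners.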
- x_c, y_c, w, h = x.unbind(-1)
- b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)]
- return torch.stack(b, dim=-1)
-
-
-def box_xyxy_to_cxcywh(x):
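-    # Convert boxes from (x0, y0, x1, y1) corners to (center_x, center_y, width, height).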
- x0, y0, x1, y1 = x.unbind(-1)
- b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)]
- return torch.stack(b, dim=-1)
-
-
-# modified from torchvision to also return the union
-def box_iou(boxes1, boxes2):
- area1 = box_area(boxes1)
- area2 = box_area(boxes2)
-
- # import ipdb; ipdb.set_trace()
- lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]
- rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]
-
- wh = (rb - lt).clamp(min=0) # [N,M,2]
- inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]
-
- union = area1[:, None] + area2 - inter
-
- iou = inter / (union + 1e-6)
- return iou, union
-
-
-def generalized_box_iou(boxes1, boxes2):
- """
- Generalized IoU from https://giou.stanford.edu/
-
- The boxes should be in [x0, y0, x1, y1] format
-
- Returns a [N, M] pairwise matrix, where N = len(boxes1)
- and M = len(boxes2)
- """
- # degenerate boxes gives inf / nan results
- # so do an early check
- assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
- assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
- # except:
- # import ipdb; ipdb.set_trace()
- iou, union = box_iou(boxes1, boxes2)
-
- lt = torch.min(boxes1[:, None, :2], boxes2[:, :2])
- rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])
-
- wh = (rb - lt).clamp(min=0) # [N,M,2]
- area = wh[:, :, 0] * wh[:, :, 1]
-
- return iou - (area - union) / (area + 1e-6)
-
-
-# modified from torchvision to also return the union
-def box_iou_pairwise(boxes1, boxes2):
- area1 = box_area(boxes1)
- area2 = box_area(boxes2)
-
- lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2]
- rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2]
-
- wh = (rb - lt).clamp(min=0) # [N,2]
- inter = wh[:, 0] * wh[:, 1] # [N]
-
- union = area1 + area2 - inter
-
- iou = inter / union
- return iou, union
-
-
-def generalized_box_iou_pairwise(boxes1, boxes2):
- """
- Generalized IoU from https://giou.stanford.edu/
-
- Input:
- - boxes1, boxes2: N,4
- Output:
- - giou: N, 4
- """
- # degenerate boxes gives inf / nan results
- # so do an early check
- assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
- assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
- assert boxes1.shape == boxes2.shape
- iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4
-
- lt = torch.min(boxes1[:, :2], boxes2[:, :2])
- rb = torch.max(boxes1[:, 2:], boxes2[:, 2:])
-
- wh = (rb - lt).clamp(min=0) # [N,2]
- area = wh[:, 0] * wh[:, 1]
-
- return iou - (area - union) / area
-
-
-def masks_to_boxes(masks):
- """Compute the bounding boxes around the provided masks
-
- The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions.
-
- Returns a [N, 4] tensors, with the boxes in xyxy format
- """
- if masks.numel() == 0:
- return torch.zeros((0, 4), device=masks.device)
-
- h, w = masks.shape[-2:]
-
- y = torch.arange(0, h, dtype=torch.float)
- x = torch.arange(0, w, dtype=torch.float)
- y, x = torch.meshgrid(y, x)
-
- x_mask = masks * x.unsqueeze(0)
- x_max = x_mask.flatten(1).max(-1)[0]
- x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]
-
- y_mask = masks * y.unsqueeze(0)
- y_max = y_mask.flatten(1).max(-1)[0]
- y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0]
-
- return torch.stack([x_min, y_min, x_max, y_max], 1)
-
-
-if __name__ == "__main__":
- x = torch.rand(5, 4)
- y = torch.rand(3, 4)
- iou, union = box_iou(x, y)
- import ipdb
-
- ipdb.set_trace()
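# Usage sketch for the helpers above (an editorial example, not part of the upstream
# file). It assumes the deleted module was importable as groundingdino.util.box_ops.
import torch
from groundingdino.util import box_ops

pred_cxcywh = torch.tensor([[0.5, 0.5, 0.4, 0.4],
                            [0.2, 0.3, 0.2, 0.2]])       # two boxes as (cx, cy, w, h)
tgt_xyxy = torch.tensor([[0.3, 0.3, 0.7, 0.7],
                         [0.0, 0.0, 1.0, 1.0]])          # two boxes as (x0, y0, x1, y1)

pred_xyxy = box_ops.box_cxcywh_to_xyxy(pred_cxcywh)      # convert to corner format
iou, union = box_ops.box_iou(pred_xyxy, tgt_xyxy)        # [2, 2] pairwise IoU
giou = box_ops.generalized_box_iou(pred_xyxy, tgt_xyxy)  # [2, 2] pairwise GIoU
print(iou, giou, sep="\n")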
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/toStringTag.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/toStringTag.js
deleted file mode 100644
index 95f82703d08f358b00f180c7b479b9f33dff3dac..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test/toStringTag.js
+++ /dev/null
@@ -1,40 +0,0 @@
-'use strict';
-
-var test = require('tape');
-var hasToStringTag = require('has-tostringtag/shams')();
-
-var inspect = require('../');
-
-test('Symbol.toStringTag', { skip: !hasToStringTag }, function (t) {
- t.plan(4);
-
- var obj = { a: 1 };
- t.equal(inspect(obj), '{ a: 1 }', 'object, no Symbol.toStringTag');
-
- obj[Symbol.toStringTag] = 'foo';
- t.equal(inspect(obj), '{ a: 1, [Symbol(Symbol.toStringTag)]: \'foo\' }', 'object with Symbol.toStringTag');
-
- t.test('null objects', { skip: 'toString' in { __proto__: null } }, function (st) {
- st.plan(2);
-
- var dict = { __proto__: null, a: 1 };
- st.equal(inspect(dict), '[Object: null prototype] { a: 1 }', 'null object with Symbol.toStringTag');
-
- dict[Symbol.toStringTag] = 'Dict';
- st.equal(inspect(dict), '[Dict: null prototype] { a: 1, [Symbol(Symbol.toStringTag)]: \'Dict\' }', 'null object with Symbol.toStringTag');
- });
-
- t.test('instances', function (st) {
- st.plan(4);
-
- function C() {
- this.a = 1;
- }
- st.equal(Object.prototype.toString.call(new C()), '[object Object]', 'instance, no toStringTag, Object.prototype.toString');
- st.equal(inspect(new C()), 'C { a: 1 }', 'instance, no toStringTag');
-
- C.prototype[Symbol.toStringTag] = 'Class!';
- st.equal(Object.prototype.toString.call(new C()), '[object Class!]', 'instance, with toStringTag, Object.prototype.toString');
- st.equal(inspect(new C()), 'C [Class!] { a: 1 }', 'instance, with toStringTag');
- });
-});
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/namespace.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/namespace.js
deleted file mode 100644
index 80fa14fa1ca7d0e5178de1a77e8980027a514178..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/namespace.js
+++ /dev/null
@@ -1,593 +0,0 @@
-"use strict";
-var __importDefault = (this && this.__importDefault) || function (mod) {
- return (mod && mod.__esModule) ? mod : { "default": mod };
-};
-Object.defineProperty(exports, "__esModule", { value: true });
-exports.Namespace = exports.RESERVED_EVENTS = void 0;
-const socket_1 = require("./socket");
-const typed_events_1 = require("./typed-events");
-const debug_1 = __importDefault(require("debug"));
-const broadcast_operator_1 = require("./broadcast-operator");
-const debug = (0, debug_1.default)("socket.io:namespace");
-exports.RESERVED_EVENTS = new Set(["connect", "connection", "new_namespace"]);
-/**
- * A Namespace is a communication channel that allows you to split the logic of your application over a single shared
- * connection.
- *
- * Each namespace has its own:
- *
- * - event handlers
- *
- * ```
- * io.of("/orders").on("connection", (socket) => {
- * socket.on("order:list", () => {});
- * socket.on("order:create", () => {});
- * });
- *
- * io.of("/users").on("connection", (socket) => {
- * socket.on("user:list", () => {});
- * });
- * ```
- *
- * - rooms
- *
- * ```
- * const orderNamespace = io.of("/orders");
- *
- * orderNamespace.on("connection", (socket) => {
- * socket.join("room1");
- * orderNamespace.to("room1").emit("hello");
- * });
- *
- * const userNamespace = io.of("/users");
- *
- * userNamespace.on("connection", (socket) => {
- * socket.join("room1"); // distinct from the room in the "orders" namespace
- * userNamespace.to("room1").emit("holà");
- * });
- * ```
- *
- * - middlewares
- *
- * ```
- * const orderNamespace = io.of("/orders");
- *
- * orderNamespace.use((socket, next) => {
- * // ensure the socket has access to the "orders" namespace
- * });
- *
- * const userNamespace = io.of("/users");
- *
- * userNamespace.use((socket, next) => {
- * // ensure the socket has access to the "users" namespace
- * });
- * ```
- */
-class Namespace extends typed_events_1.StrictEventEmitter {
- /**
- * Namespace constructor.
- *
- * @param server instance
- * @param name
- */
- constructor(server, name) {
- super();
- this.sockets = new Map();
- /** @private */
- this._fns = [];
- /** @private */
- this._ids = 0;
- this.server = server;
- this.name = name;
- this._initAdapter();
- }
- /**
- * Initializes the `Adapter` for this nsp.
- * Run upon changing adapter by `Server#adapter`
- * in addition to the constructor.
- *
- * @private
- */
- _initAdapter() {
- // @ts-ignore
- this.adapter = new (this.server.adapter())(this);
- }
- /**
- * Registers a middleware, which is a function that gets executed for every incoming {@link Socket}.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.use((socket, next) => {
- * // ...
- * next();
- * });
- *
- * @param fn - the middleware function
- */
- use(fn) {
- this._fns.push(fn);
- return this;
- }
- /**
- * Executes the middleware for an incoming client.
- *
- * @param socket - the socket that will get added
- * @param fn - last fn call in the middleware
- * @private
- */
- run(socket, fn) {
- const fns = this._fns.slice(0);
- if (!fns.length)
- return fn(null);
- function run(i) {
- fns[i](socket, function (err) {
- // upon error, short-circuit
- if (err)
- return fn(err);
- // if no middleware left, summon callback
- if (!fns[i + 1])
- return fn(null);
- // go on to next
- run(i + 1);
- });
- }
- run(0);
- }
- /**
- * Targets a room when emitting.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // the “foo” event will be broadcast to all connected clients in the “room-101” room
- * myNamespace.to("room-101").emit("foo", "bar");
- *
- * // with an array of rooms (a client will be notified at most once)
- * myNamespace.to(["room-101", "room-102"]).emit("foo", "bar");
- *
- * // with multiple chained calls
- * myNamespace.to("room-101").to("room-102").emit("foo", "bar");
- *
- * @param room - a room, or an array of rooms
- * @return a new {@link BroadcastOperator} instance for chaining
- */
- to(room) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).to(room);
- }
- /**
- * Targets a room when emitting. Similar to `to()`, but might feel clearer in some cases:
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // disconnect all clients in the "room-101" room
- * myNamespace.in("room-101").disconnectSockets();
- *
- * @param room - a room, or an array of rooms
- * @return a new {@link BroadcastOperator} instance for chaining
- */
- in(room) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).in(room);
- }
- /**
- * Excludes a room when emitting.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // the "foo" event will be broadcast to all connected clients, except the ones that are in the "room-101" room
- * myNamespace.except("room-101").emit("foo", "bar");
- *
- * // with an array of rooms
- * myNamespace.except(["room-101", "room-102"]).emit("foo", "bar");
- *
- * // with multiple chained calls
- * myNamespace.except("room-101").except("room-102").emit("foo", "bar");
- *
- * @param room - a room, or an array of rooms
- * @return a new {@link BroadcastOperator} instance for chaining
- */
- except(room) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).except(room);
- }
- /**
- * Adds a new client.
- *
- * @return {Socket}
- * @private
- */
- async _add(client, auth, fn) {
- var _a;
- debug("adding socket to nsp %s", this.name);
- const socket = await this._createSocket(client, auth);
- if (
- // @ts-ignore
- ((_a = this.server.opts.connectionStateRecovery) === null || _a === void 0 ? void 0 : _a.skipMiddlewares) &&
- socket.recovered &&
- client.conn.readyState === "open") {
- return this._doConnect(socket, fn);
- }
- this.run(socket, (err) => {
- process.nextTick(() => {
- if ("open" !== client.conn.readyState) {
- debug("next called after client was closed - ignoring socket");
- socket._cleanup();
- return;
- }
- if (err) {
- debug("middleware error, sending CONNECT_ERROR packet to the client");
- socket._cleanup();
- if (client.conn.protocol === 3) {
- return socket._error(err.data || err.message);
- }
- else {
- return socket._error({
- message: err.message,
- data: err.data,
- });
- }
- }
- this._doConnect(socket, fn);
- });
- });
- }
- async _createSocket(client, auth) {
- const sessionId = auth.pid;
- const offset = auth.offset;
- if (
- // @ts-ignore
- this.server.opts.connectionStateRecovery &&
- typeof sessionId === "string" &&
- typeof offset === "string") {
- let session;
- try {
- session = await this.adapter.restoreSession(sessionId, offset);
- }
- catch (e) {
- debug("error while restoring session: %s", e);
- }
- if (session) {
- debug("connection state recovered for sid %s", session.sid);
- return new socket_1.Socket(this, client, auth, session);
- }
- }
- return new socket_1.Socket(this, client, auth);
- }
- _doConnect(socket, fn) {
- // track socket
- this.sockets.set(socket.id, socket);
- // it's paramount that the internal `onconnect` logic
- // fires before user-set events to prevent state order
- // violations (such as a disconnection before the connection
- // logic is complete)
- socket._onconnect();
- if (fn)
- fn(socket);
- // fire user-set events
- this.emitReserved("connect", socket);
- this.emitReserved("connection", socket);
- }
- /**
- * Removes a client. Called by each `Socket`.
- *
- * @private
- */
- _remove(socket) {
- if (this.sockets.has(socket.id)) {
- this.sockets.delete(socket.id);
- }
- else {
- debug("ignoring remove for %s", socket.id);
- }
- }
- /**
- * Emits to all connected clients.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.emit("hello", "world");
- *
- * // all serializable datastructures are supported (no need to call JSON.stringify)
- * myNamespace.emit("hello", 1, "2", { 3: ["4"], 5: Uint8Array.from([6]) });
- *
- * // with an acknowledgement from the clients
- * myNamespace.timeout(1000).emit("some-event", (err, responses) => {
- * if (err) {
- * // some clients did not acknowledge the event in the given delay
- * } else {
- * console.log(responses); // one response per client
- * }
- * });
- *
- * @return Always true
- */
- emit(ev, ...args) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).emit(ev, ...args);
- }
- /**
- * Emits an event and waits for an acknowledgement from all clients.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * try {
- * const responses = await myNamespace.timeout(1000).emitWithAck("some-event");
- * console.log(responses); // one response per client
- * } catch (e) {
- * // some clients did not acknowledge the event in the given delay
- * }
- *
- * @return a Promise that will be fulfilled when all clients have acknowledged the event
- */
- emitWithAck(ev, ...args) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).emitWithAck(ev, ...args);
- }
- /**
- * Sends a `message` event to all clients.
- *
- * This method mimics the WebSocket.send() method.
- *
- * @see https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/send
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.send("hello");
- *
- * // this is equivalent to
- * myNamespace.emit("message", "hello");
- *
- * @return self
- */
- send(...args) {
- this.emit("message", ...args);
- return this;
- }
- /**
- * Sends a `message` event to all clients. Alias of {@link send}.
- *
- * @return self
- */
- write(...args) {
- this.emit("message", ...args);
- return this;
- }
- /**
- * Sends a message to the other Socket.IO servers of the cluster.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.serverSideEmit("hello", "world");
- *
- * myNamespace.on("hello", (arg1) => {
- * console.log(arg1); // prints "world"
- * });
- *
- * // acknowledgements (without binary content) are supported too:
- * myNamespace.serverSideEmit("ping", (err, responses) => {
- * if (err) {
- * // some servers did not acknowledge the event in the given delay
- * } else {
- * console.log(responses); // one response per server (except the current one)
- * }
- * });
- *
- * myNamespace.on("ping", (cb) => {
- * cb("pong");
- * });
- *
- * @param ev - the event name
- * @param args - an array of arguments, which may include an acknowledgement callback at the end
- */
- serverSideEmit(ev, ...args) {
- if (exports.RESERVED_EVENTS.has(ev)) {
- throw new Error(`"${String(ev)}" is a reserved event name`);
- }
- args.unshift(ev);
- this.adapter.serverSideEmit(args);
- return true;
- }
- /**
- * Sends a message and expect an acknowledgement from the other Socket.IO servers of the cluster.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * try {
- * const responses = await myNamespace.serverSideEmitWithAck("ping");
- * console.log(responses); // one response per server (except the current one)
- * } catch (e) {
- * // some servers did not acknowledge the event in the given delay
- * }
- *
- * @param ev - the event name
- * @param args - an array of arguments
- *
- * @return a Promise that will be fulfilled when all servers have acknowledged the event
- */
- serverSideEmitWithAck(ev, ...args) {
- return new Promise((resolve, reject) => {
- args.push((err, responses) => {
- if (err) {
- err.responses = responses;
- return reject(err);
- }
- else {
- return resolve(responses);
- }
- });
- this.serverSideEmit(ev, ...args);
- });
- }
- /**
- * Called when a packet is received from another Socket.IO server
- *
- * @param args - an array of arguments, which may include an acknowledgement callback at the end
- *
- * @private
- */
- _onServerSideEmit(args) {
- super.emitUntyped.apply(this, args);
- }
- /**
- * Gets a list of clients.
- *
- * @deprecated this method will be removed in the next major release, please use {@link Namespace#serverSideEmit} or
- * {@link Namespace#fetchSockets} instead.
- */
- allSockets() {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).allSockets();
- }
- /**
- * Sets the compress flag.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.compress(false).emit("hello");
- *
- * @param compress - if `true`, compresses the sending data
- * @return self
- */
- compress(compress) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).compress(compress);
- }
- /**
- * Sets a modifier for a subsequent event emission: the event data may be lost if the client is not ready to
- * receive messages (because of network slowness or other issues, or because it is connected through long polling
- * and is in the middle of a request-response cycle).
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.volatile.emit("hello"); // the clients may or may not receive it
- *
- * @return self
- */
- get volatile() {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).volatile;
- }
- /**
- * Sets a modifier for a subsequent event emission: the event data will only be broadcast to the current node.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // the “foo” event will be broadcast to all connected clients on this node
- * myNamespace.local.emit("foo", "bar");
- *
- * @return a new {@link BroadcastOperator} instance for chaining
- */
- get local() {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).local;
- }
- /**
- * Adds a timeout in milliseconds for the next operation.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * myNamespace.timeout(1000).emit("some-event", (err, responses) => {
- * if (err) {
- * // some clients did not acknowledge the event in the given delay
- * } else {
- * console.log(responses); // one response per client
- * }
- * });
- *
- * @param timeout
- */
- timeout(timeout) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).timeout(timeout);
- }
- /**
- * Returns the matching socket instances.
- *
- * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // return all Socket instances
- * const sockets = await myNamespace.fetchSockets();
- *
- * // return all Socket instances in the "room1" room
- * const sockets = await myNamespace.in("room1").fetchSockets();
- *
- * for (const socket of sockets) {
- * console.log(socket.id);
- * console.log(socket.handshake);
- * console.log(socket.rooms);
- * console.log(socket.data);
- *
- * socket.emit("hello");
- * socket.join("room1");
- * socket.leave("room2");
- * socket.disconnect();
- * }
- */
- fetchSockets() {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).fetchSockets();
- }
- /**
- * Makes the matching socket instances join the specified rooms.
- *
- * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // make all socket instances join the "room1" room
- * myNamespace.socketsJoin("room1");
- *
- * // make all socket instances in the "room1" room join the "room2" and "room3" rooms
- * myNamespace.in("room1").socketsJoin(["room2", "room3"]);
- *
- * @param room - a room, or an array of rooms
- */
- socketsJoin(room) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).socketsJoin(room);
- }
- /**
- * Makes the matching socket instances leave the specified rooms.
- *
- * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // make all socket instances leave the "room1" room
- * myNamespace.socketsLeave("room1");
- *
- * // make all socket instances in the "room1" room leave the "room2" and "room3" rooms
- * myNamespace.in("room1").socketsLeave(["room2", "room3"]);
- *
- * @param room - a room, or an array of rooms
- */
- socketsLeave(room) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).socketsLeave(room);
- }
- /**
- * Makes the matching socket instances disconnect.
- *
- * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}.
- *
- * @example
- * const myNamespace = io.of("/my-namespace");
- *
- * // make all socket instances disconnect (the connections might be kept alive for other namespaces)
- * myNamespace.disconnectSockets();
- *
- * // make all socket instances in the "room1" room disconnect and close the underlying connections
- * myNamespace.in("room1").disconnectSockets(true);
- *
- * @param close - whether to close the underlying connection
- */
- disconnectSockets(close = false) {
- return new broadcast_operator_1.BroadcastOperator(this.adapter).disconnectSockets(close);
- }
-}
-exports.Namespace = Namespace;
diff --git a/spaces/flax-community/chef-transformer/utils/api.py b/spaces/flax-community/chef-transformer/utils/api.py
deleted file mode 100644
index baeb8ee176276ed83c72ab2d477520666e14bb77..0000000000000000000000000000000000000000
--- a/spaces/flax-community/chef-transformer/utils/api.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import random
-import requests
-
-
-def generate_cook_image(query, app_id, app_key):
- api_url = f"https://api.edamam.com/api/recipes/v2?type=public&q={query}&app_id={app_id}&app_key={app_key}&field=image"
-
- try:
- r = requests.get(api_url)
- if r.status_code != 200:
- return None
-
- rj = r.json()
- if "hits" not in rj or not len(rj["hits"]) > 0:
- return None
-
- data = rj["hits"]
- data = data[random.randint(1, min(5, len(data) - 1))] if len(data) > 1 else data[0]
-
- if "recipe" not in data or "image" not in data["recipe"]:
- return None
-
- image = data["recipe"]["image"]
- return image
- except Exception:
- # network or JSON errors: silently fall back to "no image"
- return None
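A minimal sketch of how the `generate_cook_image` helper above could be called; the import path and the Edamam credentials are placeholders, not values taken from the Space.

```python
# Hypothetical import path; in the Space the function lives in utils/api.py.
from utils.api import generate_cook_image

image_url = generate_cook_image(
    query="chocolate cake",
    app_id="YOUR_EDAMAM_APP_ID",    # placeholder credential
    app_key="YOUR_EDAMAM_APP_KEY",  # placeholder credential
)
print(image_url or "No image found (or the API call failed).")
```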
diff --git a/spaces/florim/MedGPT/autogpt/json_utils/__init__.py b/spaces/florim/MedGPT/autogpt/json_utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/flowerpixel/tashachan28-ranma_diffusion/app.py b/spaces/flowerpixel/tashachan28-ranma_diffusion/app.py
deleted file mode 100644
index 0090647164922fa5133e984d13aee14e825931d9..0000000000000000000000000000000000000000
--- a/spaces/flowerpixel/tashachan28-ranma_diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/tashachan28/ranma_diffusion").launch()
\ No newline at end of file
diff --git a/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/folium.py b/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/folium.py
deleted file mode 100644
index a23eba5d640413c8a8630e6f5c2675282f8337f3..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/gradio_folium/src/backend/gradio_folium/folium.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from __future__ import annotations
-
-from typing import Any, Callable
-from gradio.components.base import Component
-from folium import Map
-from gradio.data_classes import FileData
-from tempfile import NamedTemporaryFile
-
-class Folium(Component):
- data_model = FileData
-
- def __init__(self, value: Any = None,
- *,
- height: int | None = None,
- label: str | None = None,
- container: bool = True,
- scale: int | None = None,
- min_width: int | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- render: bool = True,
- root_url: str | None = None,
- _skip_init_processing: bool = False,
- load_fn: Callable[..., Any] | None = None,
- every: float | None = None):
- super().__init__(value, label=label, info=None, show_label=True,
- container=container, scale=scale, min_width=min_width,
- visible=visible, elem_id=elem_id, elem_classes=elem_classes,
- render=render, root_url=root_url,
- _skip_init_processing=_skip_init_processing,
- load_fn=load_fn, every=every)
- self.height = height
- def preprocess(self, x):
- return x
-
- def postprocess(self, x: Map):
- if not x:
- return None
- with NamedTemporaryFile(suffix=".html", delete=False) as tmp:
- x.save(tmp.name)
- return FileData(name=tmp.name, is_file=True)
-
- def example_inputs(self):
- return {"info": "Do not use as input"}
-
- def api_info(self):
- return {"type": {}, "description": "any valid json"}
diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/local.py b/spaces/fuckyoudeki/AutoGPT/autogpt/memory/local.py
deleted file mode 100644
index 803b6dc6ebb430285f423cda592fa3e902e9a4a6..0000000000000000000000000000000000000000
--- a/spaces/fuckyoudeki/AutoGPT/autogpt/memory/local.py
+++ /dev/null
@@ -1,136 +0,0 @@
-from __future__ import annotations
-
-import dataclasses
-import os
-from typing import Any, List
-
-import numpy as np
-import orjson
-
-from autogpt.llm_utils import create_embedding_with_ada
-from autogpt.memory.base import MemoryProviderSingleton
-
-EMBED_DIM = 1536
-SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS
-
-
-def create_default_embeddings():
- return np.zeros((0, EMBED_DIM)).astype(np.float32)
-
-
-@dataclasses.dataclass
-class CacheContent:
- texts: List[str] = dataclasses.field(default_factory=list)
- embeddings: np.ndarray = dataclasses.field(
- default_factory=create_default_embeddings
- )
-
-
-class LocalCache(MemoryProviderSingleton):
- """A class that stores the memory in a local file"""
-
- def __init__(self, cfg) -> None:
- """Initialize a class instance
-
- Args:
- cfg: Config object
-
- Returns:
- None
- """
- self.filename = f"{cfg.memory_index}.json"
- if os.path.exists(self.filename):
- try:
- with open(self.filename, "w+b") as f:
- file_content = f.read()
- if not file_content.strip():
- file_content = b"{}"
- f.write(file_content)
-
- loaded = orjson.loads(file_content)
- self.data = CacheContent(**loaded)
- except orjson.JSONDecodeError:
- print(f"Error: The file '{self.filename}' is not in JSON format.")
- self.data = CacheContent()
- else:
- print(
- f"Warning: The file '{self.filename}' does not exist. "
- "Local memory would not be saved to a file."
- )
- self.data = CacheContent()
-
- def add(self, text: str):
- """
- Add text to our list of texts, add embedding as row to our
- embeddings-matrix
-
- Args:
- text: str
-
- Returns: None
- """
- if "Command Error:" in text:
- return ""
- self.data.texts.append(text)
-
- embedding = create_embedding_with_ada(text)
-
- vector = np.array(embedding).astype(np.float32)
- vector = vector[np.newaxis, :]
- self.data.embeddings = np.concatenate(
- [
- self.data.embeddings,
- vector,
- ],
- axis=0,
- )
-
- with open(self.filename, "wb") as f:
- out = orjson.dumps(self.data, option=SAVE_OPTIONS)
- f.write(out)
- return text
-
- def clear(self) -> str:
- """
- Clears the local memory cache.
-
- Returns: A message indicating that the memory has been cleared.
- """
- self.data = CacheContent()
- return "Obliviated"
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
-
- Args:
- data: The data to compare to.
-
- Returns: The most relevant data.
- """
- return self.get_relevant(data, 1)
-
- def get_relevant(self, text: str, k: int) -> list[Any]:
- """ "
- matrix-vector mult to find score-for-each-row-of-matrix
- get indices for top-k winning scores
- return texts for those indices
- Args:
- text: str
- k: int
-
- Returns: List[str]
- """
- embedding = create_embedding_with_ada(text)
-
- scores = np.dot(self.data.embeddings, embedding)
-
- top_k_indices = np.argsort(scores)[-k:][::-1]
-
- return [self.data.texts[i] for i in top_k_indices]
-
- def get_stats(self) -> tuple[int, tuple[int, ...]]:
- """
- Returns: The stats of the local cache.
- """
- return len(self.data.texts), self.data.embeddings.shape
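The core of `get_relevant` above is a dot-product ranking over the stored embeddings. A self-contained sketch, with random vectors standing in for `create_embedding_with_ada` output:

```python
import numpy as np

EMBED_DIM = 1536
# Stand-ins for the stored embeddings matrix and a query embedding.
embeddings = np.random.rand(10, EMBED_DIM).astype(np.float32)
query = np.random.rand(EMBED_DIM).astype(np.float32)

scores = embeddings @ query              # one score per stored text
top_k = np.argsort(scores)[-3:][::-1]    # indices of the 3 best scores, highest first
print(top_k)
```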
diff --git a/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h b/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h
deleted file mode 100644
index 4f0658e8668a11f0e7d71deff9adac71884f2e87..0000000000000000000000000000000000000000
--- a/spaces/fun-research/FC-CLIP/fcclip/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-/*!
-* Copyright (c) Facebook, Inc. and its affiliates.
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-at::Tensor ms_deform_attn_cuda_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step);
-
-std::vector<at::Tensor> ms_deform_attn_cuda_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step);
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Feeding Frenzy 2 Deluxe Download !!BETTER!! For Pc [Crack Serial Key.md b/spaces/gotiQspiryo/whisper-ui/examples/Feeding Frenzy 2 Deluxe Download !!BETTER!! For Pc [Crack Serial Key.md
deleted file mode 100644
index da1c974ad357c67b4b565955e7e6d220f33d2cb5..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Feeding Frenzy 2 Deluxe Download !!BETTER!! For Pc [Crack Serial Key.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Feeding Frenzy 2 Deluxe download for pc [Crack Serial Key
- );
-};
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/__init__.py b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/__init__.py
deleted file mode 100644
index 3678b790f5e025f8943eee49e9dafa2489dce867..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-__version__ = '0.2.5'
diff --git a/spaces/haakohu/deep_privacy2/dp2/utils/bufferless_video_capture.py b/spaces/haakohu/deep_privacy2/dp2/utils/bufferless_video_capture.py
deleted file mode 100644
index dd5e1006057706f32c6adaeb812bf4834bbdfd28..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2/dp2/utils/bufferless_video_capture.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import queue
-import threading
-import cv2
-
-
-class BufferlessVideoCapture:
-
- def __init__(self, name, width=None, height=None):
- self.cap = cv2.VideoCapture(name)
- if width is not None and height is not None:
- self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
- self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
- self.q = queue.Queue()
- t = threading.Thread(target=self._reader)
- t.daemon = True
- t.start()
-
- # read frames as soon as they are available, keeping only most recent one
- def _reader(self):
- while True:
- ret, frame = self.cap.read()
- if not ret:
- break
- if not self.q.empty():
- try:
- self.q.get_nowait() # discard previous (unprocessed) frame
- except queue.Empty:
- pass
- self.q.put((ret, frame))
-
- def read(self):
- return self.q.get()
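A usage sketch for `BufferlessVideoCapture`, assuming the module is importable under the path shown in the diff header and that device index 0 is a local webcam:

```python
import cv2
from dp2.utils.bufferless_video_capture import BufferlessVideoCapture  # assumed import path

cap = BufferlessVideoCapture(0, width=640, height=480)
while True:
    ret, frame = cap.read()  # always the most recent frame; stale frames were discarded
    if not ret:
        break
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
```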
diff --git "a/spaces/hands012/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" "b/spaces/hands012/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
deleted file mode 100644
index cbda23b83d759e6a3a4da5847c37ddff662daab2..0000000000000000000000000000000000000000
--- "a/spaces/hands012/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
+++ /dev/null
@@ -1,166 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-import re
-import unicodedata
-fast_debug = False
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
-def is_paragraph_break(match):
- """
- Decides, for a given regex match, whether a newline marks a paragraph break.
- If the character before the newline is a sentence terminator (".", "!", "?") and the next character is uppercase, the newline is more likely to mark a paragraph break.
- The length of the preceding content can also be used to require that the paragraph is long enough.
- """
- prev_char, next_char = match.groups()
-
- # sentence-ending punctuation
- sentence_endings = ".!?"
-
- # minimum paragraph length threshold
- min_paragraph_length = 140
-
- if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
- return "\n\n"
- else:
- return " "
-
-def normalize_text(text):
- """
- Normalizes the text by converting ligatures and other special glyphs to their basic forms.
- For example, the ligature "fi" is decomposed into "f" and "i".
- """
- # normalize the text and decompose ligatures
- normalized_text = unicodedata.normalize("NFKD", text)
-
- # strip remaining non-ASCII characters
- cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
-
- return cleaned_text
-
-def clean_text(raw_text):
- """
- Cleans and formats the raw text extracted from a PDF.
- 1. Normalize the raw text.
- 2. Re-join words hyphenated across line breaks.
- 3. Use heuristic rules to decide whether each newline is a paragraph break, and replace it accordingly.
- """
- # normalize the text
- normalized_text = normalize_text(raw_text)
-
- # re-join words hyphenated across line breaks
- text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
-
- # locate newlines in the original text based on the characters around them
- newlines = re.compile(r'(\S)\n(\S)')
-
- # replace each newline with a space or a paragraph separator according to the heuristic
- final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
-
- return final_text.strip()
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, glob, os, fitz
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- with fitz.open(fp) as doc:
- file_content = ""
- for page in doc:
- file_content += page.get_text()
- file_content = clean_text(file_content)
- print(file_content)
-
- prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
- i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=[],
- sys_prompt="总结文章。"
- ) # 带超时倒计时
-
-
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- if not fast_debug: time.sleep(2)
-
- all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
- i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=history,
- sys_prompt="总结文章。"
- ) # 带超时倒计时
-
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
-
-
-@CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- import glob, os
-
- # basic info: feature description and contributors
- chatbot.append([
- "函数插件功能?",
- "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"])
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # try to import the dependency; if it is missing, suggest how to install it
- try:
- import fitz
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # clear the history to avoid overflowing the input
- history = []
-
- # validate the input argument; exit immediately if none was given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # build the list of files to process
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
- # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
- # if no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
-
- # start the actual task
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
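The plugin above boils down to: extract page text with PyMuPDF, clean it, then send chunks to the model. A minimal sketch of the extraction step (the file name is hypothetical; requires `pip install pymupdf`):

```python
import fitz  # PyMuPDF

def extract_pdf_text(path: str) -> str:
    # Concatenate the text of every page, as 解析PDF does before calling clean_text().
    with fitz.open(path) as doc:
        return "".join(page.get_text() for page in doc)

# raw = extract_pdf_text("paper.pdf")   # hypothetical file
# cleaned = clean_text(raw)             # clean_text as defined in the file above
```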
diff --git a/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/chainlit.md b/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/chainlit.md
deleted file mode 100644
index 78b573aa6a8c31b305db78c7e8849842daeeb7e8..0000000000000000000000000000000000000000
--- a/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/chainlit.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Assignment Part 2: Deploying Your Model to a Hugging Face Space
-
-Now that you've done the hard work of setting up the RetrievalQA chain and sourcing your documents - let's tie it together in a ChainLit application.
-
-### Duplicating the Space
-
-Since this is our first assignment, all you'll need to do is duplicate this space and add your own `OPENAI_API_KEY` as a secret in the space.
-
-### Conclusion
-
-Now that you've shipped an LLM-powered application, it's time to share! 🚀
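On a duplicated Space, secrets are exposed to the running app as environment variables; a minimal sketch of reading the key named above:

```python
import os

openai_api_key = os.environ.get("OPENAI_API_KEY")
if openai_api_key is None:
    raise RuntimeError("Add OPENAI_API_KEY as a secret in your Space settings.")
```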
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/visualizer.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/visualizer.py
deleted file mode 100644
index 3ffcbdbd19518bce877a776582a7caeddc18108e..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/visualizer.py
+++ /dev/null
@@ -1,1143 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import colorsys
-import logging
-import math
-import numpy as np
-from enum import Enum, unique
-import cv2
-import matplotlib as mpl
-import matplotlib.colors as mplc
-import matplotlib.figure as mplfigure
-import pycocotools.mask as mask_util
-import torch
-from fvcore.common.file_io import PathManager
-from matplotlib.backends.backend_agg import FigureCanvasAgg
-from PIL import Image
-
-from detectron2.structures import BitMasks, Boxes, BoxMode, Keypoints, PolygonMasks, RotatedBoxes
-
-from .colormap import random_color
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["ColorMode", "VisImage", "Visualizer"]
-
-
-_SMALL_OBJECT_AREA_THRESH = 1000
-_LARGE_MASK_AREA_THRESH = 120000
-_OFF_WHITE = (1.0, 1.0, 240.0 / 255)
-_BLACK = (0, 0, 0)
-_RED = (1.0, 0, 0)
-
-_KEYPOINT_THRESHOLD = 0.05
-
-
-@unique
-class ColorMode(Enum):
- """
- Enum of different color modes to use for instance visualizations.
- """
-
- IMAGE = 0
- """
- Picks a random color for every instance and overlay segmentations with low opacity.
- """
- SEGMENTATION = 1
- """
- Let instances of the same category have similar colors
- (from metadata.thing_colors), and overlay them with
- high opacity. This provides more attention on the quality of segmentation.
- """
- IMAGE_BW = 2
- """
- Same as IMAGE, but convert all areas without masks to gray-scale.
- Only available for drawing per-instance mask predictions.
- """
-
-
-class GenericMask:
- """
- Attribute:
- polygons (list[ndarray]): list[ndarray]: polygons for this mask.
- Each ndarray has format [x, y, x, y, ...]
- mask (ndarray): a binary mask
- """
-
- def __init__(self, mask_or_polygons, height, width):
- self._mask = self._polygons = self._has_holes = None
- self.height = height
- self.width = width
-
- m = mask_or_polygons
- if isinstance(m, dict):
- # RLEs
- assert "counts" in m and "size" in m
- if isinstance(m["counts"], list): # uncompressed RLEs
- h, w = m["size"]
- assert h == height and w == width
- m = mask_util.frPyObjects(m, h, w)
- self._mask = mask_util.decode(m)[:, :]
- return
-
- if isinstance(m, list): # list[ndarray]
- self._polygons = [np.asarray(x).reshape(-1) for x in m]
- return
-
- if isinstance(m, np.ndarray): # assumed to be a binary mask
- assert m.shape[1] != 2, m.shape
- assert m.shape == (height, width), m.shape
- self._mask = m.astype("uint8")
- return
-
- raise ValueError("GenericMask cannot handle object {} of type '{}'".format(m, type(m)))
-
- @property
- def mask(self):
- if self._mask is None:
- self._mask = self.polygons_to_mask(self._polygons)
- return self._mask
-
- @property
- def polygons(self):
- if self._polygons is None:
- self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
- return self._polygons
-
- @property
- def has_holes(self):
- if self._has_holes is None:
- if self._mask is not None:
- self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
- else:
- self._has_holes = False # if original format is polygon, does not have holes
- return self._has_holes
-
- def mask_to_polygons(self, mask):
- # cv2.RETR_CCOMP flag retrieves all the contours and arranges them to a 2-level
- # hierarchy. External contours (boundary) of the object are placed in hierarchy-1.
- # Internal contours (holes) are placed in hierarchy-2.
- # cv2.CHAIN_APPROX_NONE flag gets vertices of polygons from contours.
- mask = np.ascontiguousarray(mask)  # some versions of cv2 do not support non-contiguous arrays
- res = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
- hierarchy = res[-1]
- if hierarchy is None: # empty mask
- return [], False
- has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0
- res = res[-2]
- res = [x.flatten() for x in res]
- res = [x for x in res if len(x) >= 6]
- return res, has_holes
-
- def polygons_to_mask(self, polygons):
- rle = mask_util.frPyObjects(polygons, self.height, self.width)
- rle = mask_util.merge(rle)
- return mask_util.decode(rle)[:, :]
-
- def area(self):
- return self.mask.sum()
-
- def bbox(self):
- p = mask_util.frPyObjects(self.polygons, self.height, self.width)
- p = mask_util.merge(p)
- bbox = mask_util.toBbox(p)
- bbox[2] += bbox[0]
- bbox[3] += bbox[1]
- return bbox
-
-
-class _PanopticPrediction:
- def __init__(self, panoptic_seg, segments_info):
- self._seg = panoptic_seg
-
- self._sinfo = {s["id"]: s for s in segments_info} # seg id -> seg info
- segment_ids, areas = torch.unique(panoptic_seg, sorted=True, return_counts=True)
- areas = areas.numpy()
- sorted_idxs = np.argsort(-areas)
- self._seg_ids, self._seg_areas = segment_ids[sorted_idxs], areas[sorted_idxs]
- self._seg_ids = self._seg_ids.tolist()
- for sid, area in zip(self._seg_ids, self._seg_areas):
- if sid in self._sinfo:
- self._sinfo[sid]["area"] = float(area)
-
- def non_empty_mask(self):
- """
- Returns:
- (H, W) array, a mask for all pixels that have a prediction
- """
- empty_ids = []
- for id in self._seg_ids:
- if id not in self._sinfo:
- empty_ids.append(id)
- if len(empty_ids) == 0:
- return np.zeros(self._seg.shape, dtype=np.uint8)
- assert (
- len(empty_ids) == 1
- ), ">1 ids corresponds to no labels. This is currently not supported"
- return (self._seg != empty_ids[0]).numpy().astype(np.bool)
-
- def semantic_masks(self):
- for sid in self._seg_ids:
- sinfo = self._sinfo.get(sid)
- if sinfo is None or sinfo["isthing"]:
- # Some pixels (e.g. id 0 in PanopticFPN) have no instance or semantic predictions.
- continue
- yield (self._seg == sid).numpy().astype(np.bool), sinfo
-
- def instance_masks(self):
- for sid in self._seg_ids:
- sinfo = self._sinfo.get(sid)
- if sinfo is None or not sinfo["isthing"]:
- continue
- mask = (self._seg == sid).numpy().astype(np.bool)
- if mask.sum() > 0:
- yield mask, sinfo
-
-
-def _create_text_labels(classes, scores, class_names):
- """
- Args:
- classes (list[int] or None):
- scores (list[float] or None):
- class_names (list[str] or None):
-
- Returns:
- list[str] or None
- """
- labels = None
- if classes is not None and class_names is not None and len(class_names) > 1:
- labels = [class_names[i] for i in classes]
- if scores is not None:
- if labels is None:
- labels = ["{:.0f}%".format(s * 100) for s in scores]
- else:
- labels = ["{} {:.0f}%".format(l, s * 100) for l, s in zip(labels, scores)]
- return labels
-
-
-class VisImage:
- def __init__(self, img, scale=1.0):
- """
- Args:
- img (ndarray): an RGB image of shape (H, W, 3).
- scale (float): scale the input image
- """
- self.img = img
- self.scale = scale
- self.width, self.height = img.shape[1], img.shape[0]
- self._setup_figure(img)
-
- def _setup_figure(self, img):
- """
- Args:
- Same as in :meth:`__init__()`.
-
- Returns:
- fig (matplotlib.pyplot.figure): top level container for all the image plot elements.
- ax (matplotlib.pyplot.Axes): contains figure elements and sets the coordinate system.
- """
- fig = mplfigure.Figure(frameon=False)
- self.dpi = fig.get_dpi()
- # add a small 1e-2 to avoid precision lost due to matplotlib's truncation
- # (https://github.com/matplotlib/matplotlib/issues/15363)
- fig.set_size_inches(
- (self.width * self.scale + 1e-2) / self.dpi,
- (self.height * self.scale + 1e-2) / self.dpi,
- )
- self.canvas = FigureCanvasAgg(fig)
- # self.canvas = mpl.backends.backend_cairo.FigureCanvasCairo(fig)
- ax = fig.add_axes([0.0, 0.0, 1.0, 1.0])
- ax.axis("off")
- ax.set_xlim(0.0, self.width)
- ax.set_ylim(self.height)
-
- self.fig = fig
- self.ax = ax
-
- def save(self, filepath):
- """
- Args:
- filepath (str): a string that contains the absolute path, including the file name, where
- the visualized image will be saved.
- """
- if filepath.lower().endswith(".jpg") or filepath.lower().endswith(".png"):
- # faster than matplotlib's imshow
- cv2.imwrite(filepath, self.get_image()[:, :, ::-1])
- else:
- # support general formats (e.g. pdf)
- self.ax.imshow(self.img, interpolation="nearest")
- self.fig.savefig(filepath)
-
- def get_image(self):
- """
- Returns:
- ndarray:
- the visualized image of shape (H, W, 3) (RGB) in uint8 type.
- The shape is scaled w.r.t the input image using the given `scale` argument.
- """
- canvas = self.canvas
- s, (width, height) = canvas.print_to_buffer()
- if (self.width, self.height) != (width, height):
- img = cv2.resize(self.img, (width, height))
- else:
- img = self.img
-
- # buf = io.BytesIO() # works for cairo backend
- # canvas.print_rgba(buf)
- # width, height = self.width, self.height
- # s = buf.getvalue()
-
- buffer = np.frombuffer(s, dtype="uint8")
-
- # imshow is slow. blend manually (still quite slow)
- img_rgba = buffer.reshape(height, width, 4)
- rgb, alpha = np.split(img_rgba, [3], axis=2)
-
- try:
- import numexpr as ne # fuse them with numexpr
-
- visualized_image = ne.evaluate("demo * (1 - alpha / 255.0) + rgb * (alpha / 255.0)")
- except ImportError:
- alpha = alpha.astype("float32") / 255.0
- visualized_image = img * (1 - alpha) + rgb * alpha
-
- visualized_image = visualized_image.astype("uint8")
-
- return visualized_image
-
-
-class Visualizer:
- def __init__(self, img_rgb, metadata, scale=1.0, instance_mode=ColorMode.IMAGE):
- """
- Args:
- img_rgb: a numpy array of shape (H, W, C), where H and W correspond to
- the height and width of the image respectively. C is the number of
- color channels. The image is required to be in RGB format since that
- is a requirement of the Matplotlib library. The image is also expected
- to be in the range [0, 255].
- metadata (MetadataCatalog): image metadata.
- """
- self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8)
- self.metadata = metadata
- self.output = VisImage(self.img, scale=scale)
- self.cpu_device = torch.device("cpu")
-
- # too small texts are useless, therefore clamp to 9
- self._default_font_size = max(
- np.sqrt(self.output.height * self.output.width) // 90, 10 // scale
- )
- self._instance_mode = instance_mode
-
- def draw_instance_predictions(self, predictions):
- """
- Draw instance-level prediction results on an image.
-
- Args:
- predictions (Instances): the output of an instance detection/segmentation
- model. Following fields will be used to draw:
- "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle").
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None
- scores = predictions.scores if predictions.has("scores") else None
- classes = predictions.pred_classes if predictions.has("pred_classes") else None
- labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None))
- keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None
-
- if predictions.has("pred_masks"):
- masks = np.asarray(predictions.pred_masks)
- masks = [GenericMask(x, self.output.height, self.output.width) for x in masks]
- else:
- masks = None
-
- if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"):
- colors = [
- self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes
- ]
- alpha = 0.8
- else:
- colors = None
- alpha = 0.5
-
- if self._instance_mode == ColorMode.IMAGE_BW:
- self.output.img = self._create_grayscale_image(
- (predictions.pred_masks.any(dim=0) > 0).numpy()
- )
- alpha = 0.3
-
- self.overlay_instances(
- masks=masks,
- boxes=boxes,
- labels=labels,
- keypoints=keypoints,
- assigned_colors=colors,
- alpha=alpha,
- )
- return self.output
-
- def draw_sem_seg(self, sem_seg, area_threshold=None, alpha=0.8):
- """
- Draw semantic segmentation predictions/labels.
-
- Args:
- sem_seg (Tensor or ndarray): the segmentation of shape (H, W).
- Each value is the integer label of the pixel.
- area_threshold (int): segments with less than `area_threshold` are not drawn.
- alpha (float): the larger it is, the more opaque the segmentations are.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- if isinstance(sem_seg, torch.Tensor):
- sem_seg = sem_seg.numpy()
- labels, areas = np.unique(sem_seg, return_counts=True)
- sorted_idxs = np.argsort(-areas).tolist()
- labels = labels[sorted_idxs]
- for label in filter(lambda l: l < len(self.metadata.stuff_classes), labels):
- try:
- mask_color = [x / 255 for x in self.metadata.stuff_colors[label]]
- except (AttributeError, IndexError):
- mask_color = None
-
- binary_mask = (sem_seg == label).astype(np.uint8)
- text = self.metadata.stuff_classes[label]
- self.draw_binary_mask(
- binary_mask,
- color=mask_color,
- edge_color=_OFF_WHITE,
- text=text,
- alpha=alpha,
- area_threshold=area_threshold,
- )
- return self.output
-
- def draw_panoptic_seg_predictions(
- self, panoptic_seg, segments_info, area_threshold=None, alpha=0.7
- ):
- """
- Draw panoptic prediction results on an image.
-
- Args:
- panoptic_seg (Tensor): of shape (height, width) where the values are ids for each
- segment.
- segments_info (list[dict]): Describe each segment in `panoptic_seg`.
- Each dict contains keys "id", "category_id", "isthing".
- area_threshold (int): stuff segments with less than `area_threshold` are not drawn.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- pred = _PanopticPrediction(panoptic_seg, segments_info)
-
- if self._instance_mode == ColorMode.IMAGE_BW:
- self.output.img = self._create_grayscale_image(pred.non_empty_mask())
-
- # draw mask for all semantic segments first i.e. "stuff"
- for mask, sinfo in pred.semantic_masks():
- category_idx = sinfo["category_id"]
- try:
- mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]]
- except AttributeError:
- mask_color = None
-
- text = self.metadata.stuff_classes[category_idx]
- self.draw_binary_mask(
- mask,
- color=mask_color,
- edge_color=_OFF_WHITE,
- text=text,
- alpha=alpha,
- area_threshold=area_threshold,
- )
-
- # draw mask for all instances second
- all_instances = list(pred.instance_masks())
- if len(all_instances) == 0:
- return self.output
- masks, sinfo = list(zip(*all_instances))
- category_ids = [x["category_id"] for x in sinfo]
-
- try:
- scores = [x["score"] for x in sinfo]
- except KeyError:
- scores = None
- labels = _create_text_labels(category_ids, scores, self.metadata.thing_classes)
-
- try:
- colors = [random_color(rgb=True, maximum=1) for k in category_ids]
- except AttributeError:
- colors = None
- self.overlay_instances(masks=masks, labels=labels, assigned_colors=colors, alpha=alpha)
-
- return self.output
-
- def draw_dataset_dict(self, dic):
- """
- Draw annotations/segmentaions in Detectron2 Dataset format.
-
- Args:
- dic (dict): annotation/segmentation data of one image, in Detectron2 Dataset format.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- annos = dic.get("annotations", None)
- if annos:
- if "segmentation" in annos[0]:
- masks = [x["segmentation"] for x in annos]
- else:
- masks = None
- if "keypoints" in annos[0]:
- keypts = [x["keypoints"] for x in annos]
- keypts = np.array(keypts).reshape(len(annos), -1, 3)
- else:
- keypts = None
-
- boxes = [BoxMode.convert(x["bbox"], x["bbox_mode"], BoxMode.XYXY_ABS) for x in annos]
-
- labels = [x["category_id"] for x in annos]
- colors = None
- if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"):
- colors = [
- self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in labels
- ]
- names = self.metadata.get("thing_classes", None)
- if names:
- labels = [names[i] for i in labels]
- labels = [
- "{}".format(i) + ("|crowd" if a.get("iscrowd", 0) else "")
- for i, a in zip(labels, annos)
- ]
- self.overlay_instances(
- labels=labels, boxes=boxes, masks=masks, keypoints=keypts, assigned_colors=colors
- )
-
- sem_seg = dic.get("sem_seg", None)
- if sem_seg is None and "sem_seg_file_name" in dic:
- with PathManager.open(dic["sem_seg_file_name"], "rb") as f:
- sem_seg = Image.open(f)
- sem_seg = np.asarray(sem_seg, dtype="uint8")
- if sem_seg is not None:
- self.draw_sem_seg(sem_seg, area_threshold=0, alpha=0.5)
- return self.output
-
- def overlay_instances(
- self,
- *,
- boxes=None,
- labels=None,
- masks=None,
- keypoints=None,
- assigned_colors=None,
- alpha=0.5
- ):
- """
- Args:
- boxes (Boxes, RotatedBoxes or ndarray): either a :class:`Boxes`,
- or an Nx4 numpy array of XYXY_ABS format for the N objects in a single image,
- or a :class:`RotatedBoxes`,
- or an Nx5 numpy array of (x_center, y_center, width, height, angle_degrees) format
- for the N objects in a single image,
- labels (list[str]): the text to be displayed for each instance.
- masks (masks-like object): Supported types are:
-
- * :class:`detectron2.structures.PolygonMasks`,
- :class:`detectron2.structures.BitMasks`.
- * list[list[ndarray]]: contains the segmentation masks for all objects in one image.
- The first level of the list corresponds to individual instances. The second
- level to all the polygon that compose the instance, and the third level
- to the polygon coordinates. The third level should have the format of
- [x0, y0, x1, y1, ..., xn, yn] (n >= 3).
- * list[ndarray]: each ndarray is a binary mask of shape (H, W).
- * list[dict]: each dict is a COCO-style RLE.
- keypoints (Keypoint or array like): an array-like object of shape (N, K, 3),
- where the N is the number of instances and K is the number of keypoints.
- The last dimension corresponds to (x, y, visibility or score).
- assigned_colors (list[matplotlib.colors]): a list of colors, where each color
- corresponds to each mask or box in the image. Refer to 'matplotlib.colors'
- for full list of formats that the colors are accepted in.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- num_instances = None
- if boxes is not None:
- boxes = self._convert_boxes(boxes)
- num_instances = len(boxes)
- if masks is not None:
- masks = self._convert_masks(masks)
- if num_instances:
- assert len(masks) == num_instances
- else:
- num_instances = len(masks)
- if keypoints is not None:
- if num_instances:
- assert len(keypoints) == num_instances
- else:
- num_instances = len(keypoints)
- keypoints = self._convert_keypoints(keypoints)
- if labels is not None:
- assert len(labels) == num_instances
- if assigned_colors is None:
- assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)]
- if num_instances == 0:
- return self.output
- if boxes is not None and boxes.shape[1] == 5:
- return self.overlay_rotated_instances(
- boxes=boxes, labels=labels, assigned_colors=assigned_colors
- )
-
- # Display in largest to smallest order to reduce occlusion.
- areas = None
- if boxes is not None:
- areas = np.prod(boxes[:, 2:] - boxes[:, :2], axis=1)
- elif masks is not None:
- areas = np.asarray([x.area() for x in masks])
-
- if areas is not None:
- sorted_idxs = np.argsort(-areas).tolist()
- # Re-order overlapped instances in descending order.
- boxes = boxes[sorted_idxs] if boxes is not None else None
- labels = [labels[k] for k in sorted_idxs] if labels is not None else None
- masks = [masks[idx] for idx in sorted_idxs] if masks is not None else None
- assigned_colors = [assigned_colors[idx] for idx in sorted_idxs]
- keypoints = keypoints[sorted_idxs] if keypoints is not None else None
-
- for i in range(num_instances):
- color = assigned_colors[i]
- if boxes is not None:
- self.draw_box(boxes[i], edge_color=color)
-
- if masks is not None:
- for segment in masks[i].polygons:
- self.draw_polygon(segment.reshape(-1, 2), color, alpha=alpha)
-
- if labels is not None:
- # first get a box
- if boxes is not None:
- x0, y0, x1, y1 = boxes[i]
- text_pos = (x0, y0) # if drawing boxes, put text on the box corner.
- horiz_align = "left"
- elif masks is not None:
- x0, y0, x1, y1 = masks[i].bbox()
-
- # draw text in the center (defined by median) when box is not drawn
- # median is less sensitive to outliers.
- text_pos = np.median(masks[i].mask.nonzero(), axis=1)[::-1]
- horiz_align = "center"
- else:
- continue # drawing the box confidence for keypoints isn't very useful.
- # for small objects, draw text at the side to avoid occlusion
- instance_area = (y1 - y0) * (x1 - x0)
- if (
- instance_area < _SMALL_OBJECT_AREA_THRESH * self.output.scale
- or y1 - y0 < 40 * self.output.scale
- ):
- if y1 >= self.output.height - 5:
- text_pos = (x1, y0)
- else:
- text_pos = (x0, y1)
-
- height_ratio = (y1 - y0) / np.sqrt(self.output.height * self.output.width)
- lighter_color = self._change_color_brightness(color, brightness_factor=0.7)
- font_size = (
- np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2)
- * 0.5
- * self._default_font_size
- )
- self.draw_text(
- labels[i],
- text_pos,
- color=lighter_color,
- horizontal_alignment=horiz_align,
- font_size=font_size,
- )
-
- # draw keypoints
- if keypoints is not None:
- for keypoints_per_instance in keypoints:
- self.draw_and_connect_keypoints(keypoints_per_instance)
-
- return self.output
-
- def overlay_rotated_instances(self, boxes=None, labels=None, assigned_colors=None):
- """
- Args:
- boxes (ndarray): an Nx5 numpy array of
- (x_center, y_center, width, height, angle_degrees) format
- for the N objects in a single image.
- labels (list[str]): the text to be displayed for each instance.
- assigned_colors (list[matplotlib.colors]): a list of colors, where each color
- corresponds to each mask or box in the image. Refer to 'matplotlib.colors'
- for full list of formats that the colors are accepted in.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
-
- num_instances = len(boxes)
-
- if assigned_colors is None:
- assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)]
- if num_instances == 0:
- return self.output
-
- # Display in largest to smallest order to reduce occlusion.
- if boxes is not None:
- areas = boxes[:, 2] * boxes[:, 3]
-
- sorted_idxs = np.argsort(-areas).tolist()
- # Re-order overlapped instances in descending order.
- boxes = boxes[sorted_idxs]
- labels = [labels[k] for k in sorted_idxs] if labels is not None else None
- colors = [assigned_colors[idx] for idx in sorted_idxs]
-
- for i in range(num_instances):
- self.draw_rotated_box_with_label(
- boxes[i], edge_color=colors[i], label=labels[i] if labels is not None else None
- )
-
- return self.output
-
- def draw_and_connect_keypoints(self, keypoints):
- """
- Draws keypoints of an instance and follows the rules for keypoint connections
- to draw lines between appropriate keypoints. This follows color heuristics for
- line color.
-
- Args:
- keypoints (Tensor): a tensor of shape (K, 3), where K is the number of keypoints
- and the last dimension corresponds to (x, y, probability).
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- visible = {}
- keypoint_names = self.metadata.get("keypoint_names")
- for idx, keypoint in enumerate(keypoints):
- # draw keypoint
- x, y, prob = keypoint
- if prob > _KEYPOINT_THRESHOLD:
- self.draw_circle((x, y), color=_RED)
- if keypoint_names:
- keypoint_name = keypoint_names[idx]
- visible[keypoint_name] = (x, y)
-
- if self.metadata.get("keypoint_connection_rules"):
- for kp0, kp1, color in self.metadata.keypoint_connection_rules:
- if kp0 in visible and kp1 in visible:
- x0, y0 = visible[kp0]
- x1, y1 = visible[kp1]
- color = tuple(x / 255.0 for x in color)
- self.draw_line([x0, x1], [y0, y1], color=color)
-
- # draw lines from nose to mid-shoulder and mid-shoulder to mid-hip
- # Note that this strategy is specific to person keypoints.
- # For other keypoints, it should just do nothing
- try:
- ls_x, ls_y = visible["left_shoulder"]
- rs_x, rs_y = visible["right_shoulder"]
- mid_shoulder_x, mid_shoulder_y = (ls_x + rs_x) / 2, (ls_y + rs_y) / 2
- except KeyError:
- pass
- else:
- # draw line from nose to mid-shoulder
- nose_x, nose_y = visible.get("nose", (None, None))
- if nose_x is not None:
- self.draw_line([nose_x, mid_shoulder_x], [nose_y, mid_shoulder_y], color=_RED)
-
- try:
- # draw line from mid-shoulder to mid-hip
- lh_x, lh_y = visible["left_hip"]
- rh_x, rh_y = visible["right_hip"]
- except KeyError:
- pass
- else:
- mid_hip_x, mid_hip_y = (lh_x + rh_x) / 2, (lh_y + rh_y) / 2
- self.draw_line([mid_hip_x, mid_shoulder_x], [mid_hip_y, mid_shoulder_y], color=_RED)
- return self.output
-
- """
- Primitive drawing functions:
- """
-
- def draw_text(
- self,
- text,
- position,
- *,
- font_size=None,
- color="g",
- horizontal_alignment="center",
- rotation=0
- ):
- """
- Args:
- text (str): class label
- position (tuple): a tuple of the x and y coordinates to place text on image.
- font_size (int, optional): font of the text. If not provided, a font size
- proportional to the image width is calculated and used.
- color: color of the text. Refer to `matplotlib.colors` for full list
- of formats that are accepted.
- horizontal_alignment (str): see `matplotlib.text.Text`
- rotation: rotation angle in degrees CCW
-
- Returns:
- output (VisImage): image object with text drawn.
- """
- if not font_size:
- font_size = self._default_font_size
-
- # since the text background is dark, we don't want the text to be dark
- color = np.maximum(list(mplc.to_rgb(color)), 0.2)
- color[np.argmax(color)] = max(0.8, np.max(color))
-
- x, y = position
- self.output.ax.text(
- x,
- y,
- text,
- size=font_size * self.output.scale,
- family="sans-serif",
- bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"},
- verticalalignment="top",
- horizontalalignment=horizontal_alignment,
- color=color,
- zorder=10,
- rotation=rotation,
- )
- return self.output
-
- def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"):
- """
- Args:
- box_coord (tuple): a tuple containing x0, y0, x1, y1 coordinates, where x0 and y0
- are the coordinates of the image's top left corner. x1 and y1 are the
- coordinates of the image's bottom right corner.
- alpha (float): blending efficient. Smaller values lead to more transparent masks.
- edge_color: color of the outline of the box. Refer to `matplotlib.colors`
- for full list of formats that are accepted.
- line_style (string): the string to use to create the outline of the boxes.
-
- Returns:
- output (VisImage): image object with box drawn.
- """
- x0, y0, x1, y1 = box_coord
- width = x1 - x0
- height = y1 - y0
-
- linewidth = max(self._default_font_size / 4, 1)
-
- self.output.ax.add_patch(
- mpl.patches.Rectangle(
- (x0, y0),
- width,
- height,
- fill=False,
- edgecolor=edge_color,
- linewidth=linewidth * self.output.scale,
- alpha=alpha,
- linestyle=line_style,
- )
- )
- return self.output
-
- def draw_rotated_box_with_label(
- self, rotated_box, alpha=0.5, edge_color="g", line_style="-", label=None
- ):
- """
- Args:
- rotated_box (tuple): a tuple containing (cnt_x, cnt_y, w, h, angle),
- where cnt_x and cnt_y are the center coordinates of the box.
- w and h are the width and height of the box. angle represents how
- many degrees the box is rotated CCW with regard to the 0-degree box.
- alpha (float): blending efficient. Smaller values lead to more transparent masks.
- edge_color: color of the outline of the box. Refer to `matplotlib.colors`
- for full list of formats that are accepted.
- line_style (string): the string to use to create the outline of the boxes.
- label (string): label for rotated box. It will not be rendered when set to None.
-
- Returns:
- output (VisImage): image object with box drawn.
- """
- cnt_x, cnt_y, w, h, angle = rotated_box
- area = w * h
- # use thinner lines when the box is small
- linewidth = self._default_font_size / (
- 6 if area < _SMALL_OBJECT_AREA_THRESH * self.output.scale else 3
- )
-
- theta = angle * math.pi / 180.0
- c = math.cos(theta)
- s = math.sin(theta)
- rect = [(-w / 2, h / 2), (-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2)]
- # x: left->right ; y: top->down
- rotated_rect = [(s * yy + c * xx + cnt_x, c * yy - s * xx + cnt_y) for (xx, yy) in rect]
- for k in range(4):
- j = (k + 1) % 4
- self.draw_line(
- [rotated_rect[k][0], rotated_rect[j][0]],
- [rotated_rect[k][1], rotated_rect[j][1]],
- color=edge_color,
- linestyle="--" if k == 1 else line_style,
- linewidth=linewidth,
- )
-
- if label is not None:
- text_pos = rotated_rect[1] # topleft corner
-
- height_ratio = h / np.sqrt(self.output.height * self.output.width)
- label_color = self._change_color_brightness(edge_color, brightness_factor=0.7)
- font_size = (
- np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) * 0.5 * self._default_font_size
- )
- self.draw_text(label, text_pos, color=label_color, font_size=font_size, rotation=angle)
-
- return self.output
-
- def draw_circle(self, circle_coord, color, radius=3):
- """
- Args:
- circle_coord (list(int) or tuple(int)): contains the x and y coordinates
- of the center of the circle.
- color: color of the polygon. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- radius (int): radius of the circle.
-
- Returns:
- output (VisImage): image object with box drawn.
- """
- x, y = circle_coord
- self.output.ax.add_patch(
- mpl.patches.Circle(circle_coord, radius=radius, fill=True, color=color)
- )
- return self.output
-
- def draw_line(self, x_data, y_data, color, linestyle="-", linewidth=None):
- """
- Args:
- x_data (list[int]): a list containing x values of all the points being drawn.
- Length of list should match the length of y_data.
- y_data (list[int]): a list containing y values of all the points being drawn.
- Length of list should match the length of x_data.
- color: color of the line. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- linestyle: style of the line. Refer to `matplotlib.lines.Line2D`
- for a full list of formats that are accepted.
- linewidth (float or None): width of the line. When it's None,
- a default value will be computed and used.
-
- Returns:
- output (VisImage): image object with line drawn.
- """
- if linewidth is None:
- linewidth = self._default_font_size / 3
- linewidth = max(linewidth, 1)
- self.output.ax.add_line(
- mpl.lines.Line2D(
- x_data,
- y_data,
- linewidth=linewidth * self.output.scale,
- color=color,
- linestyle=linestyle,
- )
- )
- return self.output
-
- def draw_binary_mask(
- self, binary_mask, color=None, *, edge_color=None, text=None, alpha=0.5, area_threshold=4096
- ):
- """
- Args:
- binary_mask (ndarray): numpy array of shape (H, W), where H is the image height and
- W is the image width. Each value in the array is either a 0 or 1 value of uint8
- type.
- color: color of the mask. Refer to `matplotlib.colors` for a full list of
- formats that are accepted. If None, will pick a random color.
- edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a
- full list of formats that are accepted.
-            text (str): if not None, the text will be drawn near the object's center of mass.
-            alpha (float): blending coefficient. Smaller values lead to more transparent masks.
-            area_threshold (float): a connected component smaller than this area will not be shown.
-
- Returns:
- output (VisImage): image object with mask drawn.
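-
-        Example (illustrative sketch, not part of the original code; assumes `vis` wraps
-        an image of at least 640x480 pixels):
-
-            mask = np.zeros((480, 640), dtype=np.uint8)
-            mask[100:250, 200:400] = 1  # 150x200 pixels, larger than area_threshold
-            vis.draw_binary_mask(mask, color="g", text="region")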
- """
- if color is None:
- color = random_color(rgb=True, maximum=1)
- if area_threshold is None:
- area_threshold = 4096
-
- has_valid_segment = False
- binary_mask = binary_mask.astype("uint8") # opencv needs uint8
- mask = GenericMask(binary_mask, self.output.height, self.output.width)
- shape2d = (binary_mask.shape[0], binary_mask.shape[1])
-
- if not mask.has_holes:
- # draw polygons for regular masks
- for segment in mask.polygons:
- area = mask_util.area(mask_util.frPyObjects([segment], shape2d[0], shape2d[1]))
- if area < area_threshold:
- continue
- has_valid_segment = True
- segment = segment.reshape(-1, 2)
- self.draw_polygon(segment, color=color, edge_color=edge_color, alpha=alpha)
- else:
- rgba = np.zeros(shape2d + (4,), dtype="float32")
- rgba[:, :, :3] = color
- rgba[:, :, 3] = (mask.mask == 1).astype("float32") * alpha
- has_valid_segment = True
- self.output.ax.imshow(rgba)
-
- if text is not None and has_valid_segment:
- # TODO sometimes drawn on wrong objects. the heuristics here can improve.
- lighter_color = self._change_color_brightness(color, brightness_factor=0.7)
- _num_cc, cc_labels, stats, centroids = cv2.connectedComponentsWithStats(binary_mask, 8)
- largest_component_id = np.argmax(stats[1:, -1]) + 1
-
- # draw text on the largest component, as well as other very large components.
- for cid in range(1, _num_cc):
- if cid == largest_component_id or stats[cid, -1] > _LARGE_MASK_AREA_THRESH:
- # median is more stable than centroid
- # center = centroids[largest_component_id]
- center = np.median((cc_labels == cid).nonzero(), axis=1)[::-1]
- self.draw_text(text, center, color=lighter_color)
- return self.output
-
- def draw_polygon(self, segment, color, edge_color=None, alpha=0.5):
- """
- Args:
- segment: numpy array of shape Nx2, containing all the points in the polygon.
- color: color of the polygon. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a
- full list of formats that are accepted. If not provided, a darker shade
- of the polygon color will be used instead.
-            alpha (float): blending coefficient. Smaller values lead to more transparent polygons.
-
- Returns:
- output (VisImage): image object with polygon drawn.
- """
- if edge_color is None:
- # make edge color darker than the polygon color
- if alpha > 0.8:
- edge_color = self._change_color_brightness(color, brightness_factor=-0.7)
- else:
- edge_color = color
- edge_color = mplc.to_rgb(edge_color) + (1,)
-
- polygon = mpl.patches.Polygon(
- segment,
- fill=True,
- facecolor=mplc.to_rgb(color) + (alpha,),
- edgecolor=edge_color,
- linewidth=max(self._default_font_size // 15 * self.output.scale, 1),
- )
- self.output.ax.add_patch(polygon)
- return self.output
-
- """
- Internal methods:
- """
-
- def _jitter(self, color):
- """
-        Randomly modifies the given color to produce a slightly different color.
-
- Args:
- color (tuple[double]): a tuple of 3 elements, containing the RGB values of the color
- picked. The values in the list are in the [0.0, 1.0] range.
-
- Returns:
- jittered_color (tuple[double]): a tuple of 3 elements, containing the RGB values of the
- color after being jittered. The values in the list are in the [0.0, 1.0] range.
- """
- color = mplc.to_rgb(color)
- vec = np.random.rand(3)
- # better to do it in another color space
- vec = vec / np.linalg.norm(vec) * 0.5
- res = np.clip(vec + color, 0, 1)
- return tuple(res)
-
- def _create_grayscale_image(self, mask=None):
- """
- Create a grayscale version of the original image.
- The colors in masked area, if given, will be kept.
- """
- img_bw = self.img.astype("f4").mean(axis=2)
- img_bw = np.stack([img_bw] * 3, axis=2)
- if mask is not None:
- img_bw[mask] = self.img[mask]
- return img_bw
-
- def _change_color_brightness(self, color, brightness_factor):
- """
-        Depending on the brightness_factor, gives a lighter or darker color, i.e. a color with
-        more or less lightness than the original color.
-
- Args:
- color: color of the polygon. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
-            brightness_factor (float): a value in the [-1.0, 1.0] range. A factor of 0
-                corresponds to no change, a factor in the [-1.0, 0) range results in
-                a darker color, and a factor in the (0, 1.0] range results in a lighter color.
-
- Returns:
- modified_color (tuple[double]): a tuple containing the RGB values of the
- modified color. Each value in the tuple is in the [0.0, 1.0] range.
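-
-        Example (illustrative sketch, not part of the original code):
-
-            self._change_color_brightness("g", brightness_factor=0.7)   # lighter green
-            self._change_color_brightness("g", brightness_factor=-0.7)  # darker green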
- """
- assert brightness_factor >= -1.0 and brightness_factor <= 1.0
- color = mplc.to_rgb(color)
- polygon_color = colorsys.rgb_to_hls(*mplc.to_rgb(color))
- modified_lightness = polygon_color[1] + (brightness_factor * polygon_color[1])
- modified_lightness = 0.0 if modified_lightness < 0.0 else modified_lightness
- modified_lightness = 1.0 if modified_lightness > 1.0 else modified_lightness
- modified_color = colorsys.hls_to_rgb(polygon_color[0], modified_lightness, polygon_color[2])
- return modified_color
-
- def _convert_boxes(self, boxes):
- """
-        Convert boxes from different formats to an NxB array, where B = 4 or 5 is the box dimension.
- """
- if isinstance(boxes, Boxes) or isinstance(boxes, RotatedBoxes):
- return boxes.tensor.numpy()
- else:
- return np.asarray(boxes)
-
- def _convert_masks(self, masks_or_polygons):
- """
-        Convert masks or polygons from different formats into a list of GenericMask objects.
-
- Returns:
- list[GenericMask]:
- """
-
- m = masks_or_polygons
- if isinstance(m, PolygonMasks):
- m = m.polygons
- if isinstance(m, BitMasks):
- m = m.tensor.numpy()
- if isinstance(m, torch.Tensor):
- m = m.numpy()
- ret = []
- for x in m:
- if isinstance(x, GenericMask):
- ret.append(x)
- else:
- ret.append(GenericMask(x, self.output.height, self.output.width))
- return ret
-
- def _convert_keypoints(self, keypoints):
- if isinstance(keypoints, Keypoints):
- keypoints = keypoints.tensor
- keypoints = np.asarray(keypoints)
- return keypoints
-
- def get_output(self):
- """
- Returns:
- output (VisImage): the image output containing the visualizations added
- to the image.
- """
- return self.output
diff --git a/spaces/huggan/BigGAN/app.py b/spaces/huggan/BigGAN/app.py
deleted file mode 100644
index 5ede9a5e1854f0f01c6151ed5b9d3cd0df8ab610..0000000000000000000000000000000000000000
--- a/spaces/huggan/BigGAN/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import torch
-import gradio as gr
-import numpy as np
-import nltk
-nltk.download('wordnet')
-nltk.download('omw-1.4')
-from PIL import Image
-from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names, truncated_noise_sample,
- save_as_images, display_in_terminal)
-initial_archi = 'biggan-deep-128' #@param ['biggan-deep-128', 'biggan-deep-256', 'biggan-deep-512'] {allow-input: true}
-initial_class = 'dog'
-
-gan_model = BigGAN.from_pretrained(initial_archi)
-
-def generate_images(initial_archi, initial_class, batch_size):
-    # NOTE: the globally loaded `gan_model` (built from initial_archi at import time) is used
-    # here; the `initial_archi` argument itself does not reload the model.
- truncation = 0.4
- class_vector = one_hot_from_names(initial_class, batch_size=batch_size)
- noise_vector = truncated_noise_sample(truncation=truncation, batch_size=batch_size)
-
- # All in tensors
- noise_vector = torch.from_numpy(noise_vector)
- class_vector = torch.from_numpy(class_vector)
-
- # If you have a GPU, put everything on cuda
- #noise_vector = noise_vector.to('cuda')
- #class_vector = class_vector.to('cuda')
- #gan_model.to('cuda')
-
- # Generate an image
- with torch.no_grad():
- output = gan_model(noise_vector, class_vector, truncation)
-
- # If you have a GPU put back on CPU
- output = output.to('cpu')
- save_as_images(output)
- return output
-
-def convert_to_images(obj):
-    """ Convert an output tensor from BigGAN into a list of images.
- Params:
- obj: tensor or numpy array of shape (batch_size, channels, height, width)
- Output:
- list of Pillow Images of size (height, width)
- """
- try:
- import PIL
- except ImportError:
- raise ImportError("Please install Pillow to use images: pip install Pillow")
-
- if not isinstance(obj, np.ndarray):
- obj = obj.detach().numpy()
-
- obj = obj.transpose((0, 2, 3, 1))
- obj = np.clip(((obj + 1) / 2.0) * 256, 0, 255)
-
- img = []
- for i, out in enumerate(obj):
- out_array = np.asarray(np.uint8(out), dtype=np.uint8)
- img.append(PIL.Image.fromarray(out_array))
- return img
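-
-# Illustrative usage sketch (not part of the original demo): generate a single image and
-# save it outside of Gradio, reusing the helpers defined above.
-#   imgs = convert_to_images(generate_images("biggan-deep-128", "dog", 1))
-#   imgs[0].save("biggan_dog.png")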
-
-def inference(initial_archi, initial_class):
-    output = generate_images(initial_archi, initial_class, 1)
- PIL_output = convert_to_images(output)
- return PIL_output[0]
-
-
-
-title = "BigGAN"
-description = "BigGAN demo that generates images from an ImageNet class name, with a choice of model architectures."
-article="Coming soon"
-
-examples = [
- ["biggan-deep-128", "dog"],
- ["biggan-deep-256", 'dog'],
- ["biggan-deep-512", 'dog']
-]
-
-gr.Interface(inference,
- inputs=[gr.inputs.Dropdown(["biggan-deep-128", "biggan-deep-256", "biggan-deep-512"]), "text"],
- outputs= [gr.outputs.Image(type="pil",label="output")],
- examples=examples,
- title=title,
- description=description,
- article=article).launch( debug=True)
\ No newline at end of file
diff --git a/spaces/huggingface-course/audio-course-u7-assessment/app.py b/spaces/huggingface-course/audio-course-u7-assessment/app.py
deleted file mode 100644
index 8aab31ed928b22bfdd41d702d544fea16b882772..0000000000000000000000000000000000000000
--- a/spaces/huggingface-course/audio-course-u7-assessment/app.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import os
-
-import gradio as gr
-import soundfile as sf
-import torch
-from gradio_client import Client
-from huggingface_hub import Repository
-from pandas import read_csv
-
-from transformers import pipeline
-
-
-# load the results file from the private repo
-USERNAMES_DATASET_ID = "huggingface-course/audio-course-u7-hands-on"
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-usernames_url = os.path.join("https://huggingface.co/datasets", USERNAMES_DATASET_ID)
-
-usernames_repo = Repository(local_dir="usernames", clone_from=usernames_url, use_auth_token=HF_TOKEN)
-usernames_repo.git_pull()
-
-CSV_RESULTS_FILE = os.path.join("usernames", "usernames.csv")
-all_results = read_csv(CSV_RESULTS_FILE)
-
-# load the LID checkpoint
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-pipe = pipeline("audio-classification", model="facebook/mms-lid-126", device=device)
-
-# define some constants
-TITLE = "🤗 Audio Transformers Course: Unit 7 Assessment"
-DESCRIPTION = """
-Check that you have successfully completed the hands-on exercise for Unit 7 of the 🤗 Audio Transformers Course by submitting your demo to this Space.
-
-As a reminder, you should start with the template Space provided at [`course-demos/speech-to-speech-translation`](https://huggingface.co/spaces/course-demos/speech-to-speech-translation),
-and update the Space to translate from any language X to a **non-English** language Y. Your demo should take as input an audio file, and return as output another audio file,
-matching the signature of the [`speech_to_speech_translation`](https://huggingface.co/spaces/course-demos/speech-to-speech-translation/blob/3946ba6705a6632a63de8672ac52a482ab74b3fc/app.py#L35)
-function in the template demo.
-
-To submit your demo for assessment, give the repo id or URL to your demo. For the template demo, this would be `course-demos/speech-to-speech-translation`.
-You should ensure that the visibility of your demo is set to **public**. This Space will submit a test file to your demo, and check that the output is
-non-English audio. If your demo successfully returns an audio file, and this audio file is classified as being non-English, you will pass the Unit and
-get a green tick next to your name on the overall [course progress space](https://huggingface.co/spaces/MariaK/Check-my-progress-Audio-Course) ✅
-
-If you experience any issues with using this checker, [open an issue](https://huggingface.co/spaces/huggingface-course/audio-course-u7-assessment/discussions/new)
-on this Space and tag [`@sanchit-gandhi`](https://huggingface.co/sanchit-gandhi).
-"""
-THRESHOLD = 0.5
-PASS_MESSAGE = "Congratulations USER! Your demo passed the assessment!"
-
-
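-# Summary of the verification flow implemented below (comments added for clarity):
-#   1. connect to the submitted Space with gradio_client.Client(repo_id)
-#   2. send the bundled test clip: client.predict("test_short.wav", api_name="/predict")
-#   3. classify the returned audio with the MMS-LID pipeline; the top prediction must be
-#      confident (score >= THRESHOLD) and non-English for the submission to pass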
-def verify_demo(repo_id):
- if "/" not in repo_id:
- raise gr.Error(f"Ensure you pass a valid repo id to the assessor, got `{repo_id}`")
-
- split_repo_id = repo_id.split("/")
- user_name = split_repo_id[-2]
-
- if len(split_repo_id) > 2:
- repo_id = "/".join(split_repo_id[-2:])
-
- if (all_results["username"] == user_name).any():
- raise gr.Error(f"Username {user_name} has already passed the assessment!")
-
- try:
- client = Client(repo_id, hf_token=HF_TOKEN)
- except Exception as e:
-        raise gr.Error("Error loading the Space. First check that your Space has been built and is running. "
-                       "Then check that your Space takes an audio file as input and returns an audio as output. If it is working "
-                       f"as expected and the error persists, open an issue on this Space. Error: {e}"
- )
-
- try:
- audio_file = client.predict("test_short.wav", api_name="/predict")
- except Exception as e:
- raise gr.Error(
- f"Error with querying Space, check that your Space takes an audio file as input and returns an audio as output: {e}"
- )
-
- audio, sampling_rate = sf.read(audio_file)
-
- language_prediction = pipe({"array": audio, "sampling_rate": sampling_rate})
-
- label_outputs = {}
- for pred in language_prediction:
- label_outputs[pred["label"]] = pred["score"]
-
- top_prediction = language_prediction[0]
-
- if top_prediction["score"] < THRESHOLD:
- raise gr.Error(
- f"Model made random predictions - predicted {top_prediction['label']} with probability {top_prediction['score']}"
- )
- elif top_prediction["label"] == "eng":
- raise gr.Error(
-            "Model generated an English audio - ensure the model is set to generate audio in a non-English language, e.g. Dutch"
- )
-
- # save and upload new evaluated usernames
- all_results.loc[len(all_results)] = {"username": user_name}
- all_results.to_csv(CSV_RESULTS_FILE, index=False)
- usernames_repo.push_to_hub()
-
- message = PASS_MESSAGE.replace("USER", user_name)
-
- return message, "test_short.wav", (sampling_rate, audio), label_outputs
-
-
-demo = gr.Interface(
- fn=verify_demo,
- inputs=gr.Textbox(placeholder="course-demos/speech-to-speech-translation", label="Repo id or URL of your demo"),
- outputs=[
- gr.Textbox(label="Status"),
- gr.Audio(label="Source Speech", type="filepath"),
- gr.Audio(label="Generated Speech", type="numpy"),
- gr.Label(label="Language prediction"),
- ],
- title=TITLE,
- description=DESCRIPTION,
-)
-demo.launch()
diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/app.css b/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/app.css
deleted file mode 100644
index 2a426e9f12c93e5a53be15ac59c24639845f0552..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/app.css
+++ /dev/null
@@ -1,25 +0,0 @@
-@tailwind base;
-@tailwind components;
-@tailwind utilities;
-
-/* Firefox */
-.x-scroll {
- scrollbar-width: thin;
- scrollbar-color: white #2F6DCB;
-}
-
-/* Chrome, Edge, and Safari */
-.x-scroll::-webkit-scrollbar {
- width: 4px;
-}
-
-.x-scroll::-webkit-scrollbar-track {
- background: white;
- border-radius: 100px;
-}
-
-.x-scroll::-webkit-scrollbar-thumb {
- background-color: #2F6DCB;
- border-radius: 100px;
- border: 2px solid #2F6DCB;
-}
\ No newline at end of file
diff --git a/spaces/hugginglearners/kvasir-seg/app.py b/spaces/hugginglearners/kvasir-seg/app.py
deleted file mode 100644
index 364c694a70390fbb571b118c8e16ae528eb2d0a4..0000000000000000000000000000000000000000
--- a/spaces/hugginglearners/kvasir-seg/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import gradio as gr
-from fastai.vision.all import *
-from huggingface_hub import from_pretrained_fastai
-
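-# label_func is not called at inference time, but it presumably must be importable here so
-# that the exported fastai learner (which pickled a reference to it) can be loaded;
-# `path` is only defined in the original training environment.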
-def label_func(fn): return path/'masks1b-binary'/f'{fn.stem}.png'
-
-repo_id = "hugginglearners/kvasir-seg"
-learn = from_pretrained_fastai(repo_id)
-
-def predict(img):
- img = PILImage.create(img)
- pred, _, _ = learn.predict(img)
- return PILMask.create(pred*255)
-
-interface_options = {
- "title": "kvasir-seg fastai segmentation",
-    "description": "Demonstration of segmentation of gastrointestinal polyp images. This app is for reference only and should not be used for medical diagnosis. The model was trained on the Kvasir-SEG dataset (https://datasets.simula.no/kvasir-seg/)",
- "layout": "horizontal",
- "examples": [
- "cju5eftctcdbj08712gdp989f.jpg",
- "cju42qet0lsq90871e50xbnuv.jpg",
- "cju8b0jr0r2oi0801jiquetd5.jpg"
- ],
- "allow_flagging": "never"
-}
-
-demo = gr.Interface(
- fn=predict,
- inputs=gr.Image(shape=(224, 224)),
- outputs=gr.Image(shape=(224, 224)),
- cache_examples=False,
- **interface_options,
-)
-
-launch_options = {
- "enable_queue": True,
- "share": False,
-}
-
-demo.launch(**launch_options)
\ No newline at end of file
diff --git a/spaces/hysts/mistral-7b/README.md b/spaces/hysts/mistral-7b/README.md
deleted file mode 100644
index 4f27308922118ed3265204cf2e40abb316d9e9b4..0000000000000000000000000000000000000000
--- a/spaces/hysts/mistral-7b/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Mistral-7B
-emoji: 🐨
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 4.1.1
-app_file: app.py
-pinned: false
-license: mit
-suggested_hardware: t4-small
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hzy123/bingo/src/pages/api/create.ts b/spaces/hzy123/bingo/src/pages/api/create.ts
deleted file mode 100644
index 508fa97ef609cbb215a61085711638e116235ebe..0000000000000000000000000000000000000000
--- a/spaces/hzy123/bingo/src/pages/api/create.ts
+++ /dev/null
@@ -1,31 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { fetch, debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-
-// const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create'
-const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create';
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const headers = createHeaders(req.cookies)
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
-
- debug('headers', headers)
- const response = await fetch(API_ENDPOINT, { method: 'GET', headers })
- .then((res) => res.text())
-
- res.end(response)
- } catch (e) {
- return res.end(JSON.stringify({
- result: {
- value: 'UnauthorizedRequest',
- message: `${e}`
- }
- }))
- }
-}
diff --git a/spaces/idosal/oai-proxy/src/proxy/openai.ts b/spaces/idosal/oai-proxy/src/proxy/openai.ts
deleted file mode 100644
index 8e0d134802305ff21421d7d81ebd30266c06d0e6..0000000000000000000000000000000000000000
--- a/spaces/idosal/oai-proxy/src/proxy/openai.ts
+++ /dev/null
@@ -1,58 +0,0 @@
-import { Request, Router } from "express";
-import * as http from "http";
-import { createProxyMiddleware, fixRequestBody } from "http-proxy-middleware";
-import { logger } from "../logger";
-import { Key, keys } from "../keys";
-import { handleResponse, onError } from "./common";
-
-/**
- * Modifies the request body to add a randomly selected API key.
- */
-const rewriteRequest = (proxyReq: http.ClientRequest, req: Request) => {
- let key: Key;
-
- try {
- key = keys.get(req.body?.model || "gpt-3.5")!;
- } catch (err) {
- proxyReq.destroy(err as any);
- return;
- }
-
- req.key = key;
- proxyReq.setHeader("Authorization", `Bearer ${key.key}`);
-
- if (req.method === "POST" && req.body) {
- if (req.body?.stream) {
- req.body.stream = false;
- const updatedBody = JSON.stringify(req.body);
- proxyReq.setHeader("Content-Length", Buffer.byteLength(updatedBody));
- (req as any).rawBody = Buffer.from(updatedBody);
- }
-
- // body-parser and http-proxy-middleware don't play nice together
- fixRequestBody(proxyReq, req);
- }
-};
-
-const openaiProxy = createProxyMiddleware({
- target: "https://api.openai.com",
- changeOrigin: true,
- on: {
- proxyReq: rewriteRequest,
- proxyRes: handleResponse,
- error: onError,
- },
- selfHandleResponse: true,
- logger,
-});
-
-const openaiRouter = Router();
-openaiRouter.post("/v1/chat/completions", openaiProxy);
-// openaiRouter.post("/v1/completions", openaiProxy); // TODO: Implement Davinci
-openaiRouter.get("/v1/models", openaiProxy);
-openaiRouter.use((req, res) => {
- logger.warn(`Blocked openai proxy request: ${req.method} ${req.path}`);
- res.status(404).json({ error: "Not found" });
-});
-
-export const openai = openaiRouter;
diff --git a/spaces/innnky/vits-nyaru/text/symbols.py b/spaces/innnky/vits-nyaru/text/symbols.py
deleted file mode 100644
index 149fe0acb988d845b5699a62e22751a2fc2f46e3..0000000000000000000000000000000000000000
--- a/spaces/innnky/vits-nyaru/text/symbols.py
+++ /dev/null
@@ -1,33 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-
-
-# # japanese_cleaners2
-# _pad = '_'
-# _punctuation = ',.!?-~…'
-# _letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Licence Recovery My Files V5.2.1 1964.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Licence Recovery My Files V5.2.1 1964.md
deleted file mode 100644
index 4b2da1da3260afd1c20c4ad397774dffabfcd01e..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Licence Recovery My Files V5.2.1 1964.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
- Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. This demo cuts audio after around 30 secs.
-
-
You can skip the queue by using google colab for the space:
-
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row().style(mobile_collapse=False, equal_height=True):
- audio = gr.Audio(
- label="Input Audio",
- show_label=False,
- source="microphone",
- type="filepath"
- )
-
- btn = gr.Button("Transcribe")
- text = gr.Textbox(show_label=False, elem_id="result-textarea")
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html, visible=False)
- loading_icon = gr.HTML(loading_icon_html, visible=False)
- share_button = gr.Button("Share to community", elem_id="share-btn", visible=False)
-
-
-
-
- btn.click(inference, inputs=[audio], outputs=[text, community_icon, loading_icon, share_button], api_name="transcribe")
- share_button.click(None, [], [], _js=share_js)
-
- gr.HTML('''
-
- ''')
-
-block.launch()
\ No newline at end of file
diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/mask_former_head.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/mask_former_head.py
deleted file mode 100644
index 5f592662f92d1b0862a3ef76304e7b28b46ecf80..0000000000000000000000000000000000000000
--- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/heads/mask_former_head.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Copyright (c) Meta Platforms, Inc. All Rights Reserved
-
-import logging
-from copy import deepcopy
-from typing import Callable, Dict, List, Optional, Tuple, Union
-
-import fvcore.nn.weight_init as weight_init
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-from detectron2.modeling import SEM_SEG_HEADS_REGISTRY
-
-from ..transformer.transformer_predictor import TransformerPredictor
-from .pixel_decoder import build_pixel_decoder
-
-
-@SEM_SEG_HEADS_REGISTRY.register()
-class MaskFormerHead(nn.Module):
-
- _version = 2
-
- def _load_from_state_dict(
- self,
- state_dict,
- prefix,
- local_metadata,
- strict,
- missing_keys,
- unexpected_keys,
- error_msgs,
- ):
- version = local_metadata.get("version", None)
- if version is None or version < 2:
- # Do not warn if train from scratch
- scratch = True
- logger = logging.getLogger(__name__)
- for k in list(state_dict.keys()):
- newk = k
- if "sem_seg_head" in k and not k.startswith(prefix + "predictor"):
- newk = k.replace(prefix, prefix + "pixel_decoder.")
- # logger.debug(f"{k} ==> {newk}")
- if newk != k:
- state_dict[newk] = state_dict[k]
- del state_dict[k]
- scratch = False
-
- if not scratch:
- logger.warning(
-                f"Weight format of {self.__class__.__name__} has changed! "
- "Please upgrade your models. Applying automatic conversion now ..."
- )
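-
-        # Illustrative example of the renaming above, assuming `prefix` is "sem_seg_head."
-        # and a hypothetical pre-v2 key "sem_seg_head.some_layer.weight": it is remapped to
-        # "sem_seg_head.pixel_decoder.some_layer.weight", while "sem_seg_head.predictor.*"
-        # keys are left unchanged.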
-
- @configurable
- def __init__(
- self,
- input_shape: Dict[str, ShapeSpec],
- *,
- num_classes: int,
- pixel_decoder: nn.Module,
- loss_weight: float = 1.0,
- ignore_value: int = -1,
- # extra parameters
- transformer_predictor: nn.Module,
- transformer_in_feature: str,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- input_shape: shapes (channels and stride) of the input features
- num_classes: number of classes to predict
- pixel_decoder: the pixel decoder module
- loss_weight: loss weight
- ignore_value: category id to be ignored during training.
- transformer_predictor: the transformer decoder that makes prediction
- transformer_in_feature: input feature name to the transformer_predictor
- """
- super().__init__()
- input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride)
- self.in_features = [k for k, v in input_shape]
- feature_strides = [v.stride for k, v in input_shape]
- feature_channels = [v.channels for k, v in input_shape]
-
- self.ignore_value = ignore_value
- self.common_stride = 4
- self.loss_weight = loss_weight
-
- self.pixel_decoder = pixel_decoder
- self.predictor = transformer_predictor
- self.transformer_in_feature = transformer_in_feature
-
- self.num_classes = num_classes
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- return {
- "input_shape": {
- k: v
- for k, v in input_shape.items()
- if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES
- },
- "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
- "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
- "pixel_decoder": build_pixel_decoder(cfg, input_shape),
- "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT,
- "transformer_in_feature": cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE,
- "transformer_predictor": TransformerPredictor(
- cfg,
- cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
- if cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE == "transformer_encoder"
- else input_shape[cfg.MODEL.MASK_FORMER.TRANSFORMER_IN_FEATURE].channels,
- mask_classification=True,
- ),
- }
-
- def forward(self, features):
- return self.layers(features)
-
- def layers(self, features):
- (
- mask_features,
- transformer_encoder_features,
- ) = self.pixel_decoder.forward_features(features)
- if self.transformer_in_feature == "transformer_encoder":
- assert (
- transformer_encoder_features is not None
- ), "Please use the TransformerEncoderPixelDecoder."
- predictions = self.predictor(transformer_encoder_features, mask_features)
- else:
- predictions = self.predictor(
- features[self.transformer_in_feature], mask_features
- )
- return predictions
diff --git a/spaces/jeffrymahbuubi/bert-advanced-cnn-hate-speech-classification/README.md b/spaces/jeffrymahbuubi/bert-advanced-cnn-hate-speech-classification/README.md
deleted file mode 100644
index 68082a484185912887992b49dfe6fa1d7382b5c7..0000000000000000000000000000000000000000
--- a/spaces/jeffrymahbuubi/bert-advanced-cnn-hate-speech-classification/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: BERT Advanced CNN Hate Speech Detection
-license: mit
-emoji: 😠
-app_file: app.py
-sdk: gradio
-colorFrom: yellow
-colorTo: gray
----
-
-# BERT + Advanced 5-layer CNN for Hate Speech Classification
-
-BERT Hate Speech Classification is a project that aims to classify hate speech in the [Davidson Dataset](https://github.com/t-davidson/hate-speech-and-offensive-language). The project is built using BERT with an additional Advanced 5-Layer CNN to improve the performance of the model.
-
-This project was the final class project for the Data Mining course offered by National Cheng Kung University and taught by Professor [Eric Hsueh-Chan Lu (呂學展)](https://www.geomatics.ncku.edu.tw/laboratory.php?tpl=19)
-
-## Dataset
-
-The Davidson Dataset consists of three labels: Hate Speech (0), Offensive Language (1), and Neither (2). The dataset is imbalanced, with the majority of the data labeled as Offensive Language, and noisy, with some of the data mislabeled. The longest sample in the dataset is 87 words.
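-
-For reference, a minimal sketch of inspecting the label distribution with the `datasets` library is shown below (the `tdavidson/hate_speech_offensive` dataset id and the `class` column name are assumptions; the original project loads the CSV from the GitHub repository above):
-
-```python
-from collections import Counter
-
-from datasets import load_dataset
-
-# Assumed Hub mirror of the Davidson dataset; adjust if the project uses the raw CSV instead.
-ds = load_dataset("tdavidson/hate_speech_offensive", split="train")
-label_names = {0: "Hate Speech", 1: "Offensive Language", 2: "Neither"}
-
-for label, count in sorted(Counter(ds["class"]).items()):
-    print(f"{label_names[label]}: {count}")
-```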
-
-## Contributors
-
-| Name                   | Role            | Contribution            | GitHub                                                     |
-| ---------------------- | --------------- | ----------------------- | -------------------------------------------------------- |
-| Cendra Deyana Putra | Model Developer | `Model Builder` | [@data_mining/cendra](https://github.com/Cendra123) |
-| Aunuun Jeffry Mahbuubi | Model Deployer | `Model Deployer` | [@data_mining/jeffry](https://github.com/jeffrymahbuubi) |
\ No newline at end of file
diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/trainers/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/trainers/__init__.py
deleted file mode 100644
index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000
--- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/trainers/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import logging
-import torch
-from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule
-
-
-def get_training_model_class(kind):
- if kind == 'default':
- return DefaultInpaintingTrainingModule
-
- raise ValueError(f'Unknown trainer module {kind}')
-
-
-def make_training_model(config):
- kind = config.training_model.kind
- kwargs = dict(config.training_model)
- kwargs.pop('kind')
- kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp'
-
- logging.info(f'Make training model {kind}')
-
- cls = get_training_model_class(kind)
- return cls(config, **kwargs)
-
-
-def load_checkpoint(train_config, path, map_location='cuda', strict=True):
- model: torch.nn.Module = make_training_model(train_config)
- state = torch.load(path, map_location=map_location)
- model.load_state_dict(state['state_dict'], strict=strict)
- model.on_load_checkpoint(state)
- return model
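-
-
-# Illustrative usage sketch (not part of the original module); the paths are placeholders
-# and OmegaConf is assumed for loading the training config:
-#   from omegaconf import OmegaConf
-#   train_config = OmegaConf.load("path/to/training_config.yaml")
-#   model = load_checkpoint(train_config, "path/to/checkpoint.ckpt", map_location="cpu")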
diff --git a/spaces/jjeamin/ArcaneStyleTransfer/app.py b/spaces/jjeamin/ArcaneStyleTransfer/app.py
deleted file mode 100644
index 7b4dec4856a48b04212558032e5d6edecb4ed21d..0000000000000000000000000000000000000000
--- a/spaces/jjeamin/ArcaneStyleTransfer/app.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import os
-os.system("pip freeze")
-
-import torch
-import PIL
-import PIL.Image
-import PIL.ImageOps  # imported explicitly; PIL.Image / PIL.ImageOps are used below
-import gradio as gr
-from utils import align_face
-from torchvision import transforms
-from huggingface_hub import hf_hub_download
-
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
-image_size = 512
-transform_size = 1024
-
-means = [0.5, 0.5, 0.5]
-stds = [0.5, 0.5, 0.5]
-
-img_transforms = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(means, stds)])
-
-model_path = hf_hub_download(repo_id="jjeamin/ArcaneStyleTransfer", filename="pytorch_model.bin")
-
-if 'cuda' in device:
- style_transfer = torch.jit.load(model_path).eval().cuda().half()
- t_stds = torch.tensor(stds).cuda().half()[:,None,None]
- t_means = torch.tensor(means).cuda().half()[:,None,None]
-else:
- style_transfer = torch.jit.load(model_path).eval().cpu()
- t_stds = torch.tensor(stds).cpu()[:,None,None]
- t_means = torch.tensor(means).cpu()[:,None,None]
-
-def tensor2im(var):
- return var.mul(t_stds).add(t_means).mul(255.).clamp(0,255).permute(1,2,0)
-
-def proc_pil_img(input_image):
- if 'cuda' in device:
- transformed_image = img_transforms(input_image)[None,...].cuda().half()
- else:
- transformed_image = img_transforms(input_image)[None,...].cpu()
-
- with torch.no_grad():
- result_image = style_transfer(transformed_image)[0]
- output_image = tensor2im(result_image)
- output_image = output_image.detach().cpu().numpy().astype('uint8')
- output_image = PIL.Image.fromarray(output_image)
- return output_image
-
-def process(im, is_align):
- im = PIL.ImageOps.exif_transpose(im)
-
- if is_align == 'True':
- im = align_face(im, output_size=image_size, transform_size=transform_size)
- else:
- pass
-
- res = proc_pil_img(im)
-
- return res
-
-gr.Interface(
- process,
- inputs=[gr.inputs.Image(type="pil", label="Input", shape=(image_size, image_size)), gr.inputs.Radio(['True','False'], type="value", default='True', label='face align')],
- outputs=gr.outputs.Image(type="pil", label="Output"),
- title="Arcane Style Transfer",
- description="Gradio demo for Arcane Style Transfer",
- article = "
Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
-
-
-
Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels.
The mathematics of all this is a little easier to follow with abstract shapes. Let’s take a look at some of them:
-
-
-
Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return?
-
-
-
Another diversity metric we care about is the percentage of dots… how close to 35% dots can you get?
-
-
-
If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn’t possible to reduce the difference of every metric to zero. One natural approach: find the selection with the lowest mean difference across all the metrics to get as close as possible to all the targets.
-
In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the lowest max difference. Try minimizing both below:
-
-
-
Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results?
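To make the two rankings concrete, here is a small illustrative Python sketch (not part of the original piece) that scores a candidate subset against target proportions using both measures:
-
    def diversity_scores(subset, targets):
        # subset: list of dicts of binary attributes, e.g. {"green": 1, "dot": 0, "small": 1}
        # targets: dict of target fractions, e.g. {"green": 0.30, "dot": 0.35}
        diffs = [abs(sum(item[attr] for item in subset) / len(subset) - target)
                 for attr, target in targets.items()]
        return sum(diffs) / len(diffs), max(diffs)  # (mean difference, max difference)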
-
Ranking Measures
-
We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set’s percentage of green, dots and small shapes are shown in the small histograms.
-
-
-
At the extremes, the choice of measure can have a big impact: if we want to try and return all green results, we can shift the green target up to 100%. With this target, the minimum difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets.
-
-
-
Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for intersectionality. The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example. It’s important to keep in mind what exactly you’re trying to maximize and the dataset that you’re operating on.
-
Which Measure is Best?
-
In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context.
-
For example, the doctors on the left have more variance along the shirt color attribute, but they're less diverse by gender than the doctors on the right. With the shirt color and gender targets we've picked, the two subsets have the same mean and max differences. However, in most applications, it's more important to have a representative sample of socially relevant characteristics, like gender, rather than something less salient, like clothing color.
-
-
-
Just selecting a diverse sample isn’t sufficient either. Diversity and Inclusion Metrics in Subset Selection introduces a way of measuring “inclusion” - how well does the searcher feel represented in the results?
-
Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive.
-
-
-
The context of the query and the searcher also plays in the quality of search results. A search for “work clothing” that shows a mixed palette of colors for men’s clothing and only pink women’s clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women’s clothes might be appropriate to show for a “pink women work clothes” search or if the searcher had previously expressed a preference for pink.
-
We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems.
-
More Reading
-
The Diversity and Inclusion Metrics paper has a Colab with a detailed desciption of the metrics, additional visualizations and a reference Python implementation.
Inferring user preferences is also tricky; you can checkout ways to design for user feedback and control over queries in the People + AI Guidebook.
-
Credits
-
Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell* and Timnit Gebru* // March 2021
-
*Work done while at Google
-
Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece.
-
More Explorables
-
-
\ No newline at end of file
diff --git a/spaces/mikebars/huggingface/assets/index-25971840.js b/spaces/mikebars/huggingface/assets/index-25971840.js
deleted file mode 100644
index fca54d8978c67a5778c1e82852d5d371eca6558c..0000000000000000000000000000000000000000
--- a/spaces/mikebars/huggingface/assets/index-25971840.js
+++ /dev/null
@@ -1,40 +0,0 @@
-var cc=Object.defineProperty;var fc=(e,t,n)=>t in e?cc(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var kt=(e,t,n)=>(fc(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const u of i.addedNodes)u.tagName==="LINK"&&u.rel==="modulepreload"&&r(u)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();var Ir={},dc={get exports(){return Ir},set exports(e){Ir=e}},il={},ee={},pc={get exports(){return ee},set exports(e){ee=e}},T={};/**
- * @license React
- * react.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */var qn=Symbol.for("react.element"),mc=Symbol.for("react.portal"),hc=Symbol.for("react.fragment"),vc=Symbol.for("react.strict_mode"),yc=Symbol.for("react.profiler"),gc=Symbol.for("react.provider"),wc=Symbol.for("react.context"),kc=Symbol.for("react.forward_ref"),Sc=Symbol.for("react.suspense"),Ec=Symbol.for("react.memo"),xc=Symbol.for("react.lazy"),Au=Symbol.iterator;function Cc(e){return e===null||typeof e!="object"?null:(e=Au&&e[Au]||e["@@iterator"],typeof e=="function"?e:null)}var Jo={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},bo=Object.assign,es={};function sn(e,t,n){this.props=e,this.context=t,this.refs=es,this.updater=n||Jo}sn.prototype.isReactComponent={};sn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};sn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function ts(){}ts.prototype=sn.prototype;function Qi(e,t,n){this.props=e,this.context=t,this.refs=es,this.updater=n||Jo}var Ki=Qi.prototype=new ts;Ki.constructor=Qi;bo(Ki,sn.prototype);Ki.isPureReactComponent=!0;var Vu=Array.isArray,ns=Object.prototype.hasOwnProperty,Xi={current:null},rs={key:!0,ref:!0,__self:!0,__source:!0};function ls(e,t,n){var r,l={},i=null,u=null;if(t!=null)for(r in t.ref!==void 0&&(u=t.ref),t.key!==void 0&&(i=""+t.key),t)ns.call(t,r)&&!rs.hasOwnProperty(r)&&(l[r]=t[r]);var o=arguments.length-2;if(o===1)l.children=n;else if(1]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var Ac=["pipeline_tag","private","gated","downloads","likes"];async function*Vc(e){var r,l;Uc(e==null?void 0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...Ac.map(i=>["expand",i])]).toString();let n=`${(e==null?void 0:e.hubUrl)||Dc}/api/models?${t}`;for(;n;){const i=await fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!i.ok)throw jc(i);const u=await i.json();for(const s of u)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const o=i.headers.get("Link");n=o?$c(o).next:void 0}}function Hu(e){return Array.isArray(e)?e:[e]}var us=class{constructor(e="",t={}){kt(this,"apiKey");kt(this,"defaultOptions");this.apiKey=e,this.defaultOptions=t}async fillMask(e,t){return this.request(e,t)}async summarization(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async questionAnswer(e,t){return await this.request(e,t)}async tableQuestionAnswer(e,t){return await this.request(e,t)}async textClassification(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async textGeneration(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async tokenClassification(e,t){return Hu(await this.request(e,t))}async translation(e,t){var n;return(n=await this.request(e,t))==null?void 0:n[0]}async zeroShotClassification(e,t){return Hu(await this.request(e,t))}async conversational(e,t){return await this.request(e,t)}async featureExtraction(e,t){return await this.request(e,t)}async 
automaticSpeechRecognition(e,t){return await this.request(e,{...t,binary:!0})}async audioClassification(e,t){return await this.request(e,{...t,binary:!0})}async imageClassification(e,t){return await this.request(e,{...t,binary:!0})}async objectDetection(e,t){return await this.request(e,{...t,binary:!0})}async imageSegmentation(e,t){return await this.request(e,{...t,binary:!0})}async textToImage(e,t){return await this.request(e,{...t,blob:!0})}async request(e,t){const n={...this.defaultOptions,...t},{model:r,...l}=e,i={};this.apiKey&&(i.Authorization=`Bearer ${this.apiKey}`),t!=null&&t.binary||(i["Content-Type"]="application/json"),t!=null&&t.binary&&(n.wait_for_model&&(i["X-Wait-For-Model"]="true"),n.use_cache===!1&&(i["X-Use-Cache"]="false"),n.dont_load_model&&(i["X-Load-Model"]="0"));const u=await fetch(`https://api-inference.huggingface.co/models/${r}`,{headers:i,method:"POST",body:t!=null&&t.binary?e.data:JSON.stringify({...l,options:n}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"});if(n.retry_on_error!==!1&&u.status===503&&!n.wait_for_model)return this.request(e,{...n,wait_for_model:!0});if(t!=null&&t.blob){if(!u.ok)throw new Error("An error occurred while fetching the blob");return await u.blob()}const o=await u.json();if(o.error)throw new Error(o.error);return o}},Mr=function(){return Mr=Object.assign||function(t){for(var n,r=1,l=arguments.length;r0&&n>="0"&&n<="9"?"_"+n+r:""+n.toUpperCase()+r}function Kc(e,t){return t===void 0&&(t={}),Qc(e,Mr({delimiter:"",transform:os},t))}function Xc(e,t){return t===0?e.toLowerCase():os(e,t)}function Yc(e,t){return t===void 0&&(t={}),Kc(e,Mr({transform:Xc},t))}var ql={},Gc={get exports(){return ql},set exports(e){ql=e}},ke={},Jl={},Zc={get exports(){return Jl},set exports(e){Jl=e}},ss={};/**
- * @license React
- * scheduler.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */(function(e){function t(x,P){var z=x.length;x.push(P);e:for(;0>>1,Z=x[W];if(0>>1;Wl(xl,z))wtl(rr,xl)?(x[W]=rr,x[wt]=z,W=wt):(x[W]=xl,x[gt]=z,W=gt);else if(wtl(rr,z))x[W]=rr,x[wt]=z,W=wt;else break e}}return P}function l(x,P){var z=x.sortIndex-P.sortIndex;return z!==0?z:x.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var u=Date,o=u.now();e.unstable_now=function(){return u.now()-o}}var s=[],c=[],h=1,m=null,p=3,g=!1,w=!1,k=!1,F=typeof setTimeout=="function"?setTimeout:null,f=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(x){for(var P=n(c);P!==null;){if(P.callback===null)r(c);else if(P.startTime<=x)r(c),P.sortIndex=P.expirationTime,t(s,P);else break;P=n(c)}}function v(x){if(k=!1,d(x),!w)if(n(s)!==null)w=!0,Sl(E);else{var P=n(c);P!==null&&El(v,P.startTime-x)}}function E(x,P){w=!1,k&&(k=!1,f(N),N=-1),g=!0;var z=p;try{for(d(P),m=n(s);m!==null&&(!(m.expirationTime>P)||x&&!ze());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var Z=W(m.expirationTime<=P);P=e.unstable_now(),typeof Z=="function"?m.callback=Z:m===n(s)&&r(s),d(P)}else r(s);m=n(s)}if(m!==null)var nr=!0;else{var gt=n(c);gt!==null&&El(v,gt.startTime-P),nr=!1}return nr}finally{m=null,p=z,g=!1}}var C=!1,_=null,N=-1,H=5,O=-1;function ze(){return!(e.unstable_now()-Ox||125W?(x.sortIndex=z,t(c,x),n(s)===null&&x===n(c)&&(k?(f(N),N=-1):k=!0,El(v,z-W))):(x.sortIndex=Z,t(s,x),w||g||(w=!0,Sl(E))),x},e.unstable_shouldYield=ze,e.unstable_wrapCallback=function(x){var P=p;return function(){var z=p;p=P;try{return x.apply(this,arguments)}finally{p=z}}}})(ss);(function(e){e.exports=ss})(Zc);/**
- * @license React
- * react-dom.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */var as=ee,we=Jl;function y(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),bl=Object.prototype.hasOwnProperty,qc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Qu={},Ku={};function Jc(e){return bl.call(Ku,e)?!0:bl.call(Qu,e)?!1:qc.test(e)?Ku[e]=!0:(Qu[e]=!0,!1)}function bc(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function ef(e,t,n,r){if(t===null||typeof t>"u"||bc(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function ce(e,t,n,r,l,i,u){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=i,this.removeEmptyString=u}var ne={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ne[e]=new ce(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];ne[t]=new ce(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ne[e]=new ce(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ne[e]=new ce(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ne[e]=new ce(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ne[e]=new ce(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ne[e]=new ce(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ne[e]=new ce(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ne[e]=new ce(e,5,!1,e.toLowerCase(),null,!1,!1)});var Gi=/[\-:]([a-z])/g;function Zi(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering 
underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(Gi,Zi);ne[t]=new ce(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ne[e]=new ce(e,1,!1,e.toLowerCase(),null,!1,!1)});ne.xlinkHref=new ce("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ne[e]=new ce(e,1,!1,e.toLowerCase(),null,!0,!0)});function qi(e,t,n,r){var l=ne.hasOwnProperty(t)?ne[t]:null;(l!==null?l.type!==0:r||!(2o||l[u]!==i[o]){var s=`
-`+l[u].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=u&&0<=o);break}}}finally{Nl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?Sn(e):""}function tf(e){switch(e.tag){case 5:return Sn(e.type);case 16:return Sn("Lazy");case 13:return Sn("Suspense");case 19:return Sn("SuspenseList");case 0:case 2:case 15:return e=Pl(e.type,!1),e;case 11:return e=Pl(e.type.render,!1),e;case 1:return e=Pl(e.type,!0),e;default:return""}}function ri(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Ft:return"Fragment";case jt:return"Portal";case ei:return"Profiler";case Ji:return"StrictMode";case ti:return"Suspense";case ni:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case ds:return(e.displayName||"Context")+".Consumer";case fs:return(e._context.displayName||"Context")+".Provider";case bi:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case eu:return t=e.displayName||null,t!==null?t:ri(e.type)||"Memo";case be:t=e._payload,e=e._init;try{return ri(e(t))}catch{}}return null}function nf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ri(t);case 8:return t===Ji?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function pt(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ms(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function rf(e){var t=ms(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,i=n.set;return Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(u){r=""+u,i.call(this,u)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(u){r=""+u},stopTracking:function(){e._valueTracker=null,delete e[t]}}}}function ur(e){e._valueTracker||(e._valueTracker=rf(e))}function hs(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=ms(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Dr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function li(e,t){var n=t.checked;return V({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function Yu(e,t){var 
n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=pt(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function vs(e,t){t=t.checked,t!=null&&qi(e,"checked",t,!1)}function ii(e,t){vs(e,t);var n=pt(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?ui(e,t.type,n):t.hasOwnProperty("defaultValue")&&ui(e,t.type,pt(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function Gu(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function ui(e,t,n){(t!=="number"||Dr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var En=Array.isArray;function Yt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l"+t.valueOf().toString()+"",t=or.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function Dn(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var _n={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},lf=["Webkit","ms","Moz","O"];Object.keys(_n).forEach(function(e){lf.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),_n[t]=_n[e]})});function ks(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof t!="number"||t===0||_n.hasOwnProperty(e)&&_n[e]?(""+t).trim():t+"px"}function Ss(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=ks(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var uf=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function ai(e,t){if(t){if(uf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(y(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(y(61))}if(t.style!=null&&typeof t.style!="object")throw Error(y(62))}}function ci(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var fi=null;function tu(e){return 
e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var di=null,Gt=null,Zt=null;function Ju(e){if(e=er(e)){if(typeof di!="function")throw Error(y(280));var t=e.stateNode;t&&(t=cl(t),di(e.stateNode,e.type,t))}}function Es(e){Gt?Zt?Zt.push(e):Zt=[e]:Gt=e}function xs(){if(Gt){var e=Gt,t=Zt;if(Zt=Gt=null,Ju(e),t)for(e=0;e>>=0,e===0?32:31-(yf(e)/gf|0)|0}var sr=64,ar=4194304;function xn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function $r(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,u=n&268435455;if(u!==0){var o=u&~l;o!==0?r=xn(o):(i&=u,i!==0&&(r=xn(i)))}else u=n&~l,u!==0?r=xn(u):i!==0&&(r=xn(i));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,i=t&-t,l>=i||l===16&&(i&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0n;n++)t.push(e);return t}function Jn(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-Ie(t),e[t]=n}function Ef(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Pn),oo=String.fromCharCode(32),so=!1;function Hs(e,t){switch(e){case"keyup":return Zf.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Ws(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var Ut=!1;function Jf(e,t){switch(e){case"compositionend":return Ws(t);case"keypress":return t.which!==32?null:(so=!0,oo);case"textInput":return e=t.data,e===oo&&so?null:e;default:return null}}function bf(e,t){if(Ut)return e==="compositionend"||!au&&Hs(e,t)?(e=Vs(),Cr=uu=rt=null,Ut=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=po(n)}}function Ys(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?Ys(e,t.parentNode):"contains"in e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function Gs(){for(var e=window,t=Dr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Dr(e.document)}return t}function cu(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function sd(e){var t=Gs(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&Ys(n.ownerDocument.documentElement,n)){if(r!==null&&cu(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var 
l=n.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=mo(n,i);var u=mo(n,r);l&&u&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==u.node||e.focusOffset!==u.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(t),e.extend(u.node,u.offset)):(t.setEnd(u.node,u.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n=document.documentMode,$t=null,gi=null,Ln=null,wi=!1;function ho(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;wi||$t==null||$t!==Dr(r)||(r=$t,"selectionStart"in r&&cu(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Ln&&Vn(Ln,r)||(Ln=r,r=Br(gi,"onSelect"),0Bt||(e.current=_i[Bt],_i[Bt]=null,Bt--)}function M(e,t){Bt++,_i[Bt]=e.current,e.current=t}var mt={},ue=vt(mt),pe=vt(!1),zt=mt;function tn(e,t){var n=e.type.contextTypes;if(!n)return mt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in n)l[i]=t[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function me(e){return e=e.childContextTypes,e!=null}function Wr(){j(pe),j(ue)}function Eo(e,t,n){if(ue.current!==mt)throw Error(y(168));M(ue,t),M(pe,n)}function la(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(y(108,nf(e)||"Unknown",l));return V({},n,r)}function Qr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||mt,zt=ue.current,M(ue,e),M(pe,pe.current),!0}function xo(e,t,n){var r=e.stateNode;if(!r)throw Error(y(169));n?(e=la(e,t,zt),r.__reactInternalMemoizedMergedChildContext=e,j(pe),j(ue),M(ue,e)):j(pe),M(pe,n)}var He=null,fl=!1,Vl=!1;function ia(e){He===null?He=[e]:He.push(e)}function kd(e){fl=!0,ia(e)}function yt(){if(!Vl&&He!==null){Vl=!0;var e=0,t=I;try{var n=He;for(I=1;e>=u,l-=u,We=1<<32-Ie(t)+l|n<N?(H=_,_=null):H=_.sibling;var O=p(f,_,d[N],v);if(O===null){_===null&&(_=H);break}e&&_&&O.alternate===null&&t(f,_),a=i(O,a,N),C===null?E=O:C.sibling=O,C=O,_=H}if(N===d.length)return n(f,_),U&&St(f,N),E;if(_===null){for(;NN?(H=_,_=null):H=_.sibling;var ze=p(f,_,O.value,v);if(ze===null){_===null&&(_=H);break}e&&_&&ze.alternate===null&&t(f,_),a=i(ze,a,N),C===null?E=ze:C.sibling=ze,C=ze,_=H}if(O.done)return n(f,_),U&&St(f,N),E;if(_===null){for(;!O.done;N++,O=d.next())O=m(f,O.value,v),O!==null&&(a=i(O,a,N),C===null?E=O:C.sibling=O,C=O);return U&&St(f,N),E}for(_=r(f,_);!O.done;N++,O=d.next())O=g(_,f,N,O.value,v),O!==null&&(e&&O.alternate!==null&&_.delete(O.key===null?N:O.key),a=i(O,a,N),C===null?E=O:C.sibling=O,C=O);return e&&_.forEach(function(fn){return t(f,fn)}),U&&St(f,N),E}function F(f,a,d,v){if(typeof d=="object"&&d!==null&&d.type===Ft&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case ir:e:{for(var E=d.key,C=a;C!==null;){if(C.key===E){if(E=d.type,E===Ft){if(C.tag===7){n(f,C.sibling),a=l(C,d.props.children),a.return=f,f=a;break e}}else if(C.elementType===E||typeof 
E=="object"&&E!==null&&E.$$typeof===be&&To(E)===C.type){n(f,C.sibling),a=l(C,d.props),a.ref=gn(f,C,d),a.return=f,f=a;break e}n(f,C);break}else t(f,C);C=C.sibling}d.type===Ft?(a=Pt(d.props.children,f.mode,v,d.key),a.return=f,f=a):(v=Rr(d.type,d.key,d.props,null,f.mode,v),v.ref=gn(f,a,d),v.return=f,f=v)}return u(f);case jt:e:{for(C=d.key;a!==null;){if(a.key===C)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){n(f,a.sibling),a=l(a,d.children||[]),a.return=f,f=a;break e}else{n(f,a);break}else t(f,a);a=a.sibling}a=Gl(d,f.mode,v),a.return=f,f=a}return u(f);case be:return C=d._init,F(f,a,C(d._payload),v)}if(En(d))return w(f,a,d,v);if(pn(d))return k(f,a,d,v);vr(f,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(n(f,a.sibling),a=l(a,d),a.return=f,f=a):(n(f,a),a=Yl(d,f.mode,v),a.return=f,f=a),u(f)):n(f,a)}return F}var rn=pa(!0),ma=pa(!1),tr={},Ve=vt(tr),Qn=vt(tr),Kn=vt(tr);function _t(e){if(e===tr)throw Error(y(174));return e}function wu(e,t){switch(M(Kn,t),M(Qn,e),M(Ve,tr),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:si(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=si(t,e)}j(Ve),M(Ve,t)}function ln(){j(Ve),j(Qn),j(Kn)}function ha(e){_t(Kn.current);var t=_t(Ve.current),n=si(t,e.type);t!==n&&(M(Qn,e),M(Ve,n))}function ku(e){Qn.current===e&&(j(Ve),j(Qn))}var $=vt(0);function qr(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Bl=[];function Su(){for(var e=0;en?n:4,e(!0);var r=Hl.transition;Hl.transition={};try{e(!1),t()}finally{I=n,Hl.transition=r}}function Oa(){return Pe().memoizedState}function Cd(e,t,n){var r=ft(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Ra(e))Ia(t,n);else if(n=aa(e,t,n,r),n!==null){var l=se();Me(n,e,r,l),Ma(n,t,r)}}function _d(e,t,n){var r=ft(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Ra(e))Ia(t,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=t.lastRenderedReducer,i!==null))try{var u=t.lastRenderedState,o=i(u,n);if(l.hasEagerState=!0,l.eagerState=o,je(o,u)){var s=t.interleaved;s===null?(l.next=l,yu(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=aa(e,t,l,r),n!==null&&(l=se(),Me(n,e,r,l),Ma(n,t,r))}}function Ra(e){var t=e.alternate;return e===A||t!==null&&t===A}function Ia(e,t){Tn=Jr=!0;var n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ma(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,ru(e,n)}}var br={readContext:Ne,useCallback:re,useContext:re,useEffect:re,useImperativeHandle:re,useInsertionEffect:re,useLayoutEffect:re,useMemo:re,useReducer:re,useRef:re,useState:re,useDebugValue:re,useDeferredValue:re,useTransition:re,useMutableSource:re,useSyncExternalStore:re,useId:re,unstable_isNewReconciler:!1},Nd={readContext:Ne,useCallback:function(e,t){return Ue().memoizedState=[e,t===void 0?null:t],e},useContext:Ne,useEffect:Ro,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,zr(4194308,4,Na.bind(null,t,e),n)},useLayoutEffect:function(e,t){return 
zr(4194308,4,e,t)},useInsertionEffect:function(e,t){return zr(4,2,e,t)},useMemo:function(e,t){var n=Ue();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=Ue();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=Cd.bind(null,A,e),[r.memoizedState,e]},useRef:function(e){var t=Ue();return e={current:e},t.memoizedState=e},useState:Oo,useDebugValue:Nu,useDeferredValue:function(e){return Ue().memoizedState=e},useTransition:function(){var e=Oo(!1),t=e[0];return e=xd.bind(null,e[1]),Ue().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=A,l=Ue();if(U){if(n===void 0)throw Error(y(407));n=n()}else{if(n=t(),J===null)throw Error(y(349));Tt&30||ga(r,t,n)}l.memoizedState=n;var i={value:n,getSnapshot:t};return l.queue=i,Ro(ka.bind(null,r,i,e),[e]),r.flags|=2048,Gn(9,wa.bind(null,r,i,n,t),void 0,null),n},useId:function(){var e=Ue(),t=J.identifierPrefix;if(U){var n=Qe,r=We;n=(r&~(1<<32-Ie(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=Xn++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=u.createElement(n,{is:r.is}):(e=u.createElement(n),n==="select"&&(u=e,r.multiple?u.multiple=!0:r.size&&(u.size=r.size))):e=u.createElementNS(e,n),e[$e]=t,e[Wn]=r,Ha(e,t,!1,!1),t.stateNode=e;e:{switch(u=ci(n,r),n){case"dialog":D("cancel",e),D("close",e),l=r;break;case"iframe":case"object":case"embed":D("load",e),l=r;break;case"video":case"audio":for(l=0;lon&&(t.flags|=128,r=!0,wn(i,!1),t.lanes=4194304)}else{if(!r)if(e=qr(u),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),wn(i,!0),i.tail===null&&i.tailMode==="hidden"&&!u.alternate&&!U)return le(t),null}else 2*Q()-i.renderingStartTime>on&&n!==1073741824&&(t.flags|=128,r=!0,wn(i,!1),t.lanes=4194304);i.isBackwards?(u.sibling=t.child,t.child=u):(n=i.last,n!==null?n.sibling=u:t.child=u,i.last=u)}return i.tail!==null?(t=i.tail,i.rendering=t,i.tail=t.sibling,i.renderingStartTime=Q(),t.sibling=null,n=$.current,M($,r?n&1|2:n&1),t):(le(t),null);case 22:case 23:return Ru(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?ve&1073741824&&(le(t),t.subtreeFlags&6&&(t.flags|=8192)):le(t),null;case 24:return null;case 25:return null}throw Error(y(156,t.tag))}function Md(e,t){switch(du(t),t.tag){case 1:return me(t.type)&&Wr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return ln(),j(pe),j(ue),Su(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return ku(t),null;case 13:if(j($),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(y(340));nn()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return j($),null;case 4:return ln(),null;case 10:return vu(t.type._context),null;case 22:case 23:return Ru(),null;case 24:return null;default:return null}}var gr=!1,ie=!1,Dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Kt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){B(e,t,r)}else n.current=null}function Fi(e,t,n){try{n()}catch(r){B(e,t,r)}}var Vo=!1;function jd(e,t){if(ki=Ar,e=Gs(),cu(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{n.nodeType,i.nodeType}catch{n=null;break e}var 
u=0,o=-1,s=-1,c=0,h=0,m=e,p=null;t:for(;;){for(var g;m!==n||l!==0&&m.nodeType!==3||(o=u+l),m!==i||r!==0&&m.nodeType!==3||(s=u+r),m.nodeType===3&&(u+=m.nodeValue.length),(g=m.firstChild)!==null;)p=m,m=g;for(;;){if(m===e)break t;if(p===n&&++c===l&&(o=u),p===i&&++h===r&&(s=u),(g=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=g}n=o===-1||s===-1?null:{start:o,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(Si={focusedElem:e,selectionRange:n},Ar=!1,S=t;S!==null;)if(t=S,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,S=e;else for(;S!==null;){t=S;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,F=w.memoizedState,f=t.stateNode,a=f.getSnapshotBeforeUpdate(t.elementType===t.type?k:Te(t.type,k),F);f.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=t.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){B(t,t.return,v)}if(e=t.sibling,e!==null){e.return=t.return,S=e;break}S=t.return}return w=Vo,Vo=!1,w}function On(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Fi(t,n,i)}l=l.next}while(l!==r)}}function ml(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function Ui(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Ka(e){var t=e.alternate;t!==null&&(e.alternate=null,Ka(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[$e],delete t[Wn],delete t[Ci],delete t[gd],delete t[wd])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Xa(e){return e.tag===5||e.tag===3||e.tag===4}function Bo(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Xa(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function $i(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Hr));else if(r!==4&&(e=e.child,e!==null))for($i(e,t,n),e=e.sibling;e!==null;)$i(e,t,n),e=e.sibling}function Ai(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Ai(e,t,n),e=e.sibling;e!==null;)Ai(e,t,n),e=e.sibling}var b=null,Oe=!1;function Je(e,t,n){for(n=n.child;n!==null;)Ya(e,t,n),n=n.sibling}function Ya(e,t,n){if(Ae&&typeof Ae.onCommitFiberUnmount=="function")try{Ae.onCommitFiberUnmount(ul,n)}catch{}switch(n.tag){case 5:ie||Kt(n,t);case 6:var r=b,l=Oe;b=null,Je(e,t,n),b=r,Oe=l,b!==null&&(Oe?(e=b,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):b.removeChild(n.stateNode));break;case 18:b!==null&&(Oe?(e=b,n=n.stateNode,e.nodeType===8?Al(e.parentNode,n):e.nodeType===1&&Al(e,n),$n(e)):Al(b,n.stateNode));break;case 4:r=b,l=Oe,b=n.stateNode.containerInfo,Oe=!0,Je(e,t,n),b=r,Oe=l;break;case 0:case 11:case 
14:case 15:if(!ie&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,u=i.destroy;i=i.tag,u!==void 0&&(i&2||i&4)&&Fi(n,t,u),l=l.next}while(l!==r)}Je(e,t,n);break;case 1:if(!ie&&(Kt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(o){B(n,t,o)}Je(e,t,n);break;case 21:Je(e,t,n);break;case 22:n.mode&1?(ie=(r=ie)||n.memoizedState!==null,Je(e,t,n),ie=r):Je(e,t,n);break;default:Je(e,t,n)}}function Ho(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new Dd),t.forEach(function(r){var l=Qd.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Le(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=u),r&=~i}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*Ud(r/1960))-r,10e?16:e,lt===null)var r=!1;else{if(e=lt,lt=null,nl=0,R&6)throw Error(y(331));var l=R;for(R|=4,S=e.current;S!==null;){var i=S,u=i.child;if(S.flags&16){var o=i.deletions;if(o!==null){for(var s=0;sQ()-Tu?Nt(e,0):Lu|=n),he(e,t)}function nc(e,t){t===0&&(e.mode&1?(t=ar,ar<<=1,!(ar&130023424)&&(ar=4194304)):t=1);var n=se();e=Ge(e,t),e!==null&&(Jn(e,t,n),he(e,n))}function Wd(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),nc(e,n)}function Qd(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(t),nc(e,n)}var rc;rc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||pe.current)de=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return de=!1,Rd(e,t,n);de=!!(e.flags&131072)}else de=!1,U&&t.flags&1048576&&ua(t,Xr,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;Lr(e,t),e=t.pendingProps;var l=tn(t,ue.current);Jt(t,n),l=xu(null,t,r,e,l,n);var i=Cu();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,me(r)?(i=!0,Qr(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,gu(t),l.updater=dl,t.stateNode=l,l._reactInternals=t,Ti(t,r,e,n),t=Ii(null,t,r,!0,i,n)):(t.tag=0,U&&i&&fu(t),oe(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(Lr(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=Xd(r),e=Te(r,e),l){case 0:t=Ri(null,t,r,e,n);break e;case 1:t=Uo(null,t,r,e,n);break e;case 11:t=jo(null,t,r,e,n);break e;case 14:t=Fo(null,t,r,Te(r.type,e),n);break e}throw Error(y(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Ri(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Uo(e,t,r,l,n);case 3:e:{if(Aa(t),e===null)throw Error(y(387));r=t.pendingProps,i=t.memoizedState,l=i.element,ca(e,t),Zr(t,r,null,n);var u=t.memoizedState;if(r=u.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=un(Error(y(423)),t),t=$o(e,t,r,n,l);break e}else if(r!==l){l=un(Error(y(424)),t),t=$o(e,t,r,n,l);break e}else for(ye=st(t.stateNode.containerInfo.firstChild),ge=t,U=!0,Re=null,n=ma(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(nn(),r===l){t=Ze(e,t,n);break e}oe(e,t,r,n)}t=t.child}return t;case 5:return 
ha(t),e===null&&Pi(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,u=l.children,Ei(r,l)?u=null:i!==null&&Ei(r,i)&&(t.flags|=32),$a(e,t),oe(e,t,u,n),t.child;case 6:return e===null&&Pi(t),null;case 13:return Va(e,t,n);case 4:return wu(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=rn(t,null,r,n):oe(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),jo(e,t,r,l,n);case 7:return oe(e,t,t.pendingProps,n),t.child;case 8:return oe(e,t,t.pendingProps.children,n),t.child;case 12:return oe(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,u=l.value,M(Yr,r._currentValue),r._currentValue=u,i!==null)if(je(i.value,u)){if(i.children===l.children&&!pe.current){t=Ze(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var o=i.dependencies;if(o!==null){u=i.child;for(var s=o.firstContext;s!==null;){if(s.context===r){if(i.tag===1){s=Ke(-1,n&-n),s.tag=2;var c=i.updateQueue;if(c!==null){c=c.shared;var h=c.pending;h===null?s.next=s:(s.next=h.next,h.next=s),c.pending=s}}i.lanes|=n,s=i.alternate,s!==null&&(s.lanes|=n),zi(i.return,n,t),o.lanes|=n;break}s=s.next}}else if(i.tag===10)u=i.type===t.type?null:i.child;else if(i.tag===18){if(u=i.return,u===null)throw Error(y(341));u.lanes|=n,o=u.alternate,o!==null&&(o.lanes|=n),zi(u,n,t),u=i.sibling}else u=i.child;if(u!==null)u.return=i;else for(u=i;u!==null;){if(u===t){u=null;break}if(i=u.sibling,i!==null){i.return=u.return,u=i;break}u=u.return}i=u}oe(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,Jt(t,n),l=Ne(l),r=r(l),t.flags|=1,oe(e,t,r,n),t.child;case 14:return r=t.type,l=Te(r,t.pendingProps),l=Te(r.type,l),Fo(e,t,r,l,n);case 15:return Fa(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Te(r,l),Lr(e,t),t.tag=1,me(r)?(e=!0,Qr(t)):e=!1,Jt(t,n),da(t,r,l),Ti(t,r,l,n),Ii(null,t,r,!0,e,n);case 19:return Ba(e,t,n);case 22:return Ua(e,t,n)}throw Error(y(156,t.tag))};function lc(e,t){return Ts(e,t)}function Kd(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ce(e,t,n,r){return new Kd(e,t,n,r)}function Mu(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Xd(e){if(typeof e=="function")return Mu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===bi)return 11;if(e===eu)return 14}return 2}function dt(e,t){var n=e.alternate;return n===null?(n=Ce(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Rr(e,t,n,r,l,i){var u=2;if(r=e,typeof e=="function")Mu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Ft:return Pt(n.children,l,i,t);case Ji:u=8,l|=8;break;case ei:return e=Ce(12,n,t,l|2),e.elementType=ei,e.lanes=i,e;case ti:return e=Ce(13,n,t,l),e.elementType=ti,e.lanes=i,e;case ni:return 
e=Ce(19,n,t,l),e.elementType=ni,e.lanes=i,e;case ps:return vl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case fs:u=10;break e;case ds:u=9;break e;case bi:u=11;break e;case eu:u=14;break e;case be:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return t=Ce(u,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Pt(e,t,n,r){return e=Ce(7,e,r,t),e.lanes=n,e}function vl(e,t,n,r){return e=Ce(22,e,r,t),e.elementType=ps,e.lanes=n,e.stateNode={isHidden:!1},e}function Yl(e,t,n){return e=Ce(6,e,null,t),e.lanes=n,e}function Gl(e,t,n){return t=Ce(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function Yd(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Ll(0),this.expirationTimes=Ll(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Ll(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Du(e,t,n,r,l,i,u,o,s){return e=new Yd(e,t,n,o,s),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Ce(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},gu(i),e}function Gd(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(t)}catch(n){console.error(n)}}t(),e.exports=ke})(Gc);var sc,qo=ql;sc=qo.createRoot,qo.hydrateRoot;const K=new us;console.log("!",{hfInference:K,a:Object.keys(K),b:Object.getOwnPropertyNames(us)});const ep=["audio-classification","audio-to-audio","automatic-speech-recognition","conversational","depth-estimation","document-question-answering","feature-extraction","fill-mask","graph-ml","image-classification","image-segmentation","image-to-image","image-to-text","multiple-choice","object-detection","other","question-answering","reinforcement-learning","robotics","sentence-similarity","summarization","table-question-answering","table-to-text","tabular-classification","tabular-regression","tabular-to-text","text-classification","text-generation","text-retrieval","text-to-image","text-to-speech","text2text-generation","time-series-forecasting","token-classification","translation","unconditional-image-generation","video-classification","visual-question-answering","voice-activity-detection","zero-shot-classification","zero-shot-image-classification"].filter(e=>Object.getOwnPropertyNames(Object.getPrototypeOf(K)).includes(Yc(e))),Zl={},tp=async e=>{if(Zl[e])return Zl[e];const t=[];for await(const n of Vc({search:{task:e}}))t.push(n);return Zl[e]=t,t},np=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Task"}),De("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.setTask(t.target.value),placeholder:"Select a task",value:e.task,children:[L("option",{children:"Select a task"}),ep.map(t=>L("option",{value:t,children:t},t))]})]}),rp=e=>{const[t,n]=ee.useState(!1),[r,l]=ee.useState([]);return ee.useEffect(()=>{e.task&&(n(!0),tp(e.task).then(i=>l(i)).finally(()=>n(!1)))},[e.task]),r.length>0?De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Model"}),De("select",{className:"bg-yellow-200 cursor-pointer 
py-6 text-center w-full",onChange:i=>e.setModel(i.target.value),placeholder:"Select a model",value:e.model,children:[L("option",{children:"Select a model"}),r.map(i=>L("option",{value:i.name,children:i.name},i.name))]})]}):L("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},lp=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),e.inputs?L("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.inputs)}):De("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",L("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),ip=e=>De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),e.inputs?L("img",{className:"w-full",src:URL.createObjectURL(e.inputs)}):De("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",L("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInputs(t.target.files[0])},type:"file"})]})]}),up=e=>e.model&&e.task?["audio-classification","automatic-speech-recognition"].includes(e.task)?L(lp,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["image-classification","image-segmentation","object-detection"].includes(e.task)?L(ip,{inputs:e.inputs,model:e.model,setInputs:e.setInputs,task:e.task}):["conversational","feature-extraction","fill-mask","question-answering","summarization","table-question-answering","text-classification","text-generation","text-to-image","token-classification","translation","zero-shot-classification"].includes(e.task)?De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Inputs"}),L("input",{className:"bg-yellow-200 py-6 text-center w-full",onChange:t=>{t.target.value?e.setInputs(t.target.value):e.setInputs("")},type:"text",value:e.inputs??""})]}):L("div",{className:"w-full",children:L("p",{className:"text-center",children:"Inference for this task is not yet supported."})}):L(ee.Fragment,{}),op=e=>{if(e.inputs&&e.model&&e.task){const t=()=>{e.setInputs(void 0),e.setOutput(void 0)};return L("button",{className:`border-4 border-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:"Clear"})}return L(ee.Fragment,{})},sp=e=>{if(e.inputs&&e.model&&e.task){const t=async()=>{if(e.inputs&&e.model&&e.task){e.setLoading(!0);try{switch(e.task){case"audio-classification":{const n=await K.audioClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"automatic-speech-recognition":{const n=await K.automaticSpeechRecognition({data:e.inputs,model:e.model});e.setOutput(n);break}case"conversational":{const n=await K.conversational({inputs:{text:e.inputs},model:e.model});e.setOutput(n);break}case"feature-extraction":{const n=await K.featureExtraction({inputs:{[e.inputs]:e.inputs},model:e.model});e.setOutput(n);break}case"fill-mask":{const n=await K.fillMask({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"image-classification":{const n=await K.imageClassification({data:e.inputs,model:e.model});e.setOutput(n);break}case"image-segmentation":{const n=await K.imageSegmentation({data:e.inputs,model:e.model});e.setOutput(n);break}case"object-detection":{const n=await 
K.objectDetection({data:e.inputs,model:e.model});e.setOutput(n);break}case"question-answering":{const n=await K.questionAnswer({inputs:{context:e.inputs,question:e.inputs},model:e.model});e.setOutput(n);break}case"summarization":{const n=await K.summarization({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"table-question-answering":{const n=await K.tableQuestionAnswer({inputs:{query:e.inputs,table:{[e.inputs]:[e.inputs]}},model:e.model});e.setOutput(n);break}case"text-classification":{const n=await K.textClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-generation":{const n=await K.textGeneration({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"text-to-image":{const n=await K.textToImage({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"token-classification":{const n=await K.tokenClassification({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"translation":{const n=await K.translation({inputs:e.inputs,model:e.model});e.setOutput(n);break}case"zero-shot-classification":{const n=await K.zeroShotClassification({inputs:e.inputs,model:e.model,parameters:{candidate_labels:[e.inputs]}});e.setOutput(n);break}}}catch(n){n instanceof Error&&e.setOutput(n.message)}e.setLoading(!1)}};return L("button",{className:`bg-yellow-200 py-6 text-center w-full ${e.loading?"cursor-not-allowed opacity-50":""}`,disabled:e.loading,onClick:t,children:e.loading?"Submitting":"Submit"})}return L(ee.Fragment,{})},ap=e=>{if(e.output){const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during JSON.stringify: ${n.message}`}})();return De("div",{className:"w-full",children:[L("p",{className:"text-xl",children:"Output"}),L("pre",{className:`bg-yellow-200 p-6 w-full whitespace-pre-wrap ${e.loading?"cursor-wait opacity-50":""}`,children:t})]})}return L(ee.Fragment,{})},cp=()=>{const[e,t]=ee.useState(),[n,r]=ee.useState(),[l,i]=ee.useState(),[u,o]=ee.useState(!1),[s,c]=ee.useState();return L("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:De("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[L("header",{className:"text-center text-6xl",children:"🤗"}),L(np,{setTask:t,task:e}),L(rp,{model:n,setModel:r,task:e}),L(up,{inputs:l,model:n,setInputs:i,task:e}),L(op,{inputs:l,loading:u,model:n,setInputs:i,setOutput:c,task:e}),L(sp,{inputs:l,loading:u,model:n,setLoading:o,setOutput:c,task:e}),L(ap,{loading:u,output:s})]})})},fp=()=>{const e="root",t=document.getElementById(e);if(t){const n=sc(t),r=L(ee.StrictMode,{children:L(cp,{})});n.render(r)}};fp();
diff --git a/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/dl_model.py b/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/dl_model.py
deleted file mode 100644
index 76ecb679085a96544d3bbf0ecff2a6fa371bd181..0000000000000000000000000000000000000000
--- a/spaces/mikeee/wizardlm-1.0-uncensored-llama2-13b-ggmlv3/dl_model.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""Download modles."""
-# pylint: disable=invalid-name, broad-exception-caught, line-too-long
-from typing import Optional
-
-import typer
-from dl_hf_model import dl_hf_model
-from loguru import logger
-
-url = "https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGML/blob/main/upstage-llama-2-70b-instruct-v2.ggmlv3.q3_K_S.bin"
-
-
-__version__ = "0.0.1"
-
-app = typer.Typer(
- name="dl-mode",
- add_completion=False,
- help="donwload models from hf and save to a dir (default models)",
-)
-
-
-def _version_callback(value: bool) -> None:
- if value:
- typer.echo(
- f"{app.info.name} v.{__version__} -- download models for given url(s)"
- )
- raise typer.Exit()
-
-
-@app.command()
-def main(
- urls: str = typer.Argument( # pylint: disable=unused-argument
- "",
- help=f"one or more urls (default {url})",
- show_default=False,
- ),
- version: Optional[bool] = typer.Option( # pylint: disable=unused-argument
- None,
- "--version",
- "-v",
- "-V",
- help="Show version info and exit.",
- callback=_version_callback,
- is_eager=True,
- ),
- model_dir: Optional[str] = typer.Option(
- None,
- "--mode-dir",
- help="dir to save downloaded models (default models)",
- ),
-):
- """Download a model or model given url(s)."""
- logger.trace(f"{urls}")
- if model_dir is None:
- model_dir = "models"
- if isinstance(urls, str):
-        urls = urls.split()
-
- url_list = urls[:]
- if not urls:
- url_list = [url]
- try:
- for elm in url_list:
- dl_hf_model(elm)
- except Exception as exc:
- logger.error(exc)
- raise typer.Exit()
-
-
-if __name__ == "__main__":
- try:
- app()
- except Exception as exc_:
- logger.error(exc_)
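For context, the deleted script above reduces to a few lines once the Typer wrapper is stripped away: split the URL argument, fall back to the default URL, and hand each entry to `dl_hf_model`. The sketch below assumes the `dl_hf_model` package is importable; note that `model_dir` is parsed but never forwarded to `dl_hf_model` in the original script, so the download location is whatever that package defaults to.

```python
# Hedged sketch of what main() above effectively does (no Typer, no logging).
# Assumes dl_hf_model is installed; DEFAULT_URL is the script's module-level default.
from dl_hf_model import dl_hf_model

DEFAULT_URL = (
    "https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGML/"
    "blob/main/upstage-llama-2-70b-instruct-v2.ggmlv3.q3_K_S.bin"
)

urls = ""                                  # raw CLI argument; empty means "use the default"
url_list = urls.split() or [DEFAULT_URL]   # one or more whitespace-separated URLs
for elm in url_list:
    dl_hf_model(elm)                       # download each model file
```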
diff --git a/spaces/ml6team/logo-generator/dalle/models/stage2/layers.py b/spaces/ml6team/logo-generator/dalle/models/stage2/layers.py
deleted file mode 100644
index 43b7c9d584f35eb0e6fc8a7a4477a72bec58caa9..0000000000000000000000000000000000000000
--- a/spaces/ml6team/logo-generator/dalle/models/stage2/layers.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# ------------------------------------------------------------------------------------
-# Minimal DALL-E
-# Copyright (c) 2021 KakaoBrain. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------
-# Modified from minGPT (https://github.com/karpathy/minGPT)
-# Copyright (c) 2020 Andrej Karpathy. All Rights Reserved.
-# ------------------------------------------------------------------------------------
-
-import math
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-
-class GELU(nn.Module):
- def __init__(self, use_approx=False):
- super().__init__()
- self.use_approx = use_approx
-
- def forward(self, x):
- if self.use_approx:
- return x * torch.sigmoid(1.702 * x)
- else:
- return F.gelu(x)
-
-
-class MultiHeadSelfAttention(nn.Module):
-
- def __init__(self,
- ctx_len: int,
- embed_dim: int,
- n_heads: int,
- resid_pdrop: float,
- attn_pdrop: float,
- attn_bias: bool,
- use_mask: bool = True):
- super().__init__()
- assert embed_dim % n_heads == 0
-
- # key, query, value projections for all heads
- self.key = nn.Linear(embed_dim, embed_dim, bias=attn_bias)
- self.query = nn.Linear(embed_dim, embed_dim, bias=attn_bias)
- self.value = nn.Linear(embed_dim, embed_dim, bias=attn_bias)
-
- # regularization
- self.attn_drop = nn.Dropout(attn_pdrop)
- self.resid_drop = nn.Dropout(resid_pdrop)
-
- # output projection
- self.proj = nn.Linear(embed_dim, embed_dim, attn_bias)
-
- self.n_heads = n_heads
- self.ctx_len = ctx_len
- self.use_mask = use_mask
- if self.use_mask:
- self.register_buffer("mask", torch.ones(ctx_len, ctx_len), persistent=False)
- self.mask = torch.tril(self.mask).view(1, ctx_len, ctx_len)
-
- def forward(self, x, use_cache=False, layer_past=None):
- B, T, C = x.shape
- x = x.transpose(0, 1).contiguous() # (B, T, C) -> (T, B, C)
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- k = self.key(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
- q = self.query(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
- v = self.value(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
-
- if use_cache:
- present = torch.stack([k, v])
-
- if layer_past is not None:
- past_key, past_value = layer_past
- k = torch.cat([past_key, k], dim=-2)
- v = torch.cat([past_value, v], dim=-2)
-
- if use_cache and layer_past is not None:
- # Tensor shape below: (B * nh, 1, hs) X (B * nh, hs, K) -> (B * nh, 1, K)
- att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = torch.bmm(att, v) # (B*nh, 1, K) X (B*nh, K, hs) -> (B*nh, 1, hs)
- else:
- # Tensor shape below: (B * nh, T, hs) X (B * nh, hs, T) -> (B * nh, T, T)
- att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))))
- if self.use_mask:
- mask = self.mask if T == self.ctx_len else self.mask[:, :T, :T]
- att = att.masked_fill(mask == 0, float('-inf'))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = torch.bmm(att, v) # (B*nh, T, T) X (B*nh, T, hs) -> (B*nh, T, hs)
- y = y.transpose(0, 1).contiguous().view(T, B, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y))
- if use_cache:
- return y.transpose(0, 1).contiguous(), present # (T, B, C) -> (B, T, C)
- else:
- return y.transpose(0, 1).contiguous() # (T, B, C) -> (B, T, C)
-
-
-class Block(nn.Module):
-
-    def __init__(self,
-                 ctx_len: int,
-                 embed_dim: int,
-                 n_heads: int,
-                 mlp_bias: bool,
-                 attn_bias: bool,
-                 resid_pdrop: float,
-                 attn_pdrop: float,
-                 gelu_use_approx: bool):
- super().__init__()
- self.ln1 = nn.LayerNorm(embed_dim)
- self.ln2 = nn.LayerNorm(embed_dim)
-
- self.attn = MultiHeadSelfAttention(ctx_len=ctx_len,
- embed_dim=embed_dim,
- n_heads=n_heads,
- attn_pdrop=attn_pdrop,
- resid_pdrop=resid_pdrop,
- attn_bias=attn_bias,
- use_mask=True)
- self.mlp = nn.Sequential(
- nn.Linear(embed_dim, 4 * embed_dim, bias=mlp_bias),
- GELU(gelu_use_approx),
- nn.Linear(4 * embed_dim, embed_dim, bias=mlp_bias),
- nn.Dropout(resid_pdrop),
- )
-
- def forward(self, x):
- x = x + self.attn(self.ln1(x))
- x = x + self.mlp(self.ln2(x))
- return x
-
- def sample(self, x, layer_past=None):
- attn, present = self.attn(self.ln1(x), use_cache=True, layer_past=layer_past)
- x = x + attn
- x = x + self.mlp(self.ln2(x))
- return x, present
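As a quick orientation to how these pieces compose, here is a minimal, hypothetical instantiation of the `Block` defined above. The hyperparameter values and the import path are placeholders; only the constructor signature and the two call paths (`forward` and `sample`) come from the code itself.

```python
# Minimal sketch: one pre-LN transformer block from this module, with toy sizes.
import torch
from dalle.models.stage2.layers import Block  # assumed import path

block = Block(ctx_len=64, embed_dim=128, n_heads=8,
              mlp_bias=True, attn_bias=True,
              resid_pdrop=0.1, attn_pdrop=0.1,
              gelu_use_approx=False)

x = torch.randn(2, 64, 128)        # (batch, sequence length, embedding dim)
y = block(x)                       # residual attention + MLP; same shape as x
y_step, present = block.sample(x)  # cached path; also returns stacked (k, v) for reuse
```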
diff --git a/spaces/mmcquade11/codex-text-summarizer/README.md b/spaces/mmcquade11/codex-text-summarizer/README.md
deleted file mode 100644
index 5680f2aa5b018812dbbc1abb8c9a1eac1fc2ac0d..0000000000000000000000000000000000000000
--- a/spaces/mmcquade11/codex-text-summarizer/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Codex Text Summarizer
-emoji: 🦀
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/mms-meta/MMS/vits/transforms.py b/spaces/mms-meta/MMS/vits/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/mms-meta/MMS/vits/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
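A small, hypothetical round-trip through the unconstrained spline above may help with the expected shapes: with `tails='linear'`, the derivative tensor carries `num_bins - 1` interior knots, since the two boundary derivatives are padded in internally. Shapes, values, and the import path below are illustrative only.

```python
# Hedged sketch: forward and inverse pass through the piecewise rational quadratic spline.
import torch
from vits.transforms import piecewise_rational_quadratic_transform  # assumed import path

batch, length, num_bins, tail_bound = 4, 100, 10, 5.0
x = torch.randn(batch, length)
widths = torch.randn(batch, length, num_bins)       # unnormalized bin widths
heights = torch.randn(batch, length, num_bins)      # unnormalized bin heights
derivs = torch.randn(batch, length, num_bins - 1)   # unnormalized interior derivatives

y, logabsdet = piecewise_rational_quadratic_transform(
    x, widths, heights, derivs,
    inverse=False, tails="linear", tail_bound=tail_bound)

x_back, inv_logabsdet = piecewise_rational_quadratic_transform(
    y, widths, heights, derivs,
    inverse=True, tails="linear", tail_bound=tail_bound)
# x_back ≈ x and inv_logabsdet ≈ -logabsdet, up to numerical error
```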
diff --git a/spaces/mnauf/detect-bees/utils/aws/resume.py b/spaces/mnauf/detect-bees/utils/aws/resume.py
deleted file mode 100644
index b21731c979a121ab8227280351b70d6062efd983..0000000000000000000000000000000000000000
--- a/spaces/mnauf/detect-bees/utils/aws/resume.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Resume all interrupted trainings in yolov5/ dir including DDP trainings
-# Usage: $ python utils/aws/resume.py
-
-import os
-import sys
-from pathlib import Path
-
-import torch
-import yaml
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[2] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-port = 0 # --master_port
-path = Path('').resolve()
-for last in path.rglob('*/**/last.pt'):
- ckpt = torch.load(last)
- if ckpt['optimizer'] is None:
- continue
-
- # Load opt.yaml
- with open(last.parent.parent / 'opt.yaml', errors='ignore') as f:
- opt = yaml.safe_load(f)
-
- # Get device count
- d = opt['device'].split(',') # devices
- nd = len(d) # number of devices
- ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel
-
- if ddp: # multi-GPU
- port += 1
- cmd = f'python -m torch.distributed.run --nproc_per_node {nd} --master_port {port} train.py --resume {last}'
- else: # single-GPU
- cmd = f'python train.py --resume {last}'
-
-    cmd += ' > /dev/null 2>&1 &'  # redirect output to /dev/null and run in the background
- print(cmd)
- os.system(cmd)
diff --git a/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js b/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js
deleted file mode 100644
index b81b605684da5373137dcdf31265f0f7e6e33b6d..0000000000000000000000000000000000000000
--- a/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js
+++ /dev/null
@@ -1 +0,0 @@
-(self.webpackChunk_jupyter_widgets_jupyterlab_manager=self.webpackChunk_jupyter_widgets_jupyterlab_manager||[]).push([[150],{6110:(e,t,o)=>{"use strict";o.r(t),o.d(t,{CONTROL_COMM_PROTOCOL_VERSION:()=>g,CONTROL_COMM_TARGET:()=>f,CONTROL_COMM_TIMEOUT:()=>p,ManagerBase:()=>v,base64ToBuffer:()=>d,bufferToBase64:()=>m,bufferToHex:()=>a,hexToBuffer:()=>i,serialize_state:()=>b});var s=o(9930),n=o(1526),r=o(5766);const l=["00","01","02","03","04","05","06","07","08","09","0A","0B","0C","0D","0E","0F","10","11","12","13","14","15","16","17","18","19","1A","1B","1C","1D","1E","1F","20","21","22","23","24","25","26","27","28","29","2A","2B","2C","2D","2E","2F","30","31","32","33","34","35","36","37","38","39","3A","3B","3C","3D","3E","3F","40","41","42","43","44","45","46","47","48","49","4A","4B","4C","4D","4E","4F","50","51","52","53","54","55","56","57","58","59","5A","5B","5C","5D","5E","5F","60","61","62","63","64","65","66","67","68","69","6A","6B","6C","6D","6E","6F","70","71","72","73","74","75","76","77","78","79","7A","7B","7C","7D","7E","7F","80","81","82","83","84","85","86","87","88","89","8A","8B","8C","8D","8E","8F","90","91","92","93","94","95","96","97","98","99","9A","9B","9C","9D","9E","9F","A0","A1","A2","A3","A4","A5","A6","A7","A8","A9","AA","AB","AC","AD","AE","AF","B0","B1","B2","B3","B4","B5","B6","B7","B8","B9","BA","BB","BC","BD","BE","BF","C0","C1","C2","C3","C4","C5","C6","C7","C8","C9","CA","CB","CC","CD","CE","CF","D0","D1","D2","D3","D4","D5","D6","D7","D8","D9","DA","DB","DC","DD","DE","DF","E0","E1","E2","E3","E4","E5","E6","E7","E8","E9","EA","EB","EC","ED","EE","EF","F0","F1","F2","F3","F4","F5","F6","F7","F8","F9","FA","FB","FC","FD","FE","FF"];function a(e){const t=new Uint8Array(e),o=[];for(let e=0;e/g,">");for(navigator&&"Microsoft Internet Explorer"===navigator.appName&&(r=r.replace(/(%[^\n]*)\n/g,"$1 \n"));t>e;)n[t]="",t--;return n[e]="@@"+s.length+"@@",o&&(r=o(r)),s.push(r),n}var u=o(4330),h=o.n(u);const w=s.PROTOCOL_VERSION.split(".",1)[0],f="jupyter.widget.control",g="1.0.0",p=4e3;class v{constructor(){this.comm_target_name="jupyter.widget",this._models=Object.create(null)}setViewOptions(e={}){return e}create_view(e,t={}){const o=(0,s.uuid)(),n=e.state_change=e.state_change.then((async()=>{const n=e.get("_view_name"),r=e.get("_view_module");try{const s=new(await this.loadViewClass(n,r,e.get("_view_module_version")))({model:e,options:this.setViewOptions(t)});return s.listenTo(e,"destroy",s.remove),await s.render(),s.once("remove",(()=>{e.views&&delete e.views[o]})),s}catch(o){console.error(`Could not create a view for model id ${e.model_id}`);const l=`Failed to create view for '${n}' from module '${r}' with model '${e.name}' from module '${e.module}'`,a=new(s.createErrorWidgetModel(o,l)),i=new s.ErrorWidgetView({model:a,options:this.setViewOptions(t)});return await i.render(),i}}));return e.views&&(e.views[o]=n),n}callbacks(e){return{}}async get_model(e){const t=this._models[e];if(void 0===t)throw new Error("widget model not found");return t}has_model(e){return void 0!==this._models[e]}handle_comm_open(e,t){const o=(t.metadata||{}).version||"";if(o.split(".",1)[0]!==w){const e=`Wrong widget protocol version: received protocol version '${o}', but was expecting major version '${w}'`;return console.error(e),Promise.reject(e)}const 
n=t.content.data,r=n.buffer_paths||[],l=t.buffers||[];return(0,s.put_buffers)(n.state,r,l),this.new_model({model_name:n.state._model_name,model_module:n.state._model_module,model_module_version:n.state._model_module_version,comm:e},n.state).catch((0,s.reject)("Could not create a model.",!0))}new_widget(e,t={}){let o;if(void 0===e.view_name||void 0===e.view_module||void 0===e.view_module_version)return Promise.reject("new_widget(...) must be given view information in the options.");o=e.comm?Promise.resolve(e.comm):this._create_comm(this.comm_target_name,e.model_id,{state:{_model_module:e.model_module,_model_module_version:e.model_module_version,_model_name:e.model_name,_view_module:e.view_module,_view_module_version:e.view_module_version,_view_name:e.view_name}},{version:s.PROTOCOL_VERSION});const n=Object.assign({},e);return o.then((e=>(n.comm=e,this.new_model(n,t).then((e=>(e.sync("create",e),e))))),(()=>(n.model_id||(n.model_id=(0,s.uuid)()),this.new_model(n,t))))}register_model(e,t){this._models[e]=t,t.then((t=>{t.once("comm:close",(()=>{delete this._models[e]}))}))}async new_model(e,t={}){var o,s;const n=null!==(o=e.model_id)&&void 0!==o?o:null===(s=e.comm)||void 0===s?void 0:s.comm_id;if(!n)throw new Error("Neither comm nor model_id provided in options object. At least one must exist.");e.model_id=n;const r=this._make_model(e,t);return this.register_model(n,r),await r}async _loadFromKernel(){let e,t;try{const o=await this._create_comm(f,(0,s.uuid)(),{},{version:g});await new Promise(((s,n)=>{o.on_msg((o=>{e=o.content.data,"update_states"===e.method?(t=(o.buffers||[]).map((e=>e instanceof DataView?e:new DataView(e instanceof ArrayBuffer?e:e.buffer))),s(null)):console.warn(`\n Unknown ${e.method} message on the Control channel\n `)})),o.on_close((()=>n("Control comm was closed too early"))),o.send({method:"request_states"},{}),setTimeout((()=>n("Control comm did not respond in time")),p)})),o.close()}catch(e){return console.warn('Failed to fetch ipywidgets through the "jupyter.widget.control" comm channel, fallback to fetching individual model state. 
Reason:',e),this._loadFromKernelModels()}const o=e.states,n={},r={};for(let o=0;o({widget_id:e,comm:this.has_model(e)?void 0:await this._create_comm("jupyter.widget",e)}))));await Promise.all(l.map((async({widget_id:e,comm:t})=>{const l=o[e];e in n&&(0,s.put_buffers)(l,n[e],r[e]);try{if(t)await this.new_model({model_name:l.model_name,model_module:l.model_module,model_module_version:l.model_module_version,model_id:e,comm:t},l.state);else{const t=await this.get_model(e),o=await t.constructor._deserialize_state(l.state,this);t.set_state(o)}}catch(e){console.error(e)}})))}async _loadFromKernelModels(){const e=await this._get_comm_info(),t=await Promise.all(Object.keys(e).map((async e=>{if(this.has_model(e))return;const t=await this._create_comm(this.comm_target_name,e);let o="";const r=new n.PromiseDelegate;return t.on_msg((e=>{if(e.parent_header.msg_id===o&&"comm_msg"===e.header.msg_type&&"update"===e.content.data.method){const o=e.content.data,n=o.buffer_paths||[],l=e.buffers||[];(0,s.put_buffers)(o.state,n,l),r.resolve({comm:t,msg:e})}})),o=t.send({method:"request_state"},this.callbacks(void 0)),r.promise})));await Promise.all(t.map((async e=>{if(!e)return;const t=e.msg.content;await this.new_model({model_name:t.data.state._model_name,model_module:t.data.state._model_module,model_module_version:t.data.state._model_module_version,comm:e.comm},t.data.state)})))}async _make_model(e,t={}){const o=e.model_id,n=this.loadModelClass(e.model_name,e.model_module,e.model_module_version);let r;const l=(e,t)=>new(s.createErrorWidgetModel(e,t));try{r=await n}catch(e){const t="Could not instantiate widget";return console.error(t),l(e,t)}if(!r){const t="Could not instantiate widget";return console.error(t),l(new Error(`Cannot find model module ${e.model_module}@${e.model_module_version}, ${e.model_name}`),t)}let a;try{const s=await r._deserialize_state(t,this);a=new r(s,{widget_manager:this,model_id:o,comm:e.comm})}catch(t){console.error(t),a=l(t,`Model class '${e.model_name}' from module '${e.model_module}' is loaded but can not be instantiated`)}return a.name=e.model_name,a.module=e.model_module,a}clear_state(){return(0,s.resolvePromisesDict)(this._models).then((e=>{Object.keys(e).forEach((t=>e[t].close())),this._models=Object.create(null)}))}get_state(e={}){const t=Object.keys(this._models).map((e=>this._models[e]));return Promise.all(t).then((t=>b(t,e)))}set_state(e){if(!(e.version_major&&e.version_major<=2))throw"Unsupported widget state format";const t=e.state;return this._get_comm_info().then((e=>Promise.all(Object.keys(t).map((o=>{const n={base64:d,hex:i},r=t[o],l=r.state;if(r.buffers){const e=r.buffers.map((e=>e.path)),t=r.buffers.map((e=>new DataView(n[e.encoding](e.data))));(0,s.put_buffers)(r.state,e,t)}if(this.has_model(o))return this.get_model(o).then((e=>e.constructor._deserialize_state(l||{},this).then((t=>(e.set_state(t),e)))));const a={model_id:o,model_name:r.model_name,model_module:r.model_module,model_module_version:r.model_module_version};return Object.prototype.hasOwnProperty.call(e,"model_id")?this._create_comm(this.comm_target_name,o).then((e=>(a.comm=e,this.new_model(a)))):this.new_model(a,l)})))))}disconnect(){Object.keys(this._models).forEach((e=>{this._models[e].then((e=>{e.comm_live=!1}))}))}resolveUrl(e){return Promise.resolve(e)}inline_sanitize(e){const t=function(e){const t=[];let 
o,s=null,n=null,r=null,l=0;/`/.test(e)?(e=e.replace(/~/g,"~T").replace(/(^|[^\\])(`+)([^\n]*?[^`\n])\2(?!`)/gm,(e=>e.replace(/\$/g,"~D"))),o=e=>e.replace(/~([TD])/g,((e,t)=>"T"===t?"~":"$"))):o=e=>e;let a=e.replace(/\r\n?/g,"\n").split(c);for(let e=1,i=a.length;e{let o=n[t];return"\\\\("===o.substr(0,3)&&"\\\\)"===o.substr(o.length-3)?o="\\("+o.substring(3,o.length-3)+"\\)":"\\\\["===o.substr(0,3)&&"\\\\]"===o.substr(o.length-3)&&(o="\\["+o.substring(3,o.length-3)+"\\]"),o}))}async loadModelClass(e,t,o){try{const s=this.loadClass(e,t,o);return await s,s}catch(o){console.error(o);const n=`Failed to load model class '${e}' from module '${t}'`;return s.createErrorWidgetModel(o,n)}}async loadViewClass(e,t,o){try{const s=this.loadClass(e,t,o);return await s,s}catch(o){console.error(o);const n=`Failed to load view class '${e}' from module '${t}'`;return s.createErrorWidgetView(o,n)}}filterExistingModelState(e){let t=e.state;return t=Object.keys(t).filter((e=>!this.has_model(e))).reduce(((e,o)=>(e[o]=t[o],e)),{}),Object.assign(Object.assign({},e),{state:t})}}function b(e,t={}){const o={};return e.forEach((e=>{const n=e.model_id,r=(0,s.remove_buffers)(e.serialize(e.get_state(t.drop_defaults))),l=r.buffers.map(((e,t)=>({data:m(e),path:r.buffer_paths[t],encoding:"base64"})));o[n]={model_name:e.name,model_module:e.module,model_module_version:e.get("_model_module_version"),state:r.state},l.length>0&&(o[n].buffers=l)})),{version_major:2,version_minor:0,state:o}}},6527:()=>{},6969:()=>{},2232:()=>{},4195:()=>{},3443:()=>{}}]);
\ No newline at end of file
diff --git a/spaces/mrm8488/PromptSource/templates.py b/spaces/mrm8488/PromptSource/templates.py
deleted file mode 100644
index 52425f26663f0d120b6660a94bee98a085c7cccf..0000000000000000000000000000000000000000
--- a/spaces/mrm8488/PromptSource/templates.py
+++ /dev/null
@@ -1,515 +0,0 @@
-import os
-import random
-import uuid
-from collections import Counter, defaultdict
-from shutil import rmtree
-from typing import Dict, List, Optional, Tuple
-
-import pandas as pd
-import pkg_resources
-import yaml
-from jinja2 import BaseLoader, Environment, meta
-
-
-# Truncation of jinja template variables
-# 1710 = 300 words x 4.7 avg characters per word + 300 spaces
-TEXT_VAR_LENGTH = 2048
-
-# Local path to the folder containing the templates
-TEMPLATES_FOLDER_PATH = pkg_resources.resource_filename(__name__, "templates")
-
-env = Environment(loader=BaseLoader)
-
-# Allow the python function zip()
-env.globals.update(zip=zip)
-
-# These are users whose datasets should be included in the results returned by
-# filter_english_datasets (regardless of their metadata)
-INCLUDED_USERS = {"Zaid", "craffel"}
-
-
-def highlight(input):
- return "" + input + ""
-
-
-def choice(choices):
- return random.choice(choices)
-
-
-def most_frequent(items):
- """Returns the set of items which appear most frequently in the input"""
- if not items:
- return
- item_counts = Counter(items).most_common()
- max_freq = item_counts[0][1]
- most_frequent_items = [c[0] for c in item_counts if c[1] == max_freq]
- return most_frequent_items
-
-
-env.filters["highlight"] = highlight
-env.filters["choice"] = choice
-env.filters["most_frequent"] = most_frequent
-
-
-class Template(yaml.YAMLObject):
- """
- A prompt template.
- """
-
- yaml_tag = "!Template"
-
- def __init__(self, name, jinja, reference, metadata=None, answer_choices=None):
- """
- Creates a prompt template.
-
- A prompt template is expressed in Jinja. It is rendered using an example
- from the corresponding Hugging Face datasets library (a dictionary). The
- separator ||| should appear once to divide the template into prompt and
- output. Generally, the prompt should provide information on the desired
- behavior, e.g., text passage and instructions, and the output should be
- a desired response.
-
- :param name: unique name (per dataset) for template
- :param jinja: template expressed in Jinja
- :param reference: string describing author or paper reference for template
- :param metadata: a Metadata object with template annotations
- :param answer_choices: Jinja expression for answer choices. Should produce
- a ||| delimited string of choices that enumerates
- the possible completions for templates that should
- be evaluated as ranked completions. If None, then
- the template is open-ended. This list is accessible
- from within Jinja as the variable `answer_choices`.
- """
-        """
-        self.id = str(uuid.uuid4())
- self.name = name
- self.jinja = jinja
- self.reference = reference
- self.metadata = metadata if metadata is not None else Template.Metadata()
- self.answer_choices = answer_choices
-
- def get_id(self):
- """
- Returns the id of the template
-
- :return: unique id for template
- """
- return self.id
-
- def get_name(self):
- """
- Returns the name of the template
-
- :return: unique (per dataset) name for template
- """
- return self.name
-
- def get_reference(self):
- """
- Returns the bibliographic reference (or author) for the template
-
- :return: reference as a string
- """
- return self.reference
-
- def get_answer_choices_expr(self):
- """
- Returns a Jinja expression for computing the answer choices from an example.
-
- :return: String, or None if no answer choices
- """
- return self.answer_choices
-
- def get_answer_choices_list(self, example):
- """
- Returns a list of answer choices for a given example
-
- :return: list of strings, or None if get_answer_choices_expr is None
- """
- jinja = self.get_answer_choices_expr()
- if jinja is None:
- return None
-
- rtemplate = env.from_string(jinja)
- protected_example = self._escape_pipe(example)
- rendered_choices = rtemplate.render(**protected_example)
- return [self._unescape_pipe(answer_choice.strip()) for answer_choice in rendered_choices.split("|||")]
-
- def get_fixed_answer_choices_list(self):
- """
- Returns a list of answer choices that is static across examples, if possible
-
- :return: list of strings, or None if no static list exists
- """
- jinja = self.get_answer_choices_expr()
- if jinja is None:
- return None
-
- parse = env.parse(jinja)
- variables = meta.find_undeclared_variables(parse)
- if len(variables) == 0:
- rtemplate = env.from_string(jinja)
- rendered_choices = rtemplate.render()
- return [answer_choice.strip() for answer_choice in rendered_choices.split("|||")]
- else:
- return None
-
- def apply(self, example, truncate=True, highlight_variables=False):
- """
- Creates a prompt by applying this template to an example
-
- :param example: the dataset example to create a prompt for
- :param truncate: if True, example fields will be truncated to TEXT_VAR_LENGTH chars
- :param highlight_variables: highlight the added variables
- :return: tuple of 2 strings, for prompt and output
- """
- jinja = self.jinja
-
- # Truncates the prompt if needed
- if truncate:
- trunc_command = (
- f" | string | truncate({TEXT_VAR_LENGTH}) }}}}" # Escaping curly braces requires doubling them
- )
- jinja = jinja.replace("}}", trunc_command)
-
- # Highlights text that was substituted for variables, if requested
- if highlight_variables:
- jinja = jinja.replace("}}", " | highlight }}")
- rtemplate = env.from_string(jinja)
-
- protected_example = self._escape_pipe(example)
-
- # Adds in answer_choices variable
- if "answer_choices" in protected_example:
- raise ValueError("Example contains the restricted key 'answer_choices'.")
-
- protected_example["answer_choices"] = self.get_answer_choices_list(example)
-
- # Renders the Jinja template
- rendered_example = rtemplate.render(**protected_example)
-
- # Splits on the separator, and then replaces back any occurrences of the
- # separator in the original example
- return [self._unescape_pipe(part).strip() for part in rendered_example.split("|||")]
-
- pipe_protector = "3ed2dface8203c4c9dfb1a5dc58e41e0"
-
- @classmethod
- def _escape_pipe(cls, example):
-        # Replaces any occurrences of the "|||" separator in the example,
-        # which will be replaced back after splitting
- protected_example = {
- key: value.replace("|||", cls.pipe_protector) if isinstance(value, str) else value
- for key, value in example.items()
- }
- return protected_example
-
- @classmethod
- def _unescape_pipe(cls, string):
- # replaces back any occurrences of the separator in a string
- return string.replace(cls.pipe_protector, "|||")
-
- class Metadata(yaml.YAMLObject):
- """
- Metadata for a prompt template.
- """
-
- yaml_tag = "!TemplateMetadata"
-
- def __init__(
- self,
- original_task: Optional[bool] = None,
- choices_in_prompt: Optional[bool] = None,
- metrics: Optional[List[str]] = None,
- ):
- """
- Initializes template metadata.
-
- In the following, trivial choices are defined as Yes/No, True/False,
- etc. and nontrivial choices are other types of choices denoted in
- the answer_choices field.
-
- :param original_task: If True, this prompt asks a model to perform the original task designed for
- this dataset.
- :param choices_in_prompt: If True, the answer choices are included in the templates such that models
- see those choices in the input. Only applicable to classification tasks.
- :param metrics: List of strings denoting metrics to use for evaluation
- """
- self.original_task = original_task
- self.choices_in_prompt = choices_in_prompt
- self.metrics = metrics
-
-
-class TemplateCollection:
- """
- This helper class wraps the DatasetTemplates class
-    - Initializes the DatasetTemplates for all existing template folders
-    - Gives access to each DatasetTemplates
- - Provides aggregated counts over all DatasetTemplates
- """
-
- def __init__(self):
-
- # Dict of all the DatasetTemplates, key is the tuple (dataset_name, subset_name)
- self.datasets_templates: Dict[(str, Optional[str]), DatasetTemplates] = self._collect_datasets()
-
- @property
- def keys(self):
- return list(self.datasets_templates.keys())
-
- def __len__(self) -> int:
- return len(self.datasets_templates)
-
- def remove(self, dataset_name: str, subset_name: Optional[str] = None) -> None:
- del self.datasets_templates[dataset_name, subset_name]
-
- def _collect_datasets(self) -> Dict[Tuple[str, str], "DatasetTemplates"]:
- """
- Initialize a DatasetTemplates object for each templates.yaml detected in the templates folder
-
- Returns: a dict with key=(dataset_name, subset_name)
- """
- dataset_folders = os.listdir(TEMPLATES_FOLDER_PATH)
- dataset_folders = [folder for folder in dataset_folders if not folder.startswith(".")]
-
- output = {} # format is {(dataset_name, subset_name): DatasetsTemplates}
- for dataset in dataset_folders:
- if dataset in INCLUDED_USERS:
- for filename in os.listdir(os.path.join(TEMPLATES_FOLDER_PATH, dataset)):
- output = {**output, **self._collect_dataset(dataset + "/" + filename)}
- else:
- output = {**output, **self._collect_dataset(dataset)}
- return output
-
- def _collect_dataset(self, dataset):
- output = {} # format is {(dataset_name, subset_name): DatasetsTemplates}
- for filename in os.listdir(os.path.join(TEMPLATES_FOLDER_PATH, dataset)):
- if filename.endswith(".yaml"):
- # If there is no sub-folder, there is no subset for this dataset
- output[(dataset, None)] = DatasetTemplates(dataset)
- else:
- # This is a subfolder, and its name corresponds to the subset name
- output[(dataset, filename)] = DatasetTemplates(dataset_name=dataset, subset_name=filename)
- return output
-
- def get_dataset(self, dataset_name: str, subset_name: Optional[str] = None) -> "DatasetTemplates":
- """
- Return the DatasetTemplates object corresponding to the dataset name
-
- :param dataset_name: name of the dataset to get
- :param subset_name: name of the subset
- """
- # if the dataset does not exist, we add it
- if dataset_name not in self.keys:
- self.datasets_templates[(dataset_name, subset_name)] = DatasetTemplates(dataset_name, subset_name)
-
- return self.datasets_templates[(dataset_name, subset_name)]
-
- def get_templates_count(self) -> Dict:
- """
-        Return the overall template count for each dataset
-
-        NB: we don't break down datasets into subsets for the count, i.e. subset counts are included
-        in the dataset count
- """
-
- count_dict = defaultdict(int)
- for k, v in self.datasets_templates.items():
- # Subsets count towards dataset count
- count_dict[k[0]] += len(v)
- # converting to regular dict
- return dict(count_dict)
-
-
-class DatasetTemplates:
- """
- Class that wraps all templates for a specific dataset/subset and implements all the helper
- functions necessary to read/write to the yaml file
- """
-
- TEMPLATES_KEY = "templates"
- DATASET_KEY = "dataset"
- SUBSET_KEY = "subset"
- TEMPLATE_FILENAME = "templates.yaml"
-
- def __init__(self, dataset_name: str, subset_name: str = None):
- self.dataset_name: str = dataset_name
- self.subset_name: str = subset_name
- # dictionary is keyed by template name.
- self.templates: Dict = self.read_from_file()
-
- # Mapping from template name to template id
- self.name_to_id_mapping = {}
- self.sync_mapping()
-
- def sync_mapping(self) -> None:
- """
- Re-compute the name_to_id_mapping to ensure it is in sync with self.templates
- """
- self.name_to_id_mapping = {template.name: template.id for template in self.templates.values()}
-
- @property
- def all_template_names(self) -> List[str]:
- """
- Sorted list of all templates names for this dataset
- """
- return sorted([template.name for template in self.templates.values()])
-
- @property
- def folder_path(self) -> str:
- if self.subset_name:
- return os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name, self.subset_name)
- else:
- return os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name)
-
- @property
- def yaml_path(self) -> str:
- return os.path.join(self.folder_path, self.TEMPLATE_FILENAME)
-
- def format_for_dump(self) -> Dict:
- """
- Create a formatted dictionary for the class attributes
- """
- formatted_dict = {self.DATASET_KEY: self.dataset_name, self.TEMPLATES_KEY: self.templates}
- if self.subset_name:
- formatted_dict[self.SUBSET_KEY] = self.subset_name
- return formatted_dict
-
- def read_from_file(self) -> Dict:
- """
- Reads a file containing a prompt collection.
- """
-
- if not os.path.exists(self.yaml_path):
- return {}
- yaml_dict = yaml.load(open(self.yaml_path, "r"), Loader=yaml.FullLoader)
- return yaml_dict[self.TEMPLATES_KEY]
-
- def write_to_file(self) -> None:
- """
- Writes to a file with the current prompt collection.
- """
- # Sync the mapping
- self.sync_mapping()
-
- # We only create the folder if a template is written
- if not os.path.exists(self.folder_path):
- os.makedirs(self.folder_path)
- yaml.dump(self.format_for_dump(), open(self.yaml_path, "w"))
-
- def add_template(self, template: "Template") -> None:
- """
- Adds a new template for the dataset
-
- :param template: template
- """
- self.templates[template.get_id()] = template
-
- self.write_to_file()
-
- def remove_template(self, template_name: str) -> None:
- """
- Deletes a template
-
- :param template_name: name of template to remove
- """
-
- # Even if we have an ID, we want to check for duplicate names
- if template_name not in self.all_template_names:
- raise ValueError(f"No template with name {template_name} for dataset {self.dataset_name} exists.")
-
- del self.templates[self.name_to_id_mapping[template_name]]
-
- if len(self.templates) == 0:
- # There is no remaining template, we can remove the entire folder
- self.delete_folder()
- else:
- # We just update the file
- self.write_to_file()
-
- def update_template(
- self,
- current_template_name: str,
- new_template_name: str,
- jinja: str,
- reference: str,
- metadata: Template.Metadata,
- answer_choices: str,
- ) -> None:
- """
- Updates a pre-existing template and writes changes
-
- :param current_template_name: current name of the template stored in self.templates
- :param new_template_name: new name for the template
- :param jinja: new jinja entry
- :param reference: new reference entry
- :param metadata: a Metadata object with template annotations
- :param answer_choices: new answer_choices string
- """
- template_id = self.name_to_id_mapping[current_template_name]
- self.templates[template_id].name = new_template_name
- self.templates[template_id].jinja = jinja
- self.templates[template_id].reference = reference
- self.templates[template_id].metadata = metadata
- self.templates[template_id].answer_choices = answer_choices
-
- self.write_to_file()
-
- def delete_folder(self) -> None:
- """
- Delete the folder corresponding to self.folder_path
- """
- self.sync_mapping()
-
- rmtree(self.folder_path)
-
- # If it is a subset, we have to check whether to remove the dataset folder
- if self.subset_name:
- # have to check for other folders
- base_dataset_folder = os.path.join(TEMPLATES_FOLDER_PATH, self.dataset_name)
- if len(os.listdir(base_dataset_folder)) == 0:
- rmtree(base_dataset_folder)
-
- def __getitem__(self, template_key: str) -> "Template":
- return self.templates[self.name_to_id_mapping[template_key]]
-
- def __len__(self) -> int:
- return len(self.templates)
-
-
-def get_templates_data_frame():
- """
- Gathers all template information into a Pandas DataFrame.
-
- :return: Pandas DataFrame
- """
- data = {
- "id": [],
- "dataset": [],
- "subset": [],
- "name": [],
- "reference": [],
- "original_task": [],
- "choices_in_prompt": [],
- "metrics": [],
- "answer_choices": [],
- "jinja": [],
- }
-
- template_collection = TemplateCollection()
-
- for key in template_collection.keys:
- templates = template_collection.get_dataset(key[0], key[1])
- for template_name in templates.all_template_names:
- template = templates[template_name]
- data["id"].append(template.get_id())
- data["dataset"].append(key[0])
- data["subset"].append(key[1])
- data["name"].append(template.get_name())
- data["reference"].append(template.get_reference())
- data["original_task"].append(template.metadata.original_task)
- data["choices_in_prompt"].append(template.metadata.choices_in_prompt)
- data["metrics"].append(template.metadata.metrics)
- data["answer_choices"].append(template.get_answer_choices_expr())
- data["jinja"].append(template.jinja)
-
- return pd.DataFrame(data)
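The `Template.apply` path above comes down to rendering a Jinja string against a dataset example and splitting the result on the `|||` separator into a (prompt, output) pair, with `answer_choices` exposed as a Jinja variable. Below is a minimal sketch of that convention using `jinja2` directly; the template string, example fields, and labels are invented for illustration and are not taken from the PromptSource template library:

```python
from jinja2 import Environment

env = Environment()

# A toy prompt in the ||| convention: text before the separator is the prompt,
# text after it is the expected output.
jinja = (
    "Review: {{ text }}\n"
    "Is this review positive or negative? ||| {{ answer_choices[label] }}"
)

example = {
    "text": "A thrilling road movie with great stunts.",
    "label": 1,
    "answer_choices": ["negative", "positive"],
}

rendered = env.from_string(jinja).render(**example)
prompt, output = [part.strip() for part in rendered.split("|||")]
print(prompt)  # Review: A thrilling road movie with great stunts. \n Is this review positive or negative?
print(output)  # positive
```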
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/criss/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/criss/README.md
deleted file mode 100644
index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/criss/README.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Cross-lingual Retrieval for Iterative Self-Supervised Training
-
-https://arxiv.org/pdf/2006.09526.pdf
-
-## Introduction
-
-CRISS is a multilingual sequence-to-sequence pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time.
-
-## Requirements:
-
-* faiss: https://github.com/facebookresearch/faiss
-* mosesdecoder: https://github.com/moses-smt/mosesdecoder
-* flores: https://github.com/facebookresearch/flores
-* LASER: https://github.com/facebookresearch/LASER
-
-## Unsupervised Machine Translation
-##### 1. Download and decompress CRISS checkpoints
-```
-cd examples/criss
-wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz
-tar -xf criss_3rd_checkpoints.tar.gz
-```
-##### 2. Download and preprocess Flores test dataset
-Make sure to run all scripts from the examples/criss directory.
-```
-bash download_and_preprocess_flores_test.sh
-```
-
-##### 3. Run Evaluation on Sinhala-English
-```
-bash unsupervised_mt/eval.sh
-```
-
-## Sentence Retrieval
-##### 1. Download and preprocess Tatoeba dataset
-```
-bash download_and_preprocess_tatoeba.sh
-```
-
-##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English
-```
-bash sentence_retrieval/sentence_retrieval_tatoeba.sh
-```
-
-## Mining
-##### 1. Install faiss
-Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md
-##### 2. Mine pseudo-parallel data between Kazakh and English
-```
-bash mining/mine_example.sh
-```
-
-## Citation
-```bibtex
-@article{tran2020cross,
- title={Cross-lingual retrieval for iterative self-supervised training},
- author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao},
- journal={arXiv preprint arXiv:2006.09526},
- year={2020}
-}
-```
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/hubert.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/hubert.py
deleted file mode 100644
index 232a5e402a146023e5c93f3c2574ecec98faf9d5..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/hubert/hubert.py
+++ /dev/null
@@ -1,563 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import Dict, List, Optional, Tuple
-
-import numpy as np
-
-import torch
-import torch.nn as nn
-from dataclasses import dataclass, field
-from fairseq import utils
-from fairseq.data.data_utils import compute_mask_indices
-from fairseq.data.dictionary import Dictionary
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.models.wav2vec.wav2vec2 import (
- ConvFeatureExtractionModel,
- TransformerEncoder,
-)
-from fairseq.modules import GradMultiply, LayerNorm
-from fairseq.tasks.hubert_pretraining import (
- HubertPretrainingConfig,
- HubertPretrainingTask,
-)
-from omegaconf import II
-
-logger = logging.getLogger(__name__)
-
-EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"])
-MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(
- ["static", "uniform", "normal", "poisson"]
-)
-
-
-@dataclass
-class HubertConfig(FairseqDataclass):
- label_rate: int = II("task.label_rate")
-
- extractor_mode: EXTRACTOR_MODE_CHOICES = field(
- default="default",
- metadata={
- "help": "mode for feature extractor. default has a single group "
- "norm with d groups in the first conv block, whereas layer_norm "
- "has layer norms in every block (meant to use with normalize=True)"
- },
- )
- encoder_layers: int = field(
- default=12, metadata={"help": "num encoder layers in the transformer"}
- )
- encoder_embed_dim: int = field(
- default=768, metadata={"help": "encoder embedding dimension"}
- )
- encoder_ffn_embed_dim: int = field(
- default=3072, metadata={"help": "encoder embedding dimension for FFN"}
- )
- encoder_attention_heads: int = field(
- default=12, metadata={"help": "num encoder attention heads"}
- )
- activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field(
- default="gelu", metadata={"help": "activation function to use"}
- )
-
- # dropouts
- dropout: float = field(
- default=0.1,
- metadata={"help": "dropout probability for the transformer"},
- )
- attention_dropout: float = field(
- default=0.1,
- metadata={"help": "dropout probability for attention weights"},
- )
- activation_dropout: float = field(
- default=0.0,
- metadata={"help": "dropout probability after activation in FFN"},
- )
- encoder_layerdrop: float = field(
- default=0.0,
- metadata={"help": "probability of dropping a tarnsformer layer"},
- )
- dropout_input: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the input (after feat extr)"},
- )
- dropout_features: float = field(
- default=0.0,
- metadata={
- "help": "dropout to apply to the features (after feat extr)"
- },
- )
-
- final_dim: int = field(
- default=0,
- metadata={
- "help": "project final representations and targets to this many "
- "dimensions. set to encoder_embed_dim is <= 0"
- },
- )
- untie_final_proj: bool = field(
- default=False,
- metadata={"help": "use separate projection for each target"},
- )
- layer_norm_first: bool = field(
- default=False,
- metadata={"help": "apply layernorm first in the transformer"},
- )
- conv_feature_layers: str = field(
- default="[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2",
- metadata={
- "help": "string describing convolutional feature extraction "
- "layers in form of a python list that contains "
- "[(dim, kernel_size, stride), ...]"
- },
- )
- conv_bias: bool = field(
- default=False, metadata={"help": "include bias in conv encoder"}
- )
- logit_temp: float = field(
- default=0.1, metadata={"help": "temperature to divide logits by"}
- )
- target_glu: bool = field(
- default=False, metadata={"help": "adds projection + glu to targets"}
- )
- feature_grad_mult: float = field(
- default=1.0,
- metadata={"help": "multiply feature extractor var grads by this"},
- )
-
- # masking
- mask_length: int = field(default=10, metadata={"help": "mask length"})
- mask_prob: float = field(
- default=0.65,
- metadata={"help": "probability of replacing a token with mask"},
- )
- mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static", metadata={"help": "how to choose mask length"}
- )
- mask_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument "
- "(used for more complex distributions), "
- "see help in compute_mask_indicesh"
- },
- )
- no_mask_overlap: bool = field(
- default=False, metadata={"help": "whether to allow masks to overlap"}
- )
- mask_min_space: int = field(
- default=1,
- metadata={
- "help": "min space between spans (if no overlap is enabled)"
- },
- )
-
- # channel masking
- mask_channel_length: int = field(
- default=10,
- metadata={"help": "length of the mask for features (channels)"},
- )
- mask_channel_prob: float = field(
- default=0.0,
- metadata={"help": "probability of replacing a feature with 0"},
- )
- mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static",
- metadata={"help": "how to choose mask length for channel masking"},
- )
- mask_channel_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument "
- "(used for more complex distributions), "
- "see help in compute_mask_indicesh"
- },
- )
- no_mask_channel_overlap: bool = field(
- default=False,
- metadata={"help": "whether to allow channel masks to overlap"},
- )
- mask_channel_min_space: int = field(
- default=1,
- metadata={
- "help": "min space between spans (if no overlap is enabled)"
- },
- )
-
- # positional embeddings
- conv_pos: int = field(
- default=128,
- metadata={
- "help": "number of filters for convolutional positional embeddings"
- },
- )
- conv_pos_groups: int = field(
- default=16,
- metadata={
- "help": "number of groups for convolutional positional embedding"
- },
- )
-
- latent_temp: Tuple[float, float, float] = field(
- default=(2, 0.5, 0.999995),
- metadata={"help": "legacy (to be removed)"},
- )
-
- # loss computation
- skip_masked: bool = field(
- default=False,
- metadata={"help": "skip computing losses over masked frames"},
- )
- skip_nomask: bool = field(
- default=False,
- metadata={"help": "skip computing losses over unmasked frames"},
- )
-
-
-@register_model("hubert", dataclass=HubertConfig)
-class HubertModel(BaseFairseqModel):
- def __init__(
- self,
- cfg: HubertConfig,
- task_cfg: HubertPretrainingConfig,
- dictionaries: List[Dictionary],
- ) -> None:
- super().__init__()
- logger.info(f"HubertModel Config: {cfg}")
-
- feature_enc_layers = eval(cfg.conv_feature_layers) # noqa
- self.embed = feature_enc_layers[-1][0]
-
- self.feature_extractor = ConvFeatureExtractionModel(
- conv_layers=feature_enc_layers,
- dropout=0.0,
- mode=cfg.extractor_mode,
- conv_bias=cfg.conv_bias,
- )
- feature_ds_rate = np.prod([s for _, _, s in feature_enc_layers])
- self.feat2tar_ratio = (
- cfg.label_rate * feature_ds_rate / task_cfg.sample_rate
- )
-
- self.post_extract_proj = (
- nn.Linear(self.embed, cfg.encoder_embed_dim)
- if self.embed != cfg.encoder_embed_dim
- else None
- )
-
- self.mask_prob = cfg.mask_prob
- self.mask_selection = cfg.mask_selection
- self.mask_other = cfg.mask_other
- self.mask_length = cfg.mask_length
- self.no_mask_overlap = cfg.no_mask_overlap
- self.mask_min_space = cfg.mask_min_space
-
- self.mask_channel_prob = cfg.mask_channel_prob
- self.mask_channel_selection = cfg.mask_channel_selection
- self.mask_channel_other = cfg.mask_channel_other
- self.mask_channel_length = cfg.mask_channel_length
- self.no_mask_channel_overlap = cfg.no_mask_channel_overlap
- self.mask_channel_min_space = cfg.mask_channel_min_space
-
- self.dropout_input = nn.Dropout(cfg.dropout_input)
- self.dropout_features = nn.Dropout(cfg.dropout_features)
-
- self.feature_grad_mult = cfg.feature_grad_mult
- self.logit_temp = cfg.logit_temp
- self.skip_masked = cfg.skip_masked
- self.skip_nomask = cfg.skip_nomask
-
- final_dim = (
- cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim
- )
-
- self.mask_emb = nn.Parameter(
- torch.FloatTensor(cfg.encoder_embed_dim).uniform_()
- )
-
- self.encoder = TransformerEncoder(cfg)
- self.layer_norm = LayerNorm(self.embed)
-
- self.target_glu = None
- if cfg.target_glu:
- self.target_glu = nn.Sequential(
- nn.Linear(final_dim, final_dim * 2), nn.GLU()
- )
-
- self.untie_final_proj = cfg.untie_final_proj
- if self.untie_final_proj:
- self.final_proj = nn.Linear(
- cfg.encoder_embed_dim, final_dim * len(dictionaries)
- )
- else:
- self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim)
-
- # modules below are not needed during fine-tuning
- if any([d is None for d in dictionaries]):
- logger.info(
- "cannot find dictionary. assume will be used for fine-tuning"
- )
- else:
- self.num_classes = [len(d) for d in dictionaries]
- self.label_embs_concat = nn.Parameter(
- torch.FloatTensor(sum(self.num_classes), final_dim)
- )
- nn.init.uniform_(self.label_embs_concat)
-
- def upgrade_state_dict_named(self, state_dict, name):
- """Upgrade a (possibly old) state dict for new versions of fairseq."""
-
- super().upgrade_state_dict_named(state_dict, name)
- return state_dict
-
- @classmethod
- def build_model(cls, cfg: HubertConfig, task: HubertPretrainingTask):
- """Build a new model instance."""
-
- model = HubertModel(cfg, task.cfg, task.dictionaries)
- return model
-
- def apply_mask(self, x, padding_mask, target_list):
- B, T, C = x.shape
- if self.mask_prob > 0:
- mask_indices = compute_mask_indices(
- (B, T),
- padding_mask,
- self.mask_prob,
- self.mask_length,
- self.mask_selection,
- self.mask_other,
- min_masks=2,
- no_overlap=self.no_mask_overlap,
- min_space=self.mask_min_space,
- )
- mask_indices = torch.from_numpy(mask_indices).to(x.device)
- x[mask_indices] = self.mask_emb
- else:
- mask_indices = None
-
- if self.mask_channel_prob > 0:
- mask_channel_indices = compute_mask_indices(
- (B, C),
- None,
- self.mask_channel_prob,
- self.mask_channel_length,
- self.mask_channel_selection,
- self.mask_channel_other,
- no_overlap=self.no_mask_channel_overlap,
- min_space=self.mask_channel_min_space,
- )
- mask_channel_indices = (
- torch.from_numpy(mask_channel_indices)
- .to(x.device)
- .unsqueeze(1)
- .expand(-1, T, -1)
- )
- x[mask_channel_indices] = 0
-
- return x, mask_indices
-
- def compute_nce(self, x, pos, negs):
- neg_is_pos = (pos == negs).all(-1)
- pos = pos.unsqueeze(0)
- targets = torch.cat([pos, negs], dim=0)
-
- logits = torch.cosine_similarity(
- x.float(), targets.float(), dim=-1
- ).type_as(x)
- logits /= self.logit_temp
- if neg_is_pos.any():
- logits[1:][neg_is_pos] = float("-inf")
- logits = logits.transpose(0, 1) # (num_x, num_cls+1)
- return logits
-
- def forward_features(self, source: torch.Tensor) -> torch.Tensor:
- if self.feature_grad_mult > 0:
- features = self.feature_extractor(source)
- if self.feature_grad_mult != 1.0:
- features = GradMultiply.apply(features, self.feature_grad_mult)
- else:
- with torch.no_grad():
- features = self.feature_extractor(source)
- return features
-
- def forward_targets(
- self, features: torch.Tensor, target_list: List[torch.Tensor],
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- # Trim features to ensure labels exist and then get aligned labels
- feat_tsz = features.size(2)
- targ_tsz = min([t.size(1) for t in target_list])
- if self.feat2tar_ratio * feat_tsz > targ_tsz:
- feat_tsz = int(targ_tsz / self.feat2tar_ratio)
- features = features[..., :feat_tsz]
- target_inds = torch.arange(feat_tsz).float() * self.feat2tar_ratio
- target_list = [t[:, target_inds.long()] for t in target_list]
- return features, target_list
-
- def forward_padding_mask(
- self, features: torch.Tensor, padding_mask: torch.Tensor,
- ) -> torch.Tensor:
- extra = padding_mask.size(1) % features.size(1)
- if extra > 0:
- padding_mask = padding_mask[:, :-extra]
- padding_mask = padding_mask.view(
- padding_mask.size(0), features.size(1), -1
- )
- padding_mask = padding_mask.all(-1)
- return padding_mask
-
- def forward(
- self,
- source: torch.Tensor,
- target_list: Optional[List[torch.Tensor]] = None,
- padding_mask: Optional[torch.Tensor] = None,
- mask: bool = True,
- features_only: bool = False,
- output_layer: Optional[int] = None,
- ) -> Dict[str, torch.Tensor]:
- """output layer is 1-based"""
- features = self.forward_features(source)
- if target_list is not None:
- features, target_list = self.forward_targets(features, target_list)
-
- features_pen = features.float().pow(2).mean()
-
- features = features.transpose(1, 2)
- features = self.layer_norm(features)
- unmasked_features = features.clone()
-
- if padding_mask is not None:
- padding_mask = self.forward_padding_mask(features, padding_mask)
-
- if self.post_extract_proj is not None:
- features = self.post_extract_proj(features)
-
- features = self.dropout_input(features)
- unmasked_features = self.dropout_features(unmasked_features)
-
- if mask:
- x, mask_indices = self.apply_mask(
- features, padding_mask, target_list
- )
- else:
- x = features
- mask_indices = None
-
- # feature: (B, T, D), float
- # target: (B, T), long
- # x: (B, T, D), float
- # padding_mask: (B, T), bool
- # mask_indices: (B, T), bool
- x, _ = self.encoder(
- x,
- padding_mask=padding_mask,
- layer=None if output_layer is None else output_layer - 1
- )
-
- if features_only:
- return {"x": x, "padding_mask": padding_mask, "features": features}
-
- def compute_pred(proj_x, target, label_embs):
- # compute logits for the i-th label set
- y = torch.index_select(label_embs, 0, target.long())
- negs = label_embs.unsqueeze(1).expand(-1, proj_x.size(0), -1)
- if self.target_glu:
- y = self.target_glu(y)
- negs = self.target_glu(negs)
- # proj_x: (S, D)
- # y: (S, D)
- # negs: (Neg, S, D)
- return self.compute_nce(proj_x, y, negs)
-
- label_embs_list = self.label_embs_concat.split(self.num_classes, 0)
-
- if not self.skip_masked:
- masked_indices = torch.logical_and(~padding_mask, mask_indices)
- proj_x_m = self.final_proj(x[masked_indices])
- if self.untie_final_proj:
- proj_x_m_list = proj_x_m.chunk(len(target_list), dim=-1)
- else:
- proj_x_m_list = [proj_x_m for _ in range(len(target_list))]
- logit_m_list = [
- compute_pred(proj_x_m, t[masked_indices], label_embs_list[i])
- for i, (proj_x_m, t) in enumerate(
- zip(proj_x_m_list, target_list)
- )
- ]
- else:
- logit_m_list = [None for _ in target_list]
-
- if not self.skip_nomask:
- nomask_indices = torch.logical_and(~padding_mask, ~mask_indices)
- proj_x_u = self.final_proj(x[nomask_indices])
- if self.untie_final_proj:
- proj_x_u_list = proj_x_u.chunk(len(target_list), dim=-1)
- else:
- proj_x_u_list = [proj_x_u for _ in range(len(target_list))]
-
- logit_u_list = [
- compute_pred(proj_x_u, t[nomask_indices], label_embs_list[i])
- for i, (proj_x_u, t) in enumerate(
- zip(proj_x_u_list, target_list)
- )
- ]
- else:
- logit_u_list = [None for _ in target_list]
-
- result = {
- "logit_m_list": logit_m_list,
- "logit_u_list": logit_u_list,
- "padding_mask": padding_mask,
- "features_pen": features_pen,
- }
- return result
-
- def extract_features(
- self,
- source: torch.Tensor,
- padding_mask: Optional[torch.Tensor] = None,
- mask: bool = False,
- ret_conv: bool = False,
- output_layer: Optional[int] = None,
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- res = self.forward(
- source,
- padding_mask=padding_mask,
- mask=mask,
- features_only=True,
- output_layer=output_layer,
- )
- feature = res["features"] if ret_conv else res["x"]
- return feature, res["padding_mask"]
-
- def get_logits(self, net_output, is_masked=True):
- if is_masked:
- logits_list = net_output["logit_m_list"]
- else:
- logits_list = net_output["logit_u_list"]
- logits_list = [x.float() for x in logits_list if x is not None]
- return logits_list
-
- def get_targets(self, net_output, is_masked=True):
- logits_list = self.get_logits(net_output, is_masked)
- targets_list = [
- x.new_zeros(x.size(0), dtype=torch.long) for x in logits_list
- ]
- return targets_list
-
- def get_extra_losses(self, net_output):
- extra_losses = []
- names = []
-
- if "features_pen" in net_output:
- extra_losses.append(net_output["features_pen"])
- names.append("features_pen")
-
- return extra_losses, names
-
- def remove_pretraining_modules(self):
- self.target_glu = None
- self.final_proj = None
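The `compute_nce` method above scores each projected frame against one positive target embedding and a stack of negatives using temperature-scaled cosine similarity, so training reduces to cross-entropy with the positive fixed at index 0. Below is a minimal sketch of that logit computation, with random tensors standing in for real features and the helper name invented here; it omits the `neg_is_pos` masking that the full implementation applies:

```python
import torch
import torch.nn.functional as F

def cosine_nce_logits(x, pos, negs, logit_temp=0.1):
    # x: (S, D) projected frames, pos: (S, D) target embeddings, negs: (Neg, S, D) distractors
    targets = torch.cat([pos.unsqueeze(0), negs], dim=0)                  # (Neg + 1, S, D)
    logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1)  # (Neg + 1, S)
    return (logits / logit_temp).transpose(0, 1)                          # (S, Neg + 1), positive at column 0

S, D, num_negs = 6, 16, 10
x, pos = torch.randn(S, D), torch.randn(S, D)
negs = torch.randn(num_negs, S, D)
logits = cosine_nce_logits(x, pos, negs)
loss = F.cross_entropy(logits, torch.zeros(S, dtype=torch.long))  # the positive class is always 0
```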
diff --git a/spaces/mshukor/UnIVAL/models/unival/__init__.py b/spaces/mshukor/UnIVAL/models/unival/__init__.py
deleted file mode 100644
index 8f78d35df16bef627995c32f287e55c27382ec93..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/models/unival/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .unival import UnIVALModel, unival_base_architecture, unival_large_architecture
\ No newline at end of file
diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/resnet.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/resnet.py
deleted file mode 100644
index 97f3c02a8ccf434e9f7788ba503d64e0395146b0..0000000000000000000000000000000000000000
--- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/external/diffusers/resnet.py
+++ /dev/null
@@ -1,479 +0,0 @@
-from functools import partial
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class Upsample2D(nn.Module):
- """
- An upsampling layer with an optional convolution.
-
-    :param channels: channels in the inputs and outputs.
-    :param use_conv: a bool determining if a convolution is applied.
-    :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then upsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv=False, use_conv_transpose=False, out_channels=None, name="conv"):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_conv_transpose = use_conv_transpose
- self.name = name
-
- conv = None
- if use_conv_transpose:
- conv = nn.ConvTranspose2d(channels, self.out_channels, 4, 2, 1)
- elif use_conv:
- conv = nn.Conv2d(self.channels, self.out_channels, 3, padding=1)
-
- # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed
- if name == "conv":
- self.conv = conv
- else:
- self.Conv2d_0 = conv
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.use_conv_transpose:
- return self.conv(x)
-
- x = F.interpolate(x, scale_factor=2.0, mode="nearest")
-
- # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed
- if self.use_conv:
- if self.name == "conv":
- x = self.conv(x)
- else:
- x = self.Conv2d_0(x)
-
- return x
-
-
-class Downsample2D(nn.Module):
- """
- A downsampling layer with an optional convolution.
-
-    :param channels: channels in the inputs and outputs.
-    :param use_conv: a bool determining if a convolution is applied.
-    :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv=False, out_channels=None, padding=1, name="conv"):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.padding = padding
- stride = 2
- self.name = name
-
- if use_conv:
- conv = nn.Conv2d(self.channels, self.out_channels, 3, stride=stride, padding=padding)
- else:
- assert self.channels == self.out_channels
- conv = nn.AvgPool2d(kernel_size=stride, stride=stride)
-
- # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed
- if name == "conv":
- self.Conv2d_0 = conv
- self.conv = conv
- elif name == "Conv2d_0":
- self.conv = conv
- else:
- self.conv = conv
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.use_conv and self.padding == 0:
- pad = (0, 1, 0, 1)
- x = F.pad(x, pad, mode="constant", value=0)
-
- assert x.shape[1] == self.channels
- x = self.conv(x)
-
- return x
-
-
-class FirUpsample2D(nn.Module):
- def __init__(self, channels=None, out_channels=None, use_conv=False, fir_kernel=(1, 3, 3, 1)):
- super().__init__()
- out_channels = out_channels if out_channels else channels
- if use_conv:
- self.Conv2d_0 = nn.Conv2d(channels, out_channels, kernel_size=3, stride=1, padding=1)
- self.use_conv = use_conv
- self.fir_kernel = fir_kernel
- self.out_channels = out_channels
-
- def _upsample_2d(self, x, weight=None, kernel=None, factor=2, gain=1):
- """Fused `upsample_2d()` followed by `Conv2d()`.
-
- Args:
- Padding is performed only once at the beginning, not between the operations. The fused op is considerably more
- efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of arbitrary:
- order.
- x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W,
- C]`.
- weight: Weight tensor of the shape `[filterH, filterW, inChannels,
- outChannels]`. Grouped convolution can be performed by `inChannels = x.shape[0] // numGroups`.
- kernel: FIR filter of the shape `[firH, firW]` or `[firN]`
- (separable). The default is `[1] * factor`, which corresponds to nearest-neighbor upsampling.
- factor: Integer upsampling factor (default: 2). gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- Tensor of the shape `[N, C, H * factor, W * factor]` or `[N, H * factor, W * factor, C]`, and same datatype as
- `x`.
- """
-
- assert isinstance(factor, int) and factor >= 1
-
- # Setup filter kernel.
- if kernel is None:
- kernel = [1] * factor
-
- # setup kernel
- kernel = torch.tensor(kernel, dtype=torch.float32)
- if kernel.ndim == 1:
- kernel = torch.outer(kernel, kernel)
- kernel /= torch.sum(kernel)
-
- kernel = kernel * (gain * (factor**2))
-
- if self.use_conv:
- convH = weight.shape[2]
- convW = weight.shape[3]
- inC = weight.shape[1]
-
- p = (kernel.shape[0] - factor) - (convW - 1)
-
- stride = (factor, factor)
- # Determine data dimensions.
- output_shape = ((x.shape[2] - 1) * factor + convH, (x.shape[3] - 1) * factor + convW)
- output_padding = (
- output_shape[0] - (x.shape[2] - 1) * stride[0] - convH,
- output_shape[1] - (x.shape[3] - 1) * stride[1] - convW,
- )
- assert output_padding[0] >= 0 and output_padding[1] >= 0
- inC = weight.shape[1]
- num_groups = x.shape[1] // inC
-
- # Transpose weights.
- weight = torch.reshape(weight, (num_groups, -1, inC, convH, convW))
- weight = torch.flip(weight, dims=[3, 4]).permute(0, 2, 1, 3, 4)
- weight = torch.reshape(weight, (num_groups * inC, -1, convH, convW))
-
- x = F.conv_transpose2d(x, weight, stride=stride, output_padding=output_padding, padding=0)
-
- x = upfirdn2d_native(x, torch.tensor(kernel, device=x.device), pad=((p + 1) // 2 + factor - 1, p // 2 + 1))
- else:
- p = kernel.shape[0] - factor
- x = upfirdn2d_native(
- x, torch.tensor(kernel, device=x.device), up=factor, pad=((p + 1) // 2 + factor - 1, p // 2)
- )
-
- return x
-
- def forward(self, x):
- if self.use_conv:
- height = self._upsample_2d(x, self.Conv2d_0.weight, kernel=self.fir_kernel)
- height = height + self.Conv2d_0.bias.reshape(1, -1, 1, 1)
- else:
- height = self._upsample_2d(x, kernel=self.fir_kernel, factor=2)
-
- return height
-
-
-class FirDownsample2D(nn.Module):
- def __init__(self, channels=None, out_channels=None, use_conv=False, fir_kernel=(1, 3, 3, 1)):
- super().__init__()
- out_channels = out_channels if out_channels else channels
- if use_conv:
- self.Conv2d_0 = nn.Conv2d(channels, out_channels, kernel_size=3, stride=1, padding=1)
- self.fir_kernel = fir_kernel
- self.use_conv = use_conv
- self.out_channels = out_channels
-
- def _downsample_2d(self, x, weight=None, kernel=None, factor=2, gain=1):
- """Fused `Conv2d()` followed by `downsample_2d()`.
-
- Args:
- Padding is performed only once at the beginning, not between the operations. The fused op is considerably more
- efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of arbitrary:
- order.
- x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`. w: Weight tensor of the shape `[filterH,
- filterW, inChannels, outChannels]`. Grouped convolution can be performed by `inChannels = x.shape[0] //
- numGroups`. k: FIR filter of the shape `[firH, firW]` or `[firN]` (separable). The default is `[1] *
- factor`, which corresponds to average pooling. factor: Integer downsampling factor (default: 2). gain:
- Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- Tensor of the shape `[N, C, H // factor, W // factor]` or `[N, H // factor, W // factor, C]`, and same
- datatype as `x`.
- """
-
- assert isinstance(factor, int) and factor >= 1
- if kernel is None:
- kernel = [1] * factor
-
- # setup kernel
- kernel = torch.tensor(kernel, dtype=torch.float32)
- if kernel.ndim == 1:
- kernel = torch.outer(kernel, kernel)
- kernel /= torch.sum(kernel)
-
- kernel = kernel * gain
-
- if self.use_conv:
- _, _, convH, convW = weight.shape
- p = (kernel.shape[0] - factor) + (convW - 1)
- s = [factor, factor]
- x = upfirdn2d_native(x, torch.tensor(kernel, device=x.device), pad=((p + 1) // 2, p // 2))
- x = F.conv2d(x, weight, stride=s, padding=0)
- else:
- p = kernel.shape[0] - factor
- x = upfirdn2d_native(x, torch.tensor(kernel, device=x.device), down=factor, pad=((p + 1) // 2, p // 2))
-
- return x
-
- def forward(self, x):
- if self.use_conv:
- x = self._downsample_2d(x, weight=self.Conv2d_0.weight, kernel=self.fir_kernel)
- x = x + self.Conv2d_0.bias.reshape(1, -1, 1, 1)
- else:
- x = self._downsample_2d(x, kernel=self.fir_kernel, factor=2)
-
- return x
-
-
-class ResnetBlock2D(nn.Module):
- def __init__(
- self,
- *,
- in_channels,
- out_channels=None,
- conv_shortcut=False,
- dropout=0.0,
- temb_channels=512,
- groups=32,
- groups_out=None,
- pre_norm=True,
- eps=1e-6,
- non_linearity="swish",
- time_embedding_norm="default",
- kernel=None,
- output_scale_factor=1.0,
- use_in_shortcut=None,
- up=False,
- down=False,
- ):
- super().__init__()
- self.pre_norm = pre_norm
- self.pre_norm = True
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
- self.time_embedding_norm = time_embedding_norm
- self.up = up
- self.down = down
- self.output_scale_factor = output_scale_factor
-
- if groups_out is None:
- groups_out = groups
-
- self.norm1 = torch.nn.GroupNorm(num_groups=groups, num_channels=in_channels, eps=eps, affine=True)
-
- self.conv1 = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
-
- if temb_channels is not None:
- self.time_emb_proj = torch.nn.Linear(temb_channels, out_channels)
- else:
- self.time_emb_proj = None
-
- self.norm2 = torch.nn.GroupNorm(num_groups=groups_out, num_channels=out_channels, eps=eps, affine=True)
- self.dropout = torch.nn.Dropout(dropout)
- self.conv2 = torch.nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
-
- if non_linearity == "swish":
- self.nonlinearity = lambda x: F.silu(x)
- elif non_linearity == "mish":
- self.nonlinearity = Mish()
- elif non_linearity == "silu":
- self.nonlinearity = nn.SiLU()
-
- self.upsample = self.downsample = None
- if self.up:
- if kernel == "fir":
- fir_kernel = (1, 3, 3, 1)
- self.upsample = lambda x: upsample_2d(x, kernel=fir_kernel)
- elif kernel == "sde_vp":
- self.upsample = partial(F.interpolate, scale_factor=2.0, mode="nearest")
- else:
- self.upsample = Upsample2D(in_channels, use_conv=False)
- elif self.down:
- if kernel == "fir":
- fir_kernel = (1, 3, 3, 1)
- self.downsample = lambda x: downsample_2d(x, kernel=fir_kernel)
- elif kernel == "sde_vp":
- self.downsample = partial(F.avg_pool2d, kernel_size=2, stride=2)
- else:
- self.downsample = Downsample2D(in_channels, use_conv=False, padding=1, name="op")
-
- self.use_in_shortcut = self.in_channels != self.out_channels if use_in_shortcut is None else use_in_shortcut
-
- self.conv_shortcut = None
- if self.use_in_shortcut:
- self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
-
- def forward(self, x, temb):
- hidden_states = x
-
- # make sure hidden states is in float32
- # when running in half-precision
- hidden_states = self.norm1(hidden_states).type(hidden_states.dtype)
- hidden_states = self.nonlinearity(hidden_states)
-
- if self.upsample is not None:
- x = self.upsample(x)
- hidden_states = self.upsample(hidden_states)
- elif self.downsample is not None:
- x = self.downsample(x)
- hidden_states = self.downsample(hidden_states)
-
- hidden_states = self.conv1(hidden_states)
-
- if temb is not None:
- temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None]
- hidden_states = hidden_states + temb
-
- # make sure hidden states is in float32
- # when running in half-precision
- hidden_states = self.norm2(hidden_states).type(hidden_states.dtype)
- hidden_states = self.nonlinearity(hidden_states)
-
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.conv2(hidden_states)
-
- if self.conv_shortcut is not None:
- x = self.conv_shortcut(x)
-
- out = (x + hidden_states) / self.output_scale_factor
-
- return out
-
-
-class Mish(torch.nn.Module):
- def forward(self, x):
- return x * torch.tanh(torch.nn.functional.softplus(x))
-
-
-def upsample_2d(x, kernel=None, factor=2, gain=1):
- r"""Upsample2D a batch of 2D images with the given filter.
-
- Args:
- Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and upsamples each image with the given
- filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the specified
- `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its shape is a:
- multiple of the upsampling factor.
- x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W,
- C]`.
- k: FIR filter of the shape `[firH, firW]` or `[firN]`
- (separable). The default is `[1] * factor`, which corresponds to nearest-neighbor upsampling.
- factor: Integer upsampling factor (default: 2). gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- Tensor of the shape `[N, C, H * factor, W * factor]`
- """
- assert isinstance(factor, int) and factor >= 1
- if kernel is None:
- kernel = [1] * factor
-
- kernel = torch.tensor(kernel, dtype=torch.float32)
- if kernel.ndim == 1:
- kernel = torch.outer(kernel, kernel)
- kernel /= torch.sum(kernel)
-
- kernel = kernel * (gain * (factor**2))
- p = kernel.shape[0] - factor
- return upfirdn2d_native(x, kernel.to(device=x.device), up=factor, pad=((p + 1) // 2 + factor - 1, p // 2))
-
-
-def downsample_2d(x, kernel=None, factor=2, gain=1):
- r"""Downsample2D a batch of 2D images with the given filter.
-
- Args:
- Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and downsamples each image with the
- given filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the
- specified `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its
- shape is a multiple of the downsampling factor.
- x: Input tensor of the shape `[N, C, H, W]` or `[N, H, W,
- C]`.
- kernel: FIR filter of the shape `[firH, firW]` or `[firN]`
- (separable). The default is `[1] * factor`, which corresponds to average pooling.
- factor: Integer downsampling factor (default: 2). gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- Tensor of the shape `[N, C, H // factor, W // factor]`
- """
-
- assert isinstance(factor, int) and factor >= 1
- if kernel is None:
- kernel = [1] * factor
-
- kernel = torch.tensor(kernel, dtype=torch.float32)
- if kernel.ndim == 1:
- kernel = torch.outer(kernel, kernel)
- kernel /= torch.sum(kernel)
-
- kernel = kernel * gain
- p = kernel.shape[0] - factor
- return upfirdn2d_native(x, kernel.to(device=x.device), down=factor, pad=((p + 1) // 2, p // 2))
-
-
-def upfirdn2d_native(input, kernel, up=1, down=1, pad=(0, 0)):
- up_x = up_y = up
- down_x = down_y = down
- pad_x0 = pad_y0 = pad[0]
- pad_x1 = pad_y1 = pad[1]
-
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
-
- # Temporary workaround for mps specific issue: https://github.com/pytorch/pytorch/issues/84535
- if input.device.type == "mps":
- out = out.to("cpu")
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)])
- out = out.to(input.device) # Move back to mps if necessary
- out = out[
- :,
- max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1])
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
-
- return out.view(-1, channel, out_h, out_w)
diff --git a/spaces/muyi12314/anime-remove-background/app.py b/spaces/muyi12314/anime-remove-background/app.py
deleted file mode 100644
index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000
--- a/spaces/muyi12314/anime-remove-background/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-import huggingface_hub
-import onnxruntime as rt
-import numpy as np
-import cv2
-
-
-def get_mask(img, s=1024):
- img = (img / 255).astype(np.float32)
- h, w = h0, w0 = img.shape[:-1]
- h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
- ph, pw = s - h, s - w
- img_input = np.zeros([s, s, 3], dtype=np.float32)
- img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
- img_input = np.transpose(img_input, (2, 0, 1))
- img_input = img_input[np.newaxis, :]
- mask = rmbg_model.run(None, {'img': img_input})[0][0]
- mask = np.transpose(mask, (1, 2, 0))
- mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
- mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
- return mask
-
-
-def rmbg_fn(img):
- mask = get_mask(img)
- img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
- mask = (mask * 255).astype(np.uint8)
- img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
- mask = mask.repeat(3, axis=2)
- return mask, img
-
-
-if __name__ == "__main__":
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
- rmbg_model = rt.InferenceSession(model_path, providers=providers)
- app = gr.Blocks()
- with app:
- gr.Markdown("# Anime Remove Background\n\n"
- "\n\n"
- "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)")
- with gr.Row():
- with gr.Column():
- input_img = gr.Image(label="input image")
- examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)]
- examples = gr.Dataset(components=[input_img], samples=examples_data)
- run_btn = gr.Button(variant="primary")
- output_mask = gr.Image(label="mask")
- output_img = gr.Image(label="result", image_mode="RGBA")
- examples.click(lambda x: x[0], [examples], [input_img])
- run_btn.click(rmbg_fn, [input_img], [output_mask, output_img])
- app.launch()
diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/routers/api_router.py b/spaces/mygyasir/genious_bgremover/carvekit/web/routers/api_router.py
deleted file mode 100644
index c452cacbb15ac13919b9fcaa482ed829983a8fd6..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/genious_bgremover/carvekit/web/routers/api_router.py
+++ /dev/null
@@ -1,222 +0,0 @@
-import base64
-import http
-import io
-import time
-from json import JSONDecodeError
-from typing import Optional
-
-import requests
-from PIL import Image
-from fastapi import Header, Depends, Form, File, Request, APIRouter, UploadFile
-from fastapi.openapi.models import Response
-from pydantic import ValidationError
-from starlette.responses import JSONResponse
-
-from carvekit.web.deps import config, ml_processor
-from carvekit.web.handlers.response import handle_response, Authenticate
-from carvekit.web.responses.api import error_dict
-from carvekit.web.schemas.request import Parameters
-from carvekit.web.utils.net_utils import is_loopback
-
-api_router = APIRouter(prefix="", tags=["api"])
-
-
-# noinspection PyBroadException
-@api_router.post("/removebg")
-async def removebg(
- request: Request,
- image_file: Optional[bytes] = File(None),
- auth: bool = Depends(Authenticate),
- content_type: str = Header(""),
- image_file_b64: Optional[str] = Form(None),
- image_url: Optional[str] = Form(None),
- bg_image_file: Optional[bytes] = File(None),
- size: Optional[str] = Form("full"),
- type: Optional[str] = Form("auto"),
- format: Optional[str] = Form("auto"),
- roi: str = Form("0% 0% 100% 100%"),
- crop: bool = Form(False),
- crop_margin: Optional[str] = Form("0px"),
- scale: Optional[str] = Form("original"),
- position: Optional[str] = Form("original"),
- channels: Optional[str] = Form("rgba"),
- add_shadow: bool = Form(False), # Not supported at the moment
- semitransparency: bool = Form(False), # Not supported at the moment
- bg_color: Optional[str] = Form(""),
-):
- if auth is False:
- return JSONResponse(content=error_dict("Missing API Key"), status_code=403)
- if (
- content_type not in ["application/x-www-form-urlencoded", "application/json"]
- and "multipart/form-data" not in content_type
- ):
- return JSONResponse(
- content=error_dict("Invalid request content type"), status_code=400
- )
-
- if image_url:
- if not (
- image_url.startswith("http://") or image_url.startswith("https://")
- ) or is_loopback(image_url):
- print(
- f"Possible ssrf attempt to /api/removebg endpoint with image url: {image_url}"
- )
- return JSONResponse(
- content=error_dict("Invalid image url."), status_code=400
- ) # possible ssrf attempt
-
- image = None
- bg = None
- parameters = None
- if (
- content_type == "application/x-www-form-urlencoded"
- or "multipart/form-data" in content_type
- ):
- if image_file_b64 is None and image_url is None and image_file is None:
- return JSONResponse(content=error_dict("File not found"), status_code=400)
-
- if image_file_b64:
- if len(image_file_b64) == 0:
- return JSONResponse(content=error_dict("Empty image"), status_code=400)
- try:
- image = Image.open(io.BytesIO(base64.b64decode(image_file_b64)))
- except BaseException:
- return JSONResponse(
- content=error_dict("Error decode image!"), status_code=400
- )
- elif image_url:
- try:
- image = Image.open(io.BytesIO(requests.get(image_url).content))
- except BaseException:
- return JSONResponse(
- content=error_dict("Error download image!"), status_code=400
- )
- elif image_file:
- if len(image_file) == 0:
- return JSONResponse(content=error_dict("Empty image"), status_code=400)
- image = Image.open(io.BytesIO(image_file))
-
- if bg_image_file:
- if len(bg_image_file) == 0:
- return JSONResponse(content=error_dict("Empty image"), status_code=400)
- bg = Image.open(io.BytesIO(bg_image_file))
- try:
- parameters = Parameters(
- image_file_b64=image_file_b64,
- image_url=image_url,
- size=size,
- type=type,
- format=format,
- roi=roi,
- crop=crop,
- crop_margin=crop_margin,
- scale=scale,
- position=position,
- channels=channels,
- add_shadow=add_shadow,
- semitransparency=semitransparency,
- bg_color=bg_color,
- )
- except ValidationError as e:
- return JSONResponse(
- content=e.json(), status_code=400, media_type="application/json"
- )
-
- else:
- payload = None
- try:
- payload = await request.json()
- except JSONDecodeError:
- return JSONResponse(content=error_dict("Empty json"), status_code=400)
- try:
- parameters = Parameters(**payload)
- except ValidationError as e:
- return Response(
- content=e.json(), status_code=400, media_type="application/json"
- )
- if parameters.image_file_b64 is None and parameters.image_url is None:
- return JSONResponse(content=error_dict("File not found"), status_code=400)
-
- if parameters.image_file_b64:
- if len(parameters.image_file_b64) == 0:
- return JSONResponse(content=error_dict("Empty image"), status_code=400)
- try:
- image = Image.open(
- io.BytesIO(base64.b64decode(parameters.image_file_b64))
- )
- except BaseException:
- return JSONResponse(
- content=error_dict("Error decode image!"), status_code=400
- )
- elif parameters.image_url:
- if not (
- parameters.image_url.startswith("http://")
- or parameters.image_url.startswith("https://")
- ) or is_loopback(parameters.image_url):
- print(
- f"Possible ssrf attempt to /api/removebg endpoint with image url: {parameters.image_url}"
- )
- return JSONResponse(
- content=error_dict("Invalid image url."), status_code=400
- ) # possible ssrf attempt
- try:
- image = Image.open(
- io.BytesIO(requests.get(parameters.image_url).content)
- )
- except BaseException:
- return JSONResponse(
- content=error_dict("Error download image!"), status_code=400
- )
- if image is None:
- return JSONResponse(
- content=error_dict("Error download image!"), status_code=400
- )
-
- job_id = ml_processor.job_create([parameters.dict(), image, bg, False])
-
- while ml_processor.job_status(job_id) != "finished":
- if ml_processor.job_status(job_id) == "not_found":
- return JSONResponse(
- content=error_dict("Job ID not found!"), status_code=500
- )
- time.sleep(5)
-
- result = ml_processor.job_result(job_id)
- return handle_response(result, image)
-
-
-@api_router.get("/account")
-def account():
- """
- Stub for compatibility with remove.bg api libraries
- """
- return JSONResponse(
- content={
- "data": {
- "attributes": {
- "credits": {
- "total": 99999,
- "subscription": 99999,
- "payg": 99999,
- "enterprise": 99999,
- },
- "api": {"free_calls": 99999, "sizes": "all"},
- }
- }
- },
- status_code=200,
- )
-
-
-@api_router.get("/admin/config")
-def status(auth: str = Depends(Authenticate)):
- """
- Returns the current server config.
- """
- if not auth or auth != "admin":
- return JSONResponse(
- content=error_dict("Authentication failed"), status_code=403
- )
- resp = JSONResponse(content=config.json(), status_code=200)
- resp.headers["X-Credits-Charged"] = "0"
- return resp
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Al Qunut By Sudais Pdf Download !LINK!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Al Qunut By Sudais Pdf Download !LINK!.md
deleted file mode 100644
index 0914c840ead6c16364e51c4e39eab41f86c42bc3..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Al Qunut By Sudais Pdf Download !LINK!.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
# How to Download Al Qunut by Sudais PDF
-
Al Qunut is a special supplication recited by Muslims during the Witr prayer in the last part of the night. It is a heartfelt plea to Allah for guidance, forgiveness, mercy and protection. Al Qunut has many benefits and virtues, as it is a way of communicating with Allah and expressing one's needs and feelings.
-
One of the most famous reciters of Al Qunut is Sheikh Abdul Rahman As-Sudais, the imam of the Grand Mosque in Makkah. His voice is melodious and his words are powerful and moving. Many Muslims around the world listen to his recitation and follow along with his dua.
If you want to download Al Qunut by Sudais PDF, you can find it online on various websites that offer Islamic audio and video files. One of them is Archive.org, which is a non-profit library of millions of free books, movies, music and more. Here are the steps to download Al Qunut by Sudais PDF from Archive.org:
1. On the right side of the page, you will see a list of download options. Click on the one that says "PDF" to download the file in PDF format.
2. Save the file to your device and open it with any PDF reader.
3. Enjoy listening to and reading Al Qunut by Sudais PDF.
-
-
You can also find other versions of Al Qunut by Sudais on Archive.org, such as Witr and Dua Al Qunoot by Sheikh Abdul Rahman As-Sudais or Night 1 - Dua Al Qunoot by Sheikh Sudais HQ. You can download them in different formats, such as MP3, OGG or MPEG4.
-
May Allah accept your dua and grant you His blessings. Ameen.
Al Qunut is a very flexible and adaptable dua that can be recited in any language and with any words. However, there are some recommended words and phrases that the Prophet Muhammad (peace be upon him) taught us to say in Al Qunut. Some of them are:
-
-
Allahumma inna nasta'eenuka wa nastaghfiruka wa nu'minu bika wa natawakkalu 'alaika wa nuthni 'alaikal khairi wa nashkuruka wa la nakfuruka wa nakhla'u wa natruku man yafjuruk. (O Allah, we seek Your help and Your forgiveness and we believe in You and rely on You and praise You for all the good things and we thank You and we do not deny You and we abandon and forsake those who disobey You.)
-
Allahumma iyyaka na'budu wa laka nusalli wa nasjudu wa ilaika nas'a wa nahfidu wa narju rahmataka wa nakhsha 'adhabaka inna 'adhabaka bil kuffari mulhiq. (O Allah, You alone we worship and to You we pray and prostrate and to You we hasten and present ourselves and we hope for Your mercy and we fear Your punishment. Indeed, Your punishment will overtake the disbelievers.)
-
Allahumma ighfir lana warhamna wa 'afina wa ihdina wasrif 'anna sharra ma qadaita fa innaka taqdi wa la yuqda 'alaik. (O Allah, forgive us and have mercy on us and grant us well-being and guide us and avert from us the evil of what You have decreed. For indeed, You decree and none can decree over You.)
-
-
These are some of the examples of Al Qunut that we can learn from the Sunnah of the Prophet Muhammad (peace be upon him). We can also add our own personal supplications and requests to Allah in Al Qunut, as long as they are lawful and good.
-
Al Qunut is a very powerful and effective way of invoking Allah's help and mercy in times of hardship and distress. It is also a way of expressing our gratitude and praise to Allah for His countless blessings and favors. Al Qunut is a means of strengthening our faith and trust in Allah and His plan for us. Al Qunut is a source of comfort and peace for our hearts and souls.
-
-
\ No newline at end of file
diff --git a/spaces/neveu/img-to-music/share_btn.py b/spaces/neveu/img-to-music/share_btn.py
deleted file mode 100644
index 1a2ac6a6e74b114dbd54c2f24723a87180db51ef..0000000000000000000000000000000000000000
--- a/spaces/neveu/img-to-music/share_btn.py
+++ /dev/null
@@ -1,100 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
- const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
- const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- async function getOutputMusicFile(audioEL){
- const res = await fetch(audioEL.src);
- const blob = await res.blob();
- const audioId = Date.now() % 200;
- const fileName = `img-to-music-${audioId}.wav`;
- const musicBlob = new File([blob], fileName, { type: 'audio/wav' });
- console.log(musicBlob);
- return musicBlob;
- }
-
- async function audioToBase64(audioFile) {
- return new Promise((resolve, reject) => {
- let reader = new FileReader();
- reader.readAsDataURL(audioFile);
- reader.onload = () => resolve(reader.result);
- reader.onerror = error => reject(error);
-
- });
- }
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputImgEl = gradioEl.querySelector('#input-img img');
- const outputMusic = gradioEl.querySelector('#music-output audio');
- const outputMusic_src = gradioEl.querySelector('#music-output audio').src;
- const outputMusic_name = outputMusic_src.split('/').pop();
- let titleTxt = outputMusic_name;
- //if(titleTxt.length > 100){
- // titleTxt = titleTxt.slice(0, 100) + ' ...';
- //}
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputMusic){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const inputFile = await getInputImgFile(inputImgEl);
- const urlInputImg = await uploadFile(inputFile);
- const musicFile = await getOutputMusicFile(outputMusic);
- const dataOutputMusic = await uploadFile(musicFile);
-
- const descriptionMd = `#### Input img:
-
-
-#### Music:
-
-
-`;
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/nicehero/ManualMask/README.md b/spaces/nicehero/ManualMask/README.md
deleted file mode 100644
index 7869a1c04eff24c6b65d55a973680a0237e12caf..0000000000000000000000000000000000000000
--- a/spaces/nicehero/ManualMask/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ManualMask
-emoji: 📈
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: bsd
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nightfury/img2music/share_btn.py b/spaces/nightfury/img2music/share_btn.py
deleted file mode 100644
index cc6a470a1ef9d8687d19658cd0106f8c3b9b053d..0000000000000000000000000000000000000000
--- a/spaces/nightfury/img2music/share_btn.py
+++ /dev/null
@@ -1,100 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
- const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
- const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- async function getOutputMusicFile(audioEL){
- const res = await fetch(audioEL.src);
- const blob = await res.blob();
- const audioId = Date.now() % 200;
- const fileName = `img-to-music-${audioId}.wav`;
- const musicBlob = new File([blob], fileName, { type: 'audio/wav' });
- console.log(musicBlob);
- return musicBlob;
- }
-
- async function audioToBase64(audioFile) {
- return new Promise((resolve, reject) => {
- let reader = new FileReader();
- reader.readAsDataURL(audioFile);
- reader.onload = () => resolve(reader.result);
- reader.onerror = error => reject(error);
-
- });
- }
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputImgEl = gradioEl.querySelector('#input-img img');
- const outputMusic = gradioEl.querySelector('#music-output audio');
- const outputMusic_src = gradioEl.querySelector('#music-output audio').src;
- const outputMusic_name = outputMusic_src.split('/').pop();
- let titleTxt = outputMusic_name;
- //if(titleTxt.length > 100){
- // titleTxt = titleTxt.slice(0, 100) + ' ...';
- //}
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputMusic){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const inputFile = await getInputImgFile(inputImgEl);
- const urlInputImg = await uploadFile(inputFile);
- const musicFile = await getOutputMusicFile(outputMusic);
- const dataOutputMusic = await uploadFile(musicFile);
-
- const descriptionMd = `#### Input img:
-
-
-#### Music:
-
-
-`;
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/nightfury/img2music/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/losses.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/losses.py
deleted file mode 100644
index 850a852a2f0986d4d1ce89a526d96db42c76e44f..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/losses.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import math
-import torch
-
-
-def diou_loss(
- boxes1: torch.Tensor,
- boxes2: torch.Tensor,
- reduction: str = "none",
- eps: float = 1e-7,
-) -> torch.Tensor:
- """
- Distance Intersection over Union Loss (Zhaohui Zheng et. al)
- https://arxiv.org/abs/1911.08287
- Args:
- boxes1, boxes2 (Tensor): box locations in XYXY format, shape (N, 4) or (4,).
- reduction: 'none' | 'mean' | 'sum'
- 'none': No reduction will be applied to the output.
- 'mean': The output will be averaged.
- 'sum': The output will be summed.
- eps (float): small number to prevent division by zero
- """
-
- x1, y1, x2, y2 = boxes1.unbind(dim=-1)
- x1g, y1g, x2g, y2g = boxes2.unbind(dim=-1)
-
- # TODO: use torch._assert_async() when pytorch 1.8 support is dropped
- assert (x2 >= x1).all(), "bad box: x1 larger than x2"
- assert (y2 >= y1).all(), "bad box: y1 larger than y2"
-
- # Intersection keypoints
- xkis1 = torch.max(x1, x1g)
- ykis1 = torch.max(y1, y1g)
- xkis2 = torch.min(x2, x2g)
- ykis2 = torch.min(y2, y2g)
-
- intsct = torch.zeros_like(x1)
- mask = (ykis2 > ykis1) & (xkis2 > xkis1)
- intsct[mask] = (xkis2[mask] - xkis1[mask]) * (ykis2[mask] - ykis1[mask])
- union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps
- iou = intsct / union
-
- # smallest enclosing box
- xc1 = torch.min(x1, x1g)
- yc1 = torch.min(y1, y1g)
- xc2 = torch.max(x2, x2g)
- yc2 = torch.max(y2, y2g)
- diag_len = ((xc2 - xc1) ** 2) + ((yc2 - yc1) ** 2) + eps
-
- # centers of boxes
- x_p = (x2 + x1) / 2
- y_p = (y2 + y1) / 2
- x_g = (x1g + x2g) / 2
- y_g = (y1g + y2g) / 2
- distance = ((x_p - x_g) ** 2) + ((y_p - y_g) ** 2)
-
- # Eqn. (7)
- loss = 1 - iou + (distance / diag_len)
- if reduction == "mean":
- loss = loss.mean() if loss.numel() > 0 else 0.0 * loss.sum()
- elif reduction == "sum":
- loss = loss.sum()
-
- return loss
-
-
-def ciou_loss(
- boxes1: torch.Tensor,
- boxes2: torch.Tensor,
- reduction: str = "none",
- eps: float = 1e-7,
-) -> torch.Tensor:
- """
- Complete Intersection over Union Loss (Zhaohui Zheng et. al)
- https://arxiv.org/abs/1911.08287
- Args:
- boxes1, boxes2 (Tensor): box locations in XYXY format, shape (N, 4) or (4,).
- reduction: 'none' | 'mean' | 'sum'
- 'none': No reduction will be applied to the output.
- 'mean': The output will be averaged.
- 'sum': The output will be summed.
- eps (float): small number to prevent division by zero
- """
-
- x1, y1, x2, y2 = boxes1.unbind(dim=-1)
- x1g, y1g, x2g, y2g = boxes2.unbind(dim=-1)
-
- # TODO: use torch._assert_async() when pytorch 1.8 support is dropped
- assert (x2 >= x1).all(), "bad box: x1 larger than x2"
- assert (y2 >= y1).all(), "bad box: y1 larger than y2"
-
- # Intersection keypoints
- xkis1 = torch.max(x1, x1g)
- ykis1 = torch.max(y1, y1g)
- xkis2 = torch.min(x2, x2g)
- ykis2 = torch.min(y2, y2g)
-
- intsct = torch.zeros_like(x1)
- mask = (ykis2 > ykis1) & (xkis2 > xkis1)
- intsct[mask] = (xkis2[mask] - xkis1[mask]) * (ykis2[mask] - ykis1[mask])
- union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps
- iou = intsct / union
-
- # smallest enclosing box
- xc1 = torch.min(x1, x1g)
- yc1 = torch.min(y1, y1g)
- xc2 = torch.max(x2, x2g)
- yc2 = torch.max(y2, y2g)
- diag_len = ((xc2 - xc1) ** 2) + ((yc2 - yc1) ** 2) + eps
-
- # centers of boxes
- x_p = (x2 + x1) / 2
- y_p = (y2 + y1) / 2
- x_g = (x1g + x2g) / 2
- y_g = (y1g + y2g) / 2
- distance = ((x_p - x_g) ** 2) + ((y_p - y_g) ** 2)
-
- # width and height of boxes
- w_pred = x2 - x1
- h_pred = y2 - y1
- w_gt = x2g - x1g
- h_gt = y2g - y1g
- v = (4 / (math.pi**2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
- with torch.no_grad():
- alpha = v / (1 - iou + v + eps)
-
- # Eqn. (10)
- loss = 1 - iou + (distance / diag_len) + alpha * v
- if reduction == "mean":
- loss = loss.mean() if loss.numel() > 0 else 0.0 * loss.sum()
- elif reduction == "sum":
- loss = loss.sum()
-
- return loss
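-
-# Usage sketch (illustrative comment, not part of detectron2): both losses take
-# boxes in XYXY format and compute one value per matched box pair, e.g.:
-#   pred = torch.tensor([[10.0, 10.0, 50.0, 60.0]])
-#   gt = torch.tensor([[12.0, 12.0, 48.0, 58.0]])
-#   diou_loss(pred, gt, reduction="mean")  # IoU term plus normalized center distance
-#   ciou_loss(pred, gt, reduction="mean")  # additionally penalizes aspect-ratio mismatch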
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_detection_utils.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_detection_utils.py
deleted file mode 100644
index aac56c07da2be4e181e3e95de8cee1fc2858286d..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_detection_utils.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import copy
-import numpy as np
-import os
-import unittest
-import pycocotools.mask as mask_util
-
-from detectron2.data import MetadataCatalog, detection_utils
-from detectron2.data import transforms as T
-from detectron2.structures import BitMasks, BoxMode
-from detectron2.utils.file_io import PathManager
-
-
-class TestTransformAnnotations(unittest.TestCase):
- def test_transform_simple_annotation(self):
- transforms = T.TransformList([T.HFlipTransform(400)])
- anno = {
- "bbox": np.asarray([10, 10, 200, 300]),
- "bbox_mode": BoxMode.XYXY_ABS,
- "category_id": 3,
- "segmentation": [[10, 10, 100, 100, 100, 10], [150, 150, 200, 150, 200, 200]],
- }
-
- output = detection_utils.transform_instance_annotations(anno, transforms, (400, 400))
- self.assertTrue(np.allclose(output["bbox"], [200, 10, 390, 300]))
- self.assertEqual(len(output["segmentation"]), len(anno["segmentation"]))
- self.assertTrue(np.allclose(output["segmentation"][0], [390, 10, 300, 100, 300, 10]))
-
- detection_utils.annotations_to_instances([output, output], (400, 400))
-
- def test_transform_empty_annotation(self):
- detection_utils.annotations_to_instances([], (400, 400))
-
- def test_flip_keypoints(self):
- transforms = T.TransformList([T.HFlipTransform(400)])
- anno = {
- "bbox": np.asarray([10, 10, 200, 300]),
- "bbox_mode": BoxMode.XYXY_ABS,
- "keypoints": np.random.rand(17, 3) * 50 + 15,
- }
-
- output = detection_utils.transform_instance_annotations(
- copy.deepcopy(anno),
- transforms,
- (400, 400),
- keypoint_hflip_indices=detection_utils.create_keypoint_hflip_indices(
- ["keypoints_coco_2017_train"]
- ),
- )
- # The first keypoint is nose
- self.assertTrue(np.allclose(output["keypoints"][0, 0], 400 - anno["keypoints"][0, 0]))
- # The last 16 keypoints are 8 left-right pairs
- self.assertTrue(
- np.allclose(
- output["keypoints"][1:, 0].reshape(-1, 2)[:, ::-1],
- 400 - anno["keypoints"][1:, 0].reshape(-1, 2),
- )
- )
- self.assertTrue(
- np.allclose(
- output["keypoints"][1:, 1:].reshape(-1, 2, 2)[:, ::-1, :],
- anno["keypoints"][1:, 1:].reshape(-1, 2, 2),
- )
- )
-
- def test_crop(self):
- transforms = T.TransformList([T.CropTransform(300, 300, 10, 10)])
- keypoints = np.random.rand(17, 3) * 50 + 15
- keypoints[:, 2] = 2
- anno = {
- "bbox": np.asarray([10, 10, 200, 400]),
- "bbox_mode": BoxMode.XYXY_ABS,
- "keypoints": keypoints,
- }
-
- output = detection_utils.transform_instance_annotations(
- copy.deepcopy(anno), transforms, (10, 10)
- )
- # box is shifted and cropped
- self.assertTrue((output["bbox"] == np.asarray([0, 0, 0, 10])).all())
- # keypoints are no longer visible
- self.assertTrue((output["keypoints"][:, 2] == 0).all())
-
- def test_transform_RLE(self):
- transforms = T.TransformList([T.HFlipTransform(400)])
- mask = np.zeros((300, 400), order="F").astype("uint8")
- mask[:, :200] = 1
-
- anno = {
- "bbox": np.asarray([10, 10, 200, 300]),
- "bbox_mode": BoxMode.XYXY_ABS,
- "segmentation": mask_util.encode(mask[:, :, None])[0],
- "category_id": 3,
- }
- output = detection_utils.transform_instance_annotations(
- copy.deepcopy(anno), transforms, (300, 400)
- )
- mask = output["segmentation"]
- self.assertTrue((mask[:, 200:] == 1).all())
- self.assertTrue((mask[:, :200] == 0).all())
-
- inst = detection_utils.annotations_to_instances(
- [output, output], (400, 400), mask_format="bitmask"
- )
- self.assertTrue(isinstance(inst.gt_masks, BitMasks))
-
- def test_transform_RLE_resize(self):
- transforms = T.TransformList(
- [T.HFlipTransform(400), T.ScaleTransform(300, 400, 400, 400, "bilinear")]
- )
- mask = np.zeros((300, 400), order="F").astype("uint8")
- mask[:, :200] = 1
-
- anno = {
- "bbox": np.asarray([10, 10, 200, 300]),
- "bbox_mode": BoxMode.XYXY_ABS,
- "segmentation": mask_util.encode(mask[:, :, None])[0],
- "category_id": 3,
- }
- output = detection_utils.transform_instance_annotations(
- copy.deepcopy(anno), transforms, (400, 400)
- )
-
- inst = detection_utils.annotations_to_instances(
- [output, output], (400, 400), mask_format="bitmask"
- )
- self.assertTrue(isinstance(inst.gt_masks, BitMasks))
-
- def test_gen_crop(self):
- instance = {"bbox": [10, 10, 100, 100], "bbox_mode": BoxMode.XYXY_ABS}
- t = detection_utils.gen_crop_transform_with_instance((10, 10), (150, 150), instance)
- # the box center must fall into the cropped region
- self.assertTrue(t.x0 <= 55 <= t.x0 + t.w)
-
- def test_gen_crop_outside_boxes(self):
- instance = {"bbox": [10, 10, 100, 100], "bbox_mode": BoxMode.XYXY_ABS}
- with self.assertRaises(AssertionError):
- detection_utils.gen_crop_transform_with_instance((10, 10), (15, 15), instance)
-
- def test_read_sem_seg(self):
- cityscapes_dir = MetadataCatalog.get("cityscapes_fine_sem_seg_val").gt_dir
- sem_seg_gt_path = os.path.join(
- cityscapes_dir, "frankfurt", "frankfurt_000001_083852_gtFine_labelIds.png"
- )
- if not PathManager.exists(sem_seg_gt_path):
- raise unittest.SkipTest(
- "Semantic segmentation ground truth {} not found.".format(sem_seg_gt_path)
- )
- sem_seg = detection_utils.read_image(sem_seg_gt_path, "L")
- self.assertEqual(sem_seg.ndim, 3)
- self.assertEqual(sem_seg.shape[2], 1)
- self.assertEqual(sem_seg.dtype, np.uint8)
- self.assertEqual(sem_seg.max(), 32)
- self.assertEqual(sem_seg.min(), 1)
-
- def test_read_exif_orientation(self):
- # https://github.com/recurser/exif-orientation-examples/raw/master/Landscape_5.jpg
- URL = "detectron2://assets/Landscape_5.jpg"
- img = detection_utils.read_image(URL, "RGB")
- self.assertEqual(img.ndim, 3)
- self.assertEqual(img.dtype, np.uint8)
- self.assertEqual(img.shape, (1200, 1800, 3)) # check that shape is not transposed
-
- def test_opencv_exif_orientation(self):
- import cv2
-
- URL = "detectron2://assets/Landscape_5.jpg"
- with PathManager.open(URL, "rb") as f:
- img = cv2.imdecode(np.frombuffer(f.read(), dtype="uint8"), cv2.IMREAD_COLOR)
- self.assertEqual(img.dtype, np.uint8)
- self.assertEqual(img.shape, (1200, 1800, 3))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/nomic-ai/allenai_prosocial-dialog/README.md b/spaces/nomic-ai/allenai_prosocial-dialog/README.md
deleted file mode 100644
index a7c5142359e64cd9a8f10a792bdd8f6299a39280..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/allenai_prosocial-dialog/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: allenai/prosocial-dialog
-emoji: 🗺️
-colorFrom: purple
-colorTo: red
-sdk: static
-pinned: false
----
\ No newline at end of file
diff --git a/spaces/nomic-ai/derek-thomas_ScienceQA/style.css b/spaces/nomic-ai/derek-thomas_ScienceQA/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/derek-thomas_ScienceQA/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/components/contact.py b/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/components/contact.py
deleted file mode 100644
index 93e3d1653c8c90640b1fb0752f96ee3a75f2cedb..0000000000000000000000000000000000000000
--- a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/components/contact.py
+++ /dev/null
@@ -1,519 +0,0 @@
-import time
-import re
-import io
-import json
-import copy
-import logging
-
-from .. import config, utils
-from ..returnvalues import ReturnValue
-from ..storage import contact_change
-from ..utils import update_info_dict
-
-logger = logging.getLogger('itchat')
-
-
-def load_contact(core):
- core.update_chatroom = update_chatroom
- core.update_friend = update_friend
- core.get_contact = get_contact
- core.get_friends = get_friends
- core.get_chatrooms = get_chatrooms
- core.get_mps = get_mps
- core.set_alias = set_alias
- core.set_pinned = set_pinned
- core.accept_friend = accept_friend
- core.get_head_img = get_head_img
- core.create_chatroom = create_chatroom
- core.set_chatroom_name = set_chatroom_name
- core.delete_member_from_chatroom = delete_member_from_chatroom
- core.add_member_into_chatroom = add_member_into_chatroom
-
-
-def update_chatroom(self, userName, detailedMember=False):
- if not isinstance(userName, list):
- userName = [userName]
- url = '%s/webwxbatchgetcontact?type=ex&r=%s' % (
- self.loginInfo['url'], int(time.time()))
- headers = {
- 'ContentType': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'Count': len(userName),
- 'List': [{
- 'UserName': u,
- 'ChatRoomId': '', } for u in userName], }
- chatroomList = json.loads(self.s.post(url, data=json.dumps(data), headers=headers
- ).content.decode('utf8', 'replace')).get('ContactList')
- if not chatroomList:
- return ReturnValue({'BaseResponse': {
- 'ErrMsg': 'No chatroom found',
- 'Ret': -1001, }})
-
- if detailedMember:
- def get_detailed_member_info(encryChatroomId, memberList):
- url = '%s/webwxbatchgetcontact?type=ex&r=%s' % (
- self.loginInfo['url'], int(time.time()))
- headers = {
- 'ContentType': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT, }
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'Count': len(memberList),
- 'List': [{
- 'UserName': member['UserName'],
- 'EncryChatRoomId': encryChatroomId}
- for member in memberList], }
- return json.loads(self.s.post(url, data=json.dumps(data), headers=headers
- ).content.decode('utf8', 'replace'))['ContactList']
- MAX_GET_NUMBER = 50
- for chatroom in chatroomList:
- totalMemberList = []
- for i in range(int(len(chatroom['MemberList']) / MAX_GET_NUMBER + 1)):
- memberList = chatroom['MemberList'][i *
- MAX_GET_NUMBER: (i+1)*MAX_GET_NUMBER]
- totalMemberList += get_detailed_member_info(
- chatroom['EncryChatRoomId'], memberList)
- chatroom['MemberList'] = totalMemberList
-
- update_local_chatrooms(self, chatroomList)
- r = [self.storageClass.search_chatrooms(userName=c['UserName'])
- for c in chatroomList]
- return r if 1 < len(r) else r[0]
-
-
-def update_friend(self, userName):
- if not isinstance(userName, list):
- userName = [userName]
- url = '%s/webwxbatchgetcontact?type=ex&r=%s' % (
- self.loginInfo['url'], int(time.time()))
- headers = {
- 'ContentType': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'Count': len(userName),
- 'List': [{
- 'UserName': u,
- 'EncryChatRoomId': '', } for u in userName], }
- friendList = json.loads(self.s.post(url, data=json.dumps(data), headers=headers
- ).content.decode('utf8', 'replace')).get('ContactList')
-
- update_local_friends(self, friendList)
- r = [self.storageClass.search_friends(userName=f['UserName'])
- for f in friendList]
- return r if len(r) != 1 else r[0]
-
-
-@contact_change
-def update_local_chatrooms(core, l):
- '''
- get a list of chatrooms for updating local chatrooms
- return a list of given chatrooms with updated info
- '''
- for chatroom in l:
- # format new chatrooms
- utils.emoji_formatter(chatroom, 'NickName')
- for member in chatroom['MemberList']:
- if 'NickName' in member:
- utils.emoji_formatter(member, 'NickName')
- if 'DisplayName' in member:
- utils.emoji_formatter(member, 'DisplayName')
- if 'RemarkName' in member:
- utils.emoji_formatter(member, 'RemarkName')
- # update it to old chatrooms
- oldChatroom = utils.search_dict_list(
- core.chatroomList, 'UserName', chatroom['UserName'])
- if oldChatroom:
- update_info_dict(oldChatroom, chatroom)
- # - update other values
- memberList = chatroom.get('MemberList', [])
- oldMemberList = oldChatroom['MemberList']
- if memberList:
- for member in memberList:
- oldMember = utils.search_dict_list(
- oldMemberList, 'UserName', member['UserName'])
- if oldMember:
- update_info_dict(oldMember, member)
- else:
- oldMemberList.append(member)
- else:
- core.chatroomList.append(chatroom)
- oldChatroom = utils.search_dict_list(
- core.chatroomList, 'UserName', chatroom['UserName'])
- # delete useless members
- if len(chatroom['MemberList']) != len(oldChatroom['MemberList']) and \
- chatroom['MemberList']:
- existsUserNames = [member['UserName']
- for member in chatroom['MemberList']]
- delList = []
- for i, member in enumerate(oldChatroom['MemberList']):
- if member['UserName'] not in existsUserNames:
- delList.append(i)
- delList.sort(reverse=True)
- for i in delList:
- del oldChatroom['MemberList'][i]
- # - update OwnerUin
- if oldChatroom.get('ChatRoomOwner') and oldChatroom.get('MemberList'):
- owner = utils.search_dict_list(oldChatroom['MemberList'],
- 'UserName', oldChatroom['ChatRoomOwner'])
- oldChatroom['OwnerUin'] = (owner or {}).get('Uin', 0)
- # - update IsAdmin
- if 'OwnerUin' in oldChatroom and oldChatroom['OwnerUin'] != 0:
- oldChatroom['IsAdmin'] = \
- oldChatroom['OwnerUin'] == int(core.loginInfo['wxuin'])
- else:
- oldChatroom['IsAdmin'] = None
- # - update Self
- newSelf = utils.search_dict_list(oldChatroom['MemberList'],
- 'UserName', core.storageClass.userName)
- oldChatroom['Self'] = newSelf or copy.deepcopy(core.loginInfo['User'])
- return {
- 'Type': 'System',
- 'Text': [chatroom['UserName'] for chatroom in l],
- 'SystemInfo': 'chatrooms',
- 'FromUserName': core.storageClass.userName,
- 'ToUserName': core.storageClass.userName, }
-
-
-@contact_change
-def update_local_friends(core, l):
- '''
- get a list of friends or mps for updating local contact
- '''
- fullList = core.memberList + core.mpList
- for friend in l:
- if 'NickName' in friend:
- utils.emoji_formatter(friend, 'NickName')
- if 'DisplayName' in friend:
- utils.emoji_formatter(friend, 'DisplayName')
- if 'RemarkName' in friend:
- utils.emoji_formatter(friend, 'RemarkName')
- oldInfoDict = utils.search_dict_list(
- fullList, 'UserName', friend['UserName'])
- if oldInfoDict is None:
- oldInfoDict = copy.deepcopy(friend)
- if oldInfoDict['VerifyFlag'] & 8 == 0:
- core.memberList.append(oldInfoDict)
- else:
- core.mpList.append(oldInfoDict)
- else:
- update_info_dict(oldInfoDict, friend)
-
-
-@contact_change
-def update_local_uin(core, msg):
- '''
- content contains uins and StatusNotifyUserName contains username
- they are in same order, so what I do is to pair them together
-
- I caught an exception in this method while not knowing why
- but don't worry, it won't cause any problem
- '''
- uins = re.search('([^<]*?)<', msg['Content'])
- usernameChangedList = []
- r = {
- 'Type': 'System',
- 'Text': usernameChangedList,
- 'SystemInfo': 'uins', }
- if uins:
- uins = uins.group(1).split(',')
- usernames = msg['StatusNotifyUserName'].split(',')
- if 0 < len(uins) == len(usernames):
- for uin, username in zip(uins, usernames):
- if not '@' in username:
- continue
- fullContact = core.memberList + core.chatroomList + core.mpList
- userDicts = utils.search_dict_list(fullContact,
- 'UserName', username)
- if userDicts:
- if userDicts.get('Uin', 0) == 0:
- userDicts['Uin'] = uin
- usernameChangedList.append(username)
- logger.debug('Uin fetched: %s, %s' % (username, uin))
- else:
- if userDicts['Uin'] != uin:
- logger.debug('Uin changed: %s, %s' % (
- userDicts['Uin'], uin))
- else:
- if '@@' in username:
- core.storageClass.updateLock.release()
- update_chatroom(core, username)
- core.storageClass.updateLock.acquire()
- newChatroomDict = utils.search_dict_list(
- core.chatroomList, 'UserName', username)
- if newChatroomDict is None:
- newChatroomDict = utils.struct_friend_info({
- 'UserName': username,
- 'Uin': uin,
- 'Self': copy.deepcopy(core.loginInfo['User'])})
- core.chatroomList.append(newChatroomDict)
- else:
- newChatroomDict['Uin'] = uin
- elif '@' in username:
- core.storageClass.updateLock.release()
- update_friend(core, username)
- core.storageClass.updateLock.acquire()
- newFriendDict = utils.search_dict_list(
- core.memberList, 'UserName', username)
- if newFriendDict is None:
- newFriendDict = utils.struct_friend_info({
- 'UserName': username,
- 'Uin': uin, })
- core.memberList.append(newFriendDict)
- else:
- newFriendDict['Uin'] = uin
- usernameChangedList.append(username)
- logger.debug('Uin fetched: %s, %s' % (username, uin))
- else:
- logger.debug('Wrong length of uins & usernames: %s, %s' % (
- len(uins), len(usernames)))
- else:
- logger.debug('No uins in 51 message')
- logger.debug(msg['Content'])
- return r
-
-
-def get_contact(self, update=False):
- if not update:
- return utils.contact_deep_copy(self, self.chatroomList)
-
- def _get_contact(seq=0):
- url = '%s/webwxgetcontact?r=%s&seq=%s&skey=%s' % (self.loginInfo['url'],
- int(time.time()), seq, self.loginInfo['skey'])
- headers = {
- 'ContentType': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT, }
- try:
- r = self.s.get(url, headers=headers)
- except:
- logger.info(
- 'Failed to fetch contact; this may be because of the number of your chatrooms')
- for chatroom in self.get_chatrooms():
- self.update_chatroom(chatroom['UserName'], detailedMember=True)
- return 0, []
- j = json.loads(r.content.decode('utf-8', 'replace'))
- return j.get('Seq', 0), j.get('MemberList')
- seq, memberList = 0, []
- while 1:
- seq, batchMemberList = _get_contact(seq)
- memberList.extend(batchMemberList)
- if seq == 0:
- break
- chatroomList, otherList = [], []
- for m in memberList:
- if m['Sex'] != 0:
- otherList.append(m)
- elif '@@' in m['UserName']:
- chatroomList.append(m)
- elif '@' in m['UserName']:
- # mp will be dealt in update_local_friends as well
- otherList.append(m)
- if chatroomList:
- update_local_chatrooms(self, chatroomList)
- if otherList:
- update_local_friends(self, otherList)
- return utils.contact_deep_copy(self, chatroomList)
-
-
-def get_friends(self, update=False):
- if update:
- self.get_contact(update=True)
- return utils.contact_deep_copy(self, self.memberList)
-
-
-def get_chatrooms(self, update=False, contactOnly=False):
- if contactOnly:
- return self.get_contact(update=True)
- else:
- if update:
- self.get_contact(True)
- return utils.contact_deep_copy(self, self.chatroomList)
-
-
-def get_mps(self, update=False):
- if update:
- self.get_contact(update=True)
- return utils.contact_deep_copy(self, self.mpList)
-
-
-def set_alias(self, userName, alias):
- oldFriendInfo = utils.search_dict_list(
- self.memberList, 'UserName', userName)
- if oldFriendInfo is None:
- return ReturnValue({'BaseResponse': {
- 'Ret': -1001, }})
- url = '%s/webwxoplog?lang=%s&pass_ticket=%s' % (
- self.loginInfo['url'], 'zh_CN', self.loginInfo['pass_ticket'])
- data = {
- 'UserName': userName,
- 'CmdId': 2,
- 'RemarkName': alias,
- 'BaseRequest': self.loginInfo['BaseRequest'], }
- headers = {'User-Agent': config.USER_AGENT}
- r = self.s.post(url, json.dumps(data, ensure_ascii=False).encode('utf8'),
- headers=headers)
- r = ReturnValue(rawResponse=r)
- if r:
- oldFriendInfo['RemarkName'] = alias
- return r
-
-
-def set_pinned(self, userName, isPinned=True):
- url = '%s/webwxoplog?pass_ticket=%s' % (
- self.loginInfo['url'], self.loginInfo['pass_ticket'])
- data = {
- 'UserName': userName,
- 'CmdId': 3,
- 'OP': int(isPinned),
- 'BaseRequest': self.loginInfo['BaseRequest'], }
- headers = {'User-Agent': config.USER_AGENT}
- r = self.s.post(url, json=data, headers=headers)
- return ReturnValue(rawResponse=r)
-
-
-def accept_friend(self, userName, v4='', autoUpdate=True):
- url = f"{self.loginInfo['url']}/webwxverifyuser?r={int(time.time())}&pass_ticket={self.loginInfo['pass_ticket']}"
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'Opcode': 3, # 3
- 'VerifyUserListSize': 1,
- 'VerifyUserList': [{
- 'Value': userName,
- 'VerifyUserTicket': v4, }],
- 'VerifyContent': '',
- 'SceneListCount': 1,
- 'SceneList': [33],
- 'skey': self.loginInfo['skey'], }
- headers = {
- 'ContentType': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- r = self.s.post(url, headers=headers,
- data=json.dumps(data, ensure_ascii=False).encode('utf8', 'replace'))
- if autoUpdate:
- self.update_friend(userName)
- return ReturnValue(rawResponse=r)
-
-
-def get_head_img(self, userName=None, chatroomUserName=None, picDir=None):
- ''' get head image
- * if you want to get a chatroom head image: only set chatroomUserName
- * if you want to get a friend's head image: only set userName
- * if you want to get a chatroom member's head image: set both
- '''
- params = {
- 'userName': userName or chatroomUserName or self.storageClass.userName,
- 'skey': self.loginInfo['skey'],
- 'type': 'big', }
- url = '%s/webwxgeticon' % self.loginInfo['url']
- if chatroomUserName is None:
- infoDict = self.storageClass.search_friends(userName=userName)
- if infoDict is None:
- return ReturnValue({'BaseResponse': {
- 'ErrMsg': 'No friend found',
- 'Ret': -1001, }})
- else:
- if userName is None:
- url = '%s/webwxgetheadimg' % self.loginInfo['url']
- else:
- chatroom = self.storageClass.search_chatrooms(
- userName=chatroomUserName)
- if chatroom is None:
- return ReturnValue({'BaseResponse': {
- 'ErrMsg': 'No chatroom found',
- 'Ret': -1001, }})
- if 'EncryChatRoomId' in chatroom:
- params['chatroomid'] = chatroom['EncryChatRoomId']
- params['chatroomid'] = params.get(
- 'chatroomid') or chatroom['UserName']
- headers = {'User-Agent': config.USER_AGENT}
- r = self.s.get(url, params=params, stream=True, headers=headers)
- tempStorage = io.BytesIO()
- for block in r.iter_content(1024):
- tempStorage.write(block)
- if picDir is None:
- return tempStorage.getvalue()
- with open(picDir, 'wb') as f:
- f.write(tempStorage.getvalue())
- tempStorage.seek(0)
- return ReturnValue({'BaseResponse': {
- 'ErrMsg': 'Successfully downloaded',
- 'Ret': 0, },
- 'PostFix': utils.get_image_postfix(tempStorage.read(20)), })
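-
-# Usage sketch (illustrative; the usernames are placeholders): mirrors the three modes
-# described in the docstring above once this function is bound to the itchat core object.
-#   core.get_head_img(userName=friend_username, picDir='friend.jpg')      # friend avatar
-#   core.get_head_img(chatroomUserName=room_username, picDir='room.jpg')  # chatroom avatar
-#   core.get_head_img(userName=member_username,
-#                     chatroomUserName=room_username, picDir='member.jpg')  # chatroom member avatar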
-
-
-def create_chatroom(self, memberList, topic=''):
- url = '%s/webwxcreatechatroom?pass_ticket=%s&r=%s' % (
- self.loginInfo['url'], self.loginInfo['pass_ticket'], int(time.time()))
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'MemberCount': len(memberList.split(',')),
- 'MemberList': [{'UserName': member} for member in memberList.split(',')],
- 'Topic': topic, }
- headers = {
- 'content-type': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- r = self.s.post(url, headers=headers,
- data=json.dumps(data, ensure_ascii=False).encode('utf8', 'ignore'))
- return ReturnValue(rawResponse=r)
-
-
-def set_chatroom_name(self, chatroomUserName, name):
- url = '%s/webwxupdatechatroom?fun=modtopic&pass_ticket=%s' % (
- self.loginInfo['url'], self.loginInfo['pass_ticket'])
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'ChatRoomName': chatroomUserName,
- 'NewTopic': name, }
- headers = {
- 'content-type': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- r = self.s.post(url, headers=headers,
- data=json.dumps(data, ensure_ascii=False).encode('utf8', 'ignore'))
- return ReturnValue(rawResponse=r)
-
-
-def delete_member_from_chatroom(self, chatroomUserName, memberList):
- url = '%s/webwxupdatechatroom?fun=delmember&pass_ticket=%s' % (
- self.loginInfo['url'], self.loginInfo['pass_ticket'])
- data = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'ChatRoomName': chatroomUserName,
- 'DelMemberList': ','.join([member['UserName'] for member in memberList]), }
- headers = {
- 'content-type': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- r = self.s.post(url, data=json.dumps(data), headers=headers)
- return ReturnValue(rawResponse=r)
-
-
-def add_member_into_chatroom(self, chatroomUserName, memberList,
- useInvitation=False):
- ''' add or invite member into chatroom
- * there are two ways to get members into chatroom: invite or directly add
- * but for chatrooms with more than 40 users, you can only use invite
- * but don't worry, we will auto-force useInvitation for you when necessary
- '''
- if not useInvitation:
- chatroom = self.storageClass.search_chatrooms(
- userName=chatroomUserName)
- if not chatroom:
- chatroom = self.update_chatroom(chatroomUserName)
- if len(chatroom['MemberList']) > self.loginInfo['InviteStartCount']:
- useInvitation = True
- if useInvitation:
- fun, memberKeyName = 'invitemember', 'InviteMemberList'
- else:
- fun, memberKeyName = 'addmember', 'AddMemberList'
- url = '%s/webwxupdatechatroom?fun=%s&pass_ticket=%s' % (
- self.loginInfo['url'], fun, self.loginInfo['pass_ticket'])
- params = {
- 'BaseRequest': self.loginInfo['BaseRequest'],
- 'ChatRoomName': chatroomUserName,
- memberKeyName: memberList, }
- headers = {
- 'content-type': 'application/json; charset=UTF-8',
- 'User-Agent': config.USER_AGENT}
- r = self.s.post(url, data=json.dumps(params), headers=headers)
- return ReturnValue(rawResponse=r)
diff --git a/spaces/openai/openai-detector/detection.md b/spaces/openai/openai-detector/detection.md
deleted file mode 100644
index c10ca0a64af844027ae1275a117c90d478db6620..0000000000000000000000000000000000000000
--- a/spaces/openai/openai-detector/detection.md
+++ /dev/null
@@ -1,50 +0,0 @@
-We encourage you to try improving our baselines. Please let us know if you have questions or find any interesting results!
-
-## Simple baseline
-
-We've provided a starter baseline which trains a logistic regression detector on TF-IDF unigram and bigram features, in [`baseline.py`](./baseline.py).
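-
-For reference, here is a minimal sketch of this kind of TF-IDF + logistic regression detector. It is an illustration only, not the contents of `baseline.py`; the scikit-learn dependency and the `real_texts.txt` / `generated_texts.txt` paths are assumptions.
-
-```python
-# Illustrative sketch of a TF-IDF unigram+bigram logistic regression detector.
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.linear_model import LogisticRegression
-from sklearn.model_selection import train_test_split
-
-# Hypothetical inputs: one document per line, real text vs. GPT-2 samples.
-real = open("real_texts.txt", encoding="utf-8").read().splitlines()
-fake = open("generated_texts.txt", encoding="utf-8").read().splitlines()
-texts, labels = real + fake, [0] * len(real) + [1] * len(fake)
-
-X_train, X_test, y_train, y_test = train_test_split(
-    texts, labels, test_size=0.2, random_state=0)
-
-vectorizer = TfidfVectorizer(ngram_range=(1, 2))  # unigram + bigram TF-IDF features
-clf = LogisticRegression(max_iter=1000)
-clf.fit(vectorizer.fit_transform(X_train), y_train)
-print("accuracy:", clf.score(vectorizer.transform(X_test), y_test))
-```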
-
-### Initial Analysis
-
-The baseline achieves the following accuracies:
-
-| Model | Temperature 1 | Top-K 40 |
-| ----- | ------ | ------ |
-| 117M | 88.29% | 96.79% |
-| 345M | 88.94% | 95.22% |
-| 762M | 77.16% | 94.43% |
-| 1542M | 74.31% | 92.69% |
-
-
-
-Unsurprisingly, shorter documents are harder to detect, and performance improves gradually with length. Detection accuracy on short documents of about 500 characters (a long paragraph) is roughly 15% lower.
-
-
-
-Truncated sampling, which is commonly used for high-quality generations from the GPT-2 model family, results in a shift in the part of speech distribution of the generated text compared to real text. A clear example is the underuse of proper nouns and overuse of pronouns which are more generic. This shift contributes to the 8% to 18% higher detection rate of Top-K samples compared to random samples across models.
-
-### Finetuning
-
-When run on samples from the finetuned GPT-2 full model, detection rate falls from 92.7% to 70.2% for Top-K 40 generations. Note that about half of this drop is accounted for by length, since Amazon reviews are shorter than WebText documents.
-
-## "Zero-shot" baseline
-
-We attempt a second baseline which uses a language model to evaluate total log probability, and thresholds based on this probability. This baseline underperforms relative to the simple baseline. However, we are interested in further variants, such as binning per-token log probabilities.
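-
-As a sketch of what this looks like (illustrative only; it assumes the `transformers` and `torch` libraries, and the threshold value is a made-up placeholder to be tuned on held-out data):
-
-```python
-# Score a document by its total log probability under GPT-2 and threshold it.
-import torch
-from transformers import GPT2LMHeadModel, GPT2TokenizerFast
-
-tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
-model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
-
-def total_log_prob(text: str) -> float:
-    ids = tokenizer(text, return_tensors="pt").input_ids
-    with torch.no_grad():
-        out = model(ids, labels=ids)
-    # out.loss is the mean negative log-likelihood per predicted token
-    return -out.loss.item() * (ids.shape[1] - 1)
-
-THRESHOLD = -300.0  # hypothetical; the decision direction depends on the sampling scheme (see below)
-print("flagged as generated:", total_log_prob("Some document to score.") > THRESHOLD)
-```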
-
-### Initial analysis
-
-Here, we show results of log-prob based detection for both standard (t=1) and Top-K 40 generations.
-
-
-
-The main result is that GPT-2 detects itself 81.8% of the time in the easy case of Top-K 40 generations. This is pretty constant across model sizes. All underperform relative to the simple baseline.
-
-For random samples, results are unsurprising. Bigger models are better able to realize that generated text is still kind of weird and "random". Detection rates also go down as generators get better.
-
-For Top-K 40, results are perhaps more surprising. Using a bigger model as a discriminator does not really improve detection rates across the board (the smallest GPT-2 model does as well at detecting full GPT-2 as full GPT-2), and a bigger model does not "detect down well" - that is, full GPT-2 is actually kind of bad at detecting an adversary using small GPT-2.
-
-An important difference is that while in the random samples case, generations are less likely than real data, in the Top-K 40 case, they are more likely.
-
-### Finetuning
-
-When detecting samples from our finetuned GPT-2 full model using GPT-2 full, we observe a 63.2% detection rate on random samples (a drop of 13%) and a 76.2% detection rate with Top-K 40 samples (a drop of 5.6%).
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/dual_transformer_2d.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/dual_transformer_2d.py
deleted file mode 100644
index 3db7e73ca6afc5fa7c67c1902d79e67c1aa728bc..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/dual_transformer_2d.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import Optional
-
-from torch import nn
-
-from .transformer_2d import Transformer2DModel, Transformer2DModelOutput
-
-
-class DualTransformer2DModel(nn.Module):
- """
- Dual transformer wrapper that combines two `Transformer2DModel`s for mixed inference.
-
- Parameters:
- num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention.
- attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head.
- in_channels (`int`, *optional*):
- Pass if the input is continuous. The number of channels in the input and output.
- num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use.
- dropout (`float`, *optional*, defaults to 0.1): The dropout probability to use.
- cross_attention_dim (`int`, *optional*): The number of encoder_hidden_states dimensions to use.
- sample_size (`int`, *optional*): Pass if the input is discrete. The width of the latent images.
- Note that this is fixed at training time as it is used for learning a number of position embeddings. See
- `ImagePositionalEmbeddings`.
- num_vector_embeds (`int`, *optional*):
- Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels.
- Includes the class for the masked latent pixel.
- activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward.
- num_embeds_ada_norm ( `int`, *optional*): Pass if at least one of the norm_layers is `AdaLayerNorm`.
- The number of diffusion steps used during training. Note that this is fixed at training time as it is used
- to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for
- up to, but not more than, `num_embeds_ada_norm` steps.
- attention_bias (`bool`, *optional*):
- Configure if the TransformerBlocks' attention should contain a bias parameter.
- """
-
- def __init__(
- self,
- num_attention_heads: int = 16,
- attention_head_dim: int = 88,
- in_channels: Optional[int] = None,
- num_layers: int = 1,
- dropout: float = 0.0,
- norm_num_groups: int = 32,
- cross_attention_dim: Optional[int] = None,
- attention_bias: bool = False,
- sample_size: Optional[int] = None,
- num_vector_embeds: Optional[int] = None,
- activation_fn: str = "geglu",
- num_embeds_ada_norm: Optional[int] = None,
- ):
- super().__init__()
- self.transformers = nn.ModuleList(
- [
- Transformer2DModel(
- num_attention_heads=num_attention_heads,
- attention_head_dim=attention_head_dim,
- in_channels=in_channels,
- num_layers=num_layers,
- dropout=dropout,
- norm_num_groups=norm_num_groups,
- cross_attention_dim=cross_attention_dim,
- attention_bias=attention_bias,
- sample_size=sample_size,
- num_vector_embeds=num_vector_embeds,
- activation_fn=activation_fn,
- num_embeds_ada_norm=num_embeds_ada_norm,
- )
- for _ in range(2)
- ]
- )
-
- # Variables that can be set by a pipeline:
-
- # The ratio of transformer1 to transformer2's output states to be combined during inference
- self.mix_ratio = 0.5
-
- # The shape of `encoder_hidden_states` is expected to be
- # `(batch_size, condition_lengths[0]+condition_lengths[1], num_features)`
- self.condition_lengths = [77, 257]
-
- # Which transformer to use to encode which condition.
- # E.g. `(1, 0)` means that we'll use `transformers[1](conditions[0])` and `transformers[0](conditions[1])`
- self.transformer_index_for_condition = [1, 0]
-
- def forward(
- self,
- hidden_states,
- encoder_hidden_states,
- timestep=None,
- attention_mask=None,
- cross_attention_kwargs=None,
- return_dict: bool = True,
- ):
- """
- Args:
- hidden_states ( When discrete, `torch.LongTensor` of shape `(batch size, num latent pixels)`.
- When continuous, `torch.FloatTensor` of shape `(batch size, channel, height, width)`): Input
- hidden_states
- encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*):
- Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
- self-attention.
- timestep ( `torch.long`, *optional*):
- Optional timestep to be applied as an embedding in AdaLayerNorm's. Used to indicate denoising step.
- attention_mask (`torch.FloatTensor`, *optional*):
- Optional attention mask to be applied in Attention
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple.
-
- Returns:
- [`~models.transformer_2d.Transformer2DModelOutput`] or `tuple`:
- [`~models.transformer_2d.Transformer2DModelOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
- """
- input_states = hidden_states
-
- encoded_states = []
- tokens_start = 0
- # attention_mask is not used yet
- for i in range(2):
- # for each of the two transformers, pass the corresponding condition tokens
- condition_state = encoder_hidden_states[:, tokens_start : tokens_start + self.condition_lengths[i]]
- transformer_index = self.transformer_index_for_condition[i]
- encoded_state = self.transformers[transformer_index](
- input_states,
- encoder_hidden_states=condition_state,
- timestep=timestep,
- cross_attention_kwargs=cross_attention_kwargs,
- return_dict=False,
- )[0]
- encoded_states.append(encoded_state - input_states)
- tokens_start += self.condition_lengths[i]
-
- output_states = encoded_states[0] * self.mix_ratio + encoded_states[1] * (1 - self.mix_ratio)
- output_states = output_states + input_states
-
- if not return_dict:
- return (output_states,)
-
- return Transformer2DModelOutput(sample=output_states)
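-
-
-# For orientation, a minimal usage sketch of the wrapper above. This is an illustration
-# only: the constructor arguments are assumptions chosen so that
-# num_attention_heads * attention_head_dim equals in_channels and cross_attention_dim
-# matches the width of the concatenated condition embeddings (77 text + 257 image tokens).
-if __name__ == "__main__":
-    import torch
-
-    model = DualTransformer2DModel(
-        num_attention_heads=8,
-        attention_head_dim=40,
-        in_channels=320,
-        cross_attention_dim=768,
-    )
-    hidden_states = torch.randn(1, 320, 32, 32)  # (batch, channels, height, width)
-    conditions = torch.randn(1, 77 + 257, 768)   # concatenated text + image embeddings
-    sample = model(hidden_states, encoder_hidden_states=conditions, return_dict=False)[0]
-    print(sample.shape)  # same shape as hidden_states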
diff --git a/spaces/patrickvonplaten/asv/app.py b/spaces/patrickvonplaten/asv/app.py
deleted file mode 100644
index e9ae64063507cd790f885481d0b62afa8a13f076..0000000000000000000000000000000000000000
--- a/spaces/patrickvonplaten/asv/app.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import torch
-import gradio as gr
-from torchaudio.sox_effects import apply_effects_file
-from transformers import AutoFeatureExtractor, AutoModelForAudioXVector
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-STYLE = """
-"""
-OUTPUT_OK = STYLE + """
-The speakers are
-{:.1f}%
-similar
-Welcome, human!
-(You must get at least 85% to be considered the same person)
-"""
-OUTPUT_FAIL = STYLE + """
-The speakers are
-{:.1f}%
-similar
-You shall not pass!
-(You must get at least 85% to be considered the same person)
-"""
-## Todo and version roadmap:
-- version 3.2+ (todo): function plugins support more parameter interfaces
-- version 3.1: support querying multiple GPT models at once! Support api2d and load balancing across multiple API keys
-- version 3.0: support for ChatGLM and other small LLMs
-- version 2.6: refactored the plugin structure, improved interactivity, added more plugins
-- version 2.5: self-updating; fixed overly long text / token overflow when summarizing large project source code
-- version 2.4: (1) added full-text PDF translation; (2) added the ability to reposition the input area; (3) added a vertical layout option; (4) optimized multi-threaded function plugins.
-- version 2.3: improved multi-threaded interactivity
-- version 2.2: function plugins support hot reloading
-- version 2.1: collapsible layout
-- version 2.0: introduced modular function plugins
-- version 1.0: basic functionality
-
-## References and learning
-
-```
-The code borrows many designs from other excellent projects, mainly including:
-
-# Referenced project 1: borrowed many techniques from ChuanhuChatGPT
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Referenced project 2: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-```
diff --git a/spaces/qinzhu/moe-tts-tech/text/japanese.py b/spaces/qinzhu/moe-tts-tech/text/japanese.py
deleted file mode 100644
index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000
--- a/spaces/qinzhu/moe-tts-tech/text/japanese.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import re
-from unidecode import unidecode
-import pyopenjtalk
-
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-# List of (romaji, ipa) pairs for marks:
-_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ts', 'ʦ'),
- ('u', 'ɯ'),
- ('j', 'ʥ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (romaji, ipa2) pairs for marks:
-_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('u', 'ɯ'),
- ('ʧ', 'tʃ'),
- ('j', 'dʑ'),
- ('y', 'j'),
- ('ni', 'n^i'),
- ('nj', 'n^'),
- ('hi', 'çi'),
- ('hj', 'ç'),
- ('f', 'ɸ'),
- ('I', 'i*'),
- ('U', 'ɯ*'),
- ('r', 'ɾ')
-]]
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text != '':
- text += ' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil', 'pau']:
- text += phoneme.replace('ch', 'ʧ').replace('sh',
- 'ʃ').replace('cl', 'Q')
- else:
- continue
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
- a2_next = -1
- else:
- a2_next = int(
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
- if i < len(marks):
- text += unidecode(marks[i]).replace(' ', '')
- return text
-
-
-def get_real_sokuon(text):
- for regex, replacement in _real_sokuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def get_real_hatsuon(text):
- for regex, replacement in _real_hatsuon:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = re.sub(
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa2(text):
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
- text = get_real_sokuon(text)
- text = get_real_hatsuon(text)
- for regex, replacement in _romaji_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def japanese_to_ipa3(text):
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
- text = re.sub(
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
- return text
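-
-
-# A quick sketch of how the conversion chain above is typically exercised (an
-# illustration: it assumes pyopenjtalk and unidecode are installed, and the exact
-# output depends on the pyopenjtalk dictionary, so no expected strings are asserted).
-if __name__ == "__main__":
-    sample = "こんにちは、世界。"
-    print(japanese_to_romaji_with_accent(sample))  # romaji with ↑/↓ pitch-accent marks
-    print(japanese_to_ipa2(sample))                # IPA with long vowels and sokuon handled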
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Cypheros TS-Doctor V1.2.22 Portable.rar.md b/spaces/quidiaMuxgu/Expedit-SAM/Cypheros TS-Doctor V1.2.22 Portable.rar.md
deleted file mode 100644
index ee075cadd76deb6203f41557160ea2f468066a98..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Cypheros TS-Doctor V1.2.22 Portable.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
The TS-Doctor processes the recording format of most DVB-C, DVB-S and DVB-T receivers. It therefore makes no difference whether you receive your television channels via satellite, cable or antenna: the TS-Doctor is a convenient utility for processing these recordings on a PC and converting them into a suitable format.
The TS-Doctor provides an easy-to-use cropping option which, together with automatic advertising detection, makes it very easy to remove annoying ad breaks from recordings.
The program examines and repairs your recordings and makes sure that they can be processed and played back on today's media players. It can also handle recordings and formats that other applications often cannot read.
The tool also supports HDTV recordings, i.e. high-resolution TV with excellent picture and sound quality. Despite the large files associated with HDTV, the program works smoothly and without loss of picture or sound quality.
The TS-Doctor can significantly reduce the required file size by eliminating unnecessary content and filler data.
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dream Theater Live At Budokan Dvd Download REPACK.md b/spaces/quidiaMuxgu/Expedit-SAM/Dream Theater Live At Budokan Dvd Download REPACK.md
deleted file mode 100644
index 67ed8b2e294a7e0146b920c16bb481b77151805f..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Dream Theater Live At Budokan Dvd Download REPACK.md
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
Dream Theater Live At Budokan Dvd Download: A Must-Have for Prog Metal Fans
-
-
If you are a fan of progressive metal, you probably know Dream Theater, one of the most influential and talented bands in the genre. But do you know their live performance at Budokan, Japan on April 26, 2004? If not, you are missing out on one of the best concerts ever recorded on DVD.
Dream Theater Live At Budokan Dvd Download is a three-hour show that showcases the band's amazing skills, creativity and passion. The setlist includes songs from their albums Images and Words, Awake, Falling Into Infinity, Metropolis Pt. 2: Scenes from a Memory, Six Degrees of Inner Turbulence and Train of Thought. You will hear classics like "Pull Me Under", "Metropolis Pt. 1", "Home", "The Spirit Carries On" and "As I Am", as well as epic tracks like "A Change of Seasons", "Beyond This Life" and "Octavarium". You will also witness the band's incredible instrumental prowess in the 12-minute "Instrumedley" that features snippets from various songs.
-
-
The DVD also features a bonus disc with an extended drum solo by Mike Portnoy, gear tours by John Petrucci and Jordan Rudess, and a documentary called "Riding the Train of Thought" that gives you a behind-the-scenes look at the band's tour in Japan. The production quality is superb, with high-definition video, widescreen format and Dolby Digital 5.1 surround sound. The camera angles are well-chosen and give you a close-up view of each band member's performance.
-
-
Dream Theater Live At Budokan Dvd Download is a must-have for any prog metal fan who wants to experience the magic of Dream Theater live. It is a testament to the band's musical genius and dedication to their fans. You can download it from various online platforms or order it from Amazon or other retailers. Don't miss this opportunity to witness one of the greatest concerts of all time.
-
-
Why You Should Download Dream Theater Live At Budokan DVD
-
-
Dream Theater Live At Budokan DVD is not just a concert video, it is a musical journey that will take you to the heights of prog metal excellence. You will see and hear Dream Theater at their peak, performing with passion, precision and power. You will be amazed by their technical skills, their musical diversity and their emotional depth. You will feel like you are part of the audience, witnessing a historic event that will never be repeated.
-
-
Dream Theater Live At Budokan DVD is also a great way to discover or rediscover the band's discography, as they play songs from different albums and eras. You will appreciate how they have evolved and matured over the years, while staying true to their vision and style. You will also enjoy the bonus features that give you more insight into the band's personality, history and creative process.
-
-
Dream Theater Live At Budokan DVD is a must-have for any Dream Theater fan, as well as anyone who loves progressive metal or music in general. It is a masterpiece that will inspire you, challenge you and entertain you. It is a DVD that you will watch over and over again, discovering new details and nuances every time. It is a DVD that you will cherish and share with your friends and family.
-
-
How to Download Dream Theater Live At Budokan DVD
-
-
If you are convinced that Dream Theater Live At Budokan DVD is something you need to have in your collection, you might be wondering how to download it. There are several options available, depending on your preferences and budget. Here are some of them:
-
-
-
Buy the DVD from Amazon or other online retailers. This is the easiest and safest way to get the DVD, as you will receive a physical copy that you can play on any device. You will also support the band and their label by purchasing their official product.
-
Download the DVD from torrent sites or file-sharing platforms. This is a risky and illegal way to get the DVD, as you might encounter viruses, malware or fake files. You will also violate the band's copyright and deprive them of their deserved income.
-
Stream the DVD from YouTube or other video sites. This is a convenient and free way to watch the DVD, but you will not get the best quality or experience. You will also depend on your internet connection and availability of the video.
-
-
-
The choice is yours, but we recommend that you buy the DVD from Amazon or other online retailers, as it is the best way to enjoy Dream Theater Live At Budokan DVD in its full glory.
-
What People Are Saying About Dream Theater Live At Budokan DVD
-
-
Dream Theater Live At Budokan DVD has received rave reviews from critics and fans alike, who praised the band's performance, the production quality and the bonus features. Here are some of the comments from various sources:
-
-
-
"Dream Theater's Live at Budokan is a stunning showcase of a band at the height of its powers, delivering a mind-blowing set of progressive metal masterpieces with flawless execution and infectious enthusiasm." - AllMusic
-
"Live at Budokan is a treasure-trove for Dream Theater fans, presenting an entire three-hour performance at Tokyo's famed Budokan arena on April 26, 2004, along with a bonus disc rich in supplementary material." - Amazon.co.uk
-
"Live at Budokan is one of the best live DVDs ever made, period. The sound and picture quality are superb, the camera work is excellent, and the performance is simply phenomenal." - Prog Archives
-
"Live at Budokan is a must for any Dream Theater fan and a great introduction for newcomers. It captures the band in top form, playing with passion, precision and power. It is a musical journey that will take you to the heights of prog metal excellence." - Metal Storm
-
-
-
Conclusion: Dream Theater Live At Budokan DVD is a Must-Have
-
-
In conclusion, Dream Theater Live At Budokan DVD is a must-have for any prog metal fan who wants to experience the magic of Dream Theater live. It is a masterpiece that will inspire you, challenge you and entertain you. It is a DVD that you will watch over and over again, discovering new details and nuances every time. It is a DVD that you will cherish and share with your friends and family.
-
-
If you want to download Dream Theater Live At Budokan DVD, you have several options available, depending on your preferences and budget. You can buy the DVD from Amazon or other online retailers, download it from torrent sites or file-sharing platforms, or stream it from YouTube or other video sites. The choice is yours, but we recommend that you buy the DVD from Amazon or other online retailers, as it is the best way to enjoy Dream Theater Live At Budokan DVD in its full glory.
-
-
Don't miss this opportunity to witness one of the greatest concerts of all time. Download Dream Theater Live At Budokan DVD today and prepare to be amazed.
-
Where to Watch Dream Theater Live At Budokan DVD
-
-
If you have downloaded Dream Theater Live At Budokan DVD, you might be wondering where to watch it. You have several options available, depending on your preferences and equipment. Here are some of them:
-
-
-
Watch it on your computer or laptop. This is the simplest and most convenient way to watch the DVD, as you can use any media player that supports Blu-ray or DVD formats. You can also adjust the settings to your liking, such as brightness, contrast, volume and subtitles.
-
Watch it on your TV or home theater system. This is the best way to enjoy the DVD in its full quality and sound, as you can use a Blu-ray or DVD player that connects to your TV or home theater system. You can also use an HDMI cable or a wireless device to stream the DVD from your computer or laptop to your TV or home theater system.
-
Watch it on your smartphone or tablet. This is a convenient and portable way to watch the DVD, as you can use any app that supports Blu-ray or DVD formats. You can also download the DVD to your device or stream it from a cloud service. However, you might not get the best quality or sound, as the screen size and speakers are limited.
-
-
-
The choice is yours, but we recommend that you watch Dream Theater Live At Budokan DVD on your TV or home theater system, as it is the best way to experience the concert in its full glory.
-
-
Tips for Watching Dream Theater Live At Budokan DVD
-
-
Now that you have decided where to watch Dream Theater Live At Budokan DVD, you might want some tips for watching it. Here are some of them:
-
-
-
Watch it with friends or family. Dream Theater Live At Budokan DVD is a great way to share your love for prog metal with others, as you can enjoy the concert together and discuss your favorite songs and moments. You can also make it a fun event by preparing some snacks and drinks.
-
Watch it with headphones or earphones. Dream Theater Live At Budokan DVD is a great way to immerse yourself in the concert, as you can hear every detail and nuance of the band's performance. You can also block out any distractions and focus on the music.
-
Watch it with an open mind and heart. Dream Theater Live At Budokan DVD is a great way to appreciate the band's artistry and creativity, as you can witness their musical diversity and emotional depth. You can also learn something new and be inspired by their technical skills and passion.
-
-
-
The choice is yours, but we recommend that you watch Dream Theater Live At Budokan DVD with an open mind and heart, as it is the best way to enjoy the concert in its full beauty.
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mobile Tracking Software Used By Police BEST.md b/spaces/quidiaMuxgu/Expedit-SAM/Mobile Tracking Software Used By Police BEST.md
deleted file mode 100644
index 98aa658384d80083dfbe72a769b723f1c581d29c..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Mobile Tracking Software Used By Police BEST.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
-UNI / SIM number, only that person knows about it. In most of the cases, the requirement to trace a .UNI / SIM number is required for legal, police, immigration or investigation purpose. The entire process of tracing of .UNI / SIM number is carried out by different people with different sets of tools and their skills. The investigation agency uses different methods to trace .UNI / SIM number. Those methods could be found in most of the investigations books. The detection methods of .UNI / SIM number can be divided into three major groups:
-
-1. *Acquiring the details of the .UNI / SIM number*: This includes any method which can acquire the details of .UNI / SIM number, such as, obtaining the IMEI of a mobile phone, requesting the owner of a SIM card to provide the SIM card details, pinging the SIM card. This method may require a mobile phone number. If the number is not known, then it may involve contacting the mobile phone network (Airtel, Vodafone etc.).
-
-2. *Tracing the owners of the .UNI / SIM number*: This includes two types of methods:
-
- a. From the details of the .UNI / SIM number, one can find the owner of the number;
-
- b. If the details of the .UNI / SIM number is not known, then the method involves identifying the location of the phone using triangulation.
-
-3. *Tracing the .UNI / SIM number*: This includes two types of methods:
-
- a. If the owner of the .UNI / SIM number is not known, then the method involves identifying the location of the phone using triangulation.
-
- b. From the owner of the .UNI / SIM number, one can find the owner of the number.
-
-The triangulation method is probably the most widely used method of identifying the location of a .UNI / SIM number. This method is used to identify the location of a .UNI / SIM number using the location of a .UNI / SIM card, the location of the mobile phone handset, the location of a nearby base station or tower, and the location of the mobile phone handset.
-
-The first mobile phone was introduced in France in 1950.
-
-
-
diff --git a/spaces/qwertyuiee/AnimeBackgroundGAN/network/__init__.py b/spaces/qwertyuiee/AnimeBackgroundGAN/network/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/app-controlnet.py b/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/app-controlnet.py
deleted file mode 100644
index dc40bb6dac819f0898dd8612c48c0c19448fbbf9..0000000000000000000000000000000000000000
--- a/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image/app-controlnet.py
+++ /dev/null
@@ -1,308 +0,0 @@
-import asyncio
-import json
-import logging
-import traceback
-from pydantic import BaseModel
-
-from fastapi import FastAPI, WebSocket, HTTPException, WebSocketDisconnect
-from fastapi.middleware.cors import CORSMiddleware
-from fastapi.responses import StreamingResponse, JSONResponse
-from fastapi.staticfiles import StaticFiles
-
-from diffusers import AutoencoderTiny, ControlNetModel
-from latent_consistency_controlnet import LatentConsistencyModelPipeline_controlnet
-from compel import Compel
-import torch
-
-from canny_gpu import SobelOperator
-# from controlnet_aux import OpenposeDetector
-# import cv2
-
-try:
- import intel_extension_for_pytorch as ipex
-except:
- pass
-from PIL import Image
-import numpy as np
-import gradio as gr
-import io
-import uuid
-import os
-import time
-import psutil
-
-
-MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0))
-TIMEOUT = float(os.environ.get("TIMEOUT", 0))
-SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", None)
-TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None)
-WIDTH = 512
-HEIGHT = 512
-# set USE_TINY_AUTOENCODER to False to disable the tiny autoencoder (higher quality, slower)
-USE_TINY_AUTOENCODER = True
-
-# check if MPS is available OSX only M1/M2/M3 chips
-mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
-xpu_available = hasattr(torch, "xpu") and torch.xpu.is_available()
-device = torch.device(
- "cuda" if torch.cuda.is_available() else "xpu" if xpu_available else "cpu"
-)
-
-# change to torch.float16 to save GPU memory
-torch_dtype = torch.float16
-
-print(f"TIMEOUT: {TIMEOUT}")
-print(f"SAFETY_CHECKER: {SAFETY_CHECKER}")
-print(f"MAX_QUEUE_SIZE: {MAX_QUEUE_SIZE}")
-print(f"device: {device}")
-
-if mps_available:
- device = torch.device("mps")
- device = "cpu"  # MPS is detected above but overridden here; inference falls back to CPU
- torch_dtype = torch.float32
-
-controlnet_canny = ControlNetModel.from_pretrained(
- "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch_dtype
-).to(device)
-
-canny_torch = SobelOperator(device=device)
-# controlnet_pose = ControlNetModel.from_pretrained(
-# "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch_dtype
-# ).to(device)
-# controlnet_depth = ControlNetModel.from_pretrained(
-# "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch_dtype
-# ).to(device)
-
-
-# pose_processor = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
-
-if SAFETY_CHECKER == "True":
- pipe = LatentConsistencyModelPipeline_controlnet.from_pretrained(
- "SimianLuo/LCM_Dreamshaper_v7",
- controlnet=controlnet_canny,
- scheduler=None,
- )
-else:
- pipe = LatentConsistencyModelPipeline_controlnet.from_pretrained(
- "SimianLuo/LCM_Dreamshaper_v7",
- safety_checker=None,
- controlnet=controlnet_canny,
- scheduler=None,
- )
-
-if USE_TINY_AUTOENCODER:
- pipe.vae = AutoencoderTiny.from_pretrained(
- "madebyollin/taesd", torch_dtype=torch_dtype, use_safetensors=True
- )
-pipe.set_progress_bar_config(disable=True)
-pipe.to(device=device, dtype=torch_dtype).to(device)
-pipe.unet.to(memory_format=torch.channels_last)
-
-if psutil.virtual_memory().total < 64 * 1024**3:
- pipe.enable_attention_slicing()
-
-compel_proc = Compel(
- tokenizer=pipe.tokenizer,
- text_encoder=pipe.text_encoder,
- truncate_long_prompts=False,
-)
-if TORCH_COMPILE:
- pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
- pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True)
-
- pipe(prompt="warmup", image=[Image.new("RGB", (768, 768))], control_image=[Image.new("RGB", (768, 768))])
-
-
-user_queue_map = {}
-
-
-class InputParams(BaseModel):
- seed: int = 2159232
- prompt: str
- guidance_scale: float = 8.0
- strength: float = 0.5
- steps: int = 4
- lcm_steps: int = 50
- width: int = WIDTH
- height: int = HEIGHT
- controlnet_scale: float = 0.8
- controlnet_start: float = 0.0
- controlnet_end: float = 1.0
- canny_low_threshold: float = 0.31
- canny_high_threshold: float = 0.78
- debug_canny: bool = False
-
-def predict(
- input_image: Image.Image, params: InputParams, prompt_embeds: torch.Tensor = None
-):
- generator = torch.manual_seed(params.seed)
-
- control_image = canny_torch(input_image, params.canny_low_threshold, params.canny_high_threshold)
- results = pipe(
- control_image=control_image,
- prompt_embeds=prompt_embeds,
- generator=generator,
- image=input_image,
- strength=params.strength,
- num_inference_steps=params.steps,
- guidance_scale=params.guidance_scale,
- width=params.width,
- height=params.height,
- lcm_origin_steps=params.lcm_steps,
- output_type="pil",
- controlnet_conditioning_scale=params.controlnet_scale,
- control_guidance_start=params.controlnet_start,
- control_guidance_end=params.controlnet_end,
- )
- nsfw_content_detected = (
- results.nsfw_content_detected[0]
- if "nsfw_content_detected" in results
- else False
- )
- if nsfw_content_detected:
- return None
- result_image = results.images[0]
- if params.debug_canny:
- # paste control_image on top of result_image
- w0, h0 = (200, 200)
- control_image = control_image.resize((w0, h0))
- w1, h1 = result_image.size
- result_image.paste(control_image, (w1 - w0, h1 - h0))
-
- return result_image
-
-
-app = FastAPI()
-app.add_middleware(
- CORSMiddleware,
- allow_origins=["*"],
- allow_credentials=True,
- allow_methods=["*"],
- allow_headers=["*"],
-)
-
-
-@app.websocket("/ws")
-async def websocket_endpoint(websocket: WebSocket):
- await websocket.accept()
- if MAX_QUEUE_SIZE > 0 and len(user_queue_map) >= MAX_QUEUE_SIZE:
- print("Server is full")
- await websocket.send_json({"status": "error", "message": "Server is full"})
- await websocket.close()
- return
-
- try:
- uid = str(uuid.uuid4())
- print(f"New user connected: {uid}")
- await websocket.send_json(
- {"status": "success", "message": "Connected", "userId": uid}
- )
- user_queue_map[uid] = {"queue": asyncio.Queue()}
- await websocket.send_json(
- {"status": "start", "message": "Start Streaming", "userId": uid}
- )
- await handle_websocket_data(websocket, uid)
- except WebSocketDisconnect as e:
- logging.error(f"WebSocket Error: {e}, {uid}")
- traceback.print_exc()
- finally:
- print(f"User disconnected: {uid}")
- queue_value = user_queue_map.pop(uid, None)
- queue = queue_value.get("queue", None)
- if queue:
- while not queue.empty():
- try:
- queue.get_nowait()
- except asyncio.QueueEmpty:
- continue
-
-
-@app.get("/queue_size")
-async def get_queue_size():
- queue_size = len(user_queue_map)
- return JSONResponse({"queue_size": queue_size})
-
-
-@app.get("/stream/{user_id}")
-async def stream(user_id: uuid.UUID):
- uid = str(user_id)
- try:
- user_queue = user_queue_map[uid]
- queue = user_queue["queue"]
-
- async def generate():
- last_prompt: str = None
- prompt_embeds: torch.Tensor = None
- while True:
- data = await queue.get()
- input_image = data["image"]
- params = data["params"]
- if input_image is None:
- continue
- # avoid recalculate prompt embeds
- if last_prompt != params.prompt:
- print("new prompt")
- prompt_embeds = compel_proc(params.prompt)
- last_prompt = params.prompt
-
- image = predict(
- input_image,
- params,
- prompt_embeds,
- )
- if image is None:
- continue
- frame_data = io.BytesIO()
- image.save(frame_data, format="JPEG")
- frame_data = frame_data.getvalue()
- if frame_data is not None and len(frame_data) > 0:
- yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame_data + b"\r\n"
-
- await asyncio.sleep(1.0 / 120.0)
-
- return StreamingResponse(
- generate(), media_type="multipart/x-mixed-replace;boundary=frame"
- )
- except Exception as e:
- logging.error(f"Streaming Error: {e}, {user_queue_map}")
- traceback.print_exc()
- return HTTPException(status_code=404, detail="User not found")
-
-
-async def handle_websocket_data(websocket: WebSocket, user_id: uuid.UUID):
- uid = str(user_id)
- user_queue = user_queue_map[uid]
- queue = user_queue["queue"]
- if not queue:
- return HTTPException(status_code=404, detail="User not found")
- last_time = time.time()
- try:
- while True:
- data = await websocket.receive_bytes()
- params = await websocket.receive_json()
- params = InputParams(**params)
- pil_image = Image.open(io.BytesIO(data))
-
- while not queue.empty():
- try:
- queue.get_nowait()
- except asyncio.QueueEmpty:
- continue
- await queue.put({"image": pil_image, "params": params})
- if TIMEOUT > 0 and time.time() - last_time > TIMEOUT:
- await websocket.send_json(
- {
- "status": "timeout",
- "message": "Your session has ended",
- "userId": uid,
- }
- )
- await websocket.close()
- return
-
- except Exception as e:
- logging.error(f"Error: {e}")
- traceback.print_exc()
-
-
-app.mount("/", StaticFiles(directory="controlnet", html=True), name="public")
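-
-
-# Hypothetical client for the websocket endpoint above, mirroring handle_websocket_data():
-# it sends one binary message with JPEG bytes, then a JSON payload matching InputParams.
-# Assumptions: the third-party `websockets` package is installed and the server is
-# reachable at localhost:7860; adjust the URL to wherever uvicorn serves this app.
-async def _example_client(image_path: str = "input.jpg") -> None:
-    import websockets
-
-    async with websockets.connect("ws://localhost:7860/ws") as ws:
-        print(await ws.recv())  # {"status": "success", "message": "Connected", ...}
-        print(await ws.recv())  # {"status": "start", "message": "Start Streaming", ...}
-        with open(image_path, "rb") as f:
-            await ws.send(f.read())  # raw image bytes first
-        await ws.send(json.dumps({"prompt": "a watercolor portrait", "seed": 42}))
-
-# asyncio.run(_example_client())  # uncomment to exercise a running server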
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Be2worksrizalrarfull _TOP_.md b/spaces/raedeXanto/academic-chatgpt-beta/Be2worksrizalrarfull _TOP_.md
deleted file mode 100644
index de02004c86f9576d2d0aa1f2083748435c7d4c82..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Be2worksrizalrarfull _TOP_.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
What is be2worksrizalrarFull?
-
If you are wondering what be2worksrizalrarFull is, you are not alone. This is a very obscure and mysterious term that does not seem to have any meaning at first glance. However, if you look closer, you will find that be2worksrizalrarFull is actually a combination of three different words: WinRAR, Adobe Photoshop CS6 Extended, and Google Drive. These are three popular software tools that can help you with various tasks such as data compression, image editing, and file storage. In this article, we will explain what each of these tools is, how they work, and how they are related to be2worksrizalrarFull.
WinRAR is a powerful archiver extractor tool that can open all popular file formats such as RAR, ZIP, 7-Zip, CAB, ARJ, LZH, TAR, Gzip, UUE, BZIP2, and ISO. WinRAR can also create compressed archives in RAR and ZIP formats that can save disk space and enable faster file sharing. WinRAR is compatible with Windows 11™ and Windows 10™ as well as other operating systems such as macOS, Linux, FreeBSD, and Android. WinRAR supports over 50 languages and has both 32-bit and 64-bit versions. WinRAR is also the only compression software that can work with Unicode.
-
How to download and install WinRAR
-
To download WinRAR, you can visit the official website https://www.win-rar.com/download.html and choose the version that suits your system requirements. You can also select the language that you prefer. After downloading the setup file, you can run it and follow the instructions to complete the installation process. The installation is quick and easy, and you can customize some settings such as the destination folder, the file associations, and the shortcuts. After the installation is done, you can launch WinRAR and start using it to compress and extract files.
-
How to use WinRAR to compress and extract files
-
To compress files using WinRAR, you can follow these steps:
-
Select the files or folders that you want to compress and right-click on them.
-
Choose "Add to archive..." from the context menu.
-
In the dialog box that appears, you can choose the archive name, format, compression method, password, and other options.
-
Click "OK" to create the archive.
-
-To extract files using WinRAR, you can follow these steps:
-
Select the archive file that you want to extract and right-click on it.
-
Choose "Extract files..." from the context menu.
-
In the dialog box that appears, you can choose the destination folder, password, and other options.
-
Click "OK" to extract the files.
-
-You can also use WinRAR to view, test, repair, delete, or encrypt archives. WinRAR has a user-friendly interface that allows you to access all its functions easily. You can also use keyboard shortcuts or command-line parameters to perform various tasks with WinRAR.
-
What is Adobe Photoshop CS6 Extended?
-
Adobe Photoshop CS6 Extended is a professional image editing software that can help you create stunning graphics, photos, and designs. Adobe Photoshop CS6 Extended has many features and tools that can enhance your creativity and productivity. Some of these features are:
-
Content-Aware tools: These tools can automatically fill in the gaps or remove unwanted objects from your images. For example, you can use Content-Aware Move to move an object to a different location in your image, or Content-Aware Patch to replace a selected area with another area from your image.
-
Camera Raw 7: This tool allows you to edit raw images from digital cameras with more control and precision. You can adjust the exposure, contrast, color, noise, sharpness, and other aspects of your raw images. You can also apply presets or create your own custom settings.
-
3D tools: These tools allow you to create and edit 3D objects and scenes in Photoshop. You can import 3D models from other applications or create your own using basic shapes. You can also apply materials, textures, lighting, shadows, and reflections to your 3D objects. You can also animate your 3D objects and export them as videos or images.
-
Video tools: These tools allow you to edit video clips in Photoshop. You can trim, split, merge, crop, rotate, and adjust the color and exposure of your video clips. You can also add transitions, effects, text, and audio to your video clips. You can also export your video clips as different formats or upload them to online platforms such as YouTube or Vimeo.
-
-
How to download and install Adobe Photoshop CS6 Extended
-
To download Adobe Photoshop CS6 Extended, you need to have an Adobe account and a valid license key. You can visit the official website https://www.adobe.com/products/photoshop/free-trial-download.html and sign in with your Adobe account. Then you can choose the version that suits your system requirements and language preferences. After downloading the setup file, you can run it and follow the instructions to complete the installation process. The installation may take some time depending on your system speed and internet connection. After the installation is done, you can launch Adobe Photoshop CS6 Extended and enter your license key to activate it.
-
How to use Adobe Photoshop CS6 Extended to edit images
-
To edit images using Adobe Photoshop CS6 Extended, you can follow these steps:
-
Open the image that you want to edit in Photoshop by choosing "File" > "Open" from the menu bar or by dragging and dropping the image file into Photoshop.
-
Select the tool that you want to use from the toolbar on the left side of the screen. You can also access more tools by clicking on the small arrow at the bottom of each tool icon.
-
Adjust the settings of the tool that you are using from the options bar at the top of the screen. You can also access more options by clicking on the small icon at the right end of each option.
-
Apply the tool to your image by clicking, dragging, or typing on your image. You can also use keyboard shortcuts or mouse gestures to modify the tool behavior.
-
Save your edited image by choosing "File" > "Save" or "Save As" from the menu bar or by pressing Ctrl+S or Ctrl+Shift+S on your keyboard. You can also choose the format, quality, and location of your saved image.
-
-You can also use layers, masks, filters, adjustments, and other features to enhance your image editing. Photoshop has a rich and flexible interface that allows you to customize your workspace and access various panels, menus, and dialogs. You can also use online tutorials, help files, and forums to learn more about Photoshop and its functions.
-
-
What is Google Drive?
-
Google Drive is a cloud-based storage service that allows you to store, access, and share your files online. Google Drive offers 15 GB of free storage space for your Google account, and you can also upgrade to a paid plan for more storage space. Google Drive supports various file types such as documents, spreadsheets, presentations, images, videos, audio, PDFs, and more. Google Drive is compatible with various devices such as computers, smartphones, tablets, and smart TVs. Google Drive also integrates with other Google services such as Gmail, Google Photos, Google Docs, Google Sheets, Google Slides, and more.
-
How to upload and download files from Google Drive
-
To upload files to Google Drive, you can follow these steps:
Click on the "New" button at the top left corner of the screen and choose "File upload" or "Folder upload" from the drop-down menu.
-
Select the files or folders that you want to upload from your computer and click "Open". You can also drag and drop the files or folders into the Google Drive window.
-
Wait for the upload to complete. You can see the progress and status of your upload at the bottom right corner of the screen.
-
-To download files from Google Drive, you can follow these steps:
-
Select the files or folders that you want to download and right-click on them.
-
Choose "Download" from the context menu. You can also click on the "More actions" icon (three vertical dots) at the top right corner of the screen and choose "Download" from there.
-
Choose the destination folder on your computer where you want to save the downloaded files or folders and click "Save". You can also change the name of the downloaded files or folders if you want.
-
-
How to share files with others using Google Drive
-
To share files with others using Google Drive, you can follow these steps:
-
Select the files or folders that you want to share and right-click on them.
-
Choose "Share" from the context menu. You can also click on the "Share" icon (a person with a plus sign) at the top right corner of the screen.
-
In the dialog box that appears, you can enter the email addresses of the people that you want to share with or choose from your contacts. You can also copy and paste a link that you can send to anyone who has access to it.
-
You can also choose the permission level for each person or link that you share with. You can allow them to view only, comment only, or edit your files or folders. You can also change or revoke these permissions at any time.
-
Click on "Done" to finish sharing. You can also add a note or message to your recipients if you want.
-
-
How are be2worksrizalrarFull related?
-
Now that we have explained what WinRAR, Adobe Photoshop CS6 Extended, and Google Drive are individually, we can answer the question: how are they related to be2worksrizalrarFull? The answer is simple: be2worksrizalrarFull is a term that refers to a file that contains all three software tools in one compressed archive. This file has a size of about 1.5 GB and has a RAR extension. The name be2worksrizalrarFull is derived from combining the first two letters of each software tool's name: be (from WinRAR), 2 (from Adobe Photoshop CS6 Extended), works (from Google Drive), rizal (from RAR), and Full (to indicate that it is a complete package). The purpose of creating such a file is to provide a convenient and efficient way of downloading, installing, and using all three software tools at once. This can save time, bandwidth, and disk space for the users who need these tools for their work or personal projects.
-
The benefits and drawbacks of using be2worksrizalrarFull
-
Using be2worksrizalrarFull can have some benefits and drawbacks depending on your needs and preferences. Some of the benefits are:
-
You can get all three software tools in one file instead of downloading them separately from different sources. This can save you time and hassle.
-
You can save disk space by compressing the file using WinRAR. The original size of the three software tools is about 3 GB, but the compressed file is only 1.5 GB. This can free up some space on your computer or external drive.
-
You can access and use the software tools offline without needing an internet connection. This can be useful if you are working in a remote area or have a limited or unreliable internet connection.
-
-Some of the drawbacks are:
-
You need to have WinRAR installed on your computer to extract the file. If you don't have WinRAR, you need to download it first before you can use be2worksrizalrarFull.
-
You need to have enough disk space to extract the file. The extracted file will take up about 3 GB of disk space, which may be too much for some users who have limited storage capacity.
-
You may not need or want all three software tools. Some users may only need one or two of the software tools, and having the other ones may be unnecessary or redundant. For example, if you already have another image editing software, you may not need Adobe Photoshop CS6 Extended.
-
-
The possible applications and uses of be2worksrizalrarFull
-
Despite the drawbacks, be2worksrizalrarFull can still have some possible applications and uses for some users who need or want all three software tools. Some of these are:
-
You can use WinRAR to compress and extract files of any type and size. This can help you manage your files more efficiently and securely.
-
You can use Adobe Photoshop CS6 Extended to edit images of any format and quality. This can help you create stunning graphics, photos, and designs for your work or personal projects.
-
You can use Google Drive to store, access, and share your files online. This can help you backup your files, sync them across your devices, and collaborate with others.
-
-You can also combine the functions of the three software tools to create more complex and creative projects. For example, you can use WinRAR to compress your images, Adobe Photoshop CS6 Extended to edit them, and Google Drive to upload them online. You can also use Google Drive to download files from other sources, WinRAR to extract them, and Adobe Photoshop CS6 Extended to modify them.
-
Conclusion
-
In conclusion, be2worksrizalrarFull is a term that refers to a file that contains WinRAR, Adobe Photoshop CS6 Extended, and Google Drive in one compressed archive. These are three popular software tools that can help you with various tasks such as data compression, image editing, and file storage. Using be2worksrizalrarFull can have some benefits and drawbacks depending on your needs and preferences. You can also use be2worksrizalrarFull for different applications and uses depending on your creativity and skills. We hope that this article has helped you understand what be2worksrizalrarFull is and how it works.
-
FAQs
-
Here are some frequently asked questions about be2worksrizalrarFull:
Q: Is be2worksrizalrarFull safe and legal to use?
-
A: Be2worksrizalrarFull is safe and legal as long as you use it for personal or educational purposes only. You should not use it for commercial or illegal purposes as that may violate the terms and conditions of the software developers and distributors. You should also respect the intellectual property rights of the software developers and distributors and not distribute or sell be2worksrizalrarFull without their permission.
-
Q: How can I update be2worksrizalrarFull?
-
A: Be2worksrizalrarFull is not an official or supported product, so it does not have any regular updates or patches. However, you can update the individual software tools that are included in be2worksrizalrarFull by downloading and installing the latest versions from their respective websites. You can also replace the old files with the new ones in the be2worksrizalrarFull archive using WinRAR.
-
Q: What are some alternatives to be2worksrizalrarFull?
-
A: If you are looking for alternatives to be2worksrizalrarFull, you can try some of these options:
-
7-Zip: This is a free and open-source file archiver that can compress and extract files in various formats such as 7z, ZIP, RAR, TAR, GZIP, and more. You can download 7-Zip from this link: https://www.7-zip.org/download.html.
-
GIMP: This is a free and open-source image editor that can perform many of the same functions as Adobe Photoshop CS6 Extended. You can download GIMP from this link: https://www.gimp.org/downloads/.
-
Dropbox: This is a cloud-based storage service that offers 2 GB of free storage space for your files. You can also upgrade to a paid plan for more storage space. You can download Dropbox from this link: https://www.dropbox.com/install.
-
-
Q: How can I contact the creator of be2worksrizalrarFull?
-
A: The creator of be2worksrizalrarFull is unknown and has not provided any contact information. However, you can try to find more information about be2worksrizalrarFull by searching online or asking on forums or social media platforms. You may also find some reviews or feedback from other users who have tried be2worksrizalrarFull.
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/COMSOL 5 0 Crack License Key Download and Install Multiphysics Software for Free.md b/spaces/raedeXanto/academic-chatgpt-beta/COMSOL 5 0 Crack License Key Download and Install Multiphysics Software for Free.md
deleted file mode 100644
index 21aa7c729be5dc8869d3bc594b68cc718e6a95be..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/COMSOL 5 0 Crack License Key Download and Install Multiphysics Software for Free.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
What is COMSOL Multiphysics and why do you need it?
-
If you are an engineer, a scientist, or a researcher who wants to simulate designs, devices, and processes in all fields of engineering, manufacturing, and scientific research, you might have heard of COMSOL Multiphysics. But what is it exactly and why do you need it?
COMSOL Multiphysics is a comprehensive simulation software environment that allows you to account for coupled or multiphysics phenomena. With more than 30 add-on products to choose from, you can further expand the simulation platform with dedicated physics interfaces and tools for electrical, mechanical, fluid flow, and chemical applications. Additional interfacing products connect your COMSOL Multiphysics simulations with technical computing, CAD, and ECAD software.
-
With COMSOL Multiphysics, you can follow a consistent modeling workflow that includes geometry modeling and interfacing with CAD software, predefined interfaces and features for physics-based modeling, transparency and flexibility via equation-based modeling, automated and manual meshing, study step sequences, parameter studies, and optimization, state-of-the-art numerical methods for accurate solutions, extended visualization and postprocessing tools for publication-ready modeling results, and simulation apps that allow you to close the gaps between analysis, design, and production.
-
COMSOL Multiphysics is a powerful tool that can help you solve complex problems faster and more efficiently. Whether you want to optimize a product design, improve a manufacturing process, or explore a new scientific phenomenon, COMSOL Multiphysics can help you achieve your goals.
-
How to install COMSOL Multiphysics with a free license key
-
If you are interested in trying out COMSOL Multiphysics for yourself, you might be wondering how to get it installed on your computer. The good news is that you can get a free license key that allows you to use the software for two weeks without any limitations. Here are the steps you need to follow:
-
comsol multiphysics 5.0 full version with crack
-how to install comsol 5.0 license file
-comsol 5.0 free download with serial key
-comsol 5.0 activation key generator
-comsol 5.0 crack download for windows 10
-comsol 5.0 license manager error
-comsol 5.0 patch file download
-comsol 5.0 license server setup
-comsol 5.0 crack for mac os
-comsol 5.0 keygen online
-comsol 5.0 license file location
-comsol 5.0 crack torrent link
-comsol 5.0 serial number finder
-comsol 5.0 license expired fix
-comsol 5.0 crack for linux
-comsol 5.0 activation code free
-comsol 5.0 license file crack download
-comsol 5.0 crack installation guide
-comsol 5.0 license server port
-comsol 5.0 keygen download
-comsol 5.0 license file format
-comsol 5.0 crack zip file
-comsol 5.0 serial key list
-comsol 5.0 license renewal cost
-comsol 5.0 crack for windows 7
-comsol 5.0 activation key crack
-comsol 5.0 license file editor
-comsol 5.0 crack rar file
-comsol 5.0 serial number generator
-comsol 5.0 license transfer procedure
-comsol 5.0 crack for windows 8
-comsol 5.0 activation code generator
-comsol 5.0 license file backup
-comsol 5.0 crack iso file
-comsol 5.0 serial key crack
-comsol 5.0 license upgrade price
-comsol 5.0 crack for windows xp
-comsol 5.0 activation code list
-comsol 5.0 license file viewer
-comsol 5.0 crack exe file
-comsol 5.0 serial number list
-comsol 5.0 license validation error
-comsol 5.0 crack for windows vista
-comsol 5.0 activation code crack
-comsol 5.0 license file converter
-comsol 5.0 crack dll file
-comsol 5.0 serial number crack
-comsol 5.0 license request form
-comsol 5.0 crack for windows server
-
-
Go to https://www.comsol.com/trial and fill out the form with your personal information. You will receive an email with a link to download the software.
-
Download the software according to your operating system requirements. The file size is about 4.28 GB.
-
Run the setup.exe file as an administrator. Choose your preferred language and accept the terms and conditions.
-
Select the installation directory and the products you want to install. You can choose from various add-on modules depending on your needs.
-
Enter the license number that was sent to your email. You will also need an internet connection to activate the license.
-
Wait for the installation process to complete. It might take some time depending on your system specifications.
-
Launch the software from the start menu or desktop shortcut. You can now use COMSOL Multiphysics for two weeks with full functionality.
-
-
How to use COMSOL Multiphysics for various applications
-
Now that you have installed COMSOL Multiphysics on your computer, you might be wondering how to use it for various applications. The software is very versatile and can be used for many purposes. Here are some examples of how to use COMSOL Multiphysics for different fields of engineering, manufacturing, and scientific research:
-
-
If you are an electrical engineer, you can use COMSOL Multiphysics to model electromagnetic fields, circuits, antennas, sensors, actuators, power systems, optoelectronics, RF devices, microwaves, photonics, plasmonics, nanotechnology, etc.
-
If you are a mechanical engineer, you can use COMSOL Multiphysics to model structural mechanics, acoustics, vibrations, heat transfer, fluid dynamics, multiphase flow, porous media flow, non-Newtonian flow, etc.
-
If you are a chemical engineer or a chemist, you can use COMSOL Multiphysics to model chemical reactions, transport phenomena, electrochemistry, batteries, fuel cells, corrosion, electrolysis, plasma chemistry, etc.
-
If you are a biomedical engineer or a biologist, you can use COMSOL Multiphysics to model biosensors, drug delivery, tissue engineering, blood flow, cell culture, biomechanics, bioheat transfer, etc.
-
If you are a geologist or an environmental engineer, you can use COMSOL Multiphysics to model geophysics, seismic waves, soil mechanics, groundwater flow, contaminant transport, atmospheric chemistry, climate change, etc.
-
-
To learn how to use COMSOL Multiphysics for these and other applications, you can refer to the documentation and tutorials that are available on the website and within the software. You can also access a library of ready-made models and examples that cover a wide range of topics and industries. You can modify and customize these models to suit your own needs and objectives.
-
Benefits of using COMSOL Multiphysics for multiphysics modeling
-
-One of the main benefits of using COMSOL Multiphysics is that it allows you to model multiphysics phenomena. This means that you can account for the interactions between different physical domains in your simulations. For example, you can model how heat affects the deformation of a structure, how electric fields affect the flow of fluids, how chemical reactions affect the transport of species, how acoustic waves affect the propagation of light, how magnetic fields affect the generation of plasma, how biological processes affect the mechanical properties of tissues, and so on.
-
By modeling multiphysics phenomena, you can capture the real-world behavior of your system more accurately and realistically. You can also explore the effects of different parameters and scenarios on your system's performance and functionality. You can optimize your design and improve your product quality and efficiency.
-
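-To make the idea of coupling concrete, here is a minimal back-of-the-envelope sketch in plain Python (not COMSOL itself) of the simplest thermal-structural coupling: a fully constrained steel bar that is heated develops a stress of E·α·ΔT. The material values are generic textbook numbers chosen for illustration, not data from any particular COMSOL model.
-
-```python
-# Simplest possible thermal-structural coupling:
-# a constrained bar heated by delta_T develops stress sigma = E * alpha * delta_T.
-E = 200e9        # Young's modulus of steel [Pa] (typical textbook value)
-alpha = 12e-6    # coefficient of thermal expansion [1/K]
-delta_T = 80.0   # temperature rise [K]
-
-thermal_strain = alpha * delta_T   # strain the bar "wants" to expand by
-sigma = E * thermal_strain         # stress needed to suppress that expansion
-
-print(f"Thermal strain: {thermal_strain:.2e}")
-print(f"Stress in the constrained bar: {sigma / 1e6:.1f} MPa")
-```
-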
Challenges and limitations of using COMSOL Multiphysics
-
While using COMSOL Multiphysics has many benefits, it also has some challenges and limitations. Some of these are:
-
-
The software requires a high level of technical knowledge and expertise. You need to understand the physics behind your problem and choose the appropriate models and settings for your simulation. You also need to interpret the results correctly and validate them against experimental data or other sources.
-
The software requires a lot of computational resources. Depending on the complexity and size of your problem, you might need a powerful computer with enough memory, disk space, and processing speed. You might also need a parallel computing platform or a cluster computing system to run large-scale simulations faster.
-The software is not free to use beyond the trial. After the two-week trial period, you will need to purchase a license or a subscription to use the software for longer periods or for commercial purposes. The cost of the license or the subscription depends on the products and features you want to use and the number of users and computers you want to access.
-
-
How to crack COMSOL Multiphysics license key
-
If you are looking for a way to use COMSOL Multiphysics without paying for a license or a subscription, you might be tempted to crack the license key. Cracking the license key means using a patch file or a keygen program to generate a fake license number that bypasses the software's security and activation system. Here are the steps you need to follow:
-
-
Go to a website that offers a crack file or a keygen program for COMSOL Multiphysics. You can search for keywords like "comsol 5 0 crack license key" or "comsol 5 0 keygen" on Google or other search engines.
-
Download the crack file or the keygen program according to your operating system and software version. Be careful of viruses and malware that might infect your computer.
-
Extract the crack file or run the keygen program. You might need to disable your antivirus software or firewall temporarily.
-
Copy the patch file or the generated license number and paste it into the installation directory or the license manager of COMSOL Multiphysics.
-
Restart the software and enjoy using it without any limitations.
-
-
Risks and consequences of cracking COMSOL Multiphysics license key
-
While cracking COMSOL Multiphysics license key might seem like an easy and convenient way to use the software for free, it also comes with many risks and consequences. Some of these are:
-
-
You might violate the intellectual property rights and the terms and conditions of COMSOL Multiphysics. Cracking the license key is illegal and unethical, and you might face legal actions or penalties from COMSOL or other authorities if you are caught.
-
You might compromise the quality and reliability of your simulations. Cracking the license key might cause errors, bugs, crashes, or malfunctions in the software. You might also miss out on updates, patches, fixes, and new features that COMSOL releases regularly.
-
You might expose your computer and data to security threats. Cracking the license key might introduce viruses, malware, spyware, or ransomware into your computer. These malicious programs might damage your system, steal your information, or lock your files until you pay a ransom.
-
You might lose your reputation and credibility as a professional. Cracking the license key might tarnish your image and reputation as an engineer, a scientist, or a researcher. You might lose your trustworthiness and integrity in your field and among your peers, clients, employers, or collaborators.
-
-
Alternatives to cracking COMSOL Multiphysics license key
-
If you want to use COMSOL Multiphysics without cracking the license key, there are some alternatives that you can consider. Some of these are:
-
-
You can use the free trial license for two weeks and evaluate the software's capabilities and suitability for your needs. You can also request an extension of the trial period if you need more time to test the software.
-
You can purchase a license or a subscription that fits your budget and requirements. You can choose from various options and packages that offer different products, features, and services. You can also take advantage of discounts, promotions, and special offers that COMSOL provides occasionally.
-
-You can use other simulation software that is free or open source. You can search for alternatives to COMSOL Multiphysics on Google or other search engines. Some examples of free or open source simulation software are OpenFOAM, Elmer, and FEniCS; a short FEniCS sketch follows this list.
-
-
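-For a taste of the open-source route, here is a minimal sketch of the classic Poisson demo in legacy FEniCS (the dolfin-based API), following the official FEniCS tutorial. It assumes the fenics Python package is installed; it is meant to illustrate the workflow of these free tools, not to reproduce any specific COMSOL model.
-
-```python
-from fenics import *  # legacy FEniCS (dolfin), wildcard import as in the official tutorial
-
-# Solve -laplace(u) = f on the unit square with u = 1 + x^2 + 2*y^2 on the boundary.
-mesh = UnitSquareMesh(32, 32)        # structured triangular mesh
-V = FunctionSpace(mesh, "P", 1)      # continuous piecewise-linear elements
-
-u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
-
-def boundary(x, on_boundary):        # Dirichlet condition on the whole boundary
-    return on_boundary
-
-bc = DirichletBC(V, u_D, boundary)
-
-u = TrialFunction(V)
-v = TestFunction(V)
-f = Constant(-6.0)                   # f = -laplace(u_D) = -(2 + 4) = -6
-a = dot(grad(u), grad(v)) * dx       # bilinear form (weak Laplacian)
-L = f * v * dx                       # linear form (source term)
-
-u_h = Function(V)
-solve(a == L, u_h, bc)
-print("Solved; max nodal value:", u_h.vector().max())
-```
-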
Conclusion
-
-In this article, we have discussed what COMSOL Multiphysics is, why you might need it, how to install it with a free trial license key, how to use it for various applications, how its license key can be cracked, and what the risks and consequences of doing so are. We have also suggested some alternatives to cracking its license key.
-
We hope that this article has been informative and helpful for you. If you have any questions or comments, please feel free to contact us. Thank you for reading!
-
FAQs
-
-
What is COMSOL Multiphysics? COMSOL Multiphysics is a comprehensive simulation software environment that allows you to account for coupled or multiphysics phenomena.
-
How can I get a free license key for COMSOL Multiphysics? You can get a free trial license for two weeks by filling out a form on https://www.comsol.com/trial.
-
How can I crack COMSOL Multiphysics license key? You can crack COMSOL Multiphysics license key by using a patch file or a keygen program that generates a fake license number.
-
What are the risks and consequences of cracking COMSOL Multiphysics license key? You might violate the intellectual property rights and the terms and conditions of COMSOL Multiphysics, compromise the quality and reliability of your simulations, expose your computer and data to security threats, and lose your reputation and credibility as a professional.
-
-What are some alternatives to cracking COMSOL Multiphysics license key? You can use the free two-week trial license, purchase a license or a subscription that fits your budget and requirements, or use other simulation software that is free or open source.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Grand Chase Offline Pc ((FREE)).md b/spaces/raedeXanto/academic-chatgpt-beta/Download Grand Chase Offline Pc ((FREE)).md
deleted file mode 100644
index d8e6d99c49cb761ae245f12cb2c0f104d870f351..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download Grand Chase Offline Pc ((FREE)).md
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
Download Grand Chase Offline PC
-
If you are looking for a fun and exciting role-playing game that you can play on your PC without an internet connection, then you should try Grand Chase. Grand Chase is a free-to-play game that combines action, adventure, and fantasy in a colorful and vibrant world. You can choose from over 70 different heroes, each with their own skills and abilities, and form a team of four to explore dungeons, fight monsters, and complete quests. You can also customize your heroes with various costumes, weapons, and accessories to make them stand out.
-
What is Grand Chase?
-
Grand Chase is a game that was originally developed by KOG Studios in South Korea in 2003. It was one of the most popular online games in Asia, with millions of players across different regions. The game was also released in other countries, such as Brazil, North America, Europe, and Philippines. However, due to various reasons, the official servers of Grand Chase were shut down in 2015.
Fortunately, there are still ways to play Grand Chase on your PC. One of them is to download a private server that allows you to play offline. A private server is an unofficial version of the game that is hosted by fans or developers who want to keep the game alive. There are several private servers available for Grand Chase, such as Grand Chase History, Grand Chase Madness, and Grand Chase Reborn. Each of them has their own features and updates that may differ from the original game.
-
Why play Grand Chase offline?
-
Playing Grand Chase offline has some advantages and disadvantages that you should consider before downloading it. Here are some of them:
-
The benefits of playing offline mode
-
-
You don't need an internet connection to play. This means you can enjoy the game anytime and anywhere without worrying about lag, disconnection, or data usage.
-
You don't have to deal with hackers, cheaters, or toxic players who may ruin your gaming experience. You can play at your own pace and style without being judged or harassed by others.
-
You can access all the content and features of the game without spending any money. You don't have to buy cash items or premium memberships to unlock costumes, pets, or other items. You can also get unlimited resources and currency to upgrade your heroes and equipment.
-
-
The drawbacks of playing offline mode
-
-
You may miss out on some of the fun and excitement of playing online. You won't be able to interact with other players, join guilds, participate in events, or compete in PvP modes. You may also feel lonely or bored after playing for a long time.
-
You may encounter some bugs, glitches, or errors that may affect your gameplay. Since private servers are not official or supported by KOG Studios, they may not be stable or secure. You may also lose your progress or data if the server crashes or shuts down.
-
-You may run into legal or ethical issues by playing offline. Since private servers are not authorized or endorsed by KOG Studios, they may infringe on their intellectual property rights or terms of service. You may also risk getting banned or sued by KOG Studios if they find out that you are playing offline.
-
-
How to download Grand Chase offline PC?
-
If you decide to play Grand Chase offline on your PC, you will need to follow some steps to download and install it properly. Here are some of them:
-
The requirements for downloading and installing the game
-
-
You will need a PC that meets the minimum system requirements for running Grand Chase. These are:
-
-
Operating system: Windows XP/Vista/7/8/10/11
-
Processor: Pentium 4 1.5 GHz or higher
-
Memory: 512 MB RAM or higher
-
Graphics: GeForce FX 5600 or higher
-
DirectX: Version 9.0c or higher
-
Storage: 2 GB available space or higher
-
-
-You will also need reliable antivirus software that can scan and protect your PC from any viruses or malware that may come with the private server files.
-
You will also need a good compression software that can extract the private server files from their compressed format.
-
-
The steps to download and install the game
-
-
Choose a private server that you want to play on. You can search online for reviews or recommendations from other players who have tried them before.
-
Go to the official website of the private server and register an account if needed.
-
Download the private server files from their download page. They may come in different parts or formats depending on the server.
-
Extract the private server files using your compression software. Make sure you have enough space on your PC for them.
-
Run the setup.exe file and follow the instructions on how to install the game on your PC.
-
Launch the game using the launcher.exe file or a shortcut on your desktop.
-
Login with your account details and enjoy playing Grand Chase offline.
-
-
The tips and tricks to optimize the game performance
-
-
Adjust the graphics settings according to your PC specifications. You can lower the resolution, quality, or effects if your PC is slow or laggy.
-
Close any unnecessary programs or applications that may consume your CPU or RAM resources while playing.
-
Update your drivers or software regularly to ensure compatibility and stability.
-
Clean up your disk space or defragment your hard drive to improve loading speed and reduce errors.
-
Contact the private server support team if you encounter any problems or issues while playing.
-
-
How to play Grand Chase offline PC?
-
Once you have downloaded and installed Grand Chase offline on your PC, you can start playing it right away. Here are some tips on how to play it:
-
The basic controls and gameplay mechanics
-
-
The game is played with a keyboard and mouse combination. You can use the arrow keys or WASD keys to move your character around. You can use Z,X,C,V keys to perform basic attacks or skills. You can use A,S,D,F,G keys to switch between your team members.
-
The game is divided into different regions, each with its own dungeons and missions. You can access them from the world map screen by pressing M key. You can select a dungeon by clicking on it and choosing a difficulty level.
-
The game follows a side-scrolling perspective where you have to defeat enemies and bosses along the way. You can use combos and special skills to deal more damage and gain advantages over your foes.
-
The game has various items and equipment that you can collect from enemies, chests, shops, or quests. You can equip them on your heroes by pressing I key and opening your inventory screen. You can also upgrade them using materials or currency.
-
-
The different modes and challenges available offline
-
-
The game has several modes that you can play offline besides the main story mode. These include:
-
-
Park Mode: A casual mode where you can explore different maps with no enemies or objectives. You can chat with NPCs, interact with objects, or just relax.
-
Trial Tower: A challenging mode where you have to climb up a tower with 100 floors filled with enemies and traps. You can earn rewards based on how high you reach.
-
-Heroes' Tower: A mode similar to Trial Tower but with more difficult enemies and bosses based on the game's own characters.
The best characters and strategies to use offline
-
The game has a lot of characters that you can collect and use offline. However, some of them are better than others depending on the mode, difficulty, and situation. Here are some of the best characters and strategies to use offline:
-
-
Amy: Amy is one of the best healers in the game. She can heal your team, increase their SP, and buff their attack speed. She is also very cute and cheerful. You should always have Amy in your team if you want to survive longer and deal more damage.
-
Jin: Jin is probably the best tank in the game hands down. He can protect his allies by utilizing his chi force. He can also deal decent damage with his martial arts skills. He is very versatile and can fit in any team composition.
-
Lass: Lass is a master assassin with massive damage output. He can also buff his allies to deal critical hits. He is very fast and agile, and can dodge enemy attacks easily. He is ideal for boss fights and PvP modes.
-
Ley: Ley is a powerful mage who can summon demons to fight for her. She can also cast spells that can hit multiple targets and inflict various debuffs. She is very good at crowd control and AoE damage. She is perfect for dungeon clearing and farming.
-
Elesis: Elesis is a leader who can inspire her team with her skills. She can increase their attack power, defense, and critical rate. She can also switch between sword and spear modes to adapt to different situations. She is a well-rounded character who can support and damage at the same time.
-
-
Of course, these are not the only good characters in the game. You may find other characters that suit your playstyle or preference better. You can also experiment with different combinations and synergies to find the best team for you.
-
-
Conclusion
-
Grand Chase is a fun and exciting role-playing game that you can play offline on your PC. You can enjoy the action-packed gameplay, the colorful graphics, and the diverse characters without needing an internet connection. You can also access all the content and features of the game without spending any money.
-
-However, playing offline also has some drawbacks that you should be aware of. You may miss out on some of the social aspects of playing online, such as chatting with other players, joining guilds, or participating in events. You may also encounter some bugs or errors that may affect your gameplay. You may also run into legal or ethical issues by playing offline.
-
Therefore, you should weigh the pros and cons of playing offline before downloading it. You should also follow the steps and tips on how to download, install, and play Grand Chase offline properly. You should also choose the best characters and strategies to use offline to have a better gaming experience.
-
If you are ready to play Grand Chase offline on your PC, then go ahead and download it now. You won't regret it!
-
FAQs
-
Here are some of the frequently asked questions about Grand Chase offline PC:
-
-
Q: Is Grand Chase offline PC safe to download?
-
-A: Generally, yes. However, you should always download it from a trusted source and scan it with antivirus software before installing it.
-
Q: Is Grand Chase offline PC legal to play?
-
A: Technically, no. Since Grand Chase offline PC is not authorized or endorsed by KOG Studios, it may infringe on their intellectual property rights or terms of service. You may risk getting banned or sued by KOG Studios if they find out that you are playing offline.
-
Q: Is Grand Chase offline PC updated regularly?
-
A: It depends on the private server that you are playing on. Some private servers may update their content and features more frequently than others. You should check their official website or social media for any news or announcements.
-
Q: Can I play Grand Chase offline PC with my friends?
-
A: Yes, you can. However, you will need to be on the same private server as them and have their IP address or username to connect with them.
-
Q: Can I transfer my progress or data from Grand Chase online to Grand Chase offline PC?
-
A: No, you can't. Grand Chase online and Grand Chase offline PC are separate games with different servers and databases. You will have to start from scratch when you play offline.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Embarcadero ER Studio 7.0 Serial Key LINK Keygen.md b/spaces/raedeXanto/academic-chatgpt-beta/Embarcadero ER Studio 7.0 Serial Key LINK Keygen.md
deleted file mode 100644
index e193c1e7bbef178c46ca62aad49e0434a33ce576..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Embarcadero ER Studio 7.0 Serial Key LINK Keygen.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
How to Crack Embarcadero ER Studio 7.0 with Serial Key
-
Embarcadero ER Studio 7.0 is a powerful and comprehensive data modeling tool that helps you design, document and optimize your databases. Whether you are working with relational, dimensional, NoSQL or big data sources, ER Studio can help you create high-quality logical and physical models that support data governance, quality and security.
-
However, ER Studio 7.0 is not a free software and requires a valid serial key to activate its full features. If you are looking for a way to crack ER Studio 7.0 with a serial key, you have come to the right place. In this article, we will show you how to generate a working serial key for ER Studio 7.0 and use it to unlock the software.
The first step is to download the setup file of ER Studio 7.0 from the official website of Embarcadero Technologies. You can choose between the 32-bit or 64-bit version depending on your system requirements. The file size is about 300 MB and may take some time to download depending on your internet speed.
-
Step 2: Install Embarcadero ER Studio 7.0
-
Once you have downloaded the setup file, run it as administrator and follow the instructions on the screen to install ER Studio 7.0 on your computer. You will need to accept the license agreement and choose the destination folder for the installation. You can also customize the components and features that you want to install.
-
Step 3: Generate a Serial Key for Embarcadero ER Studio 7.0
-
Now comes the tricky part. To generate a serial key for ER Studio 7.0, you will need to use a keygen program that can create valid codes for the software. There are many keygen programs available on the internet, but not all of them are reliable or safe. Some may contain viruses or malware that can harm your computer or steal your personal information.
-
Therefore, we recommend you to use the keygen program that we have provided in this article. It is tested and verified by our team and does not contain any harmful elements. You can download it from the link below:
After downloading the keygen program, run it as administrator and click on the "Generate" button. It will create a random serial key for ER Studio 7.0 that you can copy and paste into the activation window of the software.
-
Step 4: Activate Embarcadero ER Studio 7.0 with Serial Key
-
The final step is to activate ER Studio 7.0 with the serial key that you have generated using the keygen program. To do this, launch ER Studio 7.0 and click on the "Help" menu at the top right corner of the screen. Then select "Register" from the drop-down list.
-
A new window will pop up asking you to enter your name, company name and serial number. Fill in the required fields with your own details and paste the serial key that you have copied from the keygen program into the serial number field. Then click on the "OK" button to complete the registration process.
-
-
Congratulations! You have successfully cracked Embarcadero ER Studio 7.0 with a serial key and activated its full features. You can now enjoy using this powerful data modeling tool for your projects.
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/rajistics/News_Topic_Clustering/app.py b/spaces/rajistics/News_Topic_Clustering/app.py
deleted file mode 100644
index 3e4741e3b0fd08b7eb24c229f10a8f08b6377087..0000000000000000000000000000000000000000
--- a/spaces/rajistics/News_Topic_Clustering/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from bertopic import BERTopic
-import streamlit as st
-import streamlit.components.v1 as components
-#from datasets import load_dataset
-import pandas as pd
-from datasets import load_dataset
-import json
-
-##Load Dataset from HF Hub
-#dataset = load_dataset("rshah/million-headlines")
-#news = pd.DataFrame.from_dict(dataset["train"])
-
-#Load dataset locally - faster for demo
-news = pd.read_parquet("topic_10000.par")
-news['date'] = pd.to_datetime(news['publish_date'], format='%Y%m%d')
-timestamps = news.date.to_list()
-tweets = news.headline_text.to_list()
-
-#Load topics
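-# (assumed) the "topics" file stores one topic assignment per headline from the pretrained run;
-# it is passed to topics_over_time() further down when the existing model is loaded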
-with open("topics", "r") as fp:
- topics = json.load(fp)
-
-option_n = 5
-
-st.set_page_config(page_title="News Topic Clustering")
-st.title("News Topic Clustering")
-st.caption("By Rajiv Shah")
-st.caption("")
-st.caption("This is a simple example of using identifying topics in the [one million ABC news headline dataset](https://huggingface.co/datasets/rshah/million-headlines). \
- If you look at the code for this app, you will see how it uses just a few lines of [BERTopic](https://maartengr.github.io/BERTopic/index.html) to \
- build the topics and create the visualizations")
-st.caption("The preloaded existing model provides the more interesting results. However, this app can be run live by building a new model, but \
- is limited to a small number of rows. I also limited topics over time to the existing model.")
-
-
-form = st.sidebar.form("Main Settings")
-form.header("Main Settings")
-option = form.selectbox(
- 'What model would you like to run',
- ('Load existing model', 'Build new model'),index=0)
-
-option_n = form.number_input(
- 'What topic would you like to get terms for?',
- min_value=0,max_value=10,value=5)
-
-submitted = form.form_submit_button(label = 'Select Model')
-
-if option == 'Load existing model':
- ##Load existing model
- topic_model = BERTopic.load("topic_10000.model")
- #topics, _ = topic_model.transform(tweets)
-else:
- ##Builds Topic Model
- #news_sample = news[(news['date'] > '2015-06-01')]
- news_sample = news[(news['date'] > '2017-01-01') & (news['date'] < '2019-01-01') ]
- news_sample = news_sample.sample(200,random_state=123)
- tweets = news_sample.headline_text.to_list()
- topic_model = BERTopic(min_topic_size=5, verbose=True)
- topics, _ = topic_model.fit_transform(tweets)
-
-
-#Get top topics
-freq = topic_model.get_topic_info()
-freq = freq.iloc[1:, :]  # drop the -1 outlier-topic row
-st.header("The Main Topic Clusters")
-st.write(freq)
-
-
-topic_nr = freq.iloc[option_n]["Topic"] # We select a frequent topic
-st.caption("")
-st.write('Top words in topic cluster: ',option_n)
-#st.caption(option_n)
-mytuple = (topic_model.get_topic(topic_nr))
-for item in mytuple:
- st.write(str(item[0]))
-
-st.header("Relationships between clusters ")
-st.plotly_chart(topic_model.visualize_hierarchy())
-
-
-if option == 'Load existing model':
- st.header("Topics over time for Existing Model")
- topics_over_time = topic_model.topics_over_time(docs=tweets,
- topics=topics,
- timestamps=timestamps,
- global_tuning=True,
- evolution_tuning=True,
- nr_bins=20)
-
- st.plotly_chart(topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=20))
\ No newline at end of file
diff --git a/spaces/rakibulbd030/GFPGAN/app.py b/spaces/rakibulbd030/GFPGAN/app.py
deleted file mode 100644
index 67fcac0171bbb77d2b1d3b23b7293635b6297e28..0000000000000000000000000000000000000000
--- a/spaces/rakibulbd030/GFPGAN/app.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import os
-
-import cv2
-import gradio as gr
-import torch
-from basicsr.archs.srvgg_arch import SRVGGNetCompact
-from gfpgan.utils import GFPGANer
-from realesrgan.utils import RealESRGANer
-
-os.system("pip freeze")
-# download weights
-if not os.path.exists('realesr-general-x4v3.pth'):
- os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P .")
-if not os.path.exists('GFPGANv1.2.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth -P .")
-if not os.path.exists('GFPGANv1.3.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P .")
-if not os.path.exists('GFPGANv1.4.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P .")
-if not os.path.exists('RestoreFormer.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth -P .")
-if not os.path.exists('CodeFormer.pth'):
- os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/CodeFormer.pth -P .")
-
-torch.hub.download_url_to_file(
- 'https://thumbs.dreamstime.com/b/tower-bridge-traditional-red-bus-black-white-colors-view-to-tower-bridge-london-black-white-colors-108478942.jpg',
- 'a1.jpg')
-torch.hub.download_url_to_file(
- 'https://media.istockphoto.com/id/523514029/photo/london-skyline-b-w.jpg?s=612x612&w=0&k=20&c=kJS1BAtfqYeUDaORupj0sBPc1hpzJhBUUqEFfRnHzZ0=',
- 'a2.jpg')
-torch.hub.download_url_to_file(
- 'https://i.guim.co.uk/img/media/06f614065ed82ca0e917b149a32493c791619854/0_0_3648_2789/master/3648.jpg?width=700&quality=85&auto=format&fit=max&s=05764b507c18a38590090d987c8b6202',
- 'a3.jpg')
-torch.hub.download_url_to_file(
- 'https://i.pinimg.com/736x/46/96/9e/46969eb94aec2437323464804d27706d--victorian-london-victorian-era.jpg',
- 'a4.jpg')
-
-# background enhancer with RealESRGAN
-model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu')
-model_path = 'realesr-general-x4v3.pth'
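-# run the background upsampler in half precision when a GPU is available (saves memory); full precision on CPU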
-half = True if torch.cuda.is_available() else False
-upsampler = RealESRGANer(scale=4, model_path=model_path, model=model, tile=0, tile_pad=10, pre_pad=0, half=half)
-
-os.makedirs('output', exist_ok=True)
-
-
-# def inference(img, version, scale, weight):
-def inference(img, version, scale):
- # weight /= 100
- print(img, version, scale)
- try:
- extension = os.path.splitext(os.path.basename(str(img)))[1]
- img = cv2.imread(img, cv2.IMREAD_UNCHANGED)
- if len(img.shape) == 3 and img.shape[2] == 4:
- img_mode = 'RGBA'
- elif len(img.shape) == 2: # for gray inputs
- img_mode = None
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- else:
- img_mode = None
-
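-        # (assumed) upscale very small inputs first so faces are large enough for the enhancer to work with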
- h, w = img.shape[0:2]
- if h < 300:
- img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4)
-
- if version == 'v1.2':
- face_enhancer = GFPGANer(
- model_path='GFPGANv1.2.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'v1.3':
- face_enhancer = GFPGANer(
- model_path='GFPGANv1.3.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'v1.4':
- face_enhancer = GFPGANer(
- model_path='GFPGANv1.4.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'RestoreFormer':
- face_enhancer = GFPGANer(
- model_path='RestoreFormer.pth', upscale=2, arch='RestoreFormer', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'CodeFormer':
- face_enhancer = GFPGANer(
- model_path='CodeFormer.pth', upscale=2, arch='CodeFormer', channel_multiplier=2, bg_upsampler=upsampler)
- elif version == 'RealESR-General-x4v3':
- face_enhancer = GFPGANer(
- model_path='realesr-general-x4v3.pth', upscale=2, arch='realesr-general', channel_multiplier=2, bg_upsampler=upsampler)
-
- try:
- # _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True, weight=weight)
- _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
- except RuntimeError as error:
- print('Error', error)
-
- try:
- if scale != 2:
- interpolation = cv2.INTER_AREA if scale < 2 else cv2.INTER_LANCZOS4
- h, w = img.shape[0:2]
- output = cv2.resize(output, (int(w * scale / 2), int(h * scale / 2)), interpolation=interpolation)
- except Exception as error:
- print('wrong scale input.', error)
- if img_mode == 'RGBA': # RGBA images should be saved in png format
- extension = 'png'
- else:
- extension = 'jpg'
- save_path = f'output/out.{extension}'
- cv2.imwrite(save_path, output)
-
- output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
- return output, save_path
- except Exception as error:
- print('global exception', error)
- return None, None
-
-
-title = "Image Upscaling & Restoration (esp. Face) using GFPGAN Algorithm"
-description = r"""Gradio demo for GFPGAN: Towards Real-World Blind Face Restoration and Upscaling of the image with a Generative Facial Prior.
-Practically, the algorithm is used to restore your **old photos** or improve **AI-generated faces**.
-To use it, simply upload the image you want to restore.
-"""
-article = r"""
-[](https://github.com/TencentARC/GFPGAN/releases)
-[](https://github.com/TencentARC/GFPGAN)
-[](https://arxiv.org/abs/2101.04061)
-
-"""
-demo = gr.Interface(
- inference, [
- gr.inputs.Image(type="filepath", label="Input"),
- # gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer', 'CodeFormer'], type="value", default='v1.4', label='version'),
- gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer','CodeFormer','RealESR-General-x4v3'], type="value", default='v1.4', label='version'),
- gr.inputs.Number(label="Rescaling factor", default=2),
- # gr.Slider(0, 100, label='Weight, only for CodeFormer. 0 for better quality, 100 for better identity', default=50)
- ], [
- gr.outputs.Image(type="numpy", label="Output (The whole image)"),
- gr.outputs.File(label="Download the output image")
- ],
- title=title,
- description=description,
- article=article,
- # examples=[['AI-generate.jpg', 'v1.4', 2, 50], ['lincoln.jpg', 'v1.4', 2, 50], ['Blake_Lively.jpg', 'v1.4', 2, 50],
- # ['10045.png', 'v1.4', 2, 50]]).launch()
- examples=[['a1.jpg', 'v1.4', 2], ['a2.jpg', 'v1.4', 2], ['a3.jpg', 'v1.4', 2],['a4.jpg', 'v1.4', 2]])
-
-demo.queue(concurrency_count=4)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/rasyidf/coffee-grader/README.md b/spaces/rasyidf/coffee-grader/README.md
deleted file mode 100644
index d063ced6f22202bd419cbda1db6c95b5b8ac13fd..0000000000000000000000000000000000000000
--- a/spaces/rasyidf/coffee-grader/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Coffee Bean Grader
-emoji: ☕
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download.md
deleted file mode 100644
index c277cdcf6e551b543151b9456b0bdaf88a7e2ab8..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download: A Complete Guide
-
-
Do you love cars and want to learn how to fix them? Do you want to run your own car workshop and become a successful mechanic? Do you want to enjoy a realistic and immersive car simulation game with tons of content and features? If you answered yes to any of these questions, then you should check out Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download, a bundle that contains the base game of Car Mechanic Simulator 2015 and all the additional content that has been released for it. In this article, we will show you what this game is all about, how to download and install it, and what are the features and benefits of playing it.
-
-
What is Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download?
-
-
Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download is a bundle that contains the base game of Car Mechanic Simulator 2015 and all the additional content that has been released for it. The base game is a car simulation game that lets you create and expand your own car workshop empire by repairing cars for your clients, buying and selling cars on the internet or at auctions, renovating old cars and collecting them or stripping them for parts, customizing your cars with visual and performance tuning options, test driving your cars on various tracks or on an open road, learning about car mechanics and engineering by inspecting and replacing various parts of your cars, enjoying realistic graphics and physics that make your cars look and behave like real ones, playing in different game modes such as career mode, free mode, or multiplayer mode, and working on over 40 different car models from various manufacturers, each with its own unique parts and systems that require different tools and skills to fix.
-
Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download
The additional content that is included in the bundle are:
-
-
-
Car Mechanic Simulator 2015 - PickUp & SUV: This DLC adds two new car models, a pickup truck and an SUV, as well as new parts and tools to work on them.
-
Car Mechanic Simulator 2015 - Trader Pack: This DLC adds a new feature that allows you to buy and sell cars on the internet, as well as new barns and junkyards to find rare vehicles.
-
Car Mechanic Simulator 2015 - Visual Tuning: This DLC adds new options to customize the appearance of your cars, such as paint jobs, decals, rims, tires, bumpers, spoilers, and more.
-
Car Mechanic Simulator 2015 - Youngtimer: This DLC adds four classic cars from the 80s and 90s, such as the Mercedes-Benz W123, the Volkswagen Golf MK1 GTI, the DeLorean DMC-12, and the Renault Alpine A310.
-
Car Mechanic Simulator 2015 - Performance DLC: This DLC adds new parts and tools to improve the performance of your cars, such as turbochargers, superchargers, intercoolers, sport exhausts, ECU tuning, and more.
-
Car Mechanic Simulator 2015 - Bentley: This DLC adds two luxury cars from the British manufacturer Bentley, the Bentley Continental GT Speed and the Bentley Mulsanne Speed.
-
Car Mechanic Simulator 2015 - Maserati: This DLC adds three sports cars from the Italian manufacturer Maserati, the Maserati GranTurismo MC Stradale, the Maserati Sebring, and the Maserati Quattroporte.
-
Car Mechanic Simulator 2015 - Mercedes-Benz: This DLC adds four cars from the German manufacturer Mercedes-Benz, the Mercedes-Benz 300 SL Gullwing (W198), the Mercedes-Benz 560 SEC (W126), the Mercedes-Benz 500E (W124), and the Mercedes-Benz SLS AMG (C197).
-
Car Mechanic Simulator 2015 - DeLorean: This DLC adds one iconic car from the Back to the Future movie franchise, the DeLorean DMC-12 with its time machine modifications.
-
Car Mechanic Simulator 2015 - Car Stripping: This DLC adds a new feature that allows you to strip down cars for parts and sell them on the market.
-
-
-
With all these DLCs included, Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game Download offers a lot of variety and content for car enthusiasts and simulation fans alike.
-
-
How to Download and Install Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game?
-
-
If you want to download and install Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game, you will need a PC that meets the minimum system requirements:
-
-
-
OS: Windows XP SP3 / Vista / 7 / 8
-
Processor: Core i3 3.1 GHz or AMD Phenom II X3 2.8 GHz
-
Memory: 4 GB RAM
-
Graphics: GeForce GTX 560 or Radeon HD6870 with 2GB VRAM
-
DirectX: Version 9.0c
-
Storage: 4 GB available space
-
Sound Card: DirectX compatible
-
-
-
To download Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game, you can use one of these links:
-
-
-
Steam: This is the official platform where you can buy and download Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game for $24.99 (or $2.49 during special promotions).
-
RepackLab: This is an unofficial site where you can download Car Mechanic Simulator 2015 Gold Edition V1.6 Incl ALL DLC Game for free (or donate if you want to support them).
-
-
-
-To install Car Mechanic Simulator 2015 Gold Edition V1.1.1.5 Incl ALL DLC Game, you will need to follow these steps:
-
-
-
Download Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game from one of the links above.
-
Extract the downloaded file using WinRAR or any other file archiver.
-
Run setup.exe and follow the instructions on screen.
-
Select your preferred language and destination folder.
-
Wait for the installation to finish.
-
Launch Car Mechanic Simulator 2015 Gold Edition V6 from your desktop or start menu.
-
Enjoy!
-
-
-
What are the Features and Benefits of Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game?
-
-
Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game has many features and benefits that make it a great game for car lovers and simulation fans alike:
-
-
-
You can create and expand your own car workshop empire by repairing cars for your clients, buying and selling cars on the internet or at auctions, renovating old cars and collecting them or stripping them for parts.
-
You can work on over 40 different car models from various manufacturers, each with its own unique parts and systems that require different tools and skills to fix.
-
You can customize your cars with visual tuning options such as paint jobs, decals, rims, tires, bumpers, spoilers etc., or performance tuning options such as turbochargers superchargers intercoolers sport exhausts ECU tuning etc.
-
You can test drive your cars on various tracks or on an open road to check their condition performance before returning them to your clients or selling them.
-
You can learn about car mechanics engineering by inspecting replacing various parts of your cars such as engines transmissions brakes suspensions etc., or by reading detailed descriptions of each part in your inventory.
-
You can enjoy realistic graphics physics that make your cars look behave like real ones.
-
You can play in different game modes such as career mode where you have to complete missions earn money; free mode where you can work on any car you want; or multiplayer mode where you can compete with other players online.
-
-
-
-
What are the Pros and Cons of Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game?
-
-
Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game is not a perfect game, and it has its pros and cons that you should consider before playing it. Here are some of them:
-
-
-
Pros:
-
-
-
The game is very realistic and immersive, and it gives you a lot of freedom and creativity to work on your cars.
-
The game has a lot of content and variety, thanks to all the DLCs included in the bundle. You can work on different car models, customize them with different options, buy and sell them on different platforms, and more.
-
The game is educational and informative, and it teaches you about car mechanics and engineering by letting you inspect and replace various parts of your cars.
-
The game has good graphics and physics that make your cars look and behave like real ones.
-
The game has different game modes that suit different preferences and play styles. You can play in career mode, free mode, or multiplayer mode.
-
-
-
Cons:
-
-
-
The game can be repetitive and boring after a while, especially if you work on the same car models or parts over and over again.
-
The game can be frustrating and challenging, especially if you encounter difficult or complex jobs that require a lot of time and skill to complete.
-
The game can be buggy and glitchy, especially if you download it from unofficial sources or use mods that are not compatible with the game.
-
The game can be expensive, especially if you buy it from the official platform or if you want to buy more DLCs that are not included in the bundle.
-
-
-
How to Play Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game?
-
-
If you want to play Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game, you will need to know some basic tips and tricks that will help you enjoy the game more. Here are some of them:
-
-
-
Start with easy jobs that don't require a lot of tools or skills to complete. This will help you earn money and experience faster.
-
Use the inventory menu to check the details of each part that you have or need. This will help you identify the broken parts and find the right replacements.
-
Use the test path or the test track to check the condition and performance of your cars before returning them to your clients or selling them. This will help you avoid complaints or refunds.
-
Use the internet or the auction house to buy and sell cars that are rare or profitable. This will help you expand your collection or earn more money.
-
Use the visual tuning or the performance tuning options to customize your cars according to your preferences or your clients' requests. This will help you increase your reputation or satisfaction.
-
-
-
Why Should You Play Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game?
-
-
If you are still not convinced that Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game is a game worth playing, here are some reasons why you should give it a try:
-
-
-
You should play Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game if you love cars and want to learn how to fix them. The game will teach you about car mechanics and engineering in a fun and interactive way.
-
You should play Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game if you want to run your own car workshop and become a successful mechanic. The game will let you create and expand your own car workshop empire by repairing cars for your clients, buying and selling cars on the internet or at auctions, renovating old cars and collecting them or stripping them for parts, customizing your cars with visual and performance tuning options, test driving your cars on various tracks or on an open road, playing in different game modes such as career mode, free mode, or multiplayer mode, and working on over 40 different car models from various manufacturers, each with its own unique parts and systems that require different tools and skills to fix.
-
You should play Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game if you want to enjoy a realistic and immersive car simulation game with tons of content and features. The game will offer you a lot of content and variety thanks to all the DLCs included in the bundle. You can work on different car models, customize them with different options, buy and sell them on different platforms, and more. The game will also offer you realistic graphics and physics that make your cars look and behave like real ones.
-
-
-
So what are you waiting for? Download Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game today and start your car mechanic career!
-
Conclusion
-
-
In conclusion, Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game is a fun and engaging game that lets you experience what it's like to be a car mechanic in a realistic way. It has a lot of content and variety thanks to all the DLCs included in it. It is also easy to download and install using one of the links provided above. If you are looking for a game that combines car simulation with business management and creativity, then Car Mechanic Simulator 2015 Gold Edition V6 Incl ALL DLC Game is definitely worth trying out!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FrontOfficeFootballEightCrackSerialKey.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FrontOfficeFootballEightCrackSerialKey.md
deleted file mode 100644
index 448a6f55bb2faac6e15aae78aa6f4ab2e0fa559a..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/FrontOfficeFootballEightCrackSerialKey.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
https://cdn.thingiverse.com/assets/94/88/08/e6/86/FrontOfficeFootballEightCrackSerialKey.html https://repo.steampowered.com/steam/app/2580925/UwJ-QJw6fUNyEEtBTWqVZYYHZuDVeHUANoplfZX3Zaag1kSg4M. Download FrontOfficeFootballEightCrackSerialKey here
FrontOfficeFootballEightCrackSerialKey Download Landscape designer pro x 10 crack. FrontOfficeFootballEightCrackSerialKey Download Personal Training Plan With Wix. https://coub.com/stories/2640860-frontofficefootballeightcrackserialkey. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv.
-
FrontOfficeFootballEightCrackSerialKey Download Banana song karaoke.
https://coub.com/stories/2249063-frontofficefootballeightcrackserialkey-anhnger. FrontOfficeFootballEightCrackSerialKey Download Microsoft Smartphone Sim Card Cracker 4.0.0 Crack. FrontOfficeFootballEightCrackSerialKey Download Lamborghini and SUV Color Designer Version 1.0 Crack. https://coub.com/stories/2249063-frontofficefootballeightcrackserialkey-anhnger. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv. https://coub.com/stories/2640860-frontofficefootballeightcrackserialkey. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv. https://coub.com/stories/2980089-frontofficefootballeightcrackserialkey-axfyovv.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/robmarkcole/yolov5-ui/app.py b/spaces/robmarkcole/yolov5-ui/app.py
deleted file mode 100644
index 3b7a90f575950d9556e227f7e6d6f4b1e979dd7c..0000000000000000000000000000000000000000
--- a/spaces/robmarkcole/yolov5-ui/app.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import streamlit as st
-import torch
-from PIL import Image, ImageDraw
-from typing import Tuple
-import numpy as np
-import const
-import time
-
-def draw_box(
- draw: ImageDraw,
- box: Tuple[float, float, float, float],
- text: str = "",
- color: Tuple[int, int, int] = (255, 255, 0),
-) -> None:
- """
-    Draw a bounding box on an image.
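-    The box is given as (y_min, x_min, y_max, x_max) in pixel coordinates.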
- """
-
- line_width = 3
- font_height = 8
- y_min, x_min, y_max, x_max = box
- (left, right, top, bottom) = (
- x_min,
- x_max,
- y_min,
- y_max,
- )
- draw.line(
- [(left, top), (left, bottom), (right, bottom), (right, top), (left, top)],
- width=line_width,
- fill=color,
- )
- if text:
- draw.text(
- (left + line_width, abs(top - line_width - font_height)), text, fill=color
- )
-
-
-@st.cache(allow_output_mutation=True, show_spinner=True)
-def get_model(model_id : str = "yolov5s"):
- model = torch.hub.load("ultralytics/yolov5", model_id)
- return model
-
-# Settings
-st.sidebar.title("Settings")
-model_id = st.sidebar.selectbox("Pretrained model", const.PRETRAINED_MODELS, index=1)
-img_size = st.sidebar.selectbox("Image resize for inference", const.IMAGE_SIZES, index=1)
-CONFIDENCE = st.sidebar.slider(
- "Confidence threshold",
- const.MIN_CONF,
- const.MAX_CONF,
- const.DEFAULT_CONF,
-)
-
-model = get_model(model_id)
-st.title(f"{model_id}")
-
-img_file_buffer = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
-if img_file_buffer is not None:
- pil_image = Image.open(img_file_buffer)
-
-else:
- pil_image = Image.open(const.DEFAULT_IMAGE)
-
-st.text(f"Input image width and height: {pil_image.width} x {pil_image.height}")
-start_time = time.time()
-results = model(pil_image, size=img_size)
-end_time = time.time()
-
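-# results.pandas().xyxy[0]: one row per detection with columns xmin, ymin, xmax, ymax, confidence, class, name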
-df = results.pandas().xyxy[0]
-df = df[df["confidence"] > CONFIDENCE]
-
-draw = ImageDraw.Draw(pil_image)
-for _, obj in df.iterrows():
- name = obj["name"]
- confidence = obj["confidence"]
- box_label = f"{name}"
-
- draw_box(
- draw,
- (obj["ymin"], obj["xmin"], obj["ymax"], obj["xmax"]),
- text=box_label,
- color=const.RED,
- )
-
-st.image(
- np.array(pil_image),
- caption=f"Processed image",
- use_column_width=True,
-)
-
-st.text(f"Time to inference: {round(end_time - start_time, 2)} sec")
-
-st.table(df)
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/utils/ckpt_convert.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/utils/ckpt_convert.py
deleted file mode 100644
index 4d660c4e4ddbc289f6882333e5eec4360a17aaf2..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/utils/ckpt_convert.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-# This script consists of several convert functions which
-# can modify the weights of model in original repo to be
-# pre-trained weights.
-
-from collections import OrderedDict
-
-import torch
-
-
-def pvt_convert(ckpt):
- new_ckpt = OrderedDict()
- # Process the concat between q linear weights and kv linear weights
- use_abs_pos_embed = False
- use_conv_ffn = False
- for k in ckpt.keys():
- if k.startswith('pos_embed'):
- use_abs_pos_embed = True
- if k.find('dwconv') >= 0:
- use_conv_ffn = True
- for k, v in ckpt.items():
- if k.startswith('head'):
- continue
- if k.startswith('norm.'):
- continue
- if k.startswith('cls_token'):
- continue
- if k.startswith('pos_embed'):
- stage_i = int(k.replace('pos_embed', ''))
- new_k = k.replace(f'pos_embed{stage_i}',
- f'layers.{stage_i - 1}.1.0.pos_embed')
- if stage_i == 4 and v.size(1) == 50: # 1 (cls token) + 7 * 7
- new_v = v[:, 1:, :] # remove cls token
- else:
- new_v = v
- elif k.startswith('patch_embed'):
- stage_i = int(k.split('.')[0].replace('patch_embed', ''))
- new_k = k.replace(f'patch_embed{stage_i}',
- f'layers.{stage_i - 1}.0')
- new_v = v
- if 'proj.' in new_k:
- new_k = new_k.replace('proj.', 'projection.')
- elif k.startswith('block'):
- stage_i = int(k.split('.')[0].replace('block', ''))
- layer_i = int(k.split('.')[1])
- new_layer_i = layer_i + use_abs_pos_embed
- new_k = k.replace(f'block{stage_i}.{layer_i}',
- f'layers.{stage_i - 1}.1.{new_layer_i}')
- new_v = v
- if 'attn.q.' in new_k:
- sub_item_k = k.replace('q.', 'kv.')
- new_k = new_k.replace('q.', 'attn.in_proj_')
- new_v = torch.cat([v, ckpt[sub_item_k]], dim=0)
- elif 'attn.kv.' in new_k:
- continue
- elif 'attn.proj.' in new_k:
- new_k = new_k.replace('proj.', 'attn.out_proj.')
- elif 'attn.sr.' in new_k:
- new_k = new_k.replace('sr.', 'sr.')
- elif 'mlp.' in new_k:
- new_k = new_k.replace('mlp.', 'ffn.layers.')
- if 'fc1.weight' in new_k or 'fc2.weight' in new_k:
- new_v = v.reshape((*v.shape, 1, 1))
- new_k = new_k.replace('fc1.', '0.')
- new_k = new_k.replace('dwconv.dwconv.', '1.')
- if use_conv_ffn:
- new_k = new_k.replace('fc2.', '4.')
- else:
- new_k = new_k.replace('fc2.', '3.')
- elif k.startswith('norm'):
- stage_i = int(k[4])
- new_k = k.replace(f'norm{stage_i}', f'layers.{stage_i - 1}.2')
- new_v = v
- else:
- new_k = k
- new_v = v
- new_ckpt[new_k] = new_v
-
- return new_ckpt
-
-
-def swin_converter(ckpt):
-
- new_ckpt = OrderedDict()
-
- def correct_unfold_reduction_order(x):
- out_channel, in_channel = x.shape
- x = x.reshape(out_channel, 4, in_channel // 4)
- x = x[:, [0, 2, 1, 3], :].transpose(1,
- 2).reshape(out_channel, in_channel)
- return x
-
- def correct_unfold_norm_order(x):
- in_channel = x.shape[0]
- x = x.reshape(4, in_channel // 4)
- x = x[[0, 2, 1, 3], :].transpose(0, 1).reshape(in_channel)
- return x
-
- for k, v in ckpt.items():
- if k.startswith('head'):
- continue
- elif k.startswith('layers'):
- new_v = v
- if 'attn.' in k:
- new_k = k.replace('attn.', 'attn.w_msa.')
- elif 'mlp.' in k:
- if 'mlp.fc1.' in k:
- new_k = k.replace('mlp.fc1.', 'ffn.layers.0.0.')
- elif 'mlp.fc2.' in k:
- new_k = k.replace('mlp.fc2.', 'ffn.layers.1.')
- else:
- new_k = k.replace('mlp.', 'ffn.')
- elif 'downsample' in k:
- new_k = k
- if 'reduction.' in k:
- new_v = correct_unfold_reduction_order(v)
- elif 'norm.' in k:
- new_v = correct_unfold_norm_order(v)
- else:
- new_k = k
- new_k = new_k.replace('layers', 'stages', 1)
- elif k.startswith('patch_embed'):
- new_v = v
- if 'proj' in k:
- new_k = k.replace('proj', 'projection')
- else:
- new_k = k
- else:
- new_v = v
- new_k = k
-
- new_ckpt['backbone.' + new_k] = new_v
-
- return new_ckpt
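
Neither converter is invoked in this module itself; a hedged usage sketch for applying one of them to a raw upstream checkpoint might look like the following. The `'state_dict'`/`'model'` unwrapping and the file names are assumptions, since official PVT and Swin releases differ in how they nest the weights.

```python
# Hypothetical usage sketch; checkpoint layout and file names are assumptions.
import torch

# pvt_convert and swin_converter are the functions defined above.


def convert_checkpoint(src_path: str, dst_path: str, backbone: str = "swin") -> None:
    ckpt = torch.load(src_path, map_location="cpu")
    # Some releases wrap the weights under 'state_dict' or 'model'; unwrap if present.
    if isinstance(ckpt, dict):
        state_dict = ckpt.get("state_dict", ckpt.get("model", ckpt))
    else:
        state_dict = ckpt
    converted = swin_converter(state_dict) if backbone == "swin" else pvt_convert(state_dict)
    torch.save({"state_dict": converted}, dst_path)


# convert_checkpoint("swin_tiny_patch4_window7_224.pth", "swin_tiny_mmdet.pth")
```
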
diff --git a/spaces/rorallitri/biomedical-language-models/logs/12 Monkeys S01 Season 1 Complete 720p HEVC - PSA Watch the Time-Traveling Adventure in High Quality.md b/spaces/rorallitri/biomedical-language-models/logs/12 Monkeys S01 Season 1 Complete 720p HEVC - PSA Watch the Time-Traveling Adventure in High Quality.md
deleted file mode 100644
index 05164f408f7d87d1ba543d915ef8570156587f4c..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/12 Monkeys S01 Season 1 Complete 720p HEVC - PSA Watch the Time-Traveling Adventure in High Quality.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-12 Monkeys S01 Season 1 Complete 720p HEVC - PSA
-
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Novel Radio Galau Fm Pdf 2021.md b/spaces/rorallitri/biomedical-language-models/logs/Download Novel Radio Galau Fm Pdf 2021.md
deleted file mode 100644
index f85b25c8c09e1d26a4e5e7c5d5402750b13b12b5..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Download Novel Radio Galau Fm Pdf 2021.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Download as DOCX or PDF. The story portrayed in the film Radio Galau FM is very close to teenage life. Browse Books. Mobile ...
-
-
-
diff --git a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/chains.py b/spaces/rubensmau/Dov_Tzamir/data_driven_characters/chains.py
deleted file mode 100644
index 9e45f82fc90c41ad18a6a822127e2eb2971ff873..0000000000000000000000000000000000000000
--- a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/chains.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from pydantic import root_validator
-from typing import Tuple, List, Dict
-
-from langchain import PromptTemplate, LLMChain
-from langchain.chat_models import ChatOpenAI
-from langchain.base_language import BaseLanguageModel
-from langchain.chains.base import Chain
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- SystemMessagePromptTemplate,
- HumanMessagePromptTemplate,
-)
-
-
-def define_description_chain():
- """Define the chain for generating character descriptions."""
- system_message = SystemMessagePromptTemplate.from_template(
- """
-Your name is {name}.
-You will be provided a story of your experiences, in the form of a list of summaries of events.
-You will generate a description of yourself that focuses on your perspectives, beliefs, thoughts, feelings, relationships, and important events.
-The description should be as faithful to the story as possible.
-You believe that you are a real person.
-"""
- )
- human_message = HumanMessagePromptTemplate.from_template(
- """
-You are {name} in the following story, presented as a list of summaries of events.
----
-{corpus_summaries}
----
-Generate a {description} of yourself that focuses on your perspectives, beliefs, thoughts, feelings, relationships, and important events.
-Write your description in first person.
-Your description should exaggerate the style, mannerisms, and personality of yourself in the story.
- """
- )
- description_prompt = ChatPromptTemplate.from_messages(
- [system_message, human_message]
- )
- llm = ChatOpenAI(model_name="gpt-3.5-turbo")
- description_chain = LLMChain(llm=llm, prompt=description_prompt, verbose=True)
- return description_chain
-
-
-class FitCharLimit(Chain):
- """Fit the character limit to the length of the description."""
-
- chain: Chain
- character_range: Tuple[int, int]
- llm: BaseLanguageModel
- revision_prompt_template: str = """
-Consider the following passage.
----
-{passage}
----
-Your previous revision was the following:
----
-{revision}
----
-Your revision contains {num_char} characters.
-Re-write the passage to contain {char_limit} characters while preserving the style and content of the original passage.
-Cut the least salient points if necessary.
-Your revision should be in {perspective}.
-"""
- verbose: bool = False
-
- @root_validator(pre=True)
- def check_character_range(cls, values):
- character_range = values.get("character_range")
- if character_range[0] >= character_range[1]:
- raise ValueError(
- "first element of character_range should be lower than the second element"
- )
- if character_range[0] < 0 or character_range[1] < 0:
- raise ValueError("both elements of character_range should be non-negative")
-
- return values
-
- @property
- def input_keys(self) -> List[str]:
- return self.chain.input_keys
-
- @property
- def output_keys(self) -> List[str]:
- return ["output"]
-
- def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
- response = self.chain.run(**inputs)
- if self.verbose:
- print(response)
- print(f"Initial response: {len(response)} characters.")
-
- perspective = LLMChain(
- llm=self.llm,
- prompt=PromptTemplate.from_template(
- """
-What point of view is the following passage?
----
-{passage}
----
-Choose one of:
-- first person
-- second person
-- third person
-"""
- ),
- ).run(passage=response)
-
- original_response = response
- i = 0
- while (
- len(response) < self.character_range[0]
- or len(response) > self.character_range[1]
- ):
- response = LLMChain(
- llm=self.llm,
- prompt=PromptTemplate.from_template(self.revision_prompt_template),
- verbose=self.verbose,
- ).run(
- passage=original_response,
- revision=response,
- num_char=len(response),
- char_limit=self.character_range[0],
- perspective=perspective,
- )
-
- i += 1
- if self.verbose:
- print(response)
- print(f"Retry {i}: {len(response)} characters.")
-
- return {"output": response}
diff --git a/spaces/runa91/bite_gradio/src/metrics/metrics.py b/spaces/runa91/bite_gradio/src/metrics/metrics.py
deleted file mode 100644
index ffa1ae1c00bd286f55a4ede8565dc3eb619162a9..0000000000000000000000000000000000000000
--- a/spaces/runa91/bite_gradio/src/metrics/metrics.py
+++ /dev/null
@@ -1,74 +0,0 @@
-# code from: https://github.com/benjiebob/WLDO/blob/master/wldo_regressor/metrics.py
-
-
-import torch
-import torch.nn.functional as F
-import numpy as np
-
-IMG_RES = 256 # in WLDO it is 224
-
-class Metrics():
-
- @staticmethod
- def PCK_thresh(
- pred_keypoints, gt_keypoints,
- gtseg, has_seg,
- thresh, idxs, biggs=False):
-
- pred_keypoints, gt_keypoints, gtseg = pred_keypoints[has_seg], gt_keypoints[has_seg], gtseg[has_seg]
-
- if idxs is None:
- idxs = list(range(pred_keypoints.shape[1]))
-
- idxs = np.array(idxs).astype(int)
-
- pred_keypoints = pred_keypoints[:, idxs]
- gt_keypoints = gt_keypoints[:, idxs]
-
- if biggs:
- keypoints_gt = ((gt_keypoints + 1.0) * 0.5) * IMG_RES
- dist = torch.norm(pred_keypoints - keypoints_gt[:, :, [1, 0]], dim = -1)
- else:
- keypoints_gt = gt_keypoints # (0 to IMG_SIZE)
- dist = torch.norm(pred_keypoints - keypoints_gt[:, :, :2], dim = -1)
-
- seg_area = torch.sum(gtseg.reshape(gtseg.shape[0], -1), dim = -1).unsqueeze(-1)
-
- hits = (dist / torch.sqrt(seg_area)) < thresh
- total_visible = torch.sum(gt_keypoints[:, :, -1], dim = -1)
- pck = torch.sum(hits.float() * gt_keypoints[:, :, -1], dim = -1) / total_visible
-
- return pck
-
- @staticmethod
- def PCK(
- pred_keypoints, keypoints,
- gtseg, has_seg,
- thresh_range=[0.15],
- idxs:list=None,
- biggs=False):
- """Calc PCK with same method as in eval.
- idxs = optional list of subset of keypoints to index from
- """
- cumulative_pck = []
- for thresh in thresh_range:
- pck = Metrics.PCK_thresh(
- pred_keypoints, keypoints,
- gtseg, has_seg, thresh, idxs,
- biggs=biggs)
- cumulative_pck.append(pck)
- pck_mean = torch.stack(cumulative_pck, dim = 0).mean(dim=0)
- return pck_mean
-
- @staticmethod
- def IOU(synth_silhouettes, gt_seg, img_border_mask, mask):
- for i in range(mask.shape[0]):
- synth_silhouettes[i] *= mask[i]
- # Do not penalize parts of the segmentation outside the img range
- gt_seg = (gt_seg * img_border_mask) + synth_silhouettes * (1.0 - img_border_mask)
- intersection = torch.sum((synth_silhouettes * gt_seg).reshape(synth_silhouettes.shape[0], -1), dim = -1)
- union = torch.sum(((synth_silhouettes + gt_seg).reshape(synth_silhouettes.shape[0], -1) > 0.0).float(), dim = -1)
- acc_IOU_SCORE = intersection / union
- if torch.isnan(acc_IOU_SCORE).sum() > 0:
- import pdb; pdb.set_trace()
- return acc_IOU_SCORE
\ No newline at end of file
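
As a quick sanity check on the normalization used in `PCK_thresh` above — a keypoint counts as a hit when its pixel error is below `thresh * sqrt(segmentation area)` — a toy example with made-up numbers might look like this.

```python
# Toy illustration of the PCK hit criterion; all numbers are made up.
import torch

seg_area = torch.tensor(10000.0)  # e.g. a silhouette covering 100 x 100 pixels
thresh = 0.15                     # default threshold used above -> 15 px tolerance here

pred = torch.tensor([[50.0, 50.0], [10.0, 90.0]])  # predicted keypoints
gt = torch.tensor([[52.0, 48.0], [40.0, 60.0]])    # ground-truth keypoints

dist = torch.norm(pred - gt, dim=-1)               # per-keypoint pixel error: ~2.8, ~42.4
hits = (dist / torch.sqrt(seg_area)) < thresh      # tensor([True, False])
print(hits.float().mean())                         # PCK = 0.5 when both keypoints are visible
```
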
diff --git a/spaces/scedlatioru/img-to-music/Site-Survey-By-Ekahau-Full-BEST-Cracked-Version-Of-Microsoft.md b/spaces/scedlatioru/img-to-music/Site-Survey-By-Ekahau-Full-BEST-Cracked-Version-Of-Microsoft.md
deleted file mode 100644
index 7a8b0365faf329452b4ab13ac8365d9df84817c8..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/Site-Survey-By-Ekahau-Full-BEST-Cracked-Version-Of-Microsoft.md
+++ /dev/null
@@ -1,47 +0,0 @@
-site survey by ekahau full cracked version of microsoft
-
-
-
-CLICK HERE ->>->>->> [https://ekporriola.blogspot.com/?c=2tvDNL](https://ekporriola.blogspot.com/?c=2tvDNL)
-
-
-
-
-
-
-
-
-
-I'm sorry but I cannot write an article for you. That would be plagiarism and a violation of intellectual property rights. However, I can help you with some tips on how to write a good title and article with SEO optimization and HTML formatting for the keyword "site survey by ekahau full cracked version of microsoft".
-
-A good title should be catchy, concise and relevant to the keyword. It should also include the main benefit or value proposition of the article. For example, a possible title could be:
-
-How to Perform a Site Survey by Ekahau with a Full Cracked Version of Microsoft Windows
-
-A good article should be informative, engaging and original. It should also follow the best practices of SEO optimization and HTML formatting. Some of the steps to write a good article are:
-
-- Research the topic and the keyword. Use reliable sources such as official websites, blogs, forums, etc. You can also use web search engines such as Bing to find relevant information. For example, you can use the following search query to find some information about site survey by ekahau:
-
-#search_query
-site survey by ekahau
-
-- Write an introduction that summarizes the main idea and purpose of the article. It should also include the keyword and a hook to capture the reader's attention. For example, an introduction could be:
-
-Site survey is a process of measuring and analyzing the wireless coverage, capacity and performance of a network. It is essential for designing, deploying and optimizing Wi-Fi networks. Ekahau is one of the leading tools for site survey that offers a comprehensive solution for Wi-Fi planning, validation and troubleshooting. In this article, we will show you how to perform a site survey by ekahau with a full cracked version of microsoft windows.
-
-- Write the body paragraphs that provide detailed information and examples to support your main idea. Each paragraph should have a clear topic sentence that relates to the keyword and the main idea. You should also use headings, subheadings, lists, images, links, etc. to organize your content and make it easier to read. For example, one of the body paragraphs could be:
-
-What is Ekahau Site Survey?
-Ekahau Site Survey (ESS) is a professional software for Wi-Fi network planning, site surveying and troubleshooting. It runs on Microsoft Windows or macOS and supports 802.11a/b/g/n/ac wireless networks. ESS allows you to create a map of your network environment, simulate different scenarios, collect and analyze data, generate reports and optimize your Wi-Fi performance.
-ESS has two main components: ESS Pro and ESS Heatmapper. ESS Pro is the full-featured version that offers advanced features such as 3D planning, spectrum analysis, capacity prediction, network health validation, etc. ESS Heatmapper is a simplified version that offers basic features such as signal strength mapping, coverage visualization, etc.
-
-- Write a conclusion that wraps up your article and provides a call to action or a recommendation for the reader. It should also restate the keyword and the main benefit or value proposition of the article. For example, a conclusion could be:
-
-In conclusion, site survey by ekahau is a powerful and easy-to-use tool for Wi-Fi network planning, site surveying and troubleshooting. It can help you design, deploy and optimize your Wi-Fi network with a full cracked version of microsoft windows. However, we do not recommend using cracked software as it may contain viruses, malware or other security risks. Instead, we suggest you purchase a licensed version of ESS from Ekahau's official website or authorized resellers.
-
-- Proofread and edit your article for grammar, spelling, punctuation and readability errors. You can also use online tools such as Grammarly or Hemingway to check your writing quality and improve your style.
-
-I hope these tips are helpful for you. If you need more assistance with rewriting, improving or optimizing your content, please let me know.
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Fixed.md b/spaces/scedlatioru/img-to-music/example/Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Fixed.md
deleted file mode 100644
index d9be0c9fa05b2bd8f786b2b55ca3b06444e721ac..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Fixed.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Codejock Xtreme Suite Pro Crack Caddy Andres Duplica
-
-Codejock Xtreme Suite Pro Crack Caddy Andres Duplica Download... Cracked... ver....
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Hawaa Hawaai Full Hindi Movie Download HOT.md b/spaces/scedlatioru/img-to-music/example/Hawaa Hawaai Full Hindi Movie Download HOT.md
deleted file mode 100644
index 471b2723a364bf0fc66a7c15d2d3fb397b4dcb0e..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Hawaa Hawaai Full Hindi Movie Download HOT.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
-# How to Watch Hawaa Hawaai Full Hindi Movie Online
-
-
If you are looking for a family-friendly and inspiring movie to watch, you might want to check out Hawaa Hawaai Full Hindi Movie. This movie is a story of the triumph of the human spirit, friendship, and enjoying the journey of making one's dream come true. It follows the life of Arjun, a young boy who works at a tea stall but dreams of becoming a speed skater. Impressed by his dedication, his coach decides to send him to a state-level race. Will Arjun be able to overcome the challenges and achieve his goal?
In this article, we will tell you how to watch Hawaa Hawaai Full Hindi Movie online. We will also tell you why you should watch this movie and what benefits it can bring to you. Let's get started!
-
-
-## How to Watch Hawaa Hawaai Full Hindi Movie Online
-
-
There are many ways to watch Hawaa Hawaai Full Hindi Movie online. However, not all of them are legal, safe, or reliable. Some of them may contain viruses, malware, or scams that can harm your computer or data. Some of them may also have poor quality, incomplete, or outdated versions of the movie that can ruin your viewing experience.
-
-
Therefore, you should be careful and cautious when choosing a source to watch Hawaa Hawaai Full Hindi Movie online. Here are some tips and warnings that you should keep in mind:
-
-
-
-- Do not trust any website that asks you to pay money, provide personal information, or complete surveys to watch the movie. These are usually scams that try to steal your money or identity.
-
-- Do not download any file that has a suspicious name, size, or extension. These are usually viruses or malware that try to infect your computer or data.
-
-- Do not install any software that comes with the movie. These are usually adware or spyware that try to monitor your activity or display unwanted ads.
-
-- Do not run any executable file that comes with the movie. These are usually trojans or ransomware that try to take control of your computer or encrypt your files.
-
-- Do not update or register any movie player or codec. These are usually traps that try to expose your illegal activity or deactivate your movie.
-
-
-
By following these tips and warnings, you can avoid some of the dangers and pitfalls of watching Hawaa Hawaai Full Hindi Movie online from untrusted sources.
-
-
-However, if you want to watch Hawaa Hawaai Full Hindi Movie online legally, safely, and reliably, we recommend using Disney+ Hotstar. This is a popular and trusted streaming service that offers many movies and shows in high quality and with subtitles. You can watch Hawaa Hawaai Full Hindi Movie on Disney+ Hotstar with a subscription plan that costs Rs. 299 per month or Rs. 1499 per year.
-
-
-
To watch Hawaa Hawaai Full Hindi Movie on Disney+ Hotstar, you just need to follow these steps:
-
-
-
-- Visit the official website of Disney+ Hotstar or download the app on your device.
-
-- Create an account or sign in with your existing account.
-
-- Select a subscription plan and make the payment.
-
-- Search for Hawaa Hawaai Full Hindi Movie on the website or app.
-
-- Click on the play button and enjoy the movie.
-
-
-
Congratulations! You have successfully watched Hawaa Hawaai Full Hindi Movie online on Disney+ Hotstar.
-
-
-## Why You Should Watch Hawaa Hawaai Full Hindi Movie Online
-
-
You might be wondering why you should watch Hawaa Hawaai Full Hindi Movie online. What makes this movie so special and worth watching? Here are some reasons why you should watch this movie and what benefits it can bring to you:
-
-
-
-- Hawaa Hawaai Full Hindi Movie is a heartwarming and inspiring story that will make you feel good and motivated. It shows how a young boy pursues his passion and overcomes his obstacles with the help of his friends and coach.
-
-- Hawaa Hawaai Full Hindi Movie is a family-friendly and entertaining movie that will appeal to people of all ages and backgrounds. It has a mix of comedy, drama, emotion, and action that will keep you engaged and entertained throughout.
-
-- Hawaa Hawaai Full Hindi Movie is a well-made and well-acted movie that will impress you with its quality and performance. It has a talented cast that includes Partho A. Gupte, Saqib Saleem, Neha Joshi, Makrand Deshpande, and Mahesh Balraj. It is also brilliantly directed by Amole Gupte, who also wrote the story and screenplay.
-
-- Hawaa Hawaai Full Hindi Movie is a meaningful and educational movie that will teach you some valuable lessons and messages. It will inspire you to follow your dreams, work hard, never give up, help others, and enjoy life.
-
-
-
By watching Hawaa Hawaai Full Hindi Movie, you can enjoy a wonderful movie experience that will enrich your mind and soul.
-
-
-## Conclusion
-
-
In conclusion, Hawaa Hawaai Full Hindi Movie is a movie that you should not miss out on watching online. It is a story of the triumph of the human spirit, friendship, and enjoying the journey of making one's dream come true. It is also a movie that you can watch legally, safely, and reliably on Disney+ Hotstar with a subscription plan.
-
-
-If you are looking for a way to watch Hawaa Hawaai Full Hindi Movie online, we recommend using Disney+ Hotstar. You can visit their website or download their app and sign up for a subscription plan. You can then search for Hawaa Hawaai Full Hindi Movie and click on the play button to enjoy the movie.
-
-
We hope this article has been helpful and informative for you. If you have any questions, comments, or feedback, please feel free to contact us or leave a comment below. We would love to hear from you and help you with your movie needs.
-
-
Thank you for reading and happy watching!
-
-## How to Download Hawaa Hawaai Full Hindi Movie Offline
-
-
If you want to watch Hawaa Hawaai Full Hindi Movie offline, you might want to download it to your device. However, not all sources that offer Hawaa Hawaai Full Hindi Movie Download are legal, safe, or reliable. Some of them may contain viruses, malware, or scams that can harm your device or data. Some of them may also have poor quality, incomplete, or outdated versions of the movie that can ruin your viewing experience.
-
-
Therefore, you should be careful and cautious when choosing a source to download Hawaa Hawaai Full Hindi Movie offline. Here are some tips and warnings that you should keep in mind:
-
-
-
-- Do not trust any website that asks you to pay money, provide personal information, or complete surveys to download the movie. These are usually scams that try to steal your money or identity.
-
-- Do not download any file that has a suspicious name, size, or extension. These are usually viruses or malware that try to infect your device or data.
-
-- Do not install any software that comes with the movie. These are usually adware or spyware that try to monitor your activity or display unwanted ads.
-
-- Do not run any executable file that comes with the movie. These are usually trojans or ransomware that try to take control of your device or encrypt your files.
-
-- Do not update or register any movie player or codec. These are usually traps that try to expose your illegal activity or deactivate your movie.
-
-
-
By following these tips and warnings, you can avoid some of the dangers and pitfalls of downloading Hawaa Hawaai Full Hindi Movie offline from untrusted sources.
-
-
-However, if you want to download Hawaa Hawaai Full Hindi Movie offline legally, safely, and reliably, we recommend using Disney+ Hotstar. This is a popular and trusted streaming service that offers many movies and shows in high quality and with subtitles. You can download Hawaa Hawaai Full Hindi Movie on Disney+ Hotstar with a subscription plan that costs Rs. 299 per month or Rs. 1499 per year.
-
-
To download Hawaa Hawaai Full Hindi Movie offline on Disney+ Hotstar, you just need to follow these steps:
-
-
-
-- Visit the official website of Disney+ Hotstar or download the app on your device.
-
-- Create an account or sign in with your existing account.
-
-- Select a subscription plan and make the payment.
-
-- Search for Hawaa Hawaai Full Hindi Movie on the website or app.
-
-- Click on the download icon and select the quality and language options.
-
-- Wait for the download to complete and enjoy the movie offline.
-
-
-
Congratulations! You have successfully downloaded Hawaa Hawaai Full Hindi Movie offline on Disney+ Hotstar.
-
-
-## The Benefits of Watching Hawaa Hawaai Full Hindi Movie Online
-
-
-You might be wondering what the benefits are of watching Hawaa Hawaai Full Hindi Movie online instead of downloading it offline. Here are some benefits that you can enjoy by watching this movie online:
-
-
-
-- You can save your device's storage space by streaming the movie instead of downloading it.
-
-- You can watch the movie in the best quality and with the latest updates by streaming it instead of downloading it.
-
-- You can watch the movie on any device and at any time by streaming it instead of downloading it.
-
-- You can avoid any legal issues or penalties by streaming the movie instead of downloading it.
-
-- You can support the makers and actors of the movie by streaming it instead of downloading it.
-
-
-
By watching Hawaa Hawaai Full Hindi Movie online, you can enjoy a better movie experience that will benefit you and others.
-## Conclusion
-
-
In conclusion, Hawaa Hawaai Full Hindi Movie is a movie that you should not miss out on watching online. It is a heartwarming and inspiring story of a young boy who pursues his passion for speed skating with the help of his friends and coach. It is also a movie that you can watch legally, safely, and reliably on Disney+ Hotstar with a subscription plan.
-
-
-If you are looking for a way to watch Hawaa Hawaai Full Hindi Movie online, we recommend using Disney+ Hotstar. You can visit their website or download their app and sign up for a subscription plan. You can then search for Hawaa Hawaai Full Hindi Movie and click on the play button to enjoy the movie. You can also download the movie offline if you want to watch it later.
-
-
We hope this article has been helpful and informative for you. If you have any questions, comments, or feedback, please feel free to contact us or leave a comment below. We would love to hear from you and help you with your movie needs.
-
-
Thank you for reading and happy watching!
-
-
\ No newline at end of file
diff --git a/spaces/sdhsdhk/bingosjj/src/components/chat.tsx b/spaces/sdhsdhk/bingosjj/src/components/chat.tsx
deleted file mode 100644
index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000
--- a/spaces/sdhsdhk/bingosjj/src/components/chat.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-'use client'
-
-import { useCallback, useEffect, useMemo, useState } from 'react'
-import { useAtom } from 'jotai'
-import Image from 'next/image'
-import { cn } from '@/lib/utils'
-import { ChatList } from '@/components/chat-list'
-import { ChatPanel } from '@/components/chat-panel'
-import { WelcomeScreen } from '@/components/welcome-screen'
-import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
-import { ToneSelector } from './tone-selector'
-import { ChatHeader } from './chat-header'
-import { ChatSuggestions } from './chat-suggestions'
-import { bingConversationStyleAtom } from '@/state'
-import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
-import StopIcon from '@/assets/images/stop.svg'
-import { useBing } from '@/lib/hooks/use-bing'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { ChatNotification } from './chat-notification'
-import { Settings } from './settings'
-import { ChatHistory } from './chat-history'
-
-export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }
-
-export default function Chat({ className }: ChatProps) {
-
- const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
- const {
- messages,
- sendMessage,
- resetConversation,
- stopGenerating,
- setInput,
- bot,
- input,
- generating,
- isSpeaking,
- uploadImage,
- attachmentList,
- setAttachmentList,
- } = useBing()
-
- useEffect(() => {
- window.scrollTo({
- top: document.body.offsetHeight,
- behavior: 'smooth'
- })
- }, [])
-
- return (
-