It is interesting to note that self-reported hypnotic sleep disturbances are very common in clinical practice; typical examples are sleepwalking, sleep paralysis, and nightmares [36]. Also, in the case of nightmares, the dreamer is usually awoken by a sound, for instance an object hitting the window [36, 37].
-The results of the present study suggest that pain sensation during sleep does not reduce dream recall per se, but that it may affect dream content to a similar degree as external stimuli like sound, touch, and smell. This hypothesis is supported by the fact that the frequency of pain dream reports in the patient sample was surprisingly low in comparison to previous studies, although the pain intensity reported by the patients was similar to the studies mentioned above. It is also in accordance with the results of Nielsen et al. [12]; their data further support the idea that it might be necessary to apply rather strong stimuli to induce pain during sleep.
-What distinguishes pain dreams from dreams with a different content is the high pain intensity. Although the percentage of these pain dreams was higher than the percentage of pain dreams in most of the studies mentioned above, the absolute number of pain dreams is probably too low to be a driving force in most nightmares. The relationship between dreaming and pain is obviously not simple. There are various confounding factors influencing the relationship between pain and dreaming: sleep paralysis, hypnagogic or hypnopompic symptoms, pain medication, and psychological stress can affect the frequency of pain dreams.
If you are a fan of R. Kelly's musical saga Trapped in the Closet, you might be wondering how to watch it online in high definition. Trapped in the Closet is a rap opera by American R&B singer R. Kelly, which tells the story of a one-night stand that sets off a chain of events, gradually revealing a greater web of lies, sex, and deceit. The series consists of 33 chapters, released between 2005 and 2012.
-Trapped in the Closet is available to stream online on various platforms, depending on your region and preference. Here are some of the options you can choose from:
-Trapped in the Closet is a unique and controversial musical masterpiece that has been described as "the most successful hip-hopera ever" by The Guardian. If you want to watch it online in HD, you have several options to choose from. Just make sure you have enough popcorn and time to enjoy this epic tale of sex and suspicion.
-
-Trapped in the Closet is a rap opera that follows the adventures of Sylvester, a married man who has an affair with a woman named Cathy. However, things get complicated when Cathy's husband Rufus comes home unexpectedly, forcing Sylvester to hide in the closet. From there, the story unfolds with a series of twists and turns, involving characters such as Chuck, Rufus's secret lover; Twan, Sylvester's friend who just got out of prison; Bridget, a midget stripper who is pregnant with Big Man's baby; and many more. Each chapter ends with a cliffhanger that leaves the audience wondering what will happen next.
-Trapped in the Closet is a musical masterpiece that combines R. Kelly's smooth vocals, catchy melodies, and witty lyrics. The songs are narrated by R. Kelly himself, who plays multiple roles and uses different voices to portray each character. The songs are also accompanied by a music video that shows the action on screen, with R. Kelly lip-syncing to his own narration. The music video is shot in a minimalist style, with mostly static shots and simple sets. The focus is on the facial expressions and body language of the actors, who deliver their lines with dramatic flair.
-
-Trapped in the Closet has received mixed reviews from critics and audiences alike. Some have praised it as a creative and original work of art, while others have criticized it as a ridiculous and self-indulgent vanity project. Some have also questioned the morality and legality of R. Kelly's involvement in the project, given his history of sexual abuse allegations and scandals.
-However, Trapped in the Closet has also gained a cult following among fans who appreciate its humor, absurdity, and entertainment value. The series has been parodied and referenced by various media outlets, such as South Park, Saturday Night Live, The Simpsons, Family Guy, and more. The series has also inspired interactive screenings, where fans dress up as their favorite characters, sing along to the songs, and shout out comments and jokes during the show.
-Trapped in the Closet is a unique and controversial musical saga that has been described as "the most successful hip-hopera ever" by The Guardian. Whether you love it or hate it, you can't deny its impact and influence on pop culture and music history.
Basically, you should always start with a manual campaign: take the first set of keywords you get, add them to the campaign as you usually would, go to the ad group options, add the same structure, and then run an ad group.
-Yasuhiro lived alone with his mother, Hiroko Hagakure, who loves him dearly despite his many flaws. Very little is known about his father, though his father apparently lived with them at some point, as Yasuhiro once mentioned their house burning down because his father fell asleep with a burning cigarette. According to his mother, there was some sort of trouble between them because she did something and let it go on too long, causing them to break up. Yasuhiro himself had several problems with women and money, and he was also held back about four times in middle and high school. He used to attend Kiiwatetsu Commerce High School.
-An account created using Biddable can only make use of one type of Smart Bidding campaign to achieve its goals. The possibility of defining different types of smart campaigns for the same account is thus limited, and this setup should only be used with Biddable accounts. Hagakure is more flexible in this regard, since it allows you to create any number of campaign types.
-If you are a fan of the classic turn-based strategy game Heroes of Might and Magic III, you might be interested in getting the HD Edition that was released in 2015. This edition features updated graphics, widescreen compatibility, a new online multiplayer lobby, and 7 exciting campaign scenarios. However, you might also be wondering how to get a working key generator for this game, so you can enjoy it without any hassle.
-
-A key generator is a software tool that can generate valid serial keys for various games and applications. It can help you bypass the activation process and play the game without any restrictions. However, not all key generators are reliable and safe. Some of them might contain viruses, malware, or spyware that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED.
-There are many websites that claim to offer key generators for various games, including Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED. However, not all of them are trustworthy and legitimate. Some of them might require you to complete surveys, download additional software, or enter your personal details before giving you the key generator. Others might give you fake or invalid keys that will not work or will get you banned from the game.
-
-To avoid these scams and risks, you need to follow some tips when looking for a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED:
-
-Once you have found a reliable key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED, you can use it to generate serial keys for the game. Here are the steps to follow:
-
-Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED is a great game that offers hours of fun and entertainment for strategy lovers. However, if you want to play it without any limitations or restrictions, you might need a key generator to activate it. A key generator is a software tool that can generate valid serial keys for various games and applications. However, not all key generators are safe and reliable. You need to be careful when choosing and using a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED.
-
-We hope this guide has helped you find and use a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED successfully. If you have any questions or comments, feel free to leave them below.
-Using a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED can have many benefits for you as a gamer. Here are some of them:
-
-However, using a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED also comes with some risks and challenges. Here are some of them:
-
-If you are not comfortable or confident with using a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED, you might want to consider some alternatives. Here are some of them:
-Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED is an amazing game that deserves to be played by every strategy fan. However, if you want to play it without paying or waiting, you might need a key generator to activate it. A key generator is a software tool that can generate valid serial keys for various games and applications. However, not all key generators are safe and reliable. You need to be careful when choosing and using a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED.
-
-We hope this article has helped you understand everything you need to know about key generators for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED. If you have any questions or comments, feel free to leave them below.
-After you have activated the game with a key generator, you need to install and play it on your computer. Here are the steps to follow:
-
-Sometimes, you might encounter some problems or errors when using a key generator or playing the game. Here are some common issues and their solutions:
-
-If you want to uninstall and remove Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED from your computer, you need to follow these steps:
-
-Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED is a fantastic game that brings back the classic and nostalgic gameplay of Heroes of Might and Magic III in a modern and improved way. However, if you want to play it without paying or waiting, you might need a key generator to activate it. A key generator is a software tool that can generate valid serial keys for various games and applications. However, not all key generators are safe and reliable. You need to be careful when choosing and using a key generator for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED.
-
-We hope this article has helped you understand everything you need to know about key generators for Heroes.of.Might.and.Magic.3.HD.Edition-RELOADED. If you have any questions or comments, feel free to leave them below.
Dimple Chaddha, a brash 19-year-old Delhi college girl, falls in love with Sunny Singh, a fitness trainer. Raina Parulekar, a 28-year-old independent and successful corporate woman in Mumbai, does business with an art dealer, Deven Shah. Saira Rashid, a 24-year-old sweet, hard-working widow in Lucknow, makes a new friend, the shy Iqbal Khan. Three very different girls each get taken for a lot of money by each of these three men. The problem is that it is actually just one man, Ricky Bahl. He's got the looks and he's got the charm. He could have the pick of the ladies, but love isn't Ricky's priority; money is! A chance encounter unites the three girls and, discovering the truth, they hatch a plan to get their money back. So now unsuspecting Ricky is about to meet his match in the shapely form of Ishika Desai.
-An adaptation of an Argentinian film, this one is, perhaps, most well known for its OST, featuring a track on which lead star Abhishek Bachchan actually raps. In the movie, conman Roy (Bachchan) decides to quit the criminal life after his girlfriend (Priyanka Chopra) finds out about his shady secret life and leaves him. Five years later, he meets two small-time con artists who beg him to teach them his ways. Roy, who is now an alcoholic, is diagnosed with a terminal disease and decides to help the two out as a final act of kindness.
- Kuch Kuch Hota Hai (1998): Kajol, Rani Mukerji, Reema Lagoo. Ladies vs Ricky Bahl movie: check out Ranveer Singh's Ladies vs Ricky Bahl movie release date, review, cast & crew, trailer, songs, teaser, story, budget, first-day collection, and box office. Ladies vs. Ricky Bahl (2011) HD DVD 720p + 1080p Blu-ray.
Do you love superhero movies with a twist of comedy and romance? If yes, then you might have heard of Sky High, a 2005 film that follows the adventures of a group of teenagers who attend a high school for superheroes. The film was a hit among audiences and critics alike, and many people have been wondering if there will ever be a sequel. Well, we have some good news and some bad news for you. The good news is that there is a possibility of a Sky High sequel happening in the near future. The bad news is that it might not be easy to find or watch it, especially if you want to watch it in Hindi. In this article, we will tell you everything you need to know about Sky High and its potential sequel, as well as how to download Sky High 2 movie in Hindi for free. Read on to find out more!
-The film follows Will's journey as he discovers his true powers, faces his enemies, and learns valuable lessons about friendship, loyalty, and self-confidence. The film is a parody of both the superhero genre and the teen high-school genre, as it makes fun of the clichés and tropes of both. The film also features many references and homages to other superhero films and comics, such as Superman, Spider-Man, X-Men, Batman, etc. The film also has a stellar supporting cast that includes Bruce Campbell, Dave Foley, Kevin Heffernan, Lynda Carter, Cloris Leachman, Jim Rash, Kevin McDonald, Tom Kenny, Patrick Warburton, and more.
- The film was well-received by critics and audiences alike. It has a rating of 73% on Rotten Tomatoes and an average score of 6.1/10 on IMDb. It also earned over $86 million worldwide against a budget of $35 million. Many people praised the film for its humor, charm, heart, and originality. The film also has a cult following among fans who love its quirky characters, witty dialogue, and nostalgic appeal.
- Well, there are some reasons to be optimistic about it. First of all, the original cast and crew have expressed their interest and willingness to return for a sequel multiple times over the years. Michael Angarano said in an interview that he would love to do a sequel if it had a good script and story. Kurt Russell also said that he had a lot of fun making the first film and would be happy to reprise his role as The Commander. Director Mike Mitchell also revealed that he had an idea for a sequel titled Save U, which would follow Will and his friends as they go to college for superheroes. He said that he pitched the idea to Disney but they were not interested at that time.
- The film follows Kyle Watson (Duane Martin), a talented high school basketball player who dreams of playing for Georgetown University. He lives in Harlem with his mother (Tonya Pinkins) and his older brother Bernard (David Bailey), who works as a security guard at a local school. Kyle is torn between two mentors: his coach Rollins (Leon), who wants him to focus on his academics and his future; and Birdie (Tupac Shakur), a charismatic drug lord who runs a street basketball tournament called Shoot-Out and wants Kyle to play for his team. Kyle also develops a romantic interest in Lala (Wood Harris), Birdie's sister.
-Meanwhile, Birdie's rival is his older brother Shep (Leon), a former basketball star who quit the game after accidentally killing his best friend Nutso (Marlon Wayans) during a rooftop game. Shep now works as a security guard at Kyle's school and lives in isolation. He is haunted by visions of Nutso and blames himself for his death. He also has a strained relationship with Birdie, who resents him for leaving him alone in the streets.
-The film culminates in the Shoot-Out tournament, where Kyle has to choose between playing for Birdie's team or Rollins' team. He also has to face Shep, who decides to join Rollins' team after being challenged by Birdie. The final game is full of tension, violence, and surprises, as Kyle learns some hard lessons about life, loyalty, and basketball.
-The film features an impressive cast of actors, many of whom went on to have successful careers in Hollywood. Here are some of the main actors and their roles:
-Vizer.tv is a website that allows you to watch movies and TV shows online for free. You can find O Lance do Crime on Vizer.tv with Portuguese dubbing and subtitles. You can also choose between different video qualities and servers. The website has a simple and user-friendly interface, and you can also comment and rate the movies you watch. However, Vizer.tv also has some disadvantages, such as having pop-up ads, requiring registration, and having some broken links. You can access Vizer.tv at .
- Assistironline.net is another website that allows you to watch movies and TV shows online for free. You can also find O Lance do Crime on Assistironline.net with Portuguese dubbing and subtitles. You can also choose between different video qualities and servers. The website has a simple and user-friendly interface, and you can also comment and rate the movies you watch. However, Assistironline.net also has some disadvantages, such as having pop-up ads, requiring registration, and having some broken links. You can access Assistironline.net at .
- SoundCloud is a website that allows you to listen to music and podcasts online for free. You might be surprised to know that you can also find O Lance do Crime on SoundCloud with Portuguese dubbing. You can listen to the movie as an audio file on SoundCloud, which might be convenient if you don't want to watch the video or if you have a slow internet connection. The website has a simple and user-friendly interface, and you can also comment and like the audio files you listen to. However, SoundCloud also has some disadvantages, such as having ads, requiring registration, and having low audio quality. You can access SoundCloud at .
- Piracy is the act of copying or distributing copyrighted material without permission or payment. Copyright infringement is the legal term for violating the rights of the copyright owner. If you download O Lance do Crime illegally from a website that does not have the rights to distribute it, you are committing piracy and copyright infringement. This is not only unethical but also illegal, and you could face serious consequences such as fines, lawsuits, or even jail time.
- Malware is any software that is designed to harm or disrupt your computer or device. Viruses are a type of malware that can infect your computer or device by copying themselves from one file to another. If you download O Lance do Crime illegally from a website that is not secure or trustworthy, you could expose your computer or device to malware and viruses. This could damage your system, compromise your data, or steal your personal information.
- Do you want to download your favorite YouTube videos and watch them offline? Do you want to convert them to different formats and play them on various devices? Do you want to do all this for free and without any limitations? If you answered yes to any of these questions, then you might be interested in Free YouTube Download 4.3.9.129 Crack 2020 Serial Key.
-In this article, we will tell you everything you need to know about Free YouTube Download 4.3.9.129 Crack 2020 Serial Key, including what it is, why you need it, how to get it, what features it offers, and what are its pros and cons. We will also answer some frequently asked questions about this software at the end of the article.
-Free YouTube Download is a popular software that allows you to download YouTube videos and save them on your computer or mobile device. You can choose from various formats and resolutions, such as MP4, MP3, AVI, MKV, etc., and adjust the quality and size of the output file according to your needs.
-Free YouTube Download also lets you convert YouTube videos to other formats, such as audio files or DVD movies. You can also batch download multiple videos at once, which saves you time and bandwidth.
-Free YouTube Download is not completely free, as it has some limitations and restrictions in its free version. For example, you can only download up to 25 videos per day, and you cannot download playlists or channels. You also have to deal with annoying ads and pop-ups that may interfere with your user experience.
-To unlock the full potential of Free YouTube Download, you need to purchase a premium subscription that costs $19 per year or $29 for a lifetime license. However, not everyone can afford or wants to pay for this software.
-This is where a crack and a serial key come in handy. A crack is a modified version of the original software that bypasses its security features and allows you to use it without paying anything. A serial key is a code that activates the software and grants you access to all its features.
-By using a crack and a serial key, you can enjoy the full version of Free YouTube Download without spending a dime.
-If you are interested in getting Free YouTube Download 4.3.9.129 Crack 2020 Serial Key, here are the steps you need to follow:
-The first thing you need to do is find a trustworthy website that offers the crack file for Free YouTube Download 4.3.9.129 Crack 2020 Serial Key. There are many websites that claim to provide this file, but not all of them are safe or genuine.
-You should be careful when choosing a source for downloading the crack file, as some of them may contain viruses or malware that can harm your device or data.
-Once you have downloaded the crack file, you need to extract it using a tool like WinRAR or 7-Zip.
-Then, you need to run the setup file and follow the instructions on the screen.
-During the installation process, you will be asked to enter a serial key to activate the software.
-Copy one of the serial keys provided with the crack and paste it into the required field.
-Congratulations! You have successfully installed Free YouTube Download 4.3.9.129 Crack 2020 Serial Key on your device.
-You can now launch the software and start downloading your favorite YouTube videos without any limitations or ads.
-Free YouTube Download 4.3.9.129 Crack 2020 Serial Key offers many features that make it one of the best software for downloading YouTube videos.
-You can choose from a wide range of formats and resolutions for your downloaded videos, such as MP4, MP3, AVI, MKV, FLV, WEBM, etc.
-You can also select the quality and size of the output file according to your preferences and device compatibility.
-You can also convert your downloaded videos to other formats that suit your needs.
-For example, you can convert YouTube videos to MP3 files if you only want to listen to the audio part.
-You can also convert them to MP4 or AVI files if you want to play them on different devices or edit them with other software.
-You don't have to download one video at a time with Free YouTube Download 4.3.9.129 Crack 2020 Serial Key.
-You can batch download multiple videos at once by adding them to a queue or copying their URLs from a text file.
-This way, you can save time and bandwidth by downloading several videos in one go.
-You can also customize your download settings and preferences with Free YouTube Download 4.3.9.129 Crack 2020 Serial Key, and more.
- In conclusion, Free YouTube Download 4.3.9.129 Crack 2020 Serial Key is a powerful and convenient software that allows you to download and convert YouTube videos for free and without any limitations.
-However, it is also illegal and risky to use this software, as you may face legal troubles or damage your device or data.
-We do not recommend using this software, as it is against the law and the ethics of the software industry.
-We suggest that you either purchase a legitimate license for Free YouTube Download or use an alternative free YouTube downloader that is legal and safe to use.
-Here are some frequently asked questions about Free YouTube Download 4.3.9.129 Crack 2020 Serial Key:
-
-Is Free YouTube Download 4.3.9.129 Crack 2020 Serial Key safe to use? No, it is not safe to use this software, as it may contain viruses or malware that can harm your device or data. You should always scan any file you download from the internet with a reliable antivirus software before opening it.
-Is Free YouTube Download 4.3.9.129 Crack 2020 Serial Key legal to use? No, it is not legal to use this software, as it violates the terms and conditions of the software developer and the YouTube platform. You may face legal consequences or penalties if you are caught using this software.
-What are some alternative free YouTube downloaders that are legal and safe to use? Some alternative free YouTube downloaders that are legal and safe to use are:
-
-YTD Video Downloader: A simple and easy-to-use software that allows you to download YouTube videos in various formats and resolutions.
-Videoder: A versatile and powerful software that allows you to download YouTube videos as well as videos from other platforms such as Facebook, Instagram, TikTok, etc.
-ClipGrab: A user-friendly and fast software that allows you to download YouTube videos and convert them to MP3, MP4, AVI, etc.
-
-How can I contact the developer of Free YouTube Download? You can contact the developer of Free YouTube Download by visiting their official website here . You can also follow them on their social media accounts such as Facebook, Twitter, Instagram, etc.
-How can I support the developer of Free YouTube Download? You can support the developer of Free YouTube Download by purchasing a premium subscription for their software or by donating to them via PayPal or Bitcoin.
-
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Freelancer Game Download Vollversion Das ultimative Handbuch fr angehende Raumfahrer.md b/spaces/raedeXanto/academic-chatgpt-beta/Freelancer Game Download Vollversion Das ultimative Handbuch fr angehende Raumfahrer.md
deleted file mode 100644
index a367efc18c5439e0eb050118f8292bfc10ab0c1a..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Freelancer Game Download Vollversion Das ultimative Handbuch fr angehende Raumfahrer.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-Freelancer Game Download Vollversion
-Do you love space games? Do you want to explore a vast galaxy full of planets, stars, asteroids, and aliens? Do you want to trade, fight, or smuggle your way to fame and fortune? If you answered yes to any of these questions, then you should definitely check out Freelancer, one of the best space games ever made.
-freelancer game download vollversion Download File >>> https://tinourl.com/2uL3KU
-Freelancer is a space trading and combat simulation game developed by Digital Anvil and published by Microsoft Game Studios in 2003. It is a sequel to Starlancer, a combat flight simulator released in 2000. Freelancer lets you play as Edison Trent, a freelance pilot who gets involved in a galactic conspiracy after surviving an attack on a space station. You can choose your own path in the game, whether you want to follow the main story, take on side missions, or just roam around the galaxy.
-In this article, we will tell you everything you need to know about Freelancer, including its story, gameplay, graphics, sound, and how to download and install it on your PC. We will also show you how to enhance your gaming experience with mods and patches that make Freelancer look and sound even better than before. So buckle up and get ready for an epic space adventure!
- The Story of Freelancer
-Freelancer is set in the year 2340, 800 years after the events of Starlancer. In Starlancer, an alliance of Earth's Western nations fought against a bloc of rogue nations known as the Eastern Coalition. The war ended with a devastating nuclear attack on Earth that forced the Alliance to flee into deep space. The survivors eventually settled in a distant star sector called Sirius, where they established four major factions: Liberty, Bretonia, Rheinland, and Kusari.
-Freelancer begins with Edison Trent arriving at Freeport 7, a neutral space station in the border world of Magellan. He is there to meet a mysterious contact who promises him a lucrative deal. However, before he can meet his contact, Freeport 7 is attacked by an unknown force that destroys the station. Trent barely escapes with his life and is rescued by Jun'ko Zane, an agent of the Liberty Security Force (LSF). She tells him that Freeport 7 was not the only target; several other stations across Sirius have been attacked by the same enemy.
-Trent agrees to help Zane investigate the attacks and find out who is behind them. Along the way, they discover that the attackers are part of a secret cult called the Order, which worships an ancient alien race called the Nomads. The Nomads have been manipulating human history for centuries, using their mind control abilities to influence key events and leaders. They are now preparing for a final invasion of Sirius, using their agents within the four factions to sow chaos and discord.
-freelancer pc game free download full version
-freelancer space sim download complete edition
-freelancer game download deutsch kostenlos
-download freelancer game for windows 10
-freelancer game full version download with crack
-freelancer 2 game download full version free
-freelancer game download vollversion gratis
-freelancer game download full version english
-freelancer game download vollversion mac
-freelancer game download full version iso
-freelancer game download vollversion steam
-freelancer game download full version rar
-freelancer game download vollversion online
-freelancer game download full version utorrent
-freelancer game download vollversion patch
-freelancer game download full version mods
-freelancer game download vollversion mega
-freelancer game download full version apk
-freelancer game download vollversion gog
-freelancer game download full version android
-freelancer game download vollversion skidrow
-freelancer game download full version no cd
-freelancer game download vollversion torrent
-freelancer game download full version highly compressed
-freelancer game download vollversion update
-freelancer game download full version softonic
-freelancer game download vollversion mediafire
-freelancer game download full version ocean of games
-freelancer game download vollversion direct link
-freelancer game download full version setup.exe
-freelancer game download vollversion repack
-freelancer game download full version pcworld
-freelancer game download vollversion fitgirl
-freelancer game download full version igg games
-freelancer game download vollversion codex
-freelancer game download full version cnet
-freelancer game download vollversion reloaded
-freelancer game download full version filehippo
-freelancer game download vollversion nosteam
-freelancer game download full version kickass
-freelancer star citizen mod download full version
-crossfire mod for freelancer game download vollversion
-discovery mod for freelancer game download vollversion
-star wars mod for freelancer game download vollversion
-starlancer mod for freelancer game download vollversion
-mass effect mod for freelancer game download vollversion
-firefly mod for freelancer game download vollversion
-stargate mod for freelancer game download vollversion
-halo mod for freelancer game download vollversion
-star trek mod for freelancer game download vollversion
-Trent and Zane must join forces with other freelancers, rebels, pirates, and outcasts to stop the Nomads from destroying humanity. They will also uncover secrets about their own pasts and destinies that will change their lives forever.
- The Gameplay of Freelancer
-Freelancer is a game that gives you a lot of freedom and choice. You can play it however you want, depending on your mood and style. You can follow the main story missions that advance the plot and unlock new locations and equipment. You can take on side missions that offer rewards and reputation with different factions. You can trade goods between planets and stations to make money. You can fight enemies such as pirates, bounty hunters, or rival factions. You can explore hidden systems and find rare items or secrets. You can even ignore all of that and just fly around enjoying the scenery.
-The game has three main aspects: trading, combat, and exploration. Let's take a closer look at each one.
- Trading and Economy
-One way to earn money in Freelancer is by trading goods between different locations. Each planet or station has its own supply and demand for various commodities such as food, water, ore, weapons, drugs, etc. You can buy low and sell high to make a profit. You can also find special deals or opportunities that offer higher rewards or lower risks.
-However, you also have to consider factors such as travel time, market fluctuations, etc. You also have to watch out for enemies such as pirates, who will try to steal your cargo or extort money from you. You can either fight them, run away, or pay them off. You can also become a pirate yourself and attack other traders, but be prepared to face the consequences of your actions.
-Trading also affects your reputation with different factions. Each faction has its own preferences and dislikes for certain goods. For example, Liberty likes food and water, but hates drugs and artifacts. Bretonia likes ore and machinery, but hates weapons and slaves. Rheinland likes weapons and luxury goods, but hates alien organisms and cardamine. Kusari likes artifacts and cardamine, but hates machinery and luxury goods. Trading with a faction will increase your reputation with them, but decrease it with their enemies. Having a good or bad reputation will affect how they treat you and what missions they offer you.
- Combat and Missions
-Another way to earn money in Freelancer is by combat and missions. You can take on various types of missions that involve fighting enemies, escorting allies, delivering cargo, scanning objects, mining asteroids, etc. You can find missions from mission boards at planets or stations, or from random NPCs that hail you in space. Some missions are part of the main story, while others are optional or repeatable.
-Combat in Freelancer is fast-paced and exciting. You can use different types of weapons such as lasers, missiles, torpedoes, mines, etc. You can also use countermeasures such as flares, chaffs, or jammers to evade enemy attacks. You can customize your ship with different equipment such as shields, engines, thrusters, scanners, etc. You can also buy new ships or upgrade your existing ones at ship dealers.
-Combat also affects your reputation with different factions. Each faction has its own enemies and allies that they are at war or peace with. For example, Liberty is at war with Rheinland and the Outcasts, but at peace with Bretonia and the Junkers. Bretonia is at war with Kusari and the Corsairs, but at peace with Liberty and the Mollys. Rheinland is at war with Liberty and the Red Hessians, but at peace with Kusari and the Bundschuh. Kusari is at war with Bretonia and the Hogosha, but at peace with Rheinland and the Blood Dragons. Fighting for or against a faction will increase or decrease your reputation with them accordingly.
- Exploration and Factions
-The third aspect of Freelancer is exploration and factions. You can explore a vast galaxy of 48 star systems spanning four regions: Liberty Space, Bretonia Space, Rheinland Space, and Kusari Space. Each system has its own planets, stations, jump gates, jump holes, trade lanes, asteroid fields, nebulas, etc. You can discover new locations by following clues or rumors, scanning objects or ships, finding hidden jump holes or anomalies, etc.
-You can also encounter many minor factions, such as the Junkers (scavengers and traders), the Zoners (peaceful settlers), the LSF (Liberty's secret service), etc. Some of them are hostile or unfriendly to you, such as the Outcasts (drug smugglers and pirates), the Corsairs (raiders and slavers), the Nomads (alien invaders), etc. You can learn more about each faction by talking to NPCs, reading news or logs, or completing missions.
-Exploration also rewards you with money, reputation, equipment, or secrets. You can find hidden caches or wrecks that contain valuable items or information. You can also find special locations or events that trigger unique dialogues or cutscenes. You can also unlock new missions or opportunities by exploring different systems or factions.
- The Graphics and Sound of Freelancer
-Freelancer is a game that was released in 2003, but it still looks and sounds amazing in 2021. The game has a realistic and immersive graphics engine that creates stunning visuals of space and planets. The game also has a dynamic and atmospheric sound system that creates realistic sounds of engines, weapons, explosions, etc. The game also has a great soundtrack that features original music composed by Andrew Sega and Jeehun Hwang.
-However, if you want to make Freelancer look and sound even better than before, you can use mods and patches that enhance the game's graphics and sound. There are many mods and patches available for Freelancer that improve its textures, effects, resolution, models, lighting, etc. There are also mods and patches that add new content such as ships, weapons, systems, factions, missions, etc. We will show you how to install some of these mods and patches later in this article.
- The Original Version
-The original version of Freelancer is the one that was released in 2003 by Microsoft Game Studios. It has a resolution of 800x600 pixels and a frame rate of 30 frames per second. It has a graphics engine that uses DirectX 8.1 and supports features such as bump mapping, specular lighting, dynamic shadows, etc. It has a sound engine that uses DirectSound and supports features such as 3D positional audio, environmental effects, etc.
-The original version of Freelancer looks and sounds good for its time, but it has some limitations and flaws. For example, it has low-resolution textures that look blurry or pixelated. It has low-polygon models that look blocky or jagged. It has low-quality effects that look dull or unrealistic. It has low-range lighting that creates dark or washed-out scenes. It has low-variety sounds that sound repetitive or generic.
- The Enhanced Version
-The enhanced version of Freelancer is the original game upgraded with community-made mods and patches that improve its textures, models, effects, lighting, sounds, environmental effects, etc.
-The enhanced version of Freelancer looks and sounds amazing for its age, but it requires some work and knowledge to install. For example, you need to download and extract various mods and patches that are compatible with each other. You need to edit some configuration files and registry entries to make the game run properly. You need to backup your original files and folders in case something goes wrong. You also need to test the game and adjust the settings to your preference.
- How to Download and Install Freelancer
-If you want to play Freelancer on your PC, you have three options: the official version, the abandonware version, or the modded version. Each option has its own advantages and disadvantages. Let's see how to download and install each one.
- The Official Version
-The official version of Freelancer is the one that you can buy and download from official sources such as Microsoft Store or Steam. It costs around $10 and comes with a digital copy of the game and a manual. It is the easiest and safest way to get Freelancer on your PC, but it has some drawbacks. For example, it has limited availability and compatibility. It may not work on newer operating systems or hardware. It may also have some bugs or glitches that were never fixed.
-To download and install the official version of Freelancer, you need to follow these steps:
-
-Go to Microsoft Store or Steam and search for Freelancer.
-Buy the game and add it to your library.
-Download the game and run the installer.
-Follow the instructions on the screen and choose express install.
-Restart your computer when prompted.
-Run the game as administrator using the desktop shortcut.
-Enjoy!
-
- The Abandonware Version
-The abandonware version of Freelancer is the one that you can download for free from unofficial sources such as My Abandonware or Old Games Download. It comes with an ISO file of the game that you can mount and install. It is a convenient and cheap way to get Freelancer on your PC, but it has some risks. For example, it may be illegal or unethical in some countries or regions. It may also contain viruses or malware that can harm your computer or data.
-To download and install the abandonware version of Freelancer, you need to follow these steps:
-
-Go to My Abandonware or Old Games Download and search for Freelancer.
-Download the ISO file of the game and extract it.
-Open the "Game Files" folder and mount the ISO file using a virtual drive software such as Daemon Tools or PowerISO.
-Run the game setup (autorun.exe) and choose express install.
-Restart your computer when prompted.
-Run the game as administrator using the desktop shortcut.
-Enjoy!
-
- The Modded Version
-The modded version of Freelancer is the original or abandonware version enhanced with community mods and patches. It requires more effort to set up: you need to edit some configuration files and registry entries to make the game run properly. You also need to backup your original files and folders in case something goes wrong.
-To download and install the modded version of Freelancer, you need to follow these steps:
-
-Download and install the original or abandonware version of Freelancer as described above.
-Go to Mod DB or The Starport and search for Freelancer mods and patches that you like.
-Download the files and extract them.
-Copy and paste the files into the game directory (e.g. C:\Program Files (x86)\Microsoft Games\Freelancer).
-Overwrite the existing files when prompted.
-Edit the configuration files (e.g. freelancer.ini) and registry entries (e.g. HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Freelancer) according to the instructions of each mod or patch.
-Run the game as administrator using the desktop shortcut or a mod launcher (e.g. Freelancer Mod Manager).
-Enjoy!
-
- Conclusion
-Freelancer is a game that deserves to be played in 2021. It is a game that offers a lot of freedom and choice, a game that has a rich and immersive story, a game that has realistic and dynamic graphics and sound, and a game that can be enhanced with mods and patches. It is a game that will make you feel like a real space adventurer.
-If you are interested in playing Freelancer, you can download and install it on your PC using one of the three options we described above: the official version, the abandonware version, or the modded version. Each option has its own pros and cons, so choose the one that suits you best. You can also check out some of the links we provided for more information and resources about Freelancer.
-We hope you enjoyed this article and learned something new about Freelancer. We also hope you will have fun playing this amazing game. Thank you for reading and happy gaming!
- FAQs
-
-Q: What are the system requirements for Freelancer?
-A: The minimum system requirements for Freelancer are: Windows 98/ME/2000/XP, Pentium III 600 MHz or equivalent processor, 128 MB of RAM, 4x CD-ROM drive, 16 MB DirectX 8.1 compatible video card, DirectX 8.1 compatible sound card, 900 MB of hard disk space, keyboard and mouse. The recommended system requirements for Freelancer are: Windows XP, Pentium III 800 MHz or equivalent processor, 256 MB of RAM, 32x CD-ROM drive, 32 MB DirectX 8.1 compatible video card, DirectX 8.1 compatible sound card, 1 GB of hard disk space, joystick or gamepad.
-Q: How long is Freelancer?
-A: The length of Freelancer depends on how you play it. If you only follow the main story missions, it will take you around 10 hours to complete. If you also do some side missions and exploration, it will take you around 20 hours to complete. If you want to do everything in the game, it will take you around 40 hours to complete.
-Q: How many ships are there in Freelancer?
-A: There are 42 ships in Freelancer that you can buy or fly. They are divided into five classes: light fighters, heavy fighters, very heavy fighters, freighters, and transports. Each class has its own advantages and disadvantages in terms of speed, maneuverability, armor, cargo space, etc. Each ship also has its own design and style that reflects its faction and role.
-Q: How many mods are there for Freelancer?
-A: There are hundreds of mods for Freelancer, such as Freelancer: Crossfire (a multiplayer mod that adds new systems, ships, graphics enhancements, etc.), and Freelancer: Sirius Revival (a single-player mod that adds new story missions, graphics enhancements, etc.). You can find more mods on Mod DB or The Starport.
-
-Q: Is Freelancer multiplayer?
-A: Yes, Freelancer has a multiplayer mode that allows you to play with other players online. You can join or host a server that supports up to 128 players. You can also use mods that enhance the multiplayer experience. However, you may need to use third-party software or services to find and connect to servers, as the official multiplayer service was shut down in 2008.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/rayan-saleh/whisper2notion/client/src/index.css b/spaces/rayan-saleh/whisper2notion/client/src/index.css
deleted file mode 100644
index 917888c1d1115a5bb9c12bad4b6f11f7def422be..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/client/src/index.css
+++ /dev/null
@@ -1,70 +0,0 @@
-:root {
- font-family: Inter, Avenir, Helvetica, Arial, sans-serif;
- font-size: 16px;
- line-height: 24px;
- font-weight: 400;
-
- color-scheme: light dark;
- color: rgba(255, 255, 255, 0.87);
- background-color: #242424;
-
- font-synthesis: none;
- text-rendering: optimizeLegibility;
- -webkit-font-smoothing: antialiased;
- -moz-osx-font-smoothing: grayscale;
- -webkit-text-size-adjust: 100%;
-}
-
-a {
- font-weight: 500;
- color: #646cff;
- text-decoration: inherit;
-}
-a:hover {
- color: #535bf2;
-}
-
-body {
- margin: 0;
- display: flex;
- place-items: center;
- min-width: 320px;
- min-height: 100vh;
-}
-
-h1 {
- font-size: 3.2em;
- line-height: 1.1;
-}
-
-button {
- border-radius: 8px;
- border: 1px solid transparent;
- padding: 0.6em 1.2em;
- font-size: 1em;
- font-weight: 500;
- font-family: inherit;
- background-color: #1a1a1a;
- cursor: pointer;
- transition: border-color 0.25s;
-}
-button:hover {
- border-color: #646cff;
-}
-button:focus,
-button:focus-visible {
- outline: 4px auto -webkit-focus-ring-color;
-}
-
-@media (prefers-color-scheme: light) {
- :root {
- color: #213547;
- background-color: #ffffff;
- }
- a:hover {
- color: #747bff;
- }
- button {
- background-color: #f9f9f9;
- }
-}
diff --git a/spaces/rayan-saleh/whisper2notion/client/vite.config.js b/spaces/rayan-saleh/whisper2notion/client/vite.config.js
deleted file mode 100644
index 5a33944a9b41b59a9cf06ee4bb5586c77510f06b..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/client/vite.config.js
+++ /dev/null
@@ -1,7 +0,0 @@
-import { defineConfig } from 'vite'
-import react from '@vitejs/plugin-react'
-
-// https://vitejs.dev/config/
-export default defineConfig({
- plugins: [react()],
-})
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodesk 3ds Max 2016 Keygen.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodesk 3ds Max 2016 Keygen.md
deleted file mode 100644
index d541f5cdd760b5624ab8129d4c64976b0cfc4fa0..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodesk 3ds Max 2016 Keygen.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Autodesk 3ds Max 2016 Keygen Download » https://urlgoal.com/2uCMKe
-
-February 25, 2018 - AUTODESK 3DS MAX 2016 FINAL FULL + KEYGEN IS THE ULTIMATE 3D MODELING SOFTWARE, USED BY MILLIONS WORLDWIDE! AND IT IS FREE!
-PLEASE NOTE: TO USE THIS SOFTWARE YOU MUST USE DATA PROCESSING (DETAILER) FOR 3DS MAX.
-COPYRIGHT: © 2016 AUTODESK CORPORATION.
-All rights reserved.
-3DS MAX and other marks mentioned are registered trademarks of Autodesk, Inc. or its subsidiaries.
-This software is licensed under the current Autodesk software license agreements.
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Die.Welle.2008.720p.BluRay.x264-CiNEFiLE.mkv.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Die.Welle.2008.720p.BluRay.x264-CiNEFiLE.mkv.md
deleted file mode 100644
index 14fd56c13893d244fb49eeaf7833d4b7f532711b..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Die.Welle.2008.720p.BluRay.x264-CiNEFiLE.mkv.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Die.Welle.2008.720p.BluRay.x264-CiNEFiLE.mkv Download ✦ https://urlgoal.com/2uCMJF
-
-Die.Welle.2008.720p.BluRay.x264-CiNEFiLE.mkv · The Final Exit full movie in hindi dubbed download 720p movie · manycam pro 3.1 crack ...
-
-
-
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/set_epoch_info_hook.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/set_epoch_info_hook.py
deleted file mode 100644
index c2b134ceb69856338097cf283f67d7e2c580739f..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/set_epoch_info_hook.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmcv.parallel import is_module_wrapper
-from mmcv.runner import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class SetEpochInfoHook(Hook):
- """Set runner's epoch information to the model."""
-
- def before_train_epoch(self, runner):
- epoch = runner.epoch
- model = runner.model
- if is_module_wrapper(model):
- model = model.module
- model.set_epoch(epoch)
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/modeling/transformer.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/modeling/transformer.py
deleted file mode 100644
index f1a2812f613cc55b1d0b3e3e1d0c84a760d1fb87..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/modeling/transformer.py
+++ /dev/null
@@ -1,240 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import Tensor, nn
-
-import math
-from typing import Tuple, Type
-
-from .common import MLPBlock
-
-
-class TwoWayTransformer(nn.Module):
- def __init__(
- self,
- depth: int,
- embedding_dim: int,
- num_heads: int,
- mlp_dim: int,
- activation: Type[nn.Module] = nn.ReLU,
- attention_downsample_rate: int = 2,
- ) -> None:
- """
- A transformer decoder that attends to an input image using
- queries whose positional embedding is supplied.
-
- Args:
- depth (int): number of layers in the transformer
- embedding_dim (int): the channel dimension for the input embeddings
- num_heads (int): the number of heads for multihead attention. Must
- divide embedding_dim
- mlp_dim (int): the channel dimension internal to the MLP block
- activation (nn.Module): the activation to use in the MLP block
- """
- super().__init__()
- self.depth = depth
- self.embedding_dim = embedding_dim
- self.num_heads = num_heads
- self.mlp_dim = mlp_dim
- self.layers = nn.ModuleList()
-
- for i in range(depth):
- self.layers.append(
- TwoWayAttentionBlock(
- embedding_dim=embedding_dim,
- num_heads=num_heads,
- mlp_dim=mlp_dim,
- activation=activation,
- attention_downsample_rate=attention_downsample_rate,
- skip_first_layer_pe=(i == 0),
- )
- )
-
- self.final_attn_token_to_image = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
- self.norm_final_attn = nn.LayerNorm(embedding_dim)
-
- def forward(
- self,
- image_embedding: Tensor,
- image_pe: Tensor,
- point_embedding: Tensor,
- ) -> Tuple[Tensor, Tensor]:
- """
- Args:
- image_embedding (torch.Tensor): image to attend to. Should be shape
- B x embedding_dim x h x w for any h and w.
- image_pe (torch.Tensor): the positional encoding to add to the image. Must
- have the same shape as image_embedding.
- point_embedding (torch.Tensor): the embedding to add to the query points.
- Must have shape B x N_points x embedding_dim for any N_points.
-
- Returns:
- torch.Tensor: the processed point_embedding
- torch.Tensor: the processed image_embedding
- """
- # BxCxHxW -> BxHWxC == B x N_image_tokens x C
- bs, c, h, w = image_embedding.shape
- image_embedding = image_embedding.flatten(2).permute(0, 2, 1)
- image_pe = image_pe.flatten(2).permute(0, 2, 1)
-
- # Prepare queries
- queries = point_embedding
- keys = image_embedding
-
- # Apply transformer blocks and final layernorm
- for layer in self.layers:
- queries, keys = layer(
- queries=queries,
- keys=keys,
- query_pe=point_embedding,
- key_pe=image_pe,
- )
-
-        # Apply the final attention layer from the points to the image
- q = queries + point_embedding
- k = keys + image_pe
- attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys)
- queries = queries + attn_out
- queries = self.norm_final_attn(queries)
-
- return queries, keys
-
-
-class TwoWayAttentionBlock(nn.Module):
- def __init__(
- self,
- embedding_dim: int,
- num_heads: int,
- mlp_dim: int = 2048,
- activation: Type[nn.Module] = nn.ReLU,
- attention_downsample_rate: int = 2,
- skip_first_layer_pe: bool = False,
- ) -> None:
- """
- A transformer block with four layers: (1) self-attention of sparse
- inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp
- block on sparse inputs, and (4) cross attention of dense inputs to sparse
- inputs.
-
- Arguments:
- embedding_dim (int): the channel dimension of the embeddings
- num_heads (int): the number of heads in the attention layers
- mlp_dim (int): the hidden dimension of the mlp block
- activation (nn.Module): the activation of the mlp block
- skip_first_layer_pe (bool): skip the PE on the first layer
- """
- super().__init__()
- self.self_attn = Attention(embedding_dim, num_heads)
- self.norm1 = nn.LayerNorm(embedding_dim)
-
- self.cross_attn_token_to_image = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
- self.norm2 = nn.LayerNorm(embedding_dim)
-
- self.mlp = MLPBlock(embedding_dim, mlp_dim, activation)
- self.norm3 = nn.LayerNorm(embedding_dim)
-
- self.norm4 = nn.LayerNorm(embedding_dim)
- self.cross_attn_image_to_token = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
-
- self.skip_first_layer_pe = skip_first_layer_pe
-
- def forward(
- self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor
- ) -> Tuple[Tensor, Tensor]:
- # Self attention block
- if self.skip_first_layer_pe:
- queries = self.self_attn(q=queries, k=queries, v=queries)
- else:
- q = queries + query_pe
- attn_out = self.self_attn(q=q, k=q, v=queries)
- queries = queries + attn_out
- queries = self.norm1(queries)
-
- # Cross attention block, tokens attending to image embedding
- q = queries + query_pe
- k = keys + key_pe
- attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys)
- queries = queries + attn_out
- queries = self.norm2(queries)
-
- # MLP block
- mlp_out = self.mlp(queries)
- queries = queries + mlp_out
- queries = self.norm3(queries)
-
- # Cross attention block, image embedding attending to tokens
- q = queries + query_pe
- k = keys + key_pe
- attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries)
- keys = keys + attn_out
- keys = self.norm4(keys)
-
- return queries, keys
-
-
-class Attention(nn.Module):
- """
- An attention layer that allows for downscaling the size of the embedding
- after projection to queries, keys, and values.
- """
-
- def __init__(
- self,
- embedding_dim: int,
- num_heads: int,
- downsample_rate: int = 1,
- ) -> None:
- super().__init__()
- self.embedding_dim = embedding_dim
- self.internal_dim = embedding_dim // downsample_rate
- self.num_heads = num_heads
- assert self.internal_dim % num_heads == 0, "num_heads must divide embedding_dim."
-
- self.q_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.k_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.v_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.out_proj = nn.Linear(self.internal_dim, embedding_dim)
-
- def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor:
- b, n, c = x.shape
- x = x.reshape(b, n, num_heads, c // num_heads)
- return x.transpose(1, 2) # B x N_heads x N_tokens x C_per_head
-
- def _recombine_heads(self, x: Tensor) -> Tensor:
- b, n_heads, n_tokens, c_per_head = x.shape
- x = x.transpose(1, 2)
- return x.reshape(b, n_tokens, n_heads * c_per_head) # B x N_tokens x C
-
- def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor:
- # Input projections
- q = self.q_proj(q)
- k = self.k_proj(k)
- v = self.v_proj(v)
-
- # Separate into heads
- q = self._separate_heads(q, self.num_heads)
- k = self._separate_heads(k, self.num_heads)
- v = self._separate_heads(v, self.num_heads)
-
- # Attention
- _, _, _, c_per_head = q.shape
- attn = q @ k.permute(0, 1, 3, 2) # B x N_heads x N_tokens x N_tokens
- attn = attn / math.sqrt(c_per_head)
- attn = torch.softmax(attn, dim=-1)
-
- # Get output
- out = attn @ v
- out = self._recombine_heads(out)
- out = self.out_proj(out)
-
- return out
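-
-
-if __name__ == "__main__":
- # A minimal shape-check sketch (assumes torch and MLPBlock are imported at
- # the top of this module; sizes are illustrative):
- block = TwoWayAttentionBlock(embedding_dim=256, num_heads=8)
- queries = torch.randn(1, 5, 256) # sparse prompt tokens: B x N_tokens x C
- keys = torch.randn(1, 64 * 64, 256) # flattened image embedding: B x HW x C
- q, k = block(queries=queries, keys=keys, query_pe=torch.randn_like(queries), key_pe=torch.randn_like(keys))
- assert q.shape == queries.shape and k.shape == keys.shape
-
- # Attention with downsample_rate=2 projects 256 -> 128 internally (16
- # channels per head) and out_proj maps back to 256, so shapes are preserved.
- attn = Attention(embedding_dim=256, num_heads=8, downsample_rate=2)
- assert attn(q=queries, k=keys, v=keys).shape == queries.shape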
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Adobe Cs6 Master Collection Aio Patcher V1.2 111.md b/spaces/rorallitri/biomedical-language-models/logs/Adobe Cs6 Master Collection Aio Patcher V1.2 111.md
deleted file mode 100644
index e3caef2c99ec07214ba27baead5b4b320cbf7a42..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Adobe Cs6 Master Collection Aio Patcher V1.2 111.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Adobe Cs6 Master Collection Aio Patcher V1.2 111 DOWNLOAD ✦✦✦ https://tinurll.com/2uzoJE
-
-Adobe Acrobat X Pro v10.1.2.45 Multilingual incl Keygen.zip. Adobe CS5 ... Adobe CS6 Master Collection WinMacOSX. zip. Adobe CS6 ... All.Patcher-DVT-MPT-Lz0.zip. Adobe. ... Adobe Acrobat X KEYGEN 100. zip. Age.Of.Empires.III.v1.09.FR.NO-CD CRKEXE-FFF.zip. Age. ... ARIAL - AIO - Keygen - by FOFF.zip. 4d29de3e1b
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Far.Cry.2.Fortunes.Edition.MULTi5-PROPHET Tool Why You Should Download and Play the Fortunes Edition of Far Cry 2 Today.md b/spaces/rorallitri/biomedical-language-models/logs/Far.Cry.2.Fortunes.Edition.MULTi5-PROPHET Tool Why You Should Download and Play the Fortunes Edition of Far Cry 2 Today.md
deleted file mode 100644
index 9dbffdd1c5dee59b8a9a29d2b4cb3e73463ce0f2..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Far.Cry.2.Fortunes.Edition.MULTi5-PROPHET Tool Why You Should Download and Play the Fortunes Edition of Far Cry 2 Today.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Far.Cry.2.Fortunes.Edition.MULTi5-PROPHET Tool DOWNLOAD ✯✯✯ https://tinurll.com/2uzlzS
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Free Paysafecard Generator 48 The Easy and Safe Solution to Shop Online.md b/spaces/rorallitri/biomedical-language-models/logs/Free Paysafecard Generator 48 The Easy and Safe Solution to Shop Online.md
deleted file mode 100644
index 12fa564179fa81c3effd0850521e7b6b6b3f4dc5..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Free Paysafecard Generator 48 The Easy and Safe Solution to Shop Online.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Sib. 7.1: Bizarre playback problem Posted by Mark Starr - 19 Dec 05:20PM (edited 19 Dec 05:22PM) Hide picture I have encountered a bizarre playback problem. I am using Sibelius 7 Sounds (updated) with the latest version of Sibelius 7 on Win7. This problem affects only the Bass Trombone (as far as I know.) The Bass Trombone plays back normally under most conditions. However, when I place an accent mark on a notehead (specifically the right-facing arrowhead accent mark on page 1 of the keypad,) Sibelius Sounds will not playback that note. This problem does not occur with the tenor trombone, or any other instrument. Nor does it occur with any other accent mark, staccato dot, tenuto mark, etc. on a notehead. I have checked Properties and Mixer, and can find nothing amiss. Moreover, I am referring to notes within the playing range of the Bass Trombone. As soon as I add an accent mark to a notehead, the Bass Trombone fails to play it back (but goes on to play subsequent notes.) As soon as I remove the accent mark from the notehead, the note plays back correctly. Any solutions? Many thanks to the person who can crack this nut. MS Back to top | Allthreads Re: Sib. 7.1: Bizarre playback problem Posted by Derek Bourgeois - 19 Dec 05:30PM (edited 19 Dec 05:36PM) Hide picture I can't make this fail. I've added a part for bass trombone to my current score and assigned Sib 7 Sounds Bass Trombone to it, and all the accents play as expected. Check what instrument is selected by the mixer. Maybe it's picked on something else. Failing that, are you sure that you upgraded to the latest version of Sib 7 Sounds? I remember in the original version there was a similar problem to this. One other thought. Is there something odd about your accent articulation in the Playback dictionary? The accent should read +accent and make sure that 'No Playback effect' is not ticked. And in Playback Devices is Sibelius Player set to Sibelius 7 Sounds? Apologies if these suggestions seem too condescending but we all sometimes don't do something that should be obvious. I still make elementary mistakes at times. -- Derek Bourgeois, Dorset UK Windows 7 Ultimate 64 bit - Intel Xeon@3.47 Ghz - 24GB RAM - NVidia Quadro 5000 with 6143 graphics memory - 3 TB hard disks (of which 2 TB SSD), Sibelius 7.1.3, Vienna Symphonic Library Special Edition plus full percussion library. East West QL Symphonic Orchestra and Choirs. Sibelius 7 Sounds. M-Box. Back to top | Allthreads Re: Sib. 7.1: Bizarre playback problem Posted by Roy Moore - 20 Dec 10:21AM Hide picture As Derek says this was a problem early on in Sib 7. If you quote the size of your Sib 7 Sounds folder to ensure that it is complete. -- Roy Moore Sib 7.1.3,Windows 7 SP 1 pro x64 8gb ram,2x quad core 2.5ghz , Audiophile 2496, JABB 3, GPO 4, Rock & Pop, Garritan Steinway, EWQLSO platinum Back to top | Allthreads Re: Sib. 7.1: Bizarre playback problem Posted by Mark Starr - 21 Dec 03:36AM (edited 21 Dec 03:36AM) Hide picture Hello Derek and Roy Following your suggestions, I re-installed Sibelius 7 Sounds from the original DVD's, and then downloaded and installed the latest updates for both Sibelius 7.1.3 and Sibelius 7 Sounds 7.1.2. Finally, the bass trombone notes with accent marks played back correctly. Wherever the problem was, that seems to have fixed it. Best wishes, MS Back to top | Allthreads
-The Vienna Symphonic Library (or VSL) currently consists of two components: the Orchestral Cube First Edition (44GB on seven DVDs), and the Performance Set First Edition (50GB on seven DVDs). The Orchestral Cube is basically a multisampled orchestra comprising Strings (10.77GB on two DVDs), Brass & Woodwinds (20.71GB on three DVDs),. The Vienna Symphonic Library Special Edition Complete Bundle brings together the complete orchestra - and more - in an affordable Collection. You get the entire contents of Vienna Symphonic Library Special Edition Vol. 1 PLUS, and Vol. 2 PLUS in a single package! Vienna Symphonic Library creates high-end orchestral sample libraries and software (Vienna Instruments player, Vienna Ensemble mixing engine, Vienna MIR multi-impulse response reverb, Vienna Suite audio plug-ins) for professional music production. Session Strings Pro (Native Instruments/Kontakt) - Kontakt player may not be perfect but it trumps Vienna, big time! LA Scoring Strings (audiobro) - Sounds amazing! Scoring is no longer boring! Do yourself a favor and save yourself the headaches I went through and invest in a Symphonic Sample Library that will allow you to be an artist. Is that the one. The innovative, research-driven music software and sample library developer. The Vienna Special Editions can be used stand-alone or as AU, VST, AAX. Vienna Sound Library Special Edition Torrent. Vienna symphonic library special edition, vienna symphonic library special edition volume 1, vienna symphonic.
-vienna symphonic library special edition crack DOWNLOAD ►►► https://tinurll.com/2uznZL
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rupeshs/fastsdcpu/README.md b/spaces/rupeshs/fastsdcpu/README.md
deleted file mode 100644
index cf1335c33c96888dde2de061c5864a8d69135d61..0000000000000000000000000000000000000000
--- a/spaces/rupeshs/fastsdcpu/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Fastsdcpu
-emoji: 📉
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ruslanmv/Clone-Your-Voice/vocoder/gen_wavernn.py b/spaces/ruslanmv/Clone-Your-Voice/vocoder/gen_wavernn.py
deleted file mode 100644
index 2036737f805f6055893812e48f99d524624aab07..0000000000000000000000000000000000000000
--- a/spaces/ruslanmv/Clone-Your-Voice/vocoder/gen_wavernn.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from vocoder.models.fatchord_version import WaveRNN
-from vocoder.audio import *
-
-
-def gen_testset(model: WaveRNN, test_set, samples, batched, target, overlap, save_path):
- k = model.get_step() // 1000
-
- for i, (m, x) in enumerate(test_set, 1):
- if i > samples:
- break
-
- print('\n| Generating: %i/%i' % (i, samples))
-
- x = x[0].numpy()
-
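- # MOL (mixture-of-logistics) targets are always 16-bit; other modes use hp.bits.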
- bits = 16 if hp.voc_mode == 'MOL' else hp.bits
-
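- # mu-law-companded labels are expanded back to floats; linear labels only
- # need rescaling to [-1, 1].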
- if hp.mu_law and hp.voc_mode != 'MOL':
- x = decode_mu_law(x, 2**bits, from_labels=True)
- else:
- x = label_2_float(x, bits)
-
- save_wav(x, save_path.joinpath("%dk_steps_%d_target.wav" % (k, i)))
-
- batch_str = "gen_batched_target%d_overlap%d" % (target, overlap) if batched else \
- "gen_not_batched"
- save_str = save_path.joinpath("%dk_steps_%d_%s.wav" % (k, i, batch_str))
-
- wav = model.generate(m, batched, target, overlap, hp.mu_law)
- save_wav(wav, save_str)
-
diff --git a/spaces/rxn4chemistry/synthesis-protocol-extraction/model_cards/sac_synthesis_mining_article.md b/spaces/rxn4chemistry/synthesis-protocol-extraction/model_cards/sac_synthesis_mining_article.md
deleted file mode 100644
index 14e33ce579de6ae8c81fe2f84f43cab2f23c98bf..0000000000000000000000000000000000000000
--- a/spaces/rxn4chemistry/synthesis-protocol-extraction/model_cards/sac_synthesis_mining_article.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# Model documentation & parameters
-
-## Parameters
-
-### Model
-Whether to use the model trained 1) on procedures for heterogeneous single-atom catalyst synthesis, or 2) on organic chemistry procedures.
-
-### Synthesis text
-Synthesis procedure (in English prose) to extract actions from.
-
-
-# Model card -- Text mining synthesis protocols of heterogeneous single-atom catalysts
-
-**Model Details**:
-Sequence-to-sequence transformer model
-
-**Developers**:
-Manu Suvarna, Alain C. Vaucher, Sharon Mitchell, Teodoro Laino, and Javier Pérez-Ramírez.
-
-**Distributors**:
-Same as the *developers*.
-
-**Model date**:
-April 2023.
-
-**Algorithm version**:
-Details in the source code and in the paper.
-
-**Model type**:
-A Transformer-based sequence-to-sequence language model that extracts synthesis actions from procedure text.
-The model relies on the [OpenNMT](https://github.com/OpenNMT/OpenNMT-py) library.
-
-**Information about training algorithms, parameters, fairness constraints or other applied approaches, and features**:
-Details in the source code and in the paper.
-
-**Paper or other resource for more information**:
-Currently under review.
-
-**License**: MIT
-
-**Where to send questions or comments about the model**:
-Contact one of the *developers*.
-
-**Intended Use. Use cases that were envisioned during development**:
-Chemical research, in particular in the field of heterogeneous single-atom catalysts.
-
-**Primary intended uses/users**:
-Researchers and computational chemists using the model for model comparison or research exploration purposes.
-
-**Out-of-scope use cases**:
-Production-level inference.
-
-**Factors**:
-Not applicable.
-
-**Metrics**:
-Details in the source code and in the paper.
-
-**Datasets**:
-Details in the source code and in the paper.
-
-**Ethical Considerations**:
-No specific considerations as no private/personal data is involved.
-Please consult with the authors in case of questions.
-
-**Caveats and Recommendations**:
-Please consult with original authors in case of questions.
-
-Model card prototype inspired by [Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596).
-
-
-## Citation
-
-```bib
-@article{suvarna2023textmining,
- title={Text mining and standardization of single-atom catalyst protocols to foster digital synthesis},
- author={Manu Suvarna and Alain C. Vaucher and Sharon Mitchell and Teodoro Laino and Javier Pérez-Ramírez},
- journal={under review},
-}
-```
-
diff --git a/spaces/safetensors/convert_large/README.md b/spaces/safetensors/convert_large/README.md
deleted file mode 100644
index d5151742d74d00596eb21ac015ebfbeff88c4ca0..0000000000000000000000000000000000000000
--- a/spaces/safetensors/convert_large/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Convert to Safetensors
-emoji: 🐶
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: true
-license: apache-2.0
-models: []
-datasets:
-- safetensors/conversions
-duplicated_from: safetensors/convert
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sail/lorahub/app.py b/spaces/sail/lorahub/app.py
deleted file mode 100644
index 1d47d182f4c9efa435b75a846035b2c03a0cb14e..0000000000000000000000000000000000000000
--- a/spaces/sail/lorahub/app.py
+++ /dev/null
@@ -1,163 +0,0 @@
-import streamlit as st
-from hub_name import LORA_HUB_NAMES
-from random import shuffle
-import pandas as pd
-import contextlib
-from functools import wraps
-from io import StringIO
-import redirect as rd
-import torch
-import shutil
-import os
-import uuid
-import json
-
-
-from google.oauth2 import service_account
-import gspread
-from google.oauth2.service_account import Credentials
-
-
-css = """
-
-"""
-st.markdown(css, unsafe_allow_html=True)
-
-def main():
- st.title("💡 LoraHub")
- st.markdown("Low-rank adaptations (LoRA) are techniques for fine-tuning large language models on new tasks. We propose LoraHub, a framework that allows composing multiple LoRA modules trained on different tasks. The goal is to achieve good performance on unseen tasks using just a few examples, without needing extra parameters or training. And we want to build a marketplace where users can share their trained LoRA modules, thereby facilitating the application of these modules to new tasks.")
-
- st.image(open("lorahub_demo.jpg", "rb").read(),
- "The Illustration of LoraHub Learning", use_column_width=True)
-
- st.markdown("In this demo, you will use avaiable lora modules selected in the left sidebar to tackle your new task. When the LoraHub learning is done, you can download the final LoRA module and use it for your new task. You can check out more details in our [paper](https://huggingface.co/papers/2307.13269).")
-
- with st.sidebar:
- st.title("🛒 LoRA Module Market", help="Feel free to clone this demo and add more modules to the marketplace. Remember to make sure your lora modules share the same base model and have the same rank.")
- st.markdown(
- "The following modules are available for you to compose for your new task. Every module name is a peft repository in Huggingface Hub, and you can find them [here](https://huggingface.co/models?search=lorahub).")
-
- df = pd.DataFrame({
- "Index": list(range(len(LORA_HUB_NAMES))),
- "Module Name": LORA_HUB_NAMES,
- })
- st.data_editor(df,
- disabled=["LoRA Module", "Index"],
- hide_index=True)
-
- st.multiselect(
- 'Choose the modules you want to add',
- list(range(len(LORA_HUB_NAMES))),
- [],
- key="select_names")
-
- def set_lucky_modules():
- names = list(range(len(LORA_HUB_NAMES)))
- shuffle(names)
- names = names[:20]
- st.session_state["select_names"] = names
-
- st.button(":game_die: Give 20 Lucky Modules",
- on_click=set_lucky_modules)
- st.write('We will use the following modules', [
- LORA_HUB_NAMES[i] for i in st.session_state["select_names"]])
-
- st.subheader("Choose the Module Candidates")
- st.markdown("Please checkout the sidebar on the left to select the modules you want to compose for your new task. You can also click the button to **get 20 lucky modules**.")
-
- st.subheader("Upload Examples of Your Task")
- st.markdown("When faced with a new task, our method requires a few examples of that task in order to perform the lora module composition. Below you should provide a few examples of the task you want to perform. The default examples are from the Date Understanding task of the BBH benchmark.")
-
- txt_input = st.text_area('*Examples Inputs (One Line One Input)*',
- '''
-Infer the date from context. Q: Today, 8/3/1997, is a day that we will never forget. What is the date one week ago from today in MM/DD/YYYY? Options: (A) 03/27/1998 (B) 09/02/1997 (C) 07/27/1997 (D) 06/29/1997 (E) 07/27/1973 (F) 12/27/1997 A:
-Infer the date from context. Q: May 6, 1992 is like yesterday to Jane, but that is actually ten years ago. What is the date tomorrow in MM/DD/YYYY? Options: (A) 04/16/2002 (B) 04/07/2003 (C) 05/07/2036 (D) 05/28/2002 (E) 05/07/2002 A:
-Infer the date from context. Q: Today is the second day of the third month of 1966. What is the date one week ago from today in MM/DD/YYYY? Options: (A) 02/26/1966 (B) 01/13/1966 (C) 02/02/1966 (D) 10/23/1966 (E) 02/23/1968 (F) 02/23/1966 A:
-'''.strip())
-
- txt_output = st.text_area('*Examples Outputs (One Line One Output)*', '''
-(C)
-(E)
-(F)
-'''.strip())
-
- st.subheader("Set Iteration Steps")
- st.markdown("Our method involves performing multiple inference iterations to perform the LoRA module composition. The module can then be intergrated into the LLM to carry out the new task. The maximum number of inference steps impacts performance and speed. We suggest setting it to 40 steps if 20 modules were chosen, with more steps typically needed for more modules.")
-
- max_step = st.slider('Maximum iteration step', 10, 100, step=5)
-
- st.subheader("Start LoraHub Learning")
-
- st.markdown("Note that the learning process may take a while (depending on the maximum iteration step), and downloading LoRA modules from HuggingfaceHub also takes some time. This demo runs on CPU by default, and you can monitor the learning logs below.")
- # st.subheader("Watch the logs below")
- buffer = st.expander("Learning Logs")
-
- if st.button(':rocket: Start!'):
- if len(st.session_state["select_names"]) == 0:
- st.error("Please select at least 1 module!")
- elif max_step < len(st.session_state["select_names"]):
- st.error(
- "Please specify a larger maximum iteration step than the number of selected modules!")
- else:
- buffer.text("* begin to perform lorahub learning *")
- from util import lorahub_learning
- with rd.stderr(to=buffer):
- recommendation, final_lora = lorahub_learning([LORA_HUB_NAMES[i] for i in st.session_state["select_names"]],
- txt_input, txt_output, max_inference_step=max_step)
-
- st.success("Lorahub learning finished! You got the following recommendation:")
-
- df = {
- "modules": [LORA_HUB_NAMES[i] for i in st.session_state["select_names"]],
- "weights": recommendation.value,
- }
-
-
-
- def share():
- credentials = service_account.Credentials.from_service_account_info(
- json.loads(st.secrets["gcp_service_account"]),
- scopes=[
- "https://www.googleapis.com/auth/spreadsheets",
- ]
- )
- gsheet_url = st.secrets["private_gsheets_url"]
- gc = gspread.authorize(credentials)
- sh = gc.open_by_url(gsheet_url)
-
- ws = sh.sheet1
- ws.insert_rows([[LORA_HUB_NAMES[i] for i in st.session_state["select_names"]],recommendation.value.tolist(),[max_step]])
- st.table(df)
- random_id = uuid.uuid4().hex
- os.makedirs(f"lora/{random_id}")
- # copy config file
- shutil.copyfile("lora/adapter_config.json", f"lora/{random_id}/adapter_config.json")
- # zip the final lora module
- torch.save(final_lora, f"lora/{random_id}/adapter_model.bin")
- # create a zip file
- shutil.make_archive(f"lora_{random_id}", 'zip', f"lora/{random_id}")
- with open(f"lora_{random_id}.zip", "rb") as fp:
- btn = st.download_button(
- label="📥 Download the final LoRA Module",
- data=fp,
- file_name=f"lora_{random_id}.zip",
- mime="application/zip"
- )
- with open(f"lora_{random_id}.zip", "rb") as fp:
- btn = st.download_button(
- label="📥 Download and share your results",
- data=fp,
- file_name=f"lora_{random_id}.zip",
- mime="application/zip",
- on_click=share
- )
- st.warning("The page will be refreshed once you click the download button. Share results may cost 1-2 mins.")
-
-
-
-if __name__ == "__main__":
- main()
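-
-# A minimal sketch of what the per-module weight vector (recommendation.value)
-# represents: a weighted sum over identically-keyed LoRA state dicts. The real
-# logic lives in util.lorahub_learning (not shown in this diff);
-# compose_lora_sketch is a hypothetical helper, not the app's API.
-def compose_lora_sketch(state_dicts, weights):
- composed = {}
- for key in state_dicts[0]:
- composed[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
- return composed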
diff --git a/spaces/scedlatioru/img-to-music/example/Amtlib.dll Cs6 Crack Illustrator Cs3.md b/spaces/scedlatioru/img-to-music/example/Amtlib.dll Cs6 Crack Illustrator Cs3.md
deleted file mode 100644
index d98973935e26d851ddaa5a02cafa01cf67b8396f..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Amtlib.dll Cs6 Crack Illustrator Cs3.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Amtlib.dll Cs6 Crack Illustrator Cs3 Download File ⏩ https://gohhs.com/2uEzke
-
- d5da3c52bf
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Circuit Wizard 2 Code Activation.md b/spaces/scedlatioru/img-to-music/example/Circuit Wizard 2 Code Activation.md
deleted file mode 100644
index 0dff5be002691feae7322f682d643c00d1beda48..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Circuit Wizard 2 Code Activation.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Circuit Wizard 2 Code Activation Download Zip ✔ https://gohhs.com/2uEz4S
-
-Download Circuit Wizard 2.0, DEVELOPER HOME Circuit ... Using Circuit Wizard Professional Edition crack, key, serial numbers, registration codes is illegal. 4d29de3e1b
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Scooby Doo Wrestlemania Mystery Full Movie In Hindi Download HOTk.md b/spaces/scedlatioru/img-to-music/example/Scooby Doo Wrestlemania Mystery Full Movie In Hindi Download HOTk.md
deleted file mode 100644
index 46a19982f346746108f6c39fb40aec41df06ce52..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Scooby Doo Wrestlemania Mystery Full Movie In Hindi Download HOTk.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Scooby Doo Wrestlemania Mystery Full Movie In Hindi Downloadk Download 🔗 https://gohhs.com/2uEyRp
-
-'Scenu Scenu' Official Song Video from the Movie Sigaram Thodu. Vikram Prabhu . ... scooby doo wrestlemania mystery full movie in hindi downloadk. 1fdad05405
-
-
-
diff --git a/spaces/seanghay/khmer-tts/README.md b/spaces/seanghay/khmer-tts/README.md
deleted file mode 100644
index 1d7f444bfdecacc235b2369f815599af1db47385..0000000000000000000000000000000000000000
--- a/spaces/seanghay/khmer-tts/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Khmer Text-to-Speech
-emoji: 🎤
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: true
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/conformer/convolution.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/conformer/convolution.py
deleted file mode 100644
index 6a5d2c30c313e73fa2097bc28721be00aeb6910f..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/conformer/convolution.py
+++ /dev/null
@@ -1,79 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Johns Hopkins University (Shinji Watanabe)
-# Northwestern Polytechnical University (Pengcheng Guo)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""ConvolutionModule definition."""
-
-from torch import nn
-
-
-class ConvolutionModule(nn.Module):
- """ConvolutionModule in Conformer model.
-
- Args:
- channels (int): The number of channels of conv layers.
- kernel_size (int): Kernel size of conv layers.
-
- """
-
- def __init__(self, channels, kernel_size, activation=nn.ReLU(), bias=True):
- """Construct an ConvolutionModule object."""
- super(ConvolutionModule, self).__init__()
- # kernerl_size should be a odd number for 'SAME' padding
- assert (kernel_size - 1) % 2 == 0
-
- self.pointwise_conv1 = nn.Conv1d(
- channels,
- 2 * channels,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=bias,
- )
- self.depthwise_conv = nn.Conv1d(
- channels,
- channels,
- kernel_size,
- stride=1,
- padding=(kernel_size - 1) // 2,
- groups=channels,
- bias=bias,
- )
- self.norm = nn.BatchNorm1d(channels)
- self.pointwise_conv2 = nn.Conv1d(
- channels,
- channels,
- kernel_size=1,
- stride=1,
- padding=0,
- bias=bias,
- )
- self.activation = activation
-
- def forward(self, x):
- """Compute convolution module.
-
- Args:
- x (torch.Tensor): Input tensor (#batch, time, channels).
-
- Returns:
- torch.Tensor: Output tensor (#batch, time, channels).
-
- """
- # exchange the temporal dimension and the feature dimension
- x = x.transpose(1, 2)
-
- # GLU mechanism
- x = self.pointwise_conv1(x) # (batch, 2*channels, time)
- x = nn.functional.glu(x, dim=1) # (batch, channels, time)
-
- # 1D Depthwise Conv
- x = self.depthwise_conv(x)
- x = self.activation(self.norm(x))
-
- x = self.pointwise_conv2(x)
-
- return x.transpose(1, 2)
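-
-
-if __name__ == "__main__":
- # A minimal shape check (illustrative sizes): the 'SAME' padding above keeps
- # the time dimension, so input and output shapes match.
- import torch
- module = ConvolutionModule(channels=144, kernel_size=31)
- x = torch.randn(4, 100, 144) # (batch, time, channels)
- assert module(x).shape == x.shape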
diff --git a/spaces/shengyi-qian/3DOI/monoarti/midas_loss.py b/spaces/shengyi-qian/3DOI/monoarti/midas_loss.py
deleted file mode 100644
index 49ceb9cfba9bec4e1262bd34547a1cecf84bd2b1..0000000000000000000000000000000000000000
--- a/spaces/shengyi-qian/3DOI/monoarti/midas_loss.py
+++ /dev/null
@@ -1,162 +0,0 @@
-
-
-import torch
-import torch.nn as nn
-import numpy as np
-
-#from .masked_losses import masked_l1_loss
-
-def masked_l1_loss(preds, target, mask_valid):
- element_wise_loss = abs(preds - target)
- element_wise_loss[~mask_valid] = 0
- return element_wise_loss.sum() / mask_valid.sum()
-
-
-def compute_scale_and_shift(prediction, target, mask):
- # system matrix: A = [[a_00, a_01], [a_10, a_11]]
- a_00 = torch.sum(mask * prediction * prediction, (1, 2))
- a_01 = torch.sum(mask * prediction, (1, 2))
- a_11 = torch.sum(mask, (1, 2))
-
- # right hand side: b = [b_0, b_1]
- b_0 = torch.sum(mask * prediction * target, (1, 2))
- b_1 = torch.sum(mask * target, (1, 2))
-
- # solution: x = A^-1 . b = [[a_11, -a_01], [-a_10, a_00]] / (a_00 * a_11 - a_01 * a_10) . b
- x_0 = torch.zeros_like(b_0)
- x_1 = torch.zeros_like(b_1)
-
- det = a_00 * a_11 - a_01 * a_01
- valid = det.nonzero()
-
- x_0[valid] = (a_11[valid] * b_0[valid] - a_01[valid] * b_1[valid]) / (det[valid] + 1e-6)
- x_1[valid] = (-a_01[valid] * b_0[valid] + a_00[valid] * b_1[valid]) / (det[valid] + 1e-6)
-
- return x_0, x_1
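-
-# For reference, compute_scale_and_shift solves the per-image weighted
-# least-squares problem min over (s, t) of sum_i m_i * (s * p_i + t - g_i)^2,
-# whose normal equations are
-# [ sum(m*p*p) sum(m*p) ] [s] [ sum(m*p*g) ]
-# [ sum(m*p) sum(m) ] [t] = [ sum(m*g) ]
-# and applies Cramer's rule, returning x_0 = s (scale) and x_1 = t (shift).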
-
-
-def masked_shift_and_scale(depth_preds, depth_gt, mask_valid):
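- # Align prediction and ground truth to a shift/scale-invariant form over the
- # valid pixels: subtract the per-image median (t) and divide by the mean
- # absolute deviation from it (s).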
- depth_preds_nan = depth_preds.clone()
- depth_gt_nan = depth_gt.clone()
- depth_preds_nan[~mask_valid] = np.nan
- depth_gt_nan[~mask_valid] = np.nan
-
- mask_diff = mask_valid.view(mask_valid.size()[:2] + (-1,)).sum(-1, keepdims=True) + 1
-
- t_gt = depth_gt_nan.view(depth_gt_nan.size()[:2] + (-1,)).nanmedian(-1, keepdims=True)[0].unsqueeze(-1)
- t_gt[torch.isnan(t_gt)] = 0
- diff_gt = torch.abs(depth_gt - t_gt)
- diff_gt[~mask_valid] = 0
- s_gt = (diff_gt.view(diff_gt.size()[:2] + (-1,)).sum(-1, keepdims=True) / mask_diff).unsqueeze(-1)
- depth_gt_aligned = (depth_gt - t_gt) / (s_gt + 1e-6)
-
-
- t_pred = depth_preds_nan.view(depth_preds_nan.size()[:2] + (-1,)).nanmedian(-1, keepdims=True)[0].unsqueeze(-1)
- t_pred[torch.isnan(t_pred)] = 0
- diff_pred = torch.abs(depth_preds - t_pred)
- diff_pred[~mask_valid] = 0
- s_pred = (diff_pred.view(diff_pred.size()[:2] + (-1,)).sum(-1, keepdims=True) / mask_diff).unsqueeze(-1)
- depth_pred_aligned = (depth_preds - t_pred) / (s_pred + 1e-6)
-
- return depth_pred_aligned, depth_gt_aligned
-
-
-def reduction_batch_based(image_loss, M):
- # average of all valid pixels of the batch
-
- # avoid division by 0 (if sum(M) = sum(sum(mask)) = 0: sum(image_loss) = 0)
- divisor = torch.sum(M)
-
- if divisor == 0:
- return 0
- else:
- return torch.sum(image_loss) / divisor
-
-
-def reduction_image_based(image_loss, M):
- # mean of average of valid pixels of an image
-
- # avoid division by 0 (if M = sum(mask) = 0: image_loss = 0)
- valid = M.nonzero()
-
- image_loss[valid] = image_loss[valid] / M[valid]
-
- return torch.mean(image_loss)
-
-
-
-def gradient_loss(prediction, target, mask, reduction=reduction_batch_based):
-
- M = torch.sum(mask, (1, 2))
-
- diff = prediction - target
- diff = torch.mul(mask, diff)
-
- grad_x = torch.abs(diff[:, :, 1:] - diff[:, :, :-1])
- mask_x = torch.mul(mask[:, :, 1:], mask[:, :, :-1])
- grad_x = torch.mul(mask_x, grad_x)
-
- grad_y = torch.abs(diff[:, 1:, :] - diff[:, :-1, :])
- mask_y = torch.mul(mask[:, 1:, :], mask[:, :-1, :])
- grad_y = torch.mul(mask_y, grad_y)
-
- image_loss = torch.sum(grad_x, (1, 2)) + torch.sum(grad_y, (1, 2))
-
- return reduction(image_loss, M)
-
-
-
-class SSIMAE(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, depth_preds, depth_gt, mask_valid):
- depth_pred_aligned, depth_gt_aligned = masked_shift_and_scale(depth_preds, depth_gt, mask_valid)
- ssi_mae_loss = masked_l1_loss(depth_pred_aligned, depth_gt_aligned, mask_valid)
- return ssi_mae_loss
-
-
-class GradientMatchingTerm(nn.Module):
- def __init__(self, scales=4, reduction='batch-based'):
- super().__init__()
-
- if reduction == 'batch-based':
- self.__reduction = reduction_batch_based
- else:
- self.__reduction = reduction_image_based
-
- self.__scales = scales
-
- def forward(self, prediction, target, mask):
- total = 0
-
- for scale in range(self.__scales):
- step = pow(2, scale)
-
- total += gradient_loss(prediction[:, ::step, ::step], target[:, ::step, ::step],
- mask[:, ::step, ::step], reduction=self.__reduction)
-
- return total
-
-
-class MidasLoss(nn.Module):
- def __init__(self, alpha=0.1, scales=4, reduction='image-based'):
- super().__init__()
-
- self.__ssi_mae_loss = SSIMAE()
- self.__gradient_matching_term = GradientMatchingTerm(scales=scales, reduction=reduction)
- self.__alpha = alpha
- self.__prediction_ssi = None
-
- def forward(self, prediction, target, mask):
- prediction_inverse = 1 / (prediction.squeeze(1)+1e-6)
- target_inverse = 1 / (target.squeeze(1)+1e-6)
- ssi_loss = self.__ssi_mae_loss(prediction, target, mask)
-
- scale, shift = compute_scale_and_shift(prediction_inverse, target_inverse, mask.squeeze(1))
- self.__prediction_ssi = scale.view(-1, 1, 1) * prediction_inverse + shift.view(-1, 1, 1)
- reg_loss = self.__gradient_matching_term(self.__prediction_ssi, target_inverse, mask.squeeze(1))
- total = ssi_loss
- if self.__alpha > 0:
- total = total + self.__alpha * reg_loss
-
- return total, ssi_loss, reg_loss
\ No newline at end of file
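-
-
-if __name__ == "__main__":
- # A minimal usage sketch: shapes are assumed to be B x 1 x H x W, mask is a
- # boolean valid-pixel map, and depths are kept positive before inversion.
- criterion = MidasLoss(alpha=0.1, scales=4, reduction='image-based')
- pred = torch.rand(2, 1, 64, 64) + 0.1
- gt = torch.rand(2, 1, 64, 64) + 0.1
- mask = torch.ones(2, 1, 64, 64, dtype=torch.bool)
- total, ssi, reg = criterion(pred, gt, mask)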
diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/log_service.py b/spaces/shi-labs/Versatile-Diffusion/lib/log_service.py
deleted file mode 100644
index 348afd412a2686a264a108fe6bf9e30e289d5947..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Versatile-Diffusion/lib/log_service.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import timeit
-import numpy as np
-import os
-import os.path as osp
-import shutil
-import copy
-import torch
-import torch.nn as nn
-import torch.distributed as dist
-from .cfg_holder import cfg_unique_holder as cfguh
-from . import sync
-
-print_console_local_rank0_only = True
-
-def print_log(*console_info):
- local_rank = sync.get_rank('local')
- if print_console_local_rank0_only and (local_rank!=0):
- return
- console_info = [str(i) for i in console_info]
- console_info = ' '.join(console_info)
- print(console_info)
-
- if local_rank!=0:
- return
-
- log_file = None
- try:
- log_file = cfguh().cfg.train.log_file
- except:
- try:
- log_file = cfguh().cfg.eval.log_file
- except:
- return
- if log_file is not None:
- with open(log_file, 'a') as f:
- f.write(console_info + '\n')
-
-class distributed_log_manager(object):
- def __init__(self):
- self.sum = {}
- self.cnt = {}
- self.time_check = timeit.default_timer()
-
- cfgt = cfguh().cfg.train
- use_tensorboard = getattr(cfgt, 'log_tensorboard', False)
-
- self.ddp = sync.is_ddp()
- self.rank = sync.get_rank('local')
- self.world_size = sync.get_world_size('local')
-
- self.tb = None
- if use_tensorboard and (self.rank==0):
- import tensorboardX
- monitoring_dir = osp.join(cfguh().cfg.train.log_dir, 'tensorboard')
- self.tb = tensorboardX.SummaryWriter(osp.join(monitoring_dir))
-
- def accumulate(self, n, **data):
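- # Accumulate n-weighted sums and counts per item so that means can be
- # computed later (and averaged across DDP ranks in get_mean_value_dict).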
- if n < 0:
- raise ValueError
-
- for itemn, di in data.items():
- if itemn in self.sum:
- self.sum[itemn] += di * n
- self.cnt[itemn] += n
- else:
- self.sum[itemn] = di * n
- self.cnt[itemn] = n
-
- def get_mean_value_dict(self):
- value_gather = [
- self.sum[itemn]/self.cnt[itemn] \
- for itemn in sorted(self.sum.keys()) ]
-
- value_gather_tensor = torch.FloatTensor(value_gather).to(self.rank)
- if self.ddp:
- dist.all_reduce(value_gather_tensor, op=dist.ReduceOp.SUM)
- value_gather_tensor /= self.world_size
-
- mean = {}
- for idx, itemn in enumerate(sorted(self.sum.keys())):
- mean[itemn] = value_gather_tensor[idx].item()
- return mean
-
- def tensorboard_log(self, step, data, mode='train', **extra):
- if self.tb is None:
- return
- if mode == 'train':
- self.tb.add_scalar('other/epochn', extra['epochn'], step)
- if 'lr' in extra:
- self.tb.add_scalar('other/lr', extra['lr'], step)
- for itemn, di in data.items():
- if itemn.find('loss') == 0:
- self.tb.add_scalar('loss/'+itemn, di, step)
- elif itemn == 'Loss':
- self.tb.add_scalar('Loss', di, step)
- else:
- self.tb.add_scalar('other/'+itemn, di, step)
- elif mode == 'eval':
- if isinstance(data, dict):
- for itemn, di in data.items():
- self.tb.add_scalar('eval/'+itemn, di, step)
- else:
- self.tb.add_scalar('eval', data, step)
- return
-
- def train_summary(self, itern, epochn, samplen, lr, tbstep=None):
- console_info = [
- 'Iter:{}'.format(itern),
- 'Epoch:{}'.format(epochn),
- 'Sample:{}'.format(samplen),]
-
- if lr is not None:
- console_info += ['LR:{:.4E}'.format(lr)]
-
- mean = self.get_mean_value_dict()
-
- tbstep = itern if tbstep is None else tbstep
- self.tensorboard_log(
- tbstep, mean, mode='train',
- itern=itern, epochn=epochn, lr=lr)
-
- loss = mean.pop('Loss')
- mean_info = ['Loss:{:.4f}'.format(loss)] + [
- '{}:{:.4f}'.format(itemn, mean[itemn]) \
- for itemn in sorted(mean.keys()) \
- if itemn.find('loss') == 0
- ]
- console_info += mean_info
- console_info.append('Time:{:.2f}s'.format(
- timeit.default_timer() - self.time_check))
- return ' , '.join(console_info)
-
- def clear(self):
- self.sum = {}
- self.cnt = {}
- self.time_check = timeit.default_timer()
-
- def tensorboard_close(self):
- if self.tb is not None:
- self.tb.close()
-
-# ----- also include some small utils -----
-
-def torch_to_numpy(*argv):
- if len(argv) > 1:
- data = list(argv)
- else:
- data = argv[0]
-
- if isinstance(data, torch.Tensor):
- return data.to('cpu').detach().numpy()
-
- elif isinstance(data, (list, tuple)):
- out = []
- for di in data:
- out.append(torch_to_numpy(di))
- return out
-
- elif isinstance(data, dict):
- out = {}
- for ni, di in data.items():
- out[ni] = torch_to_numpy(di)
- return out
-
- else:
- return data
diff --git a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/multi_dataset_dataloader.py b/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/multi_dataset_dataloader.py
deleted file mode 100644
index 3487e4641067b4986c793f2ba190c15166199614..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/multi_dataset_dataloader.py
+++ /dev/null
@@ -1,224 +0,0 @@
-import copy
-import logging
-import numpy as np
-import operator
-import torch.utils.data
-import json
-from detectron2.utils.comm import get_world_size
-
-from detectron2.data import samplers
-from torch.utils.data.sampler import BatchSampler, Sampler
-from detectron2.data.common import AspectRatioGroupedDataset, DatasetFromList, MapDataset
-from detectron2.data.dataset_mapper import DatasetMapper
-from detectron2.data.build import worker_init_reset_seed, print_instances_class_histogram
-from detectron2.data.build import filter_images_with_only_crowd_annotations
-from detectron2.data.build import filter_images_with_few_keypoints
-from detectron2.data.build import check_metadata_consistency
-from detectron2.data.catalog import MetadataCatalog, DatasetCatalog
-from detectron2.utils import comm
-import itertools
-import math
-from collections import defaultdict
-from typing import Optional
-
-
-def get_detection_dataset_dicts_with_source(
- dataset_names, filter_empty=True, min_keypoints=0, proposal_files=None
-):
- """
- Similar to detectron2.data.build.get_detection_dataset_dicts, but also returns the dataset
- source.
- """
- assert len(dataset_names)
- dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
- for dataset_name, dicts in zip(dataset_names, dataset_dicts):
- assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
-
- for source_id, (dataset_name, dicts) in \
- enumerate(zip(dataset_names, dataset_dicts)):
- assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
- for d in dicts:
- d['dataset_source'] = source_id
-
- if "annotations" in dicts[0]:
- try:
- class_names = MetadataCatalog.get(dataset_name).thing_classes
- check_metadata_consistency("thing_classes", dataset_name)
- print_instances_class_histogram(dicts, class_names)
- except AttributeError: # class names are not available for this dataset
- pass
-
- assert proposal_files is None
-
- dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts))
-
- has_instances = "annotations" in dataset_dicts[0]
- if filter_empty and has_instances:
- dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts)
- if min_keypoints > 0 and has_instances:
- dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints)
-
- return dataset_dicts
-
-def build_multi_dataset_train_loader(cfg, mapper=None):
- """
- Modified from detectron2.data.build.build_custom_train_loader, but supports
- different samplers
- """
- num_workers = get_world_size()
- images_per_batch = cfg.SOLVER.IMS_PER_BATCH
- assert (
- images_per_batch % num_workers == 0
- ), "SOLVER.IMS_PER_BATCH ({}) must be divisible by the number of workers ({}).".format(
- images_per_batch, num_workers
- )
- assert (
- images_per_batch >= num_workers
- ), "SOLVER.IMS_PER_BATCH ({}) must be larger than the number of workers ({}).".format(
- images_per_batch, num_workers
- )
- images_per_worker = images_per_batch // num_workers
-
- dataset_dicts = get_detection_dataset_dicts_with_source(
- cfg.DATASETS.TRAIN,
- filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
- min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
- if cfg.MODEL.KEYPOINT_ON
- else 0,
- proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
- )
- sizes = [0 for _ in range(len(cfg.DATASETS.TRAIN))]
- for d in dataset_dicts:
- sizes[d['dataset_source']] += 1
- # print('sizes', sizes)
- dataset = DatasetFromList(dataset_dicts, copy=False)
- if mapper is None:
- mapper = DatasetMapper(cfg, True)
- dataset = MapDataset(dataset, mapper)
-
- sampler_name = cfg.DATALOADER.SAMPLER_TRAIN
- logger = logging.getLogger(__name__)
- logger.info("Using training sampler {}".format(sampler_name))
- if sampler_name == 'MultiDatasetSampler':
- sampler = MultiDatasetSampler(cfg, dataset_dicts, sizes)
- else:
- raise ValueError("Unknown training sampler: {}".format(sampler_name))
-
- assert cfg.DATALOADER.ASPECT_RATIO_GROUPING
-
- data_loader = torch.utils.data.DataLoader(
- dataset,
- sampler=sampler,
- num_workers=cfg.DATALOADER.NUM_WORKERS,
- batch_sampler=None,
- collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements
- worker_init_fn=worker_init_reset_seed,
- ) # yield individual mapped dict
-
- data_loader = MDAspectRatioGroupedDataset(
- data_loader, images_per_worker, num_datasets=len(sizes))
-
- return data_loader
-
-
-class MultiDatasetSampler(Sampler):
- def __init__(self, cfg, dataset_dicts, sizes, seed: Optional[int] = None):
- """
- """
- self.sizes = sizes
- self.sample_epoch_size = cfg.MULTI_DATASET.SAMPLE_EPOCH_SIZE
- assert self.sample_epoch_size % cfg.SOLVER.IMS_PER_BATCH == 0
- print('self.epoch_size', self.sample_epoch_size)
- if seed is None:
- seed = comm.shared_random_seed()
- self._seed = int(seed)
-
- self._rank = comm.get_rank()
- self._world_size = comm.get_world_size()
- self._batch_size = cfg.SOLVER.IMS_PER_BATCH
- self._ims_per_gpu = self._batch_size // self._world_size
-
- self.dataset_ids = torch.tensor(
- [d['dataset_source'] for d in dataset_dicts], dtype=torch.long)
- st = 0
-
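- # Final per-image weight = a dataset-ratio term (rebalancing dataset sizes
- # toward MULTI_DATASET.DATA_RATIO) times an optional class-aware sampling
- # (CAS) factor that upweights images containing rare categories.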
- dataset_ratio = cfg.MULTI_DATASET.DATA_RATIO
- assert len(dataset_ratio) == len(sizes), \
- 'length of dataset ratio {} should be equal to the number of datasets {}'.format(
- len(dataset_ratio), len(sizes)
- )
- dataset_weight = [torch.ones(s) * max(sizes) / s * r / sum(dataset_ratio) \
- for i, (r, s) in enumerate(zip(dataset_ratio, sizes))]
- st = 0
- cas_factors = []
- for i, s in enumerate(sizes):
- if cfg.MULTI_DATASET.USE_CAS[i]:
- cas_factor = self._get_class_balance_factor_per_dataset(
- dataset_dicts[st: st + s],
- l=cfg.MULTI_DATASET.CAS_LAMBDA)
- cas_factor = cas_factor * (s / cas_factor.sum())
- else:
- cas_factor = torch.ones(s)
- cas_factors.append(cas_factor)
- st = st + s
- cas_factors = torch.cat(cas_factors)
- dataset_weight = torch.cat(dataset_weight)
- self.weights = dataset_weight * cas_factors
-
-
- def __iter__(self):
- start = self._rank
- yield from itertools.islice(
- self._infinite_indices(), start, None, self._world_size)
-
-
- def _infinite_indices(self):
- g = torch.Generator()
- g.manual_seed(self._seed)
- while True:
- ids = torch.multinomial(
- self.weights, self.sample_epoch_size, generator=g,
- replacement=True)
- # nums = [(self.dataset_ids[ids] == i).sum().int().item() \
- # for i in range(len(self.sizes))]
- # print('_rank, len, nums, self.dataset_ids[ids[:10]], ',
- # self._rank, len(ids), nums, self.dataset_ids[ids[:10]],
- # flush=True)
- yield from ids
-
-
- def _get_class_balance_factor_per_dataset(self, dataset_dicts, l=1.):
- ret = []
- category_freq = defaultdict(int)
- for dataset_dict in dataset_dicts: # For each image (without repeats)
- cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]}
- for cat_id in cat_ids:
- category_freq[cat_id] += 1
- for i, dataset_dict in enumerate(dataset_dicts):
- cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]}
- ret.append(sum(
- [1. / (category_freq[cat_id] ** l) for cat_id in cat_ids]))
- return torch.tensor(ret).float()
-
-class MDAspectRatioGroupedDataset(torch.utils.data.IterableDataset):
- """
- """
-
- def __init__(self, dataset, batch_size, num_datasets):
- """
- """
- self.dataset = dataset
- self.batch_size = batch_size
- self._buckets = [[] for _ in range(2 * num_datasets)]
-
-
- def __iter__(self):
- for d in self.dataset:
- w, h = d["width"], d["height"]
- aspect_ratio_bucket_id = 0 if w > h else 1
- bucket_id = d['dataset_source'] * 2 + aspect_ratio_bucket_id
- bucket = self._buckets[bucket_id]
- bucket.append(d)
- if len(bucket) == self.batch_size:
- yield bucket[:]
- del bucket[:]
diff --git a/spaces/shogi880/ChatGPT-StableDiffusion-CharacterDesign/README.md b/spaces/shogi880/ChatGPT-StableDiffusion-CharacterDesign/README.md
deleted file mode 100644
index 81769dc41400ec0a92e54d9895b2b0795fed5e25..0000000000000000000000000000000000000000
--- a/spaces/shogi880/ChatGPT-StableDiffusion-CharacterDesign/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ChatGPT StableDiffusion CharacterDesign
-emoji: 😻
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/silk-road/ChatHaruhi-Needy/Needy-Haruhi/src/Agent.py b/spaces/silk-road/ChatHaruhi-Needy/Needy-Haruhi/src/Agent.py
deleted file mode 100644
index eb97a7afc32b091f4996ac3a3ed1562e402af6c0..0000000000000000000000000000000000000000
--- a/spaces/silk-road/ChatHaruhi-Needy/Needy-Haruhi/src/Agent.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# Implemented by 李鲁鲁
-#
-# ChatHaruhi X NEEDY GIRL OVERDOSE (主播女孩重度依赖)
-#
-# Parent project page: https://github.com/LC1332/Chat-Haruhi-Suzumiya
-#
-# This module implements an Agent class.
-#
-# The agent has several attributes (currently Stress, Darkness, and Affection)
-#
-# that can be read and written like agent["Stress"].
-#
-# Attributes are stored in the self.attributes dict, and the [] operators are
-# overloaded so the agent behaves like a dictionary.
-#
-# There is also a member function apply_attribute_change(attribute_change):
-#
-# attribute_change is a dict such as {"Darkness": -1, "Stress": 1}; values for
-# keys that exist in self.attributes are accumulated onto them, otherwise a
-# warning is reported and the key is skipped.
-
-import json
-
-class Agent:
-
- def __init__(self, attributes_str=None):
- if attributes_str:
- attributes = json.loads(attributes_str)
- else:
- attributes = {
- "Stress": 0,
- "Darkness": 0,
- "Affection": 0
- }
- self.attributes = attributes
-
- def save_to_str(self):
- return json.dumps(self.attributes, ensure_ascii=False)
-
- def __getitem__(self, key):
- return self.attributes.get(key)
-
- def __setitem__(self, key, value):
- self.attributes[key] = value
-
- def apply_attribute_change(self, attribute_change):
- for key, value in attribute_change.items():
- if key in self.attributes:
- self.attributes[key] += value
- if self.attributes[key] < 0:
- self.attributes[key] = 0
- else:
- print(f"Warning: {key} not in attributes, skipping")
-
- def in_condition(self, condition):
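- # condition is a tuple (attribute_name, min_value, max_value); the range
- # check below is inclusive on both ends.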
- if condition is None:
- return True
- if condition[0] in self.attributes:
- return self.attributes[condition[0]] >= condition[1] and self.attributes[condition[0]] <= condition[2]
- else:
- return False
-
-if __name__ == "__main__":
- # Example usage
- agent = Agent()
- print(agent["Stress"]) # prints 0
- agent["Stress"] += 1
- print(agent["Stress"]) # prints 1
- agent.apply_attribute_change({"Darkness": -1, "Stress": 1})
- print(agent["Darkness"]) # prints 0 (negative results are clamped to 0)
- print(agent["Stress"]) # prints 2
- agent.apply_attribute_change({"Nonexistent": 5}) # prints Warning: Nonexistent not in attributes, skipping
-
- condition = ('Stress', 0, 19)
-
- print(agent.in_condition(condition)) # True: Stress is 2, within [0, 19]
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Build and Decorate Your Own Hotel with Design Hotel My Hotel Home Mod APK.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Build and Decorate Your Own Hotel with Design Hotel My Hotel Home Mod APK.md
deleted file mode 100644
index a2025c8b3b4857f9b0076446844ab186e8e3910e..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Build and Decorate Your Own Hotel with Design Hotel My Hotel Home Mod APK.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-Design Hotel: My Hotel Home Mod APK - A Fun and Creative Game for Hotel Lovers
-If you are a fan of hotel management games, you will love Design Hotel: My Hotel Home. This is a casual simulation game where you can design and run your own hotel, from the lobby to the rooms, from the staff to the guests. You can customize your hotel with various themes and decorations, meet and interact with different characters, complete quests and challenges, and enjoy the relaxing and soothing soundtrack and graphics. In this article, we will tell you more about this game and how you can download Design Hotel: My Hotel Home Mod APK to get unlimited money and no ads.
- What is Design Hotel: My Hotel Home?
-Design Hotel: My Hotel Home is a game developed by SP COMES, a Korean studio that specializes in casual simulation games. The game was released in March 2021 and has received positive reviews from players who enjoy its simple yet engaging gameplay.
-design hotel my hotel game mod apk Download Zip ○○○ https://ssurll.com/2uNUYz
- A casual simulation game where you can design and manage your own hotel
-In Design Hotel: My Hotel Home, you are the owner of a hotel that needs some renovation and improvement. You can choose from different styles and themes for your hotel, such as modern, classic, vintage, or exotic. You can also decorate your hotel with various items, such as furniture, plants, paintings, lamps, rugs, and more. You can even change the color of the walls, floors, ceilings, and curtains.
- Features of the game
-Design Hotel: My Hotel Home has many features that make it fun and enjoyable to play. Here are some of them:
- Customize your hotel with various themes and decorations
-You can express your creativity and style by choosing from hundreds of options for your hotel design. You can mix and match different elements to create your own unique look. You can also upgrade your hotel facilities, such as the reception desk, the elevator, the restaurant, the spa, and more.
- Meet and interact with different guests and staff members
-You can meet various characters in your hotel, such as guests, staff members, celebrities, influencers, reporters, etc. You can chat with them, listen to their stories, help them with their requests, or even date them. You can also hire new staff members to improve your hotel service.
- Complete quests and challenges to earn rewards and unlock new items
-You can complete various quests and challenges in the game, such as welcoming new guests, serving food and drinks, cleaning rooms, hosting events, etc. By doing so, you can earn money, stars, diamonds, hearts, and other rewards that you can use to buy new items or upgrade your hotel. You can also unlock new themes and decorations as you progress in the game.
- Enjoy the relaxing and soothing soundtrack and graphics
-Design Hotel: My Hotel Home has a relaxing and soothing soundtrack that matches the mood of the game. You can listen to various genres of music, such as pop, jazz, classical, etc. You can also enjoy the beautiful and colorful graphics of the game, which create a cozy and inviting atmosphere for your hotel. You can admire the details of your hotel design, the animations of the characters, and the scenery of the city.
- Why download Design Hotel: My Hotel Home Mod APK?
-Design Hotel: My Hotel Home is a free game that you can download from the Google Play Store or the App Store. However, if you want to enjoy some extra benefits and features, you might want to download Design Hotel: My Hotel Home Mod APK instead. This is a modified version of the game that gives you unlimited money and no ads.
- Benefits of the mod version
-Here are some of the benefits of downloading Design Hotel: My Hotel Home Mod APK:
- Unlimited money to spend on your hotel upgrades and purchases
-With Design Hotel: My Hotel Home Mod APK, you don't have to worry about running out of money in the game. You can spend as much as you want on your hotel design, facilities, staff, and guests. You can buy any item or theme that you like, without waiting for hours or days to earn enough money. You can also upgrade your hotel to the highest level possible, without any limitations.
- No ads to interrupt your gameplay experience
-With Design Hotel: My Hotel Home Mod APK, you don't have to deal with annoying ads that pop up every few minutes or after every action. You can enjoy your gameplay without any interruptions or distractions. You can also save your data and battery life by not having to watch or download ads.
- Easy and safe installation process
-Downloading and installing Design Hotel: My Hotel Home Mod APK is very easy and safe. You don't need to root or jailbreak your device, or use any third-party apps or tools. You just need to follow a few simple steps that we will explain in the next section.
- How to download and install Design Hotel: My Hotel Home Mod APK?
-If you are interested in downloading and installing Design Hotel: My Hotel Home Mod APK on your Android device, here is a step-by-step guide for you:
- Step-by-step guide for Android devices
-
-Download the mod APK file from a trusted source like HappyMod. You can use this link to access the download page: Design Hotel: My Hotel Home Mod APK Download .
-Enable unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-Locate and install the mod APK file on your device. You can use a file manager app or your browser to find the downloaded file in your downloads folder. Tap on it and follow the instructions to install it.
-Launch the game and enjoy your modded hotel adventure.
-
- Conclusion
-Design Hotel: My Hotel Home is a fun and creative game for hotel lovers who want to design and manage their own hotel. You can customize your hotel with various themes and decorations, meet and interact with different guests and staff members, complete quests and challenges, and enjoy the relaxing and soothing soundtrack and graphics. If you want to get unlimited money and no ads in the game, you can download Design Hotel: My Hotel Home Mod APK from HappyMod. This is a modified version of the game that gives you these benefits and features. You can easily and safely install it on your Android device by following our step-by-step guide. Download Design Hotel: My Hotel Home Mod APK today and start your hotel journey!
- Frequently Asked Questions
-
-Is Design Hotel: My Hotel Home Mod APK safe to use? Yes, Design Hotel: My Hotel Home Mod APK is safe to use. It does not contain any viruses, malware, or spyware that could harm your device or data. It also does not require any root or jailbreak access, or any third-party apps or tools.
-Is Design Hotel: My Hotel Home Mod APK compatible with my device? Design Hotel: My Hotel Home Mod APK is compatible with most Android devices that run on Android 5.0 or higher. However, some devices may not support some features or functions of the game due to different specifications or settings.
-Can I play Design Hotel: My Hotel Home Mod APK online with other players? No, Design Hotel: My Hotel Home Mod APK is an offline game that does not require an internet connection to play. You can play it anytime and anywhere without worrying about data usage or connection issues. However, you can still share your hotel design and progress with your friends on social media platforms, such as Facebook, Instagram, or Twitter.
-How can I update Design Hotel: My Hotel Home Mod APK? To update Design Hotel: My Hotel Home Mod APK, you need to download and install the latest version of the mod APK file from HappyMod. You can check for updates regularly on the download page or turn on the notifications to get alerted when a new version is available. You can also follow the official Facebook page of the game to get the latest news and updates.
-What if I encounter any problems or errors while playing Design Hotel: My Hotel Home Mod APK? If you encounter any problems or errors while playing Design Hotel: My Hotel Home Mod APK, you can try the following solutions:
-
-Restart your device and launch the game again.
-Clear the cache and data of the game from your device settings.
-Reinstall the mod APK file from HappyMod.
-Contact the developer of the game via email or Facebook for further assistance.
-
-Can I play Design Hotel: My Hotel Home Mod APK on PC? Yes, you can play Design Hotel: My Hotel Home Mod APK on PC using an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. Some of the popular Android emulators are BlueStacks, NoxPlayer, LDPlayer, etc. You can download and install any of these emulators on your PC, then download and install Design Hotel: My Hotel Home Mod APK from HappyMod. You can then enjoy playing the game on a bigger screen with better controls and performance.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Free Minecraft Online Everything You Need to Know Before You Start.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Free Minecraft Online Everything You Need to Know Before You Start.md
deleted file mode 100644
index c8bc275b6112bf22d98ce947c5b37be1710acc6c..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Free Minecraft Online Everything You Need to Know Before You Start.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-How to Download Free Minecraft Online
-If you are looking for a fun and creative game that you can play online for free, you might want to try Minecraft. Minecraft is one of the most popular games in the world, with millions of fans who enjoy building, exploring, and surviving in a sandbox world. In this article, we will show you how to download free Minecraft online for different devices and platforms, and how to play with your friends on free servers. We will also give you some tips on how to play safely and responsibly.
- What is Minecraft?
-Minecraft is a game that lets you create your own unique world using blocks. You can mine resources, craft tools and items, build structures and machines, fight enemies, and explore infinite worlds. You can play in different modes, such as survival mode where you have to gather resources and fend off dangers, creative mode where you have unlimited resources and can build anything you want, adventure mode where you can play custom maps created by other players, or spectator mode where you can fly around and watch others play.
- Why is Minecraft Popular?
-Minecraft is popular because it offers a lot of freedom and creativity to its players. You can build anything you can imagine, from simple houses to complex cities, from pixel art to sculptures, from farms to factories, from puzzles to games. You can also explore different biomes, such as forests, deserts, oceans, mountains, caves, and more. You can discover secrets, treasures, and mysteries along the way. You can also play with other players online, either cooperatively or competitively, and share your creations and experiences with them. You can also customize your game with mods, skins, textures, and plugins that add new features and content to the game.
- What are the Benefits of Playing Minecraft Online for Free?
-Playing Minecraft online for free has many benefits. First of all, you can save money by not having to buy the game or pay for a subscription. You can still enjoy the core features and gameplay of Minecraft without spending a dime. Second, you can access multiplayer modes that let you play with other players online. You can join or create servers that host different types of games, such as survival, creative, mini-games, PvP, factions, skyblock, and more. You can also chat and interact with other players on these servers. Third, you can enjoy updates and mods that add new content and features to the game. You can download and install mods that change the gameplay, add new blocks and items, enhance the graphics, and more. You can also update your game to the latest version that includes new features and bug fixes.
- Where to Download Free Minecraft Online
-There are different ways to download free Minecraft online, depending on the device and platform you want to use. Here are some of the options you can try:
- Download Free Minecraft Online for PC
-Minecraft Classic
-If you want to play the original version of Minecraft that was released in 2009, you can try Minecraft Classic. This is a web-based version of the game that you can play on your browser for free. You don't need to download or install anything. You just need to visit the official site and start playing. However, keep in mind that this version is very limited and outdated compared to the current version of Minecraft. It only has 32 blocks, no enemies, no crafting, no multiplayer, and no saving. It is mainly for nostalgia purposes or for testing the game.
- Minecraft Trial
-If you want to play a more updated version of Minecraft for free on your PC, you can try Minecraft Trial. This is a downloadable version of the game that you can get from the official site. You need to have a Microsoft account to download and play it. You also need to have a Windows 10 device that meets the minimum system requirements. The trial version lets you play for 90 minutes in survival mode on a single world. You can explore, build, craft, fight, and mine as much as you want within the time limit. However, you cannot save your progress or join multiplayer servers.
- Minecraft Server Software
-If you want to play multiplayer mode for free on your PC, you can try Minecraft Server Software. This is a program that lets you run your own server for Minecraft on your PC. You can download it from the official site for Windows, Linux, or Mac devices. You need to have Java installed on your device to run it. You also need to have a good internet connection and enough RAM to host a server. The server software lets you create and customize your own world and settings for multiplayer mode. You can invite other players to join your server by giving them your IP address or using a service like Hamachi. However, you cannot join other servers or play single-player mode with this option.
- Download Free Minecraft Online for Mobile Devices
-Minecraft Trial for Android and iOS
-If you want to play Minecraft for free on your mobile device, you can try Minecraft Trial for Android or iOS devices. This is a downloadable version of the game that you can get from the Google Play Store or the App Store. You need to have a compatible device that meets the minimum system requirements. The trial version lets you play for 90 minutes in survival mode on a single world. You can explore, build, craft, fight, and mine as much as you want within the time limit. However, you cannot save your progress or join multiplayer servers.
- Other Options for Android Devices
-If you want to play other free versions of Minecraft for your Android device, you can try some third-party sites or apps that offer them. However, be careful when downloading and installing these versions, as they may contain viruses, malware, or unwanted ads. They may also not be updated or compatible with the official version of Minecraft. Some of the sites or apps that you can try are APKPure or Aptoide. These are platforms that let you download and install free apps and games for your Android device. You can search for Minecraft or similar games on these platforms and download them to your device. However, you may not be able to access all the features or content of the official version of Minecraft with these options.
- How to Play Free Minecraft Online with Friends
-Playing free Minecraft online with friends is possible and fun, but it requires some steps and precautions. You need to find a way to join or create a server that hosts the game online, and you need to play safely and responsibly on these servers. Here are some tips on how to do that:
- How to Join a Free Minecraft Online Server
-If you want to join a free Minecraft online server, you need to find one that suits your preferences and needs. There are many sites that list and rank free servers for Minecraft online, such as Minehut or Aternos. These sites let you browse and search for servers by categories, such as game mode, theme, language, region, and more. You can also read reviews and ratings from other players about these servers. Once you find a server that you like, you need to copy its IP address and port number, and paste it into the multiplayer menu of your game. Then, you can join the server and start playing with other players online.
- How to Create a Free Minecraft Online Server
-If you want to create a free Minecraft online server, you need to have a way to host the game online and invite other players to join it. There are two main ways to do this: using a site that offers free server hosting, or using the Minecraft Server Software. The first option is easier and faster, but it may have some limitations and drawbacks, such as limited slots, resources, uptime, customization, and support. The second option is more complicated and time-consuming, but it may give you more control and flexibility over your server.
- To use the first option, visit a site that offers free server hosting for Minecraft online, such as Minehut or Aternos. These sites let you create and manage your own server for free through their web interface. You can choose your server name, game mode, world type, plugins, settings, and more, and invite other players by giving them your server address or URL. Keep in mind that these sites come with the restrictions mentioned above, and you may need to wait in a queue or watch ads to start your server.
- To use the second option, download and run the Minecraft Server Software on your PC. You can get it from the official site for Windows, Linux, or Mac devices, and you need Java installed to run it, plus a good internet connection and enough RAM to host a server. You configure the server settings and properties with a text editor or a GUI tool, and you need to port forward your router, or use a service like Hamachi, so that other players can join; they connect using your IP address. Keep in mind that this option requires more technical skill and resources to set up and maintain.
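-
-As a rough illustration of the second option, the commands below sketch a first launch of the server software on a PC that already has Java installed. The jar name, memory sizes, and property values are assumptions for the example, not fixed requirements; adjust them to your own download and hardware.
-
-```bash
-# Sketch only: first launch of the Minecraft server jar (assumed name: server.jar).
-java -Xmx2G -Xms2G -jar server.jar nogui   # first run writes eula.txt and stops
-sed -i 's/eula=false/eula=true/' eula.txt  # accept the EULA so the server can start
-java -Xmx2G -Xms2G -jar server.jar nogui   # second run generates the world and listens on port 25565
-```
-
-A few lines in the generated server.properties file are worth checking before inviting friends: server-port (25565 by default, and the port to forward on your router), gamemode (survival, creative, adventure, or spectator), and max-players (the slot limit). Players then join by entering your public IP and port, for example 203.0.113.5:25565, in the multiplayer menu.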
- How to Play Safely and Responsibly on Free Minecraft Online Servers
-Playing on free Minecraft online servers can be fun and exciting, but it can also pose some risks and challenges. You need to be careful and respectful when playing on these servers, as you may encounter some problems or issues, such as hackers, griefers, scammers, malware, or inappropriate content. Here are some tips and advice on how to play safely and responsibly on free Minecraft online servers:
-
-Follow the rules and guidelines of the server you are playing on. Each server may have its own rules and expectations for its players, such as what you can or cannot do, say, or build on the server. You can usually find these rules on the server's website, forum, or chat. Respect these rules and follow them accordingly. If you break the rules, you may face consequences, such as warnings, kicks, bans, or reports.
-Respect other players and their creations on the server. Don't be rude, mean, or offensive to other players on the server. Don't harass, bully, or troll other players on the server. Don't grief, steal, or destroy other players' creations on the server. Don't cheat, hack, or exploit glitches on the server. Be friendly, helpful, and cooperative with other players on the server.
-Avoid scams or malware on the server. Don't click on suspicious links or download unknown files from the server or its players. Don't give out your personal or account information to anyone on the server. Don't accept or send any payments or donations to anyone on the server. Don't install any mods or plugins that are not verified or trusted by the server or its players.
-Use parental controls or filters if you are playing with children or minors on the server. Some servers may have content or language that is not suitable for children or minors. You can use parental controls or filters to block or limit access to these servers or their features. You can also monitor or supervise your children's or minors' online activity and behavior when playing on these servers.
-
- Conclusion
-Minecraft is a great game that you can play online for free with different devices and platforms. You can download free versions of Minecraft online from various sources and enjoy its features and gameplay. You can also play with your friends on free servers online and have fun together. However, you need to be careful and responsible when playing on these servers and follow some tips and advice to play safely and respectfully.
- If you are interested in playing free Minecraft online, why not give it a try? You might discover a new world of creativity and adventure that you will love.
- FAQs
-
-Q: Is playing free Minecraft online legal?
-A: Playing free Minecraft online is not illegal per se, but it may violate the terms of service of the official Minecraft game. If you want to play free Minecraft online legally, you should use the official sources provided by Mojang Studios, such as Minecraft Classic or Minecraft Trial.
-Q: Is playing free Minecraft online safe?
-A: Playing free Minecraft online is not completely safe, as there may be some risks and dangers involved, such as hackers, griefers, scammers, malware, or inappropriate content. You should be careful and responsible when playing on free servers online and follow some tips and advice to play safely and respectfully.
-Q: How can I play free Minecraft online with mods?
-A: You can play free Minecraft online with mods by downloading and installing them on your device or server. However, you should only use mods that are verified or trusted by the official Minecraft site or community, as some mods may contain viruses, malware, or unwanted ads. You should also check the compatibility and requirements of the mods before using them, as some mods may not work with certain versions or platforms of the game.
-Q: How can I play free Minecraft online with cross-play?
-A: You can play free Minecraft online with cross-play by using the same version and platform of the game as your friends. For example, if you want to play with your friends who are using Windows 10 devices, you should use the Windows 10 version of the game. If you want to play with your friends who are using mobile devices, you should use the mobile version of the game. You should also join the same server or realm as your friends, or use a service like Xbox Live or Realms Plus to invite them to your game.
-Q: How can I play free Minecraft online without downloading anything?
-A: You can play free Minecraft online without downloading anything by using the web-based version of the game, such as Minecraft Classic. This version lets you play on your browser for free without installing anything. However, this version is very limited and outdated compared to the current version of the game. It only has 32 blocks, no enemies, no crafting, no multiplayer, and no saving. It is mainly for nostalgia purposes or for testing the game.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Vampire Survivors APK and Slay the Hordes of Hell.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Vampire Survivors APK and Slay the Hordes of Hell.md
deleted file mode 100644
index f59dc82b362e31d1765a02dfc5e2e946420205c0..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Vampire Survivors APK and Slay the Hordes of Hell.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-Vampire Survivors: A Gothic Horror Casual Game with Rogue-lite Elements
- If you are a fan of Castlevania, bullet hell, or rogue-lite games, you might want to check out Vampire Survivors, a gothic horror casual game with minimalistic gameplay and tons of fun. In this article, we will tell you what Vampire Survivors is, why you should play it, and how to download and play it. We will also share some tips and strategies to help you survive the endless waves of monsters that will try to kill you.
- What is Vampire Survivors?
- Vampire Survivors is a game developed and published by poncle, an indie game studio. It entered early access on Steam in December 2021 and was later released on Android devices. It is still in early access, which means that it is being updated and improved by the developers.
- The premise and the gameplay
- The premise of Vampire Survivors is simple: you are a vampire hunter who has to survive as long as possible against hordes of undead creatures that spawn every minute. You have to move around the map, collect gems, chests, and items, level up your character, and upgrade your weapons. You also have to avoid direct contact with enemies, as they can deal damage to you and drain your health.
- The gameplay of Vampire Survivors is also simple: you don't have to worry about aiming or shooting, as your character will automatically attack the nearest enemy on the screen. You only have to focus on positioning yourself strategically and dodging enemy attacks. The game has a top-down perspective and pixelated graphics that give it a retro feel.
- The characters and the weapons
- Vampire Survivors features 12 playable characters, each with their own unique starting weapon and passive ability. For example, Arca Ladonna starts with the Fire Wand, which fires at a random enemy, while Imelda Belpaese starts with the Magic Wand, which attacks the closest enemy. You can unlock more characters by completing certain achievements or by spending coins.
- The game also features over 40 different weapons, each with their own damage, range, fire rate, cooldown, and special effect. Some weapons can also evolve into more powerful versions if you pair them with certain passive items. For example, the Magic Wand evolves into the Holy Wand when paired with the Empty Tome. You can get new weapons by leveling up your character or by opening chests.
- The enemies and the challenges
- Vampire Survivors has a wide variety of enemies, ranging from bats and ghosts to werewolves and vampires. Some enemies have stronger variants that have more health, damage, speed, or special abilities. For example, some bats can explode on contact, while some vampires can teleport or summon minions. You have to learn the behaviors and weaknesses of each enemy type to survive longer.
- The game also has different challenges that increase the difficulty and the fun of each run. For example, some challenges can make enemies faster, stronger, or more numerous. Some challenges can also affect your character's stats or abilities. For example, some challenges can reduce your health or movement speed, while others can increase your fire rate or damage output.
- Why should you play Vampire Survivors?
- Vampire Survivors is a game that offers a lot of entertainment and satisfaction for a low price. Here are some reasons why you should play it:
- The addictive and satisfying combat loop
- Vampire Survivors has a simple but addictive combat loop that will keep you hooked for hours. The game is easy to pick up and play, but hard to master. You have to balance between killing enemies and collecting gems, chests, and items, while avoiding damage and death. You have to make quick decisions and use your skills and items wisely. You also have to adapt to the changing challenges and enemy types. The game rewards you with a satisfying sense of achievement and progression as you level up your character, unlock new weapons, and beat your high score.
- The variety and replayability of the game modes
- Vampire Survivors has four different game modes that offer different experiences and challenges. The game modes are:
-
-Survival Mode: This is the main mode of the game, where you have to survive as long as possible against endless waves of enemies. You can choose from three difficulty levels: Normal, Hard, and Nightmare. The higher the difficulty, the more enemies, challenges, and rewards you will encounter.
-Adventure Mode: This is a mode where you have to explore a randomly generated map and find the exit. You can encounter enemies, traps, secrets, and bosses along the way. You can also collect coins, keys, and items that can help you in your journey. You can choose from three map sizes: Small, Medium, and Large. The larger the map, the more time and resources you will need to complete it.
-Boss Rush Mode: This is a mode where you have to fight against a series of bosses without any breaks or items. You can choose from three boss sets: Easy, Medium, and Hard. The harder the set, the more bosses and health they will have. You can also choose from three lives: One Life, Three Lives, and Infinite Lives. The fewer lives you have, the more points you will earn.
-Custom Mode: This is a mode where you can create your own rules and settings for the game. You can customize the enemy types, the challenges, the items, the weapons, the map size, the difficulty level, and more. You can also save and load your custom modes for future use.
-
- The retro-inspired pixel art and sound design
- Vampire Survivors has a charming and nostalgic pixel art style that pays homage to the classic games of the 80s and 90s. The game has colorful and detailed graphics that create a contrast between the dark and gloomy atmosphere of the gothic horror theme and the bright and cheerful tone of the casual gameplay. The game also has smooth animations and dynamic lighting effects that enhance the visual appeal.
- The game also has a catchy and immersive sound design that complements the pixel art style. The game has an original soundtrack that consists of 10 tracks that range from upbeat and energetic to eerie and ominous. The game also has realistic sound effects that create a sense of immersion and feedback. You can hear the sounds of your weapons firing, enemies dying, chests opening, gems collecting, and more.
- How to download and play Vampire Survivors?
- Vampire Survivors is a game that is easy to download and play on various platforms. Here are some details on how to do it:
- The platforms and the requirements
- Vampire Survivors is available on Steam for Windows PC and on Google Play for Android devices. The game is compatible with most devices that meet the following minimum requirements:
-
-| Platform | Minimum Requirements |
-| --- | --- |
-| Steam (Windows PC) | OS: Windows 7 or later; Processor: Intel Core 2 Duo or equivalent; Memory: 2 GB RAM; Graphics: DirectX 9 compatible graphics card; Storage: 200 MB available space |
-| Google Play (Android) | OS: Android 4.4 or later; Processor: 1 GHz or faster; Memory: 1 GB RAM; Graphics: OpenGL ES 2.0 compatible graphics card; Storage: 100 MB available space |
-
- The sources and the links
- You can download Vampire Survivors from the following sources and links:
-
-Steam: You can buy Vampire Survivors for $1.99 USD on Steam by visiting this link: [Vampire Survivors on Steam]. You can also download a free demo version of the game by clicking on "Download Demo" on the same page.
-Google Play: You can download Vampire Survivors for free on Google Play by visiting this link: [Vampire Survivors - Apps on Google Play]. You can also watch a trailer of the game and read some user reviews on the same page.
-
- The tips and the strategies
- Vampire Survivors is a game that requires skill and strategy to survive longer and score higher. Here are some tips and strategies that can help you improve your performance:
-
-Choose your character and weapon wisely: Each character and weapon has its own strengths and weaknesses, so you should choose the ones that suit your playstyle and preference. For example, if you like to keep your distance from enemies, you might want to choose a character with a long-range weapon and a passive ability that boosts your movement speed or defense. On the other hand, if you like to get up close and personal with enemies, you might want to choose a character with a short-range weapon and a passive ability that increases your damage or health.
-Use your items effectively: Items are essential for surviving longer and dealing more damage in Vampire Survivors. You can find items by opening chests, killing enemies, or buying them from shops. Items can have various effects, such as healing you, boosting your stats, giving you special abilities, or modifying your weapons. You should use your items wisely and strategically, as some items have limited uses or durations. You should also try to combine items that synergize well with each other or with your character and weapon.
-Avoid unnecessary damage: One of the most important skills in Vampire Survivors is avoiding damage from enemies and traps. You should always be aware of your surroundings and the enemy patterns, and move accordingly. You should also use the dodge button to roll out of harm's way when necessary. You should avoid getting cornered or surrounded by enemies, as they can quickly overwhelm you. You should also avoid wasting your health on chests or shops that are not worth it.
-Manage your time and resources: Vampire Survivors is a game that tests your time management and resource management skills. You should always keep an eye on the timer, as it indicates when the next wave of enemies will spawn. You should try to kill as many enemies as possible before the timer runs out, as they will drop more gems, chests, and items. You should also collect as many gems as possible, as they are used to level up your character, buy items from shops, or unlock new characters and weapons. You should also spend your gems wisely, as they are limited and valuable.
-
- Conclusion
- Vampire Survivors is a gothic horror casual game with rogue-lite elements that offers a lot of fun and challenge for a low price. The game has simple but addictive gameplay, varied and replayable game modes, charming and nostalgic pixel art, and catchy and immersive sound design. The game is easy to download and play on Steam for Windows PC or on Google Play for Android devices. The game is also constantly being updated and improved by the developers.
- If you are looking for a game that will keep you entertained for hours with minimalistic but satisfying combat, Vampire Survivors is the game for you. Download it now and join the vampire hunting adventure!
- We hope you enjoyed this article about Vampire Survivors. If you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you!
- FAQs
- Here are some frequently asked questions about Vampire Survivors:
-
-Q: How long is Vampire Survivors?
-A: Vampire Survivors is a game that has no end, as it is based on how long you can survive against endless waves of enemies. However, the game has various achievements that you can complete to measure your progress and skill.
-Q: How many weapons are there in Vampire Survivors?
-A: Vampire Survivors has over 40 different weapons that you can unlock by leveling up your character or by opening chests. Some weapons can also evolve into more powerful versions if you pair them with certain items.
-Q: How do I unlock new characters in Vampire Survivors?
-A: Vampire Survivors has 12 playable characters that you can unlock by completing certain achievements or by spending coins. Coins are earned by playing the game or by watching ads.
-Q: Is Vampire Survivors multiplayer?
-A: Vampire Survivors is currently a single-player game, but the developers have stated that they plan to add multiplayer features in the future.
-Q: Is Vampire Survivors free?
-A: Vampire Survivors is free to download and play on Google Play for Android devices, but it costs $1.99 USD on Steam for Windows PC. The game also has some optional in-app purchases that can enhance your gameplay experience, such as removing ads, unlocking all characters and weapons, or getting more coins and gems.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dynamons World Mega Mod APK Hack The Ultimate Guide to Unlock All Features and Characters.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dynamons World Mega Mod APK Hack The Ultimate Guide to Unlock All Features and Characters.md
deleted file mode 100644
index f6404f611f2a4c643dc414faa77795fddbcc7c37..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dynamons World Mega Mod APK Hack The Ultimate Guide to Unlock All Features and Characters.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-Dynamons World Mega Mod APK Hack Download: A Complete Guide
-If you are a fan of RPG games that involve catching and training cute monsters, then you might have heard of Dynamons World. This game is a popular and addictive online game that lets you explore an open world, battle other players, and collect dozens of unique Dynamons. But what if you want to enjoy the game without any limitations or interruptions? That's where Dynamons World Mega Mod APK Hack comes in handy. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, download link, installation guide, and tips and tricks. Read on to find out more!
- What is Dynamons World?
-Dynamons World is a game developed by Azerion Casual that features:
-
-An exciting campaign that will take you through multiple challenges.
-Enjoyable online experiences.
-Dozens of unique Dynamons with varied powers and abilities.
-Useful items and boosters to make use of.
-Tactical turn-based battles with many strategic elements.
-Multiple areas on the maps for you to travel.
-Tons of new updates with interesting content to expect.
-Free to play.
-
-The game is available for Android, iOS, and web browsers. You can download it from the Google Play Store, the App Store, or play it online on CrazyGames. However, if you want to experience the game with more advantages and fun, you might want to try the Dynamons World Mega Mod APK Hack version.
- Why use Dynamons World Mega Mod APK Hack?
-Dynamons World Mega Mod APK Hack is a modified version of the original game that gives you access to some amazing features that are not available in the official version. These features include:
- Unlimited money
-With this mod, you will never run out of money in the game. You can use it to buy anything you want, such as items, boosters, skill cards, and more. You can also upgrade your Dynamons to make them stronger and more powerful.
- No ads
-One of the most annoying things about the original game is the ads that pop up every time you move or finish a battle. These ads can interrupt your gameplay and ruin your mood. With this mod, you can enjoy the game without any ads at all. You can play as long as you want without any distractions or delays.
- All Dynamons unlocked
-This mod also allows you to unlock all the Dynamons in the game. You don't have to catch them or wait for them to appear in certain areas. You can simply choose any Dynamon you want from the menu and add it to your team. You can also switch between different Dynamons anytime you want. This way, you can have more variety and options in your battles.
-If you are interested in trying this mod, you will need to follow these simple steps:
- Step 1: Download the APK file
-The first thing you need to do is to download the APK file of the mod from a reliable source. You can use this link to download it directly to your device. The file size is about 50 MB, so make sure you have enough storage space and a stable internet connection.
- Step 2: Enable unknown sources
-Before you can install the APK file, you need to enable the option of unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.
- Step 3: Install the APK file
-Once you have downloaded the APK file and enabled unknown sources, you can proceed to install the APK file. To do this, locate the file in your device's file manager and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for the process to finish.
- Step 4: Launch the game and enjoy
-After the installation is complete, you can launch the game from your app drawer or home screen. You will see the Dynamons World icon with a red M on it. Tap on it and start playing the game with all the mod features. You can also sign in with your Facebook account to save your progress and play online with other players.
- Tips and tricks for Dynamons World
-Now that you have installed the mod and started playing the game, you might want to know some tips and tricks to make your gameplay more enjoyable and successful. Here are some of them:
- Know your Dynamons types
-Dynamons are divided into six types: fire, water, plant, electric, wind, and earth. Each type has its own strengths and weaknesses against other types. For example, fire is strong against plant but weak against water. You can see the type chart below for more details:
-
-| Type | Strong against | Weak against |
-| --- | --- | --- |
-| Fire | Plant, Wind | Water, Earth |
-| Water | Fire, Earth | Plant, Electric |
-| Plant | Water, Electric | Fire, Wind |
-| Electric | Water, Wind | Plant, Earth |
-| Wind | Plant, Earth | Fire, Electric |
-| Earth | Fire, Electric | Water, Wind |
-
-You should use this information to your advantage when choosing your Dynamons and planning your attacks. You should also pay attention to the type of your opponent's Dynamons and switch yours accordingly.
- Watch your health and energy levels
-Each Dynamon has two bars: a green one that indicates its health and a blue one that indicates its energy. Health is the amount of damage a Dynamon can take before it faints. Energy is the amount of power a Dynamon can use to perform its attacks. Each attack has a different energy cost and effect. You should always keep an eye on both bars and use items or skills to restore them when needed. You should also avoid using attacks that consume more energy than you have or that have no effect on your opponent.
- Balance your team and level up your Dynamons
-You can have up to four Dynamons in your team at a time. You should try to balance your team with different types of Dynamons that can cover each other's weaknesses. You should also level up your Dynamons by battling other players or wild Dynamons. Leveling up will increase their stats and unlock new skills. You can also evolve some Dynamons into more powerful forms when they reach certain levels.
- Catch more Dynamons and use skill cards
-You can catch more Dynamons by using capture balls that you can buy from shops or find in chests. To catch a Dynamon, you need to weaken it first by attacking it until its health bar turns red. Then, you can throw a capture ball at it and hope for the best. The higher the level and rarity of the Dynamon, the harder it is to catch. You can also use skill cards to enhance your Dynamons' abilities or change their skills. Skill cards can be bought from shops or obtained from chests or quests.
- Conclusion
-Dynamons World is a fun and addictive game that will keep you entertained for hours. With Dynamons World Mega Mod APK Hack, you can enjoy the game even more with unlimited money, no ads, and all Dynamons unlocked. You can download this mod from the link provided in this article and follow the installation guide to get started. You can also use the tips and tricks we shared to improve your gameplay and become a master trainer of Dynamons.
- Frequently Asked Questions
-
-Q: Is Dynamons World Mega Mod APK Hack safe to use?
-A: Yes, this mod is safe to use as long as you download it from a trusted source and follow the installation guide. However, you should always be careful when installing any modded apps on your device and use them at your own risk.
-Q: Can I play Dynamons World online with this mod?
-A: Yes, you can play Dynamons World online with this mod. You can sign in with your Facebook account and battle other players from around the world. You can also join tournaments and events to win prizes and rewards.
-Q: How can I update Dynamons World Mega Mod APK Hack?
-A: To update this mod, you need to download the latest version of the APK file from the same source and install it over the existing one. You don't need to uninstall the previous version or lose your progress. However, you should always check the compatibility and stability of the new version before installing it.
-Q: What are the best Dynamons to use in Dynamons World?
-A: There is no definitive answer to this question, as different Dynamons have different strengths and weaknesses. However, some of the most popular and powerful Dynamons in the game are:
-
-Pyroblaze: A fire-type Dynamon that can deal massive damage with its fire attacks and burn its enemies.
-Aquarion: A water-type Dynamon that can heal itself and its allies with its water attacks and freeze its enemies.
-Florion: A plant-type Dynamon that can poison its enemies with its plant attacks and regenerate its health.
-Voltix: An electric-type Dynamon that can paralyze its enemies with its electric attacks and boost its speed.
-Aerodactyl: A wind-type Dynamon that can dodge attacks with its wind attacks and lower its enemies' defense.
-Terranox: An earth-type Dynamon that can shield itself and its allies with its earth attacks and stun its enemies.
-
-Q: Where can I find more information about Dynamons World?
-A: You can find more information about Dynamons World on its official website, Facebook page, or YouTube channel. You can also check out some online forums or blogs that discuss the game and share tips and tricks.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Epic Battles Online MOD APK 7.1 Whats New and How to Download.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Epic Battles Online MOD APK 7.1 Whats New and How to Download.md
deleted file mode 100644
index 68c3a5ef04f43b8f2b1a8796d41ca7183755f924..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Epic Battles Online MOD APK 7.1 Whats New and How to Download.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-Epic Battles Online Mod APK 7.1: A Fun and Addictive Strategy Game
-If you are looking for a fun and addictive strategy game that will challenge your skills and creativity, then you should try Epic Battles Online Mod APK 7.1. This is a game where you can create your own army of warriors and fight against other players online in various modes and levels. You can also unlock all the warriors, get unlimited money, and enjoy the game without any ads with this mod apk version.
-In this article, we will tell you everything you need to know about Epic Battles Online Mod APK 7.1, including its features, how to download and install it, tips and tricks for playing it, pros and cons, and FAQs. Let's get started!
- Features of Epic Battles Online Mod APK 7.1
-Epic Battles Online Mod APK 7.1 is a modified version of the original game that gives you access to some amazing features that will enhance your gaming experience. Here are some of them:
- Unlocked All Warriors
-One of the best features of Epic Battles Online Mod APK 7.1 is that it unlocks all the warriors in the game for you. You don't have to spend any money or time to unlock them, as they are already available for you to use from the start. You can choose from over 100 different warriors, each with their own unique abilities, stats, and appearance. You can mix and match them to create your own army of epic warriors that will suit your style and strategy.
-Another great feature of Epic Battles Online Mod APK 7.1 is that it gives you unlimited money in the game. You can use this money to buy anything you want in the game, such as upgrades, items, skins, and more. You don't have to worry about running out of money or saving up for something expensive, as you have unlimited money to spend. This feature will make the game more fun and easy for you, as you can try out different combinations of warriors and items without any limitations.
- No Ads
-The last feature of Epic Battles Online Mod APK 7.1 that we will mention is that it removes all the ads from the game. You don't have to watch any annoying or intrusive ads that will interrupt your gameplay or waste your time. You can enjoy the game without any distractions or interruptions, and focus on the epic battles and strategies that you will create.
- How to Download and Install Epic Battles Online Mod APK 7.1
-Now that you know the features of Epic Battles Online Mod APK 7.1, you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:
- Step 1: Download the mod apk file from a trusted source
-The first thing you need to do is to download the mod apk file of Epic Battles Online Mod APK 7.1 from a trusted source. You can use the link below to download it safely and quickly:
-Epic Battles Online Mod APK 7.1 Download Link
-Make sure you have enough space on your device to store the file, which is about 100 MB in size.
- Step 2: Enable unknown sources on your device
-The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps that are not from the official Google Play Store, such as Epic Battles Online Mod APK 7.1. To do this, go to your device settings, then security, then unknown sources, and turn it on. You might see a warning message, but don't worry, it's safe to do this.
- Step 3: Install the mod apk file and launch the game
-The final thing you need to do is to install the mod apk file that you downloaded in step 1. To do this, locate the file in your device storage, tap on it, and follow the instructions on the screen. It will take a few seconds to install the game on your device. Once it's done, you can launch the game and enjoy all the features of Epic Battles Online Mod APK 7.1.
- Tips and Tricks for Playing Epic Battles Online Mod APK 7.1
-Epic Battles Online Mod APK 7.1 is a game that requires skill and strategy to win. You can't just rely on the features of the mod apk version, you also need to use your brain and creativity to create your own epic army and defeat your enemies. Here are some tips and tricks that will help you play better and have more fun:
- Tip 1: Choose your warriors wisely according to their strengths and weaknesses
-As we mentioned before, there are over 100 different warriors in Epic Battles Online Mod APK 7.1, each with their own abilities, stats, and appearance. You can choose up to 10 warriors for each battle, so you need to choose them wisely according to their strengths and weaknesses. For example, some warriors are good at melee combat, some are good at ranged combat, some are good at healing or buffing, some are good at debuffing or stunning, etc. You need to balance your army with different types of warriors that can complement each other and counter your enemies.
- Tip 2: Upgrade your warriors regularly to increase their power and skills
-Another thing you need to do is to upgrade your warriors regularly to increase their power and skills. You can use the unlimited money feature of Epic Battles Online Mod APK 7.1 to buy upgrades for your warriors in the shop. You can upgrade their level, health, damage, speed, skill cooldown, etc. Upgrading your warriors will make them stronger and more effective in battle, as well as unlock new skills and abilities for them.
- Tip 3: Use the right strategy for each battle scenario and enemy type
-The last tip we will give you is to use the right strategy for each battle scenario and enemy type. Epic Battles Online Mod APK 7.1 has various modes and levels that will challenge your skills and creativity. You need to adapt your strategy according to the situation and the enemy type that you face. For example, some enemies are immune or resistant to certain types of damage or effects, some enemies have special skills or abilities that can harm or hinder you, some enemies have more health or defense than others, etc. You need to analyze your enemies' strengths and weaknesses, and use your warriors accordingly. You can also use the skills of your warriors at the right time and place to gain an advantage or turn the tide of the battle.
- Pros and Cons of Epic Battles Online Mod APK 7.1
-Epic Battles Online Mod APK 7.1 is a game that has many pros and cons that you should be aware of before playing it. Here are some of them:
- Pro 1: Fun and engaging gameplay with various modes and levels
-One of the pros of Epic Battles Online Mod APK 7.1 is that it has a fun and engaging gameplay that will keep you entertained for hours. You can create your own army of epic warriors and fight against other players online in various modes and levels. You can also challenge yourself with different difficulty settings and objectives, and earn rewards and achievements for your performance. The game is easy to learn but hard to master, as you need to use your skill and strategy to win.
- Pro 2: High-quality graphics and sound effects with smooth performance
-Another pro of Epic Battles Online Mod APK 7.1 is that it has high-quality graphics and sound effects that will immerse you in the game world. The game has a colorful and cartoonish style that suits the theme and mood of the game. The game also has realistic and dynamic sound effects that will make you feel the impact and excitement of the battles. The game runs smoothly on most devices and operating systems, without any lag or glitches.
- Pro 3: Easy to use interface and controls with customizable settings
-The last pro of Epic Battles Online Mod APK 7.1 that we will mention is that it has an easy to use interface and controls that will make the game accessible and enjoyable for everyone. The game has a simple and intuitive interface that will show you all the information and options you need in the game. The game also has easy and responsive controls that will let you control your warriors and use their skills with just a few taps or swipes. You can also customize the settings of the game according to your preferences, such as the sound, music, language, graphics, etc.
- Con 1: Requires internet connection to play online with other players
-One of the cons of Epic Battles Online Mod APK 7.1 is that it requires an internet connection to play online with other players. You can't play the game offline or without a stable internet connection, as you need to connect to the server and match with other players in real time. This might be a problem for some players who don't have access to a good internet connection or who want to play the game offline.
- Con 2: May not be compatible with some devices or operating systems
-Another con of Epic Battles Online Mod APK 7.1 is that it may not be compatible with some devices or operating systems. The game has some minimum requirements that your device needs to meet in order to run the game properly. If your device doesn't meet these requirements, you might experience some problems or errors while playing the game, such as crashes, freezes, bugs, etc. You should check the compatibility of your device before downloading and installing the game.
- Conclusion
-Epic Battles Online Mod APK 7.1 is a fun and addictive strategy game that will challenge your skills and creativity. You can create your own army of warriors and fight against other players online in various modes and levels. You can also unlock all the warriors, get unlimited money, and enjoy the game without any ads with this mod apk version.
-If you are looking for a game that will keep you entertained for hours, then you should try Epic Battles Online Mod APK 7.1. You can download it from the link below and start playing it right away:
-Epic Battles Online Mod APK 7.1 Download Link
- FAQs
-Here are some frequently asked questions about Epic Battles Online Mod APK 7.1:
- Q1: Is Epic Battles Online Mod APK 7.1 safe to download and install?
-A1: Yes, Epic Battles Online Mod APK 7.1 is safe to download and install on your device, as long as you download it from a trusted source like the one we provided above. The mod apk file is free from any viruses or malware that might harm your device or data.
- Q2: How can I update Epic Battles Online Mod APK 7.1 to the latest version?
-A2: You can update Epic Battles Online Mod APK 7.1 to the latest version by following the same steps that we explained above for downloading and installing the game. You just need to download the latest mod apk file from the same source and install it over the existing one. You don't need to uninstall the previous version or lose your progress, as the mod apk file will update the game automatically.
- Q3: How can I contact the developer of Epic Battles Online Mod APK 7.1 for feedback or support?
-A3: You can contact the developer of Epic Battles Online Mod APK 7.1 for feedback or support by visiting their official website or social media pages. You can also leave a comment or rating on the download page of the mod apk file. The developer is always open to suggestions and improvements from the players, and they will try to fix any issues or bugs that you might encounter while playing the game.
- Q4: What are some similar games to Epic Battles Online Mod APK 7.1 that I can try?
-A4: If you like Epic Battles Online Mod APK 7.1, you might also like some similar games that have the same genre or theme. Here are some of them:
-
-Clash of Clans Mod APK: A popular strategy game where you can build your own village, train your troops, and fight against other players online.
-Stick War: Legacy Mod APK: A fun and addictive strategy game where you can control a stickman army and fight against other stickman nations.
-Art of War: Legions Mod APK: A simple and elegant strategy game where you can lead your legion of soldiers and win epic battles.
-
- Q5: What are some of the best warriors to use in Epic Battles Online Mod APK 7.1?
-A5: There are many warriors to choose from in Epic Battles Online Mod APK 7.1, and each one has its own advantages and disadvantages. However, some of the best warriors to use in the game are:
-
-Dragon Knight: A powerful warrior that can breathe fire and slash enemies with his sword. He has high health and damage, and can also fly over obstacles and enemies.
-Ninja: A stealthy warrior that can throw shurikens and dash through enemies. He has high speed and short skill cooldowns, and can also dodge attacks and become invisible.
-Priest: A supportive warrior that can heal and buff allies with his magic. He has low health and damage, but he can also revive fallen allies and protect them from harm.
-
- I hope you enjoyed this article and learned something new about Epic Battles Online Mod APK 7.1. If you have any questions or comments, feel free to leave them below. Thanks for reading!
-
-
\ No newline at end of file
diff --git a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/convert_ckpt_randeng_t5_char.sh b/spaces/skf15963/summary/fengshen/examples/pretrain_t5/convert_ckpt_randeng_t5_char.sh
deleted file mode 100644
index 5c446fd8784477d1caa1519b614d759aa3cb6ec8..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/pretrain_t5/convert_ckpt_randeng_t5_char.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-set -x -e
-
-echo "START TIME: $(date)"
-BIN_DIR=/cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/randeng_t5_char_57M
-if [ ! -d ${BIN_DIR} ];then
- mkdir ${BIN_DIR}
- echo ${BIN_DIR} created!!!!!!!!!!!!!!
-else
- echo ${BIN_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-
-MODEL_ARGS="
- --ckpt_path /cognitive_comp/ganruyi/experiments/randeng_t5_char_57M/ckpt/last.ckpt/checkpoint/mp_rank_00_model_states.pt \
- --bin_path ${BIN_DIR}/pytorch_model.bin \
- --rm_prefix module.model. \
-"
-
-SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/convert_ckpt_to_bin.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $MODEL_ARGS \
- "
-
-echo $CMD
-/home/ganruyi/anaconda3/bin/python $CMD
diff --git a/spaces/society-ethics/model-card-regulatory-check/Dockerfile b/spaces/society-ethics/model-card-regulatory-check/Dockerfile
deleted file mode 100644
index 32f51b694f7de55f14b3997fee2ddd6b856d4a22..0000000000000000000000000000000000000000
--- a/spaces/society-ethics/model-card-regulatory-check/Dockerfile
+++ /dev/null
@@ -1,27 +0,0 @@
-FROM python:3.11-slim-bullseye
-
-# Set the working directory to /code
-WORKDIR /code
-
-# Copy the current directory contents into the container at /code
-COPY ./requirements.txt /code/requirements.txt
-
-# Install requirements.txt
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -m -u 1000 user
-# Switch to the "user" user
-USER user
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-EXPOSE 7860
-CMD ["uvicorn", "server:app","--proxy-headers", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/nonautoregressive_translation/scripts.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/nonautoregressive_translation/scripts.md
deleted file mode 100644
index 9d3d7b67dc08440b5f4d1c5a7ffcd4bd6e76c14f..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/nonautoregressive_translation/scripts.md
+++ /dev/null
@@ -1,179 +0,0 @@
-# Examples of Training scripts for Non-autoregressive Machine Translation models
-
-### Non-autoregressive Transformer (NAT, Gu et al., 2017)
-Note that we need to have an additional module to perform "length prediction" (`--length-loss-factor`) before generating the whole sequence.
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch nonautoregressive_transformer \
- --noise full_mask \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --pred-length-offset \
- --length-loss-factor 0.1 \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
-
-### Fast Structured Decoding for Sequence Models (NAT-CRF, Sun et al., 2019)
-Note that we implemented a low-rank approximated CRF model by setting `--crf-lowrank-approx=32` and `--crf-beam-approx=64` as described in the original paper. All other settings are the same as the vanilla NAT model.
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch nacrf_transformer \
- --noise full_mask \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --pred-length-offset \
- --length-loss-factor 0.1 \
- --word-ins-loss-factor 0.5 \
- --crf-lowrank-approx 32 \
- --crf-beam-approx 64 \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
-
-
-### Non-autoregressive Transformer with Iterative Refinement (iNAT, Lee et al., 2018)
-Note that `--train-step` sets the number of refinement iterations used during training, and `--dae-ratio` controls the ratio of denoising auto-encoder training described in the original paper.
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch iterative_nonautoregressive_transformer \
- --noise full_mask \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --pred-length-offset \
- --length-loss-factor 0.1 \
- --train-step 4 \
- --dae-ratio 0.5 \
- --stochastic-approx \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
-
-### Insertion Transformer (InsT, Stern et al., 2019)
-Note that we need to specify the "slot-loss" (uniform or balanced tree) described in the original paper. Here we use `--label-tau` to control the temperature.
-
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch insertion_transformer \
- --noise random_delete \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
-
-
-### Mask Predict (CMLM, Ghazvininejad et al., 2019)
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch cmlm_transformer \
- --noise random_mask \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
-
-
-
-
-### Levenshtein Transformer (LevT, Gu et al., 2019)
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch levenshtein_transformer \
- --noise random_delete \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/__init__.py
deleted file mode 100644
index 7cbe00a10520331709441e5e77991bd2edca8c06..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import importlib
-import os
-
-from fairseq import registry
-
-
-build_tokenizer, register_tokenizer, TOKENIZER_REGISTRY, _ = registry.setup_registry(
- "--tokenizer",
- default=None,
-)
-
-
-build_bpe, register_bpe, BPE_REGISTRY, _ = registry.setup_registry(
- "--bpe",
- default=None,
-)
-
-
-# automatically import any Python files in the encoders/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- module = file[: file.find(".py")]
- importlib.import_module("fairseq.data.encoders." + module)
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/distributed/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/distributed/__init__.py
deleted file mode 100644
index d0b96b734c4b5e7cd5d295238d0764c05093dc27..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/distributed/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .distributed_timeout_wrapper import DistributedTimeoutWrapper
-from .fully_sharded_data_parallel import fsdp_enable_wrap, fsdp_wrap, FullyShardedDataParallel
-from .legacy_distributed_data_parallel import LegacyDistributedDataParallel
-from .module_proxy_wrapper import ModuleProxyWrapper
-from .tpu_distributed_data_parallel import TPUDistributedDataParallel
-
-
-__all__ = [
- "DistributedTimeoutWrapper",
- "fsdp_enable_wrap",
- "fsdp_wrap",
- "FullyShardedDataParallel",
- "LegacyDistributedDataParallel",
- "ModuleProxyWrapper",
- "TPUDistributedDataParallel",
-]
diff --git a/spaces/srush/minichain/color.py b/spaces/srush/minichain/color.py
deleted file mode 100644
index a2c8d298cf6ff5793064aaaa36b622eb692def68..0000000000000000000000000000000000000000
--- a/spaces/srush/minichain/color.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Check whether a word names a color, using an LLM prompt.
-# Adapted from Dust [maths-generate-code](https://dust.tt/spolu/a/d12ac33169)
-
-from minichain import Prompt, start_chain, show_log
-
-
-# Prompt that asks the LLM whether the input word is a color.
-
-class ColorPrompt(Prompt[str, bool]):
-    def prompt(self, inp: str) -> str:
-        # Render the question for the LLM.
-        return f"Answer 'Yes' if this is a color, {inp}. Answer:"
-
-    def parse(self, out: str, inp: str) -> bool:
-        # Encode the parsing logic: only a literal "Yes" counts.
-        return out.strip() == "Yes"
-ColorPrompt().show({"inp": "dog"}, "No")
-
-
-with start_chain("color") as backend:
-    prompt = ColorPrompt(backend.OpenAI())
-    result = prompt("red")
-    print(result)
-
-
-show_log("color.log")
diff --git a/spaces/stomexserde/gpt4-ui/Examples/1137 - Ek Tera Saath Movie Download In Hindi Dubbed Mp4.md b/spaces/stomexserde/gpt4-ui/Examples/1137 - Ek Tera Saath Movie Download In Hindi Dubbed Mp4.md
deleted file mode 100644
index 0219f6c6387d5e1c4010b3678008dd735206e686..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/1137 - Ek Tera Saath Movie Download In Hindi Dubbed Mp4.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-How to Watch 1:13:7 - Ek Tera Saath, a Horror-Romance Film from India
-If you are a fan of horror and romance genres, you might be interested in watching 1:13:7 - Ek Tera Saath, a 2016 Indian film directed by Arshad Siddiqui. The film stars Sharad Malhotra, Hritu Dudani and Melanie Nazareth in lead roles and was released on 21 October 2016. The film is about paranormal activities happening around a royal prince and his wife, who died under mysterious circumstances.
-The film has a bit of supernatural element and has been shot in various locations in India, such as Ghanerao, Jaisalmer, Jodhpur, Delhi, Chandigarh, Shimla and Mumbai. The film also features some melodious songs sung by Rahat Fateh Ali Khan, K.K, Sonu Nigam and others. The music is composed by Sunil Singh, Liyakat Ajmeri and Ali-Anirudh.
-1:13:7 - Ek Tera Saath Movie Download In Hindi Dubbed Mp4. Download: https://urlgoal.com/2uI6Pi
-If you want to watch this film online, you might be wondering how to download it in Hindi dubbed mp4 format. Well, there are some websites that offer this option, but you should be careful about the quality and legality of the content. Some of these websites might have low-quality videos or malware that can harm your device. Some of them might also violate the copyright laws and infringe the rights of the filmmakers.
-Therefore, we recommend you to watch this film legally and safely on a streaming platform that has the rights to show it. One such platform is Zee5, which is a popular OTT service in India that offers a variety of content in different languages. You can watch 1:13:7 - Ek Tera Saath on Zee5 with a subscription or a free trial. You can also download the film on your device for offline viewing.
-
-To watch 1:13:7 - Ek Tera Saath on Zee5, you need to follow these steps:
-
-Go to the Zee5 website or app and sign up or log in with your account.
-Search for 1:13:7 - Ek Tera Saath in the search bar or browse through the categories.
-Select the film and click on play or download.
-Enjoy watching the film with subtitles or dubbing options.
-
-We hope you enjoy watching this film and share your feedback with us. If you have any questions or suggestions, please let us know.
-
-1:13:7 - Ek Tera Saath is a film that explores the themes of love, loyalty, betrayal and revenge. The film revolves around Kunwar Aditya Pratap Singh, a royal prince who is haunted by the ghost of his wife, Rani Kasturi Devi, who died in a car accident. He is also troubled by the political conspiracies and enemies who want to take over his throne. He finds solace in his childhood friend, Sonali, who is a journalist and helps him uncover the truth behind his wife's death.
-The film has some thrilling and suspenseful moments that keep the audience hooked. The film also has some emotional and romantic scenes that show the bond between Aditya and Sonali. The film also has some twists and turns that reveal the secrets and motives of the characters. The film has a climax that surprises the audience and leaves them with a message.
-1:13:7 - Ek Tera Saath is a film that has something for everyone. It is a film that combines horror and romance in a unique way. It is a film that showcases the talent and chemistry of the actors. It is a film that has a gripping story and a catchy soundtrack. It is a film that you should not miss.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Audio Beats ? Mp3 Music Player V3.0 Apk Build 303 [Premium] [Latest].md b/spaces/stomexserde/gpt4-ui/Examples/Audio Beats ? Mp3 Music Player V3.0 Apk Build 303 [Premium] [Latest].md
deleted file mode 100644
index f45553425e43a32e7b15e80741140b9670506217..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Audio Beats ? Mp3 Music Player V3.0 Apk Build 303 [Premium] [Latest].md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-Audio Beats – Mp3 Music Player: A High-Quality Music App for Android
-If you are looking for a free and powerful music player app for your Android device, you might want to check out Audio Beats – Mp3 Music Player. This app is a part of the multimedia category and enables you to listen to music and inbuilt sound effects of different songs and tracks in high-quality audio format.
-Audio Beats – Mp3 Music Player has been completely rebuilt from scratch while keeping all the features of its predecessor. The new version has been developed with the specific requirements of mobile users in mind. Its unique features include:
-Audio Beats – Mp3 Music Player v3.0 Apk build 303 [Premium] [Latest]. Download: https://urlgoal.com/2uI9gA
-
-It supports almost all the formats and also provides a rich set of audio options to make your android device even smarter.
-It uses the Dalmatian audio engine that is well known for its high-quality audio files.
-It features a large library containing thousands of favorite songs.
-It has many advanced features that would enable you to listen to the best music with the utmost clarity and ease.
-It has a user-friendly interface that makes it easy to use even for people who have very less knowledge about music and sound processing.
-It has tons of different options for customization. You can change the background, crossfade, widget layout, and even reorder the song queue. You can also add your own pictures to the app.
-It has tons of different themes to choose from, and you can change the color scheme of the app.
-
-The latest version of Audio Beats – Mp3 Music Player is v3.0 Apk build 303 [Premium] [Latest], which was released on Saturday, February 11th 2023. This version has some bug fixes and performance improvements. You can download it from Google Play or from other sources[^1^] [^2^] [^3^].
-If you are a music lover who wants to enjoy your favorite songs on your Android device, Audio Beats – Mp3 Music Player is a must-have app for you. Download it today and experience the difference!
-
-
-Audio Beats – Mp3 Music Player is not just a music player app but also a music manager. You can easily create playlists, edit tags, delete songs, and share your music with others. You can also browse your music by albums, artists, genres, folders, and songs, or search for your favorite songs using the built-in search function.
-Audio Beats – Mp3 Music Player also has a powerful equalizer that lets you adjust the sound effects according to your preference. You can choose from various presets or create your own custom settings. You can also boost the bass and treble levels, and enhance the stereo effect. You can also use the sleep timer feature to set a time for the app to stop playing music automatically.
-Audio Beats – Mp3 Music Player is more than just a music player app; it is a music lover's dream app. It has everything you need to enjoy your music on your Android device. It is fast, smooth, and reliable. It is also compatible with most Android devices and versions. It is one of the best music player apps available on the market.
-So what are you waiting for? Download Audio Beats – Mp3 Music Player v3.0 Apk build 303 [Premium] [Latest] today and enjoy your music like never before!
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cats - The Musical-(1998) [DVDri ((FULL)).md b/spaces/stomexserde/gpt4-ui/Examples/Cats - The Musical-(1998) [DVDri ((FULL)).md
deleted file mode 100644
index 6ed7fb871ee1ff9b2297b562a599f9356e62cd89..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Cats - The Musical-(1998) [DVDri ((FULL)).md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-Cats - The Musical-(1998) [DVDri: A Review of the Classic Stage Show on Film
-
-Cats is one of the most famous and successful musicals of all time, composed by Andrew Lloyd Webber and based on the poems of T.S. Eliot. The show features a cast of feline characters who gather for the annual Jellicle Ball, where one of them will be chosen to ascend to the Heaviside Layer and be reborn into a new life.
-
-In 1998, a direct-to-video film adaptation of Cats was released, featuring a selection of performers from various international productions of the show. The film was directed by David Mallet and produced by Lloyd Webber himself, who also oversaw the orchestration and musical arrangements. The film was shot at the Adelphi Theatre in London, with new staging and editing to fit the screen format.
-Cats - The Musical-(1998) [DVDri. Download Zip: https://urlgoal.com/2uI9nR
-
-The film captures the magic and spectacle of the stage show, with stunning costumes, makeup, lighting and choreography. The cast includes Elaine Paige as Grizabella, the faded glamour cat who sings the iconic song "Memory"; John Mills as Old Deuteronomy, the wise and benevolent leader of the Jellicle tribe; Ken Page as Old Deuteronomy's nemesis, the evil Macavity; and many more talented singers and dancers who bring the characters to life.
-
-Cats - The Musical-(1998) [DVDri is a must-have for any fan of musical theatre, as well as anyone who loves cats and their mysterious ways. The film is available on DVD and Blu-ray, as well as online streaming platforms. You can also watch it on television channels such as PBS, BBC and Ovation TV.
-
-If you want to experience the joy and wonder of Cats - The Musical-(1998) [DVDri, don't miss this opportunity to get your copy today. You will be enchanted by the music, the story and the performances of this timeless masterpiece.
-
-But Cats - The Musical-(1998) [DVDri is not just a faithful reproduction of the stage show. It also features some changes and additions that make it unique and appealing to a wider audience. For example, the film includes a new song written by Lloyd Webber and Trevor Nunn, called "The Moments of Happiness", which is sung by Old Deuteronomy and Jemima after Grizabella's first appearance. The song replaces "The Awefull Battle of the Pekes and the Pollicles", which was cut from the film for time reasons.
-
-
-Another change is the omission of "Growltiger's Last Stand", a lengthy flashback sequence that shows Gus the Theatre Cat's past as a pirate. Instead, Gus sings a shorter version of his song, followed by a brief reprise of "The Awefull Battle of the Pekes and the Pollicles". The film also adds some dialogue and narration to clarify the plot and the characters' motivations, such as Munkustrap's explanation of the Heaviside Layer and Old Deuteronomy's speech before the Jellicle Choice.
-
-One of the most notable additions to the film is the character of Exotica, a black and white cat who was created specifically for Femi Taylor, who had previously played Tantomile in the original London cast. Exotica does not have any solo lines or songs, but she is prominently featured in several group numbers and dances. She also has a close relationship with Jemima, who is her twin sister in some versions of the show.
-
-Cats - The Musical-(1998) [DVDri is full of trivia and Easter eggs for fans of the musical and its creators. For instance, the license plate on the car at the back of the stage reads "TSE 1", which stands for T.S. Eliot, the author of the poems that inspired Cats. The film also pays homage to some of the original cast members, such as Elaine Paige, who originated Grizabella in London; Ken Page, who originated Old Deuteronomy on Broadway; and Susan Jane Tanner, who originated Jellylorum in London.
-
-Cats - The Musical-(1998) [DVDri is a masterpiece of musical theatre that has captivated millions of viewers around the world. It showcases the talents and skills of some of the best performers and artists in the industry, as well as the genius and vision of Andrew Lloyd Webber and his collaborators. It is a film that celebrates life, love, music and cats in all their glory.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Codigo De Activacion Para 3skeng __TOP__.md b/spaces/stomexserde/gpt4-ui/Examples/Codigo De Activacion Para 3skeng __TOP__.md
deleted file mode 100644
index fa6fd06018eebf732d0563d6cc9e39bbfb4487c7..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Codigo De Activacion Para 3skeng __TOP__.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-How to Activate 3skeng for SketchUp
-3skeng is a 3D engineering software for Trimble SketchUp that allows you to design and document MEP and 3D piping and steel structures. If you want to use 3skeng for SketchUp, you need to activate it with a valid license key. Here are the steps to do it:
-
-Download and install 3skeng for SketchUp from their official website. You can choose between a free trial version or a paid version.
-Launch SketchUp and open the Extension Manager from the Window menu.
-Find 3skeng in the list of extensions and click on the Activate button.
-Enter your email address and your license key in the pop-up window. You can find your license key in the confirmation email that you received after purchasing 3skeng.
-Click on the Activate button and wait for the confirmation message.
-Restart SketchUp and enjoy using 3skeng for your projects.
-
-If you have any problems with the activation process, you can contact 3skeng support at support@3skeng.com.
-Codigo De Activacion Para 3skeng. Download: https://urlgoal.com/2uI9vz
-
-Why Use 3skeng for SketchUp?
-3skeng for SketchUp is a powerful and intuitive extension that enables you to create realistic and detailed 3D models of engineering systems. Here are some of the benefits of using 3skeng for SketchUp:
-
-It is compatible with SketchUp 2017 to 2023 and works on Windows and Mac OS.
-It has a user-friendly interface that follows the SketchUp logic and workflow.
-It has a large library of parametric components for different industries and standards.
-It allows you to edit and modify your models with ease using smart tools and features.
-It supports BIM (Building Information Modeling) and exports your models to IFC format.
-It generates automatic reports and bills of materials for your projects.
-
-If you want to learn more about 3skeng for SketchUp, you can visit their learning page or watch their video tutorials.
-
-How to Use 3skeng for SketchUp?
-Using 3skeng for SketchUp is very easy and fun. You can start by choosing a tool from the 3skeng toolbar, such as Pipe, Steelwork, Channel or Mount. Then, you can select a component from the 3skeng library and place it on your model. You can also use the SketchUp tools to draw lines and shapes and convert them to 3skeng elements. You can adjust the size, orientation and position of your components using the 3skeng handles and settings. You can also use the Connect tool to join your components automatically or manually. You can edit your model at any time using the Edit tool or the SketchUp tools.
-When you are done with your model, you can use the Label tool to add annotations and dimensions to your model. You can also use the List tool to generate reports and bills of materials for your project. You can export your model to IFC format using the Export tool or save it as a SketchUp file. You can also import IFC files using the Import tool or open existing SketchUp files with 3skeng elements.
-
-Where to Get 3skeng for SketchUp?
-If you are interested in getting 3skeng for SketchUp, you can visit their shop page and choose the license that suits your needs. You can buy a single-user license or a multi-user license for different durations. You can also get a free trial version for 30 days and test all the features of 3skeng for SketchUp. You can download the trial version from their download page and activate it with your email address.
-If you have any questions or feedback about 3skeng for SketchUp, you can contact their support team at support@3skeng.com or visit their support page. You can also join their Facebook page or follow them on Twitter to get the latest news and updates about 3skeng.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Jim Dunlop Cry Baby Wah Gcb 95 Serial Number.md b/spaces/stomexserde/gpt4-ui/Examples/Jim Dunlop Cry Baby Wah Gcb 95 Serial Number.md
deleted file mode 100644
index 6408fddc1f5925061227bc9f8eb264ad5b6850d5..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Jim Dunlop Cry Baby Wah Gcb 95 Serial Number.md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-How to Identify the Age of Your Jim Dunlop Cry Baby Wah GCB-95 by Serial Number
-
-The Jim Dunlop Cry Baby Wah GCB-95 is one of the most iconic and popular guitar effects pedals ever made. It has been used by countless musicians, from Jimi Hendrix and Eric Clapton to Slash and Kirk Hammett. But how can you tell how old your Cry Baby Wah is? And does it matter?
-jim dunlop cry baby wah gcb 95 serial number. Download: https://urlgoal.com/2uIaM1
-
-In this article, we will show you how to identify the age of your Jim Dunlop Cry Baby Wah GCB-95 by serial number, and explain some of the differences between different versions and revisions of this classic pedal.
-
-What is a Serial Number?
-
-A serial number is a unique code that is assigned to a product by the manufacturer. It usually consists of letters and numbers, and it can provide information about the date, place, and batch of production. Serial numbers are often found on the bottom or inside of the product, or on the packaging or warranty card.
-
-Serial numbers can be useful for identifying the authenticity, origin, and history of a product. They can also help with troubleshooting, repairs, warranty claims, and resale value.
-
-
-Where to Find the Serial Number on Your Jim Dunlop Cry Baby Wah GCB-95?
-
-The serial number on your Jim Dunlop Cry Baby Wah GCB-95 can be found on the bottom plate of the pedal, usually near the input jack. It should start with the letters CB followed by six digits. For example, CB531081.
-
-However, the serial number alone may not be enough to determine the exact age of your pedal. You may also need to look at other features, such as the potentiometer (pot), the printed circuit board (PCB), the inductor, the power jack, and the jacks.
-
-How to Date Your Jim Dunlop Cry Baby Wah GCB-95 by Serial Number?
-
-There is no definitive guide or database for dating your Jim Dunlop Cry Baby Wah GCB-95 by serial number. However, based on some online sources and forums[^1^] [^2^], we can make some educated guesses based on some general trends and patterns.
-
-Here are some approximate date ranges for different serial number prefixes:
-
-
-CB0: 1982-1984
-CB1: 1984-1986
-CB2: 1986-1988
-CB3: 1988-1990
-CB4: 1990-1992
-CB5: 1992-1994
-CB6: 1994-1996
-CB7: 1996-1998
-CB8: 1998-2000
-CB9: 2000-2002
-
-
-Note that these date ranges are not exact and may vary depending on other factors. Also note that some older pedals may have different serial number formats or no serial number at all.
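-To make the mapping concrete, here is a small Python sketch of the table above as a lookup; the table itself is approximate, as noted, and the helper name is ours:
-
-```python
-# Approximate production years by serial prefix (taken from the table above).
-CB_PREFIX_YEARS = {
-    "CB0": (1982, 1984), "CB1": (1984, 1986), "CB2": (1986, 1988),
-    "CB3": (1988, 1990), "CB4": (1990, 1992), "CB5": (1992, 1994),
-    "CB6": (1994, 1996), "CB7": (1996, 1998), "CB8": (1998, 2000),
-    "CB9": (2000, 2002),
-}
-
-def approximate_years(serial: str):
-    # e.g. "CB531081" -> prefix "CB5" -> (1992, 1994); None if unknown.
-    return CB_PREFIX_YEARS.get(serial[:3].upper())
-
-print(approximate_years("CB531081"))  # (1992, 1994)
-```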
-
-How to Identify Different Versions and Revisions of Your Jim Dunlop Cry Baby Wah GCB-95?
-
-Besides the serial number, there are other ways to identify different versions and revisions of your Jim Dunlop Cry Baby Wah GCB-95. Here are some of the main features to look for:
-
-Potentiometer (Pot)
-
-The potentiometer (pot) is the part that controls the sweep of the wah effect. It is connected to the footswitch and has a gear that rotates when you rock the pedal. The pot has a code that indicates its manufacturer, resistance value, taper type, and date of production.
-
-The code usually consists of three or four digits followed by a letter and another three digits. For example, 100K EJ 9033. The first three or four digits indicate the resistance value in ohms (K for kiloohms). The letter indicates the taper type (A for audio/logarithmic or B for linear). The final digits indicate the year and week of production (for example, 9033 means the 33rd week of 1990).
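-
-As a rough illustration of reading such a code, here is a minimal Python sketch. The field layout (resistance, letters, then a two-digit year and two-digit week) is an assumption based on the description and example above; real pot codes vary by manufacturer:
-
-```python
-import re
-
-# Split a pot code such as "100K EJ 9033" into its assumed fields.
-POT_CODE = re.compile(
-    r"(?P<ohms>\d{3,4})K\s+"           # resistance in kiloohms, e.g. 100K
-    r"(?P<letters>[A-Z]+)\s+"          # taper/manufacturer letters, e.g. EJ
-    r"(?P<year>\d{2})(?P<week>\d{2})"  # date code, e.g. 9033 = week 33 of 1990
-)
-
-def decode_pot_code(code: str) -> dict:
-    m = POT_CODE.fullmatch(code.strip())
-    if m is None:
-        raise ValueError(f"unrecognized pot code: {code!r}")
-    return {
-        "resistance_kohm": int(m["ohms"]),
-        "letters": m["letters"],
-        "week": int(m["week"]),
-        "year": 1900 + int(m["year"]),  # assumes a 20th-century pot
-    }
-
-print(decode_pot_code("100K EJ 9033"))
-# {'resistance_kohm': 100, 'letters': 'EJ', 'week': 33, 'year': 1990}
-```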
-
-
\ No newline at end of file
diff --git a/spaces/sub314xxl/MetaGPT/metagpt/utils/parse_html.py b/spaces/sub314xxl/MetaGPT/metagpt/utils/parse_html.py
deleted file mode 100644
index 62de2654140bb7eb63a7ec7393c546211dce0287..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/metagpt/utils/parse_html.py
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/usr/bin/env python
-from __future__ import annotations
-
-from typing import Generator, Optional
-from urllib.parse import urljoin, urlparse
-
-from bs4 import BeautifulSoup
-from pydantic import BaseModel
-
-
-class WebPage(BaseModel):
- inner_text: str
- html: str
- url: str
-
- class Config:
- underscore_attrs_are_private = True
-
- _soup : Optional[BeautifulSoup] = None
- _title: Optional[str] = None
-
- @property
- def soup(self) -> BeautifulSoup:
- if self._soup is None:
- self._soup = BeautifulSoup(self.html, "html.parser")
- return self._soup
-
- @property
- def title(self):
- if self._title is None:
- title_tag = self.soup.find("title")
- self._title = title_tag.text.strip() if title_tag is not None else ""
- return self._title
-
- def get_links(self) -> Generator[str, None, None]:
- for i in self.soup.find_all("a", href=True):
- url = i["href"]
- result = urlparse(url)
- if not result.scheme and result.path:
- yield urljoin(self.url, url)
- elif url.startswith(("http://", "https://")):
- yield urljoin(self.url, url)
-
-
-def get_html_content(page: str, base: str):
- soup = _get_soup(page)
-
- return soup.get_text(strip=True)
-
-
-def _get_soup(page: str):
- soup = BeautifulSoup(page, "html.parser")
- # https://stackoverflow.com/questions/1936466/how-to-scrape-only-visible-webpage-text-with-beautifulsoup
- for s in soup(["style", "script", "[document]", "head", "title"]):
- s.extract()
-
- return soup
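-
-
-# A quick usage sketch (not part of the original module): build a WebPage from
-# raw HTML and read its title and absolute links. The markup and URL below are
-# made up for illustration.
-if __name__ == "__main__":
-    html = "<html><head><title>Demo</title></head><body><a href='/a'>a</a></body></html>"
-    page = WebPage(inner_text="a", html=html, url="https://example.com")
-    print(page.title)              # Demo
-    print(list(page.get_links()))  # ['https://example.com/a']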
diff --git a/spaces/sub314xxl/MusicGen/tests/data/test_audio_utils.py b/spaces/sub314xxl/MusicGen/tests/data/test_audio_utils.py
deleted file mode 100644
index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MusicGen/tests/data/test_audio_utils.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import julius
-import torch
-import pytest
-
-from audiocraft.data.audio_utils import (
- _clip_wav,
- convert_audio_channels,
- convert_audio,
- normalize_audio
-)
-from ..common_utils import get_batch_white_noise
-
-
-class TestConvertAudioChannels:
-
- def test_convert_audio_channels_downmix(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=2)
- assert list(mixed.shape) == [b, 2, t]
-
- def test_convert_audio_channels_nochange(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=c)
- assert list(mixed.shape) == list(audio.shape)
-
- def test_convert_audio_channels_upmix(self):
- b, c, t = 2, 1, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=3)
- assert list(mixed.shape) == [b, 3, t]
-
- def test_convert_audio_channels_upmix_error(self):
- b, c, t = 2, 2, 100
- audio = get_batch_white_noise(b, c, t)
- with pytest.raises(ValueError):
- convert_audio_channels(audio, channels=3)
-
-
-class TestConvertAudio:
-
- def test_convert_audio_channels_downmix(self):
- b, c, dur = 2, 3, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2)
- assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]]
-
- def test_convert_audio_channels_upmix(self):
- b, c, dur = 2, 1, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3)
- assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]]
-
- def test_convert_audio_upsample(self):
- b, c, dur = 2, 1, 4.
- sr = 2
- new_sr = 3
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
- def test_convert_audio_resample(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- new_sr = 2
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
-
-class TestNormalizeAudio:
-
- def test_clip_wav(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- _clip_wav(audio)
- assert audio.abs().max() <= 1
-
- def test_normalize_audio_clip(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='clip')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_rms(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='rms')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_peak(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='peak')
- assert norm_audio.abs().max() <= 1
diff --git a/spaces/sumit12/SHIPMENT_PRICING_PREDICTION/README.md b/spaces/sumit12/SHIPMENT_PRICING_PREDICTION/README.md
deleted file mode 100644
index 0db06c03d04cf670c7c39b064a6e00146d3872de..0000000000000000000000000000000000000000
--- a/spaces/sumit12/SHIPMENT_PRICING_PREDICTION/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Shipment
-emoji: 📊
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.22
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sunder-ali/Image_Denoising_Demo/README.md b/spaces/sunder-ali/Image_Denoising_Demo/README.md
deleted file mode 100644
index e410e7c61a57795ed1b90c867af3c7f70da130a7..0000000000000000000000000000000000000000
--- a/spaces/sunder-ali/Image_Denoising_Demo/README.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Image Denoising Demo
-emoji: 🏃
-colorFrom: red
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: cc-by-sa-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-The method "Dense Residual Swin Transformer for Image Denoising" is included in the benchmark report "NTIRE 2023 Challenge on Image Denoising: Methods and Results" in CVPR workshop 2023.
-
-The team includes the following members:
-- Sunder Ali Khowaja (Department of Telecommunication Engineering, University of Sindh, Pakistan)
-- Jiseok Yoon (IKLAB Inc.)
-- Ik Hyun Lee (IKLAB Inc. and Tech University of Korea, Republic of Korea)
-
-If you find the demo useful or want to use the results in your research work, kindly cite our work:
-
-@inproceedings{li2023ntire_dn50,
- title={NTIRE 2023 Challenge on Image Denoising: Methods and Results},
- author={Li, Yawei and Zhang, Yulun and Van Gool, Luc and Timofte, Radu and others},
- booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
- year={2023}
-}
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lock Folder Xp 3.9.2 Crack Free Download HOT!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lock Folder Xp 3.9.2 Crack Free Download HOT!.md
deleted file mode 100644
index 7bd6fb2ad16e8b644922a34855c799cc518288b4..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lock Folder Xp 3.9.2 Crack Free Download HOT!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-lock folder xp 3.9.2 crack free download. Download Zip: https://cinurl.com/2uEXPD
-
-Jul 27, 2012: Lock Folder XP 3.9.2 registration code: Hide and Lock Folders in Windows XP. Free download provided for 32-bit and 64-bit versions of Windows.
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Neural Network Tutorial Pdf Free Download !EXCLUSIVE!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Neural Network Tutorial Pdf Free Download !EXCLUSIVE!.md
deleted file mode 100644
index 69a85947df3979e69810be7e4270e397a1594785..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Neural Network Tutorial Pdf Free Download !EXCLUSIVE!.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-Neural Networks: A Simple Introduction
-Neural networks are a type of artificial intelligence that mimics the structure and function of the human brain. They consist of interconnected units called neurons that process information and learn from data. Neural networks can perform complex tasks such as image recognition, natural language processing, and machine translation.
-Neural Network Tutorial Pdf Free Download. Download: https://cinurl.com/2uEYzn
-In this tutorial, you will learn the basic concepts and principles of neural networks, such as how they work, how they are trained, and how they are applied to various problems. You will also get a chance to practice your skills by implementing a simple neural network in Python.
-This tutorial is intended for beginners who want to get a quick overview of neural networks and how they can be used in artificial intelligence. No prior knowledge of neural networks or programming is required. However, some familiarity with basic mathematics and logic will be helpful.
-To download this tutorial as a PDF file, please click on the link below:
-Neural Networks Tutorial PDF
-Neural Network Architecture
-A neural network architecture defines how the neurons are organized and connected in the network. There are different types of neural network architectures, depending on the problem domain and the desired output. Some of the common architectures are:
-
-
-Feedforward neural network: This is the simplest type of neural network, where the information flows from the input layer to the output layer without any loops or cycles. The hidden layers can have different activation functions, such as sigmoid, tanh, or ReLU.
-Recurrent neural network (RNN): This is a type of neural network that has feedback connections, meaning that the output of a neuron can be fed back to itself or to previous neurons. This allows the network to have a memory of previous inputs and outputs, which is useful for sequential data such as text or speech.
-Convolutional neural network (CNN): This is a type of neural network that uses convolutional layers, which are composed of filters that slide over the input and perform element-wise multiplication and summation. This reduces the number of parameters and captures spatial features such as edges and shapes. CNNs are widely used for image processing and computer vision tasks.
-
-In this tutorial, you will focus on feedforward neural networks, as they are the simplest and most intuitive to understand. However, you can also explore other architectures using Python libraries such as PyTorch or TensorFlow.
Implementing a Feedforward Neural Network in Python
-One of the most popular libraries for implementing neural networks in Python is PyTorch, which provides a high-level API for building and training deep learning models. In this section, you will learn how to use PyTorch to create a simple feedforward neural network that can classify handwritten digits from the MNIST dataset.
-The MNIST dataset consists of 60,000 training images and 10,000 test images of handwritten digits from 0 to 9. Each image is 28 by 28 pixels and has a grayscale value between 0 and 255. The goal is to train a neural network that can take an image as input and output the correct digit label.
-To implement a feedforward neural network in PyTorch, you need to follow these steps:
-
-Import the necessary modules and libraries.
-Load and preprocess the data.
-Define the network architecture and parameters.
-Define the loss function and the optimizer.
-Train the network on the training data.
-Evaluate the network on the test data.
-
-Let's go through each step in detail.
-1. Import the necessary modules and libraries
-The first step is to import the modules and libraries that you will need for this tutorial. You will need torch for working with tensors and neural networks, torchvision for loading and transforming the MNIST dataset, matplotlib for plotting, and numpy for numerical computations. You can also set a random seed for reproducibility.
-
-```python
-# Import modules
-import torch
-import torchvision
-import matplotlib.pyplot as plt
-import numpy as np
-
-# Set random seed
-torch.manual_seed(42)
-```
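-2. Load and preprocess the data
-The next step is to load the MNIST dataset. Here is a minimal sketch of this step using torchvision's built-in dataset and data loaders; the batch size of 64 is a common default, not something the original text specifies:
-
-```python
-# Download MNIST and create data loaders
-transform = torchvision.transforms.ToTensor()  # convert images to [0, 1] tensors
-
-train_set = torchvision.datasets.MNIST(root="data", train=True, download=True, transform=transform)
-test_set = torchvision.datasets.MNIST(root="data", train=False, download=True, transform=transform)
-
-train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
-test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)
-
-print(len(train_set), len(test_set))  # 60000 10000
-```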
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rending Sky Crack Full Version Download TOP.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rending Sky Crack Full Version Download TOP.md
deleted file mode 100644
index afa4ea6f950d2cfdb159213ad6558639d18d4818..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rending Sky Crack Full Version Download TOP.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Rending Sky crack full version download. Download File: https://cinurl.com/2uEXGe
-
-hack .hack .games’s gameplay, but also deepens the experience of its theme.
-
-Main Features
-
-Rich fantasy world – 9 .hack .games can be played in a fantastic world of endless light and darkness, where the player is present, for the first time, in this kind of fantasy. The player can also explore multiple regions full of vivid colors, unusual landscapes and mythical creatures.
-
-High-level graphics – 9 .hack .games is presented in a similar style to .hack and .hack 2, with detailed polygonal characters, high resolution textures, and a full-color environment.
-
-Variety of techniques – 9 .hack .games offers challenging gameplay, original story and characters, as well as other features such as “dual-targeting”, “scrolling”, and “formation”.
-
-Storyline – The storyline of 9 .hack .games takes place in the past, in the present, and in the future. 9 .hack .games takes place in the center of the story.
-
-First quest – 9 .hack .games is an episodic game. The player is introduced to a mysterious character who has chosen him to change the fate of his beloved world. The player can therefore enjoy a unique adventure story while playing 9 .hack .games.
-
-Adventure game – The story of 9 .hack .games is presented through a wide variety of stories and situations, and the player’s choices and interactions influence the game.
-
-Story: The player is “Summoning Shinryu”.
-
-Present: Each episode has a time limit, and if the player has not completed the episode within a certain time limit, he or she will be punished.
-
-Future: The protagonist moves forward in time, and the destiny of the world depends on his actions.
-
-Gameplay: As the protagonist of the story, the player can learn various battle techniques and develop a number of abilities.
-
-World map – The world map of 9 .hack .games is a 2D side-scrolling game, and all the missions in the game can be achieved through combat. The player can play with a second player through a parallel two-player mode, or with up to four players at a time through co-operative play.
-
-Special items – There are special items which have …
-
-
-
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/datasets/ade20k.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/datasets/ade20k.py
deleted file mode 100644
index efc8b4bb20c981f3db6df7eb52b3dc0744c94cc0..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/datasets/ade20k.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# dataset settings
-dataset_type = 'ADE20KDataset'
-data_root = 'data/ade/ADEChallengeData2016'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-crop_size = (512, 512)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', reduce_zero_label=True),
- dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 512),
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
diff --git a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/README.md b/spaces/taesiri/ConvolutionalHoughMatchingNetworks/README.md
deleted file mode 100644
index 3748627262e8b8a19db06a4c40ccfafc6caee546..0000000000000000000000000000000000000000
--- a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Convolutional Hough Matching Networks
-emoji: 📚
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.30.0
-app_file: app.py
-pinned: false
----
-
-# Convolutional Hough Matching Networks
-
-A demo for Convolutional Hough Matching Networks. [[Paper](https://arxiv.org/abs/2109.05221)] [[Official Github Repo](https://github.com/juhongm999/chm.git)]
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Avast Premium Security 20.1.2397 (Build 20.1.5069) With License Key LINK.md b/spaces/terfces0erbo/CollegeProjectV2/Avast Premium Security 20.1.2397 (Build 20.1.5069) With License Key LINK.md
deleted file mode 100644
index ab8f9133d437b936451e3965a9be37816e125094..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Avast Premium Security 20.1.2397 (Build 20.1.5069) With License Key LINK.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-13.15.3. The appliance is not warranted against functional defects or defects of design or manufacturing due to improper handling or operation. All applicable warranties and conditions, including but not limited to the warranties of merchantability and fitness for a particular purpose, are expressly excluded. Without limiting the foregoing, no vendor, member of the vendor group or vendor partner is responsible for any loss of data, information, programs or files that may occur as a result of viruses, unauthorized software access or any other problems that may be caused by the use of the appliance.
-Avast Premium Security 20.1.2397 (Build 20.1.5069) with License Key. Download Zip: https://bytlly.com/2uGkzx
-13.16.1. In no event will vendor, vendor group or vendor partner be liable for any damages whatsoever, including without limitation direct or indirect, incidental, consequential, exemplary, punitive or other damages, arising out of or related to the use of or inability to use the solution, or the product, even if vendor has been advised of the possibility of such damages and notwithstanding vendor's failure to inform you of this agreement.
-14.1. You may not redistribute the appliance, or any portion thereof, except as required for your internal, standalone use. The appliance may not be modified, copied, reproduced, sold, transferred, leased, licensed, distributed, or used in any way except as expressly permitted in this agreement. Any such use is strictly prohibited.
-14.2. You may not reverse engineer, decompile, disassemble, or create derivative works of the appliance, except as required by law or in the case of any product or service licensed or sold by Avast to you.
-
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Caneco Bt V53 Gratuit.md b/spaces/terfces0erbo/CollegeProjectV2/Caneco Bt V53 Gratuit.md
deleted file mode 100644
index 535b5263a7ebe62018ad6d2b74800507cd20f105..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Caneco Bt V53 Gratuit.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
- Caneco BT is a powerful and convenient tool for creating electrical installations that is suitable for all users. It allows you to create, modify and export diagrams of electrical installations. You can also download Robocatcher 2017.
- Caneco BT is a very powerful and useful application. The program is designed for easy use, so that drawings and diagrams of an installation can be produced as easily and quickly as possible. It is based on a powerful set of tools: it provides the creation and modification of drawings, diagrams and installation data, and the assignment of the appropriate materials. The program is compatible with Microsoft Office 2007, 2013 and 2016. You can also download AutoCAD 2017.
-Caneco Bt V53 Gratuit. Download: https://bytlly.com/2uGlYN
-Caneco BT v5.10 is a wonderful application for the automatic calculation, sizing and diagramming of very low voltage electrical installations. It has many useful features: for example, it can perform all the calculations in compliance with the applicable standards and the electrical constraints. You can also download Caneco BT v5.4, Caneco BT v5.5 or Arveo v5.0.0 for free.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Driver San Francisco Activation Key.rar.md b/spaces/terfces0erbo/CollegeProjectV2/Driver San Francisco Activation Key.rar.md
deleted file mode 100644
index 9a303298582e66c6df8e59846aaae9c454707d40..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Driver San Francisco Activation Key.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Driver San Francisco Activation Key.rar Download File ---> https://bytlly.com/2uGiz8
-
-rar“ … the activation key must be … This key can be used to activate software (such as the Intel ME/Management Engine, …) under system-level control. … This code provides the same functions as the “user lock” … 4fefd39f24
-
-
-
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Free REPACK Packet Tracer For Windows 10.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Free REPACK Packet Tracer For Windows 10.md
deleted file mode 100644
index a17a53e980a0ec0b4e5669eb4b7a4d8ad44a933e..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Free REPACK Packet Tracer For Windows 10.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-How to Download Free Packet Tracer for Windows 10
-Packet Tracer is a network simulation program that allows users to create and test various network scenarios. It is developed by Cisco Systems and is mainly used for educational purposes. It helps users learn networking concepts, practice skills, and prepare for certification exams, and it can also be used for network design, troubleshooting, and experimentation.
-If you want to download free Packet Tracer for Windows 10, you will need to follow these steps:
-download free packet tracer for windows 10 DOWNLOAD ··· https://urlcod.com/2uK7Eb
-
-1. Sign up for a Cisco Networking Academy account: Packet Tracer is available for free only to Cisco Networking Academy students, instructors, alumni, and administrators. You can sign up for a free account at https://www.netacad.com/. You will need to provide some personal information and agree to the terms and conditions.
-2. Enroll in a Packet Tracer course: To download Packet Tracer, you will need to enroll in one of the three self-paced Packet Tracer courses offered by Cisco Networking Academy: Getting Started with Cisco Packet Tracer, Exploring Networking with Cisco Packet Tracer, or Exploring Internet of Things with Cisco Packet Tracer. You can find these courses at https://www.netacad.com/courses/packet-tracer and choose whichever suits your level and interest.
-3. Download and install Packet Tracer: After enrolling in a course, you will be able to access the download page for Packet Tracer, where you can get the latest version (8.0.0.211) for Windows 10; the file is about 147 MB. Once the download is complete, run the installer and follow the instructions. You will need to activate your license online or offline using your Cisco Networking Academy account.
-
-That's it! You have successfully downloaded and installed free Packet Tracer for Windows 10. You can now launch the software from the Start menu or by clicking on the shortcut icon on your desktop. You can start creating and simulating your own network scenarios using the components and tools provided by Packet Tracer. You can also access the tutorials, webinars, manuals, and knowledge base articles from the Help menu or from the Cisco Networking Academy website.
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Leo Star Professional Full Version for Free.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Leo Star Professional Full Version for Free.md
deleted file mode 100644
index 956189b9f9d76f2dc0eeff802ca6156d6a004946..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Leo Star Professional Full Version for Free.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-Leo Star Professional: A Comprehensive Astrology Software
-If you are looking for a reliable and accurate astrology software, you might want to consider Leo Star Professional. This software is developed by Future Point, a pioneer in the field of computerized astrology since 1978. Leo Star Professional contains all aspects of astrology with comprehensive calculations, remedies, predictions, various Vedic charts, matching, Varshphal, numerology, horary, KP, Lal Kitab, muhurat, panchang, mundane, books, calendar, transit, mantras and lots of other useful information.
-leo star professional full download Download — https://urlcod.com/2uKa63
-Leo Star Professional is available in more than 12 languages including Hindi and English. It has different versions and modules to suit the needs of different users and astrologers. You can customize the screen as per your choice and need. You can also print horoscopes in various sizes and formats. Leo Star Professional has its own data bank containing birth details of thousands of celebrities which can be used for research and comparative study. It also has a customer support team with remote access facility for any queries or issues.
-Leo Star Professional is the most authentic and accurate astrology software in the market. It meets the prescribed international quality standards and has undergone several tests to prove its credibility. It provides precise information about the running dasha, the balance of dasha, and the accurate placement of planets in the various houses, rashis and nakshatras. It also gives detailed information about the lordship of nakshatras in a horoscope, which most astrology software fails to provide. It is widely acclaimed by users and astrologers as the best Jyotish software.
-If you want to download Leo Star Professional full version, you can visit the official website of Future Point and choose the package that suits your requirements. You can also get a 30 digit activation code after purchasing the software. Alternatively, you can also download Leo Star Professional from other sources such as Catalystsom, but be careful about the authenticity and security of the software.
-Leo Star Professional is a complete solution for all your astrological needs. Whether you are a beginner or a professional astrologer, you will find this software very useful and easy to use. With Leo Star Professional, you can get accurate predictions and guidance for yourself and others.
-
-Features of Leo Star Professional
-Leo Star Professional has many features that make it a comprehensive and user-friendly astrology package. Some of the main features are:
-
-Multiple Horoscopes: You can view and match multiple horoscopes on one screen. You can also compare the horoscopes of different celebrities or famous personalities.
-Various Charts: You can generate various Vedic charts such as Lagna, Navamsa, Bhava, Saptamsa, Dashamsa, Shodashvarga, Ashtakvarga, Shadbala, Vimshottari Dasha, Yogini Dasha, Char Dasha, Kalachakra Dasha, etc. You can also generate various divisional charts such as D-60, D-45, D-30, etc.
-Remedies and Predictions: You can get detailed remedies and predictions based on different systems of astrology such as Parashari, Jaimini, Lal Kitab, KP, Horary, etc. You can also get gem recommendations, rudraksha suggestions, mantra recitation, donation advice, etc.
-Numerology: You can get numerology reports based on your name and date of birth. You can also get lucky numbers, colors, days, etc.
-Horary: You can get answers to your specific questions based on the time and place of query. You can also use different methods such as KP Horary, Prashna Kundali, Ramal Shastra, Tarot Cards, etc.
-Muhurat: You can find auspicious timings for various events and activities such as marriage, travel, business, education, etc. You can also customize the muhurat parameters according to your preferences.
-Panchang: You can get detailed panchang information such as tithi, nakshatra, yoga, karana, rahu kalam, gulika kalam, abhijit muhurat, etc. You can also print the panchang for any past or future year.
-Mundane: You can get mundane astrology reports such as political predictions, natural calamities, war and peace situations, etc. based on the planetary movements and transits.
-Books: You can access various books and articles on astrology written by renowned astrologers and scholars. You can also read the classics of astrology such as Brihat Parashara Hora Shastra, Brihat Jataka, Saravali, Phaladeepika, etc.
-Calendar: You can view the calendar of any year with the information of festivals, holidays, eclipses, etc.
-Transit: You can see the current position of planets and their effects on your horoscope. You can also see the transit chart and the dasha chart simultaneously.
-Mantras: You can listen to various mantras and stotras for different planets and deities. You can also learn the pronunciation and meaning of the mantras.
-
-These are some of the features that make Leo Star Professional a complete astrology package. You can explore more by downloading the software and trying it yourself.
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/tikendraw/movie-recommender/call_api.py b/spaces/tikendraw/movie-recommender/call_api.py
deleted file mode 100644
index bc8c15fd4b798cf08e763c57f4b2eec5308479a7..0000000000000000000000000000000000000000
--- a/spaces/tikendraw/movie-recommender/call_api.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-
-import requests
-
-headers = {
- "X-RapidAPI-Key": os.environ['X-RapidAPI-Key'],
- "X-RapidAPI-Host": os.environ['X-RapidAPI-Host']
-}
-
-urls = [
- "https://imdb-search2.p.rapidapi.com/superman2",
- "https://imdb-search2.p.rapidapi.com/spiderman2",
- "https://imdb-search2.p.rapidapi.com/300",
-]
-
-
-def make_url(name: str) -> str:
-    # Lowercase and trim the title, then percent-encode spaces for the URL path.
-    x = name.lower().strip()
-    x = x.replace(" ", "%20")
-    return f"https://imdb-search2.p.rapidapi.com/{x}"
-
-
-def get_data(name: str, headers: dict = headers) -> dict:
-    # Fetch search results for the given title and return the parsed JSON body.
-    url = make_url(name)
-    response = requests.get(url, headers=headers)
-    return response.json()
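For reference, a minimal sketch of how the helpers above would be used; the movie title here is made up, and the two X-RapidAPI-* environment variables must already be set:

```python
# Hypothetical usage of call_api.py (assumes X-RapidAPI-Key and
# X-RapidAPI-Host are exported in the environment).
from call_api import get_data, make_url

print(make_url("  The Dark Knight "))
# -> https://imdb-search2.p.rapidapi.com/the%20dark%20knight

data = get_data("300")  # parsed JSON from the imdb-search2 endpoint
print(type(data))       # typically a dict
```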
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cj Adkins Equilibrium Thermodynamics Solutions 51.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cj Adkins Equilibrium Thermodynamics Solutions 51.md
deleted file mode 100644
index 805b17bf14dc2f0869db430a637fe3f5e49d4d71..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Cj Adkins Equilibrium Thermodynamics Solutions 51.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-How to Find Solutions for Equilibrium Thermodynamics Problems by C.J. Adkins
-Equilibrium thermodynamics is a branch of physics that studies the properties and behavior of systems in thermal equilibrium. Thermal equilibrium means that there is no net flow of heat between the system and its surroundings. Equilibrium thermodynamics can be used to analyze various phenomena, such as phase transitions, chemical reactions, heat engines, and refrigerators.
-One of the most popular textbooks on equilibrium thermodynamics is Equilibrium Thermodynamics by C.J. Adkins, first published in 1968 and revised in 1983. This book covers the basic concepts and principles of equilibrium thermodynamics, as well as some applications to ideal gases, liquids, solids, and mixtures. The book also includes many exercises and problems for students to practice their skills and understanding.
-cj adkins equilibrium thermodynamics solutions 51 Download ⚙⚙⚙ https://urlcod.com/2uHxzR
-However, finding solutions for these problems can be challenging, especially for beginners. There are no official solutions manuals or online resources that provide detailed answers and explanations for the problems in Adkins' book. Therefore, students have to rely on their own knowledge and creativity to solve them.
-One possible way to find solutions for equilibrium thermodynamics problems by C.J. Adkins is to use online platforms such as Docker[^1^], Trello[^2^], or SoundCloud[^3^]. These platforms allow users to share their work and collaborate with others who have similar interests or goals. For example, on Docker, there is a repository called leoguatore/cj-adkins-equilibrium-thermodynamics-solutions-51 that contains a file with solutions for problem 51 in chapter 5 of Adkins' book. On Trello, there is a board called consfalcugym that has a card with a link to download solutions for various problems in Adkins' book. On SoundCloud, there is an audio clip by Marlene Rickards that explains how to solve problem 51 in chapter 5 of Adkins' book.
-These online platforms can be useful for finding solutions for equilibrium thermodynamics problems by C.J. Adkins, but they also have some limitations and drawbacks. For instance, the quality and accuracy of the solutions may vary depending on the source and author. The solutions may not be complete or comprehensive enough to cover all the aspects of the problem. The solutions may not follow the same notation or conventions as Adkins' book. The solutions may not be updated or maintained regularly. The solutions may not be accessible or available at all times.
-Therefore, students who want to find solutions for equilibrium thermodynamics problems by C.J. Adkins should use these online platforms with caution and discretion. They should always check the validity and reliability of the solutions before using them. They should also try to solve the problems on their own first, before looking for solutions online. They should use the solutions as a reference or a guide, not as a substitute or a shortcut. They should also acknowledge and cite the sources of the solutions properly if they use them in their work.
-
e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Compare Quicken For Mac 2015 Amp 2017.md b/spaces/tioseFevbu/cartoon-converter/scripts/Compare Quicken For Mac 2015 Amp 2017.md
deleted file mode 100644
index f583dd088a120884c794a93e2685b9cf95917bdd..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Compare Quicken For Mac 2015 Amp 2017.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-Compare Quicken For Mac 2015 & 2017: Which One Is Better For You?
-Quicken is one of the most popular personal finance software in the market. It helps you manage your money, budget, investments, and taxes. But if you are a Mac user, you may be wondering which version of Quicken is best for you: Quicken For Mac 2015 or Quicken For Mac 2017?
-Compare Quicken For Mac 2015 & 2017 Download ❤❤❤ https://urlcod.com/2uHxFt
-In this article, we will compare the features, performance, and pricing of these two versions of Quicken For Mac. We will also give you some tips on how to choose the right one for your needs. Let's get started!
-Features
-Quicken For Mac 2015 and Quicken For Mac 2017 have many features in common, such as:
-
-Importing data from previous versions of Quicken or other financial software
-Connecting to over 14,000 financial institutions to download transactions and balances
-Categorizing and tracking your income and expenses
-Creating budgets and savings goals
-Generating reports and graphs to monitor your financial health
-Exporting data to Excel or PDF
-Backing up your data securely online
-Accessing your data from any device with Quicken Mobile app
-
-However, there are also some differences between the two versions. Quicken For Mac 2017 has some new and improved features that Quicken For Mac 2015 does not have, such as:
-
-Customizing your dashboard with widgets to see your most important information at a glance
-Paying bills online from within Quicken with Quicken Bill Pay service (requires subscription)
-Managing your investments with more tools and options, such as portfolio performance analysis, asset allocation, capital gains tracking, and tax reports
-Reconciling your accounts with paper or online statements
-Using Quick Math to calculate simple equations in any field
-Searching for transactions across all accounts with a single query
-Comparing your spending and income with previous periods or budgets
-Syncing your data with iCloud to share it across your devices (requires macOS Sierra or later)
-
-Performance
-Quicken For Mac 2017 is designed to work faster and smoother than Quicken For Mac 2015. It has a more modern and intuitive user interface that is optimized for Retina displays. It also has better compatibility with the latest macOS versions and security updates.
-
-Quicken For Mac 2015 may experience some issues or bugs with newer macOS versions or hardware. It also has a more outdated and cluttered user interface that may be harder to navigate. It may take longer to load or process data than Quicken For Mac 2017.
-Pricing
-Quicken For Mac 2017 is more expensive than Quicken For Mac 2015. The current price for Quicken For Mac 2017 is $74.99 for a one-time purchase. The current price for Quicken For Mac 2015 is $49.99 for a one-time purchase.
-However, Quicken For Mac 2017 also offers more value for money than Quicken For Mac 2015. It has more features and benefits that can help you manage your finances better. It also has more frequent updates and support from the developers.
-Quicken For Mac 2015 may not receive any more updates or support from the developers in the future. It may also become obsolete or incompatible with newer macOS versions or hardware. It may not offer the best user experience or functionality for your needs.
-Tips on How to Choose
-The best way to choose between Quicken For Mac 2015 and Quicken For Mac 2017 is to consider your personal preferences, needs, and budget. Here are some questions you can ask yourself to help you decide:
-
-How important are the new and improved features of Quicken For Mac 2017 for you? cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Kyun Main Jaagoon Mp3 Download 320kbps NEW.md b/spaces/tioseFevbu/cartoon-converter/scripts/Kyun Main Jaagoon Mp3 Download 320kbps NEW.md
deleted file mode 100644
index e56179eb3ed013c2740c5a6be195fdbe693701da..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Kyun Main Jaagoon Mp3 Download 320kbps NEW.md
+++ /dev/null
@@ -1,37 +0,0 @@
-
-Kyun Main Jaagoon Mp3 Download 320kbps: How to Get the Best Quality Song from Patiala House
-
-If you are looking for a high-quality mp3 download of Kyun Main Jaagoon, the soulful song by Shafqat Amanat Ali from the movie Patiala House, you have come to the right place. In this article, we will show you how to get the best quality song from Patiala House in 320kbps, which is the highest bitrate available for mp3 files.
-
-Kyun Main Jaagoon is a beautiful song that expresses the emotions of a man who is torn between his dreams and his family. The song was composed by Shankar-Ehsaan-Loy and written by Anvita Dutt Guptan. It was sung by Shafqat Amanat Ali, who is known for his melodious voice and versatile singing style. The song was released in 2011 as part of the soundtrack of Patiala House, a movie starring Akshay Kumar and Anushka Sharma.
-kyun main jaagoon mp3 download 320kbps Download Zip https://urlcod.com/2uHyLH
-
-Many people love this song and want to download it in mp3 format to listen to it offline or on their devices. However, not all mp3 downloads are created equal. Some websites may offer low-quality or corrupted files that can ruin your listening experience. To avoid this, you need to find a reliable and trustworthy website that offers Kyun Main Jaagoon mp3 download 320kbps.
-
-How to Download Kyun Main Jaagoon Mp3 320kbps
-
-There are many websites that claim to offer Kyun Main Jaagoon mp3 download 320kbps, but not all of them are safe or legal. Some may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some may also violate the copyright laws and infringe on the rights of the artists and producers.
-
-To download Kyun Main Jaagoon mp3 320kbps safely and legally, you need to use a reputable website that has a license to distribute the song. One such website is Pagalworld[^1^], which is one of the most popular and trusted sources for Bollywood mp3 songs. Pagalworld offers Kyun Main Jaagoon mp3 download 320kbps in high quality and fast speed. You can also find other songs from Patiala House and other Bollywood movies on Pagalworld.
-
-To download Kyun Main Jaagoon mp3 320kbps from Pagalworld, you need to follow these simple steps:
-
-
-Go to Pagalworld website and search for Kyun Main Jaagoon (Patiala House) Mp3 Song Download.
-Click on the link that says Kyun Main Jaagoon (Patiala House) Mp3 Song Download Pagalworld.
-Choose the bitrate you want to download. For the best quality, select 320 KBPS MP3.
-Click on the download button and wait for the file to be downloaded.
-Enjoy listening to Kyun Main Jaagoon mp3 320kbps on your device.
-
-
-Why Download Kyun Main Jaagoon Mp3 320kbps
-
-There are many benefits of downloading Kyun Main Jaagoon mp3 320kbps instead of lower bitrates. Here are some of them:
-
-
-
-You get the best sound quality possible for mp3 files. 320kbps means that there are 320 kilobits of data per second in the file, which translates to more detail and clarity in the sound; see the quick size estimate after this list for what that means in practice. You can hear every nuance and emotion in Shafqat Amanat Ali's voice and appreciate the music better.
-You get a larger file that can store more information and metadata. Metadata is the information that is embedded in the file, such as the title, artist, album, genre, cover art, lyrics, etc. Metadata can help you organize your music library and enhance your listening experience.
-You get a more compatible file format that can play on most devices and platforms. Mp3 is one of the most widely used and supported audio formats in the world. You can play Kyun Main Jaagoon mp3 320kbps on your computer, smartphone, or almost any other device. 81aa517590
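To put the bitrate figure in perspective, here is a rough back-of-the-envelope size estimate; the four-minute duration is an assumption for illustration only:

```python
# Back-of-the-envelope MP3 size: size = bitrate * duration / 8.
bitrate_kbps = 320       # kilobits per second (highest standard MP3 bitrate)
duration_s = 4 * 60      # assumed ~4-minute song
size_kb = bitrate_kbps * duration_s / 8   # kilobits -> kilobytes
print(f"~{size_kb / 1000:.1f} MB")        # ~9.6 MB
```

So even at the highest MP3 bitrate, a full song stays under 10 MB, which is why 320kbps is usually worth the extra space.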
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py
deleted file mode 100644
index 9a89a838b9a5cb264e9ae9d269fbedca6e2d6333..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from pip._internal.distributions.base import AbstractDistribution
-from pip._internal.distributions.sdist import SourceDistribution
-from pip._internal.distributions.wheel import WheelDistribution
-from pip._internal.req.req_install import InstallRequirement
-
-
-def make_distribution_for_install_requirement(
- install_req: InstallRequirement,
-) -> AbstractDistribution:
- """Returns a Distribution for the given InstallRequirement"""
- # Editable requirements will always be source distributions. They use the
- # legacy logic until we create a modern standard for them.
- if install_req.editable:
- return SourceDistribution(install_req)
-
- # If it's a wheel, it's a WheelDistribution
- if install_req.is_wheel:
- return WheelDistribution(install_req)
-
- # Otherwise, a SourceDistribution
- return SourceDistribution(install_req)
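For orientation, a quick sketch of what this factory returns for a plain named requirement. This leans on pip's internal, unstable API, and the requirement string is just an example:

```python
# Sketch: which Distribution subclass the factory picks for a named
# requirement (pip internal API; subject to change between pip versions).
from pip._internal.distributions import make_distribution_for_install_requirement
from pip._internal.req.constructors import install_req_from_line

req = install_req_from_line("requests==2.28.0")  # not editable, no wheel link yet
dist = make_distribution_for_install_requirement(req)
print(type(dist).__name__)  # SourceDistribution
```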
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/check.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/check.py
deleted file mode 100644
index fb3ac8b9c9ea57ec1bb667cb8e904a8b5b2f9df2..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/check.py
+++ /dev/null
@@ -1,149 +0,0 @@
-"""Validation of dependencies of packages
-"""
-
-import logging
-from typing import Callable, Dict, List, NamedTuple, Optional, Set, Tuple
-
-from pip._vendor.packaging.requirements import Requirement
-from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
-
-from pip._internal.distributions import make_distribution_for_install_requirement
-from pip._internal.metadata import get_default_environment
-from pip._internal.metadata.base import DistributionVersion
-from pip._internal.req.req_install import InstallRequirement
-
-logger = logging.getLogger(__name__)
-
-
-class PackageDetails(NamedTuple):
- version: DistributionVersion
- dependencies: List[Requirement]
-
-
-# Shorthands
-PackageSet = Dict[NormalizedName, PackageDetails]
-Missing = Tuple[NormalizedName, Requirement]
-Conflicting = Tuple[NormalizedName, DistributionVersion, Requirement]
-
-MissingDict = Dict[NormalizedName, List[Missing]]
-ConflictingDict = Dict[NormalizedName, List[Conflicting]]
-CheckResult = Tuple[MissingDict, ConflictingDict]
-ConflictDetails = Tuple[PackageSet, CheckResult]
-
-
-def create_package_set_from_installed() -> Tuple[PackageSet, bool]:
- """Converts a list of distributions into a PackageSet."""
- package_set = {}
- problems = False
- env = get_default_environment()
- for dist in env.iter_installed_distributions(local_only=False, skip=()):
- name = dist.canonical_name
- try:
- dependencies = list(dist.iter_dependencies())
- package_set[name] = PackageDetails(dist.version, dependencies)
- except (OSError, ValueError) as e:
- # Don't crash on unreadable or broken metadata.
- logger.warning("Error parsing requirements for %s: %s", name, e)
- problems = True
- return package_set, problems
-
-
-def check_package_set(
- package_set: PackageSet, should_ignore: Optional[Callable[[str], bool]] = None
-) -> CheckResult:
- """Check if a package set is consistent
-
- If should_ignore is passed, it should be a callable that takes a
- package name and returns a boolean.
- """
-
- missing = {}
- conflicting = {}
-
- for package_name, package_detail in package_set.items():
- # Info about dependencies of package_name
- missing_deps: Set[Missing] = set()
- conflicting_deps: Set[Conflicting] = set()
-
- if should_ignore and should_ignore(package_name):
- continue
-
- for req in package_detail.dependencies:
- name = canonicalize_name(req.name)
-
- # Check if it's missing
- if name not in package_set:
- missed = True
- if req.marker is not None:
- missed = req.marker.evaluate()
- if missed:
- missing_deps.add((name, req))
- continue
-
- # Check if there's a conflict
- version = package_set[name].version
- if not req.specifier.contains(version, prereleases=True):
- conflicting_deps.add((name, version, req))
-
- if missing_deps:
- missing[package_name] = sorted(missing_deps, key=str)
- if conflicting_deps:
- conflicting[package_name] = sorted(conflicting_deps, key=str)
-
- return missing, conflicting
-
-
-def check_install_conflicts(to_install: List[InstallRequirement]) -> ConflictDetails:
- """For checking if the dependency graph would be consistent after \
- installing given requirements
- """
- # Start from the current state
- package_set, _ = create_package_set_from_installed()
- # Install packages
- would_be_installed = _simulate_installation_of(to_install, package_set)
-
- # Only warn about directly-dependent packages; create a whitelist of them
- whitelist = _create_whitelist(would_be_installed, package_set)
-
- return (
- package_set,
- check_package_set(
- package_set, should_ignore=lambda name: name not in whitelist
- ),
- )
-
-
-def _simulate_installation_of(
- to_install: List[InstallRequirement], package_set: PackageSet
-) -> Set[NormalizedName]:
- """Computes the version of packages after installing to_install."""
- # Keep track of packages that were installed
- installed = set()
-
- # Modify it as installing requirement_set would (assuming no errors)
- for inst_req in to_install:
- abstract_dist = make_distribution_for_install_requirement(inst_req)
- dist = abstract_dist.get_metadata_distribution()
- name = dist.canonical_name
- package_set[name] = PackageDetails(dist.version, list(dist.iter_dependencies()))
-
- installed.add(name)
-
- return installed
-
-
-def _create_whitelist(
- would_be_installed: Set[NormalizedName], package_set: PackageSet
-) -> Set[NormalizedName]:
- packages_affected = set(would_be_installed)
-
- for package_name in package_set:
- if package_name in packages_affected:
- continue
-
- for req in package_set[package_name].dependencies:
- if canonicalize_name(req.name) in packages_affected:
- packages_affected.add(package_name)
- break
-
- return packages_affected
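As a sanity check, check_package_set can be exercised directly with a hand-built package set. This is a sketch against pip's internal (unstable) API; the package names and versions are invented:

```python
# Toy demonstration of check_package_set with a deliberately broken set:
# "flask" requires werkzeug>=2.0 but only werkzeug 1.0.1 is "installed".
from pip._internal.operations.check import PackageDetails, check_package_set
from pip._vendor.packaging.requirements import Requirement
from pip._vendor.packaging.utils import canonicalize_name
from pip._vendor.packaging.version import Version

package_set = {
    canonicalize_name("flask"): PackageDetails(
        Version("2.0.0"), [Requirement("werkzeug>=2.0")]
    ),
    canonicalize_name("werkzeug"): PackageDetails(Version("1.0.1"), []),
}

missing, conflicting = check_package_set(package_set)
print(missing)      # {}  -- every dependency name is present
print(conflicting)  # {'flask': [('werkzeug', <Version('1.0.1')>, ...)]}
```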
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/certs.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/certs.py
deleted file mode 100644
index 2743144b9944d9a20e7fcd0cad360c4cd06a42be..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/certs.py
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env python
-
-"""
-requests.certs
-~~~~~~~~~~~~~~
-
-This module returns the preferred default CA certificate bundle. There is
-only one — the one from the certifi package.
-
-If you are packaging Requests, e.g., for a Linux distribution or a managed
-environment, you can change the definition of where() to return a separately
-packaged CA bundle.
-"""
-from pip._vendor.certifi import where
-
-if __name__ == "__main__":
- print(where())
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/utils.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/utils.py
deleted file mode 100644
index 33f394d265d5da17dd5b3c2467e2e4e71af1395d..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/utils.py
+++ /dev/null
@@ -1,1086 +0,0 @@
-"""
-requests.utils
-~~~~~~~~~~~~~~
-
-This module provides utility functions that are used within Requests
-that are also useful for external consumption.
-"""
-
-import codecs
-import contextlib
-import io
-import os
-import re
-import socket
-import struct
-import sys
-import tempfile
-import warnings
-import zipfile
-from collections import OrderedDict
-
-from pip._vendor.urllib3.util import make_headers, parse_url
-
-from . import certs
-from .__version__ import __version__
-
-# to_native_string is unused here, but imported here for backwards compatibility
-from ._internal_utils import HEADER_VALIDATORS, to_native_string # noqa: F401
-from .compat import (
- Mapping,
- basestring,
- bytes,
- getproxies,
- getproxies_environment,
- integer_types,
-)
-from .compat import parse_http_list as _parse_list_header
-from .compat import (
- proxy_bypass,
- proxy_bypass_environment,
- quote,
- str,
- unquote,
- urlparse,
- urlunparse,
-)
-from .cookies import cookiejar_from_dict
-from .exceptions import (
- FileModeWarning,
- InvalidHeader,
- InvalidURL,
- UnrewindableBodyError,
-)
-from .structures import CaseInsensitiveDict
-
-NETRC_FILES = (".netrc", "_netrc")
-
-DEFAULT_CA_BUNDLE_PATH = certs.where()
-
-DEFAULT_PORTS = {"http": 80, "https": 443}
-
-# Ensure that ', ' is used to preserve previous delimiter behavior.
-DEFAULT_ACCEPT_ENCODING = ", ".join(
- re.split(r",\s*", make_headers(accept_encoding=True)["accept-encoding"])
-)
-
-
-if sys.platform == "win32":
- # provide a proxy_bypass version on Windows without DNS lookups
-
- def proxy_bypass_registry(host):
- try:
- import winreg
- except ImportError:
- return False
-
- try:
- internetSettings = winreg.OpenKey(
- winreg.HKEY_CURRENT_USER,
- r"Software\Microsoft\Windows\CurrentVersion\Internet Settings",
- )
- # ProxyEnable could be REG_SZ or REG_DWORD, normalizing it
- proxyEnable = int(winreg.QueryValueEx(internetSettings, "ProxyEnable")[0])
- # ProxyOverride is almost always a string
- proxyOverride = winreg.QueryValueEx(internetSettings, "ProxyOverride")[0]
- except (OSError, ValueError):
- return False
- if not proxyEnable or not proxyOverride:
- return False
-
- # make a check value list from the registry entry: replace the
- # '' string by the localhost entry and the corresponding
- # canonical entry.
- proxyOverride = proxyOverride.split(";")
- # now check if we match one of the registry values.
- for test in proxyOverride:
- if test == "":
- if "." not in host:
- return True
- test = test.replace(".", r"\.") # mask dots
- test = test.replace("*", r".*") # change glob sequence
- test = test.replace("?", r".") # change glob char
- if re.match(test, host, re.I):
- return True
- return False
-
- def proxy_bypass(host): # noqa
- """Return True, if the host should be bypassed.
-
- Checks proxy settings gathered from the environment, if specified,
- or the registry.
- """
- if getproxies_environment():
- return proxy_bypass_environment(host)
- else:
- return proxy_bypass_registry(host)
-
-
-def dict_to_sequence(d):
- """Returns an internal sequence dictionary update."""
-
- if hasattr(d, "items"):
- d = d.items()
-
- return d
-
-
-def super_len(o):
- total_length = None
- current_position = 0
-
- if hasattr(o, "__len__"):
- total_length = len(o)
-
- elif hasattr(o, "len"):
- total_length = o.len
-
- elif hasattr(o, "fileno"):
- try:
- fileno = o.fileno()
- except (io.UnsupportedOperation, AttributeError):
- # AttributeError is a surprising exception, seeing as how we've just checked
- # that `hasattr(o, 'fileno')`. It happens for objects obtained via
- # `Tarfile.extractfile()`, per issue 5229.
- pass
- else:
- total_length = os.fstat(fileno).st_size
-
- # Having used fstat to determine the file length, we need to
- # confirm that this file was opened up in binary mode.
- if "b" not in o.mode:
- warnings.warn(
- (
- "Requests has determined the content-length for this "
- "request using the binary size of the file: however, the "
- "file has been opened in text mode (i.e. without the 'b' "
- "flag in the mode). This may lead to an incorrect "
- "content-length. In Requests 3.0, support will be removed "
- "for files in text mode."
- ),
- FileModeWarning,
- )
-
- if hasattr(o, "tell"):
- try:
- current_position = o.tell()
- except OSError:
- # This can happen in some weird situations, such as when the file
- # is actually a special file descriptor like stdin. In this
- # instance, we don't know what the length is, so set it to zero and
- # let requests chunk it instead.
- if total_length is not None:
- current_position = total_length
- else:
- if hasattr(o, "seek") and total_length is None:
- # StringIO and BytesIO have seek but no usable fileno
- try:
- # seek to end of file
- o.seek(0, 2)
- total_length = o.tell()
-
- # seek back to current position to support
- # partially read file-like objects
- o.seek(current_position or 0)
- except OSError:
- total_length = 0
-
- if total_length is None:
- total_length = 0
-
- return max(0, total_length - current_position)
-
-
-def get_netrc_auth(url, raise_errors=False):
- """Returns the Requests tuple auth for a given url from netrc."""
-
- netrc_file = os.environ.get("NETRC")
- if netrc_file is not None:
- netrc_locations = (netrc_file,)
- else:
- netrc_locations = (f"~/{f}" for f in NETRC_FILES)
-
- try:
- from netrc import NetrcParseError, netrc
-
- netrc_path = None
-
- for f in netrc_locations:
- try:
- loc = os.path.expanduser(f)
- except KeyError:
- # os.path.expanduser can fail when $HOME is undefined and
- # getpwuid fails. See https://bugs.python.org/issue20164 &
- # https://github.com/psf/requests/issues/1846
- return
-
- if os.path.exists(loc):
- netrc_path = loc
- break
-
- # Abort early if there isn't one.
- if netrc_path is None:
- return
-
- ri = urlparse(url)
-
- # Strip port numbers from netloc. This weird `if...encode`` dance is
- # used for Python 3.2, which doesn't support unicode literals.
- splitstr = b":"
- if isinstance(url, str):
- splitstr = splitstr.decode("ascii")
- host = ri.netloc.split(splitstr)[0]
-
- try:
- _netrc = netrc(netrc_path).authenticators(host)
- if _netrc:
- # Return with login / password
- login_i = 0 if _netrc[0] else 1
- return (_netrc[login_i], _netrc[2])
- except (NetrcParseError, OSError):
- # If there was a parsing error or a permissions issue reading the file,
- # we'll just skip netrc auth unless explicitly asked to raise errors.
- if raise_errors:
- raise
-
- # App Engine hackiness.
- except (ImportError, AttributeError):
- pass
-
-
-def guess_filename(obj):
- """Tries to guess the filename of the given object."""
- name = getattr(obj, "name", None)
- if name and isinstance(name, basestring) and name[0] != "<" and name[-1] != ">":
- return os.path.basename(name)
-
-
-def extract_zipped_paths(path):
- """Replace nonexistent paths that look like they refer to a member of a zip
- archive with the location of an extracted copy of the target, or else
- just return the provided path unchanged.
- """
- if os.path.exists(path):
- # this is already a valid path, no need to do anything further
- return path
-
- # find the first valid part of the provided path and treat that as a zip archive
- # assume the rest of the path is the name of a member in the archive
- archive, member = os.path.split(path)
- while archive and not os.path.exists(archive):
- archive, prefix = os.path.split(archive)
- if not prefix:
- # If we don't check for an empty prefix after the split (in other words, archive remains unchanged after the split),
- # we _can_ end up in an infinite loop on a rare corner case affecting a small number of users
- break
- member = "/".join([prefix, member])
-
- if not zipfile.is_zipfile(archive):
- return path
-
- zip_file = zipfile.ZipFile(archive)
- if member not in zip_file.namelist():
- return path
-
- # we have a valid zip archive and a valid member of that archive
- tmp = tempfile.gettempdir()
- extracted_path = os.path.join(tmp, member.split("/")[-1])
- if not os.path.exists(extracted_path):
- # use read + write to avoid the creating nested folders, we only want the file, avoids mkdir racing condition
- with atomic_open(extracted_path) as file_handler:
- file_handler.write(zip_file.read(member))
- return extracted_path
-
-
-@contextlib.contextmanager
-def atomic_open(filename):
- """Write a file to the disk in an atomic fashion"""
- tmp_descriptor, tmp_name = tempfile.mkstemp(dir=os.path.dirname(filename))
- try:
- with os.fdopen(tmp_descriptor, "wb") as tmp_handler:
- yield tmp_handler
- os.replace(tmp_name, filename)
- except BaseException:
- os.remove(tmp_name)
- raise
-
-
-def from_key_val_list(value):
- """Take an object and test to see if it can be represented as a
- dictionary. Unless it can not be represented as such, return an
- OrderedDict, e.g.,
-
- ::
-
- >>> from_key_val_list([('key', 'val')])
- OrderedDict([('key', 'val')])
- >>> from_key_val_list('string')
- Traceback (most recent call last):
- ...
- ValueError: cannot encode objects that are not 2-tuples
- >>> from_key_val_list({'key': 'val'})
- OrderedDict([('key', 'val')])
-
- :rtype: OrderedDict
- """
- if value is None:
- return None
-
- if isinstance(value, (str, bytes, bool, int)):
- raise ValueError("cannot encode objects that are not 2-tuples")
-
- return OrderedDict(value)
-
-
-def to_key_val_list(value):
- """Take an object and test to see if it can be represented as a
- dictionary. If it can be, return a list of tuples, e.g.,
-
- ::
-
- >>> to_key_val_list([('key', 'val')])
- [('key', 'val')]
- >>> to_key_val_list({'key': 'val'})
- [('key', 'val')]
- >>> to_key_val_list('string')
- Traceback (most recent call last):
- ...
- ValueError: cannot encode objects that are not 2-tuples
-
- :rtype: list
- """
- if value is None:
- return None
-
- if isinstance(value, (str, bytes, bool, int)):
- raise ValueError("cannot encode objects that are not 2-tuples")
-
- if isinstance(value, Mapping):
- value = value.items()
-
- return list(value)
-
-
-# From mitsuhiko/werkzeug (used with permission).
-def parse_list_header(value):
- """Parse lists as described by RFC 2068 Section 2.
-
- In particular, parse comma-separated lists where the elements of
- the list may include quoted-strings. A quoted-string could
- contain a comma. A non-quoted string could have quotes in the
- middle. Quotes are removed automatically after parsing.
-
- It basically works like :func:`parse_set_header` just that items
- may appear multiple times and case sensitivity is preserved.
-
- The return value is a standard :class:`list`:
-
- >>> parse_list_header('token, "quoted value"')
- ['token', 'quoted value']
-
- To create a header from the :class:`list` again, use the
- :func:`dump_header` function.
-
- :param value: a string with a list header.
- :return: :class:`list`
- :rtype: list
- """
- result = []
- for item in _parse_list_header(value):
- if item[:1] == item[-1:] == '"':
- item = unquote_header_value(item[1:-1])
- result.append(item)
- return result
-
-
-# From mitsuhiko/werkzeug (used with permission).
-def parse_dict_header(value):
- """Parse lists of key, value pairs as described by RFC 2068 Section 2 and
- convert them into a python dict:
-
- >>> d = parse_dict_header('foo="is a fish", bar="as well"')
- >>> type(d) is dict
- True
- >>> sorted(d.items())
- [('bar', 'as well'), ('foo', 'is a fish')]
-
- If there is no value for a key it will be `None`:
-
- >>> parse_dict_header('key_without_value')
- {'key_without_value': None}
-
- To create a header from the :class:`dict` again, use the
- :func:`dump_header` function.
-
- :param value: a string with a dict header.
- :return: :class:`dict`
- :rtype: dict
- """
- result = {}
- for item in _parse_list_header(value):
- if "=" not in item:
- result[item] = None
- continue
- name, value = item.split("=", 1)
- if value[:1] == value[-1:] == '"':
- value = unquote_header_value(value[1:-1])
- result[name] = value
- return result
-
-
-# From mitsuhiko/werkzeug (used with permission).
-def unquote_header_value(value, is_filename=False):
- r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
- This does not use the real unquoting but what browsers are actually
- using for quoting.
-
- :param value: the header value to unquote.
- :rtype: str
- """
- if value and value[0] == value[-1] == '"':
- # this is not the real unquoting, but fixing this so that the
- # RFC is met will result in bugs with internet explorer and
- # probably some other browsers as well. IE for example is
- # uploading files with "C:\foo\bar.txt" as filename
- value = value[1:-1]
-
- # if this is a filename and the starting characters look like
- # a UNC path, then just return the value without quotes. Using the
- # replace sequence below on a UNC path has the effect of turning
- # the leading double slash into a single slash and then
- # _fix_ie_filename() doesn't work correctly. See #458.
- if not is_filename or value[:2] != "\\\\":
- return value.replace("\\\\", "\\").replace('\\"', '"')
- return value
-
-
-def dict_from_cookiejar(cj):
- """Returns a key/value dictionary from a CookieJar.
-
- :param cj: CookieJar object to extract cookies from.
- :rtype: dict
- """
-
- cookie_dict = {}
-
- for cookie in cj:
- cookie_dict[cookie.name] = cookie.value
-
- return cookie_dict
-
-
-def add_dict_to_cookiejar(cj, cookie_dict):
- """Returns a CookieJar from a key/value dictionary.
-
- :param cj: CookieJar to insert cookies into.
- :param cookie_dict: Dict of key/values to insert into CookieJar.
- :rtype: CookieJar
- """
-
- return cookiejar_from_dict(cookie_dict, cj)
-
-
-def get_encodings_from_content(content):
- """Returns encodings from given content string.
-
- :param content: bytestring to extract encodings from.
- """
- warnings.warn(
- (
- "In requests 3.0, get_encodings_from_content will be removed. For "
- "more information, please see the discussion on issue #2266. (This"
- " warning should only appear once.)"
- ),
- DeprecationWarning,
- )
-
-    charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
-    pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
- xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')
-
- return (
- charset_re.findall(content)
- + pragma_re.findall(content)
- + xml_re.findall(content)
- )
-
-
-def _parse_content_type_header(header):
- """Returns content type and parameters from given header
-
- :param header: string
- :return: tuple containing content type and dictionary of
- parameters
- """
-
- tokens = header.split(";")
- content_type, params = tokens[0].strip(), tokens[1:]
- params_dict = {}
- items_to_strip = "\"' "
-
- for param in params:
- param = param.strip()
- if param:
- key, value = param, True
- index_of_equals = param.find("=")
- if index_of_equals != -1:
- key = param[:index_of_equals].strip(items_to_strip)
- value = param[index_of_equals + 1 :].strip(items_to_strip)
- params_dict[key.lower()] = value
- return content_type, params_dict
-
-
-def get_encoding_from_headers(headers):
- """Returns encodings from given HTTP Header Dict.
-
- :param headers: dictionary to extract encoding from.
- :rtype: str
- """
-
- content_type = headers.get("content-type")
-
- if not content_type:
- return None
-
- content_type, params = _parse_content_type_header(content_type)
-
- if "charset" in params:
- return params["charset"].strip("'\"")
-
- if "text" in content_type:
- return "ISO-8859-1"
-
- if "application/json" in content_type:
- # Assume UTF-8 based on RFC 4627: https://www.ietf.org/rfc/rfc4627.txt since the charset was unset
- return "utf-8"
-
-
-def stream_decode_response_unicode(iterator, r):
- """Stream decodes an iterator."""
-
- if r.encoding is None:
- yield from iterator
- return
-
- decoder = codecs.getincrementaldecoder(r.encoding)(errors="replace")
- for chunk in iterator:
- rv = decoder.decode(chunk)
- if rv:
- yield rv
- rv = decoder.decode(b"", final=True)
- if rv:
- yield rv
-
-
-def iter_slices(string, slice_length):
- """Iterate over slices of a string."""
- pos = 0
- if slice_length is None or slice_length <= 0:
- slice_length = len(string)
- while pos < len(string):
- yield string[pos : pos + slice_length]
- pos += slice_length
-
-
-def get_unicode_from_response(r):
- """Returns the requested content back in unicode.
-
- :param r: Response object to get unicode content from.
-
- Tried:
-
- 1. charset from content-type
- 2. fall back and replace all unicode characters
-
- :rtype: str
- """
- warnings.warn(
- (
- "In requests 3.0, get_unicode_from_response will be removed. For "
- "more information, please see the discussion on issue #2266. (This"
- " warning should only appear once.)"
- ),
- DeprecationWarning,
- )
-
- tried_encodings = []
-
- # Try charset from content-type
- encoding = get_encoding_from_headers(r.headers)
-
- if encoding:
- try:
- return str(r.content, encoding)
- except UnicodeError:
- tried_encodings.append(encoding)
-
- # Fall back:
- try:
- return str(r.content, encoding, errors="replace")
- except TypeError:
- return r.content
-
-
-# The unreserved URI characters (RFC 3986)
-UNRESERVED_SET = frozenset(
- "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" + "0123456789-._~"
-)
-
-
-def unquote_unreserved(uri):
- """Un-escape any percent-escape sequences in a URI that are unreserved
- characters. This leaves all reserved, illegal and non-ASCII bytes encoded.
-
- :rtype: str
- """
- parts = uri.split("%")
- for i in range(1, len(parts)):
- h = parts[i][0:2]
- if len(h) == 2 and h.isalnum():
- try:
- c = chr(int(h, 16))
- except ValueError:
- raise InvalidURL(f"Invalid percent-escape sequence: '{h}'")
-
- if c in UNRESERVED_SET:
- parts[i] = c + parts[i][2:]
- else:
- parts[i] = f"%{parts[i]}"
- else:
- parts[i] = f"%{parts[i]}"
- return "".join(parts)
-
-
-def requote_uri(uri):
- """Re-quote the given URI.
-
- This function passes the given URI through an unquote/quote cycle to
- ensure that it is fully and consistently quoted.
-
- :rtype: str
- """
- safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
- safe_without_percent = "!#$&'()*+,/:;=?@[]~"
- try:
- # Unquote only the unreserved characters
- # Then quote only illegal characters (do not quote reserved,
- # unreserved, or '%')
- return quote(unquote_unreserved(uri), safe=safe_with_percent)
- except InvalidURL:
- # We couldn't unquote the given URI, so let's try quoting it, but
- # there may be unquoted '%'s in the URI. We need to make sure they're
- # properly quoted so they do not cause issues elsewhere.
- return quote(uri, safe=safe_without_percent)
-
-
-def address_in_network(ip, net):
- """This function allows you to check if an IP belongs to a network subnet
-
- Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24
- returns False if ip = 192.168.1.1 and net = 192.168.100.0/24
-
- :rtype: bool
- """
- ipaddr = struct.unpack("=L", socket.inet_aton(ip))[0]
- netaddr, bits = net.split("/")
- netmask = struct.unpack("=L", socket.inet_aton(dotted_netmask(int(bits))))[0]
- network = struct.unpack("=L", socket.inet_aton(netaddr))[0] & netmask
- return (ipaddr & netmask) == (network & netmask)
-
-
-def dotted_netmask(mask):
- """Converts mask from /xx format to xxx.xxx.xxx.xxx
-
- Example: if mask is 24 function returns 255.255.255.0
-
- :rtype: str
- """
- bits = 0xFFFFFFFF ^ (1 << 32 - mask) - 1
- return socket.inet_ntoa(struct.pack(">I", bits))
-
-
-def is_ipv4_address(string_ip):
- """
- :rtype: bool
- """
- try:
- socket.inet_aton(string_ip)
- except OSError:
- return False
- return True
-
-
-def is_valid_cidr(string_network):
- """
- Very simple check of the cidr format in no_proxy variable.
-
- :rtype: bool
- """
- if string_network.count("/") == 1:
- try:
- mask = int(string_network.split("/")[1])
- except ValueError:
- return False
-
- if mask < 1 or mask > 32:
- return False
-
- try:
- socket.inet_aton(string_network.split("/")[0])
- except OSError:
- return False
- else:
- return False
- return True
-
-
-@contextlib.contextmanager
-def set_environ(env_name, value):
- """Set the environment variable 'env_name' to 'value'
-
- Save previous value, yield, and then restore the previous value stored in
- the environment variable 'env_name'.
-
- If 'value' is None, do nothing"""
- value_changed = value is not None
- if value_changed:
- old_value = os.environ.get(env_name)
- os.environ[env_name] = value
- try:
- yield
- finally:
- if value_changed:
- if old_value is None:
- del os.environ[env_name]
- else:
- os.environ[env_name] = old_value
-
-
-def should_bypass_proxies(url, no_proxy):
- """
- Returns whether we should bypass proxies or not.
-
- :rtype: bool
- """
- # Prioritize lowercase environment variables over uppercase
- # to keep a consistent behaviour with other http projects (curl, wget).
- def get_proxy(key):
- return os.environ.get(key) or os.environ.get(key.upper())
-
- # First check whether no_proxy is defined. If it is, check that the URL
- # we're getting isn't in the no_proxy list.
- no_proxy_arg = no_proxy
- if no_proxy is None:
- no_proxy = get_proxy("no_proxy")
- parsed = urlparse(url)
-
- if parsed.hostname is None:
- # URLs don't always have hostnames, e.g. file:/// urls.
- return True
-
- if no_proxy:
- # We need to check whether we match here. We need to see if we match
- # the end of the hostname, both with and without the port.
- no_proxy = (host for host in no_proxy.replace(" ", "").split(",") if host)
-
- if is_ipv4_address(parsed.hostname):
- for proxy_ip in no_proxy:
- if is_valid_cidr(proxy_ip):
- if address_in_network(parsed.hostname, proxy_ip):
- return True
- elif parsed.hostname == proxy_ip:
- # If no_proxy ip was defined in plain IP notation instead of cidr notation &
- # matches the IP of the index
- return True
- else:
- host_with_port = parsed.hostname
- if parsed.port:
- host_with_port += f":{parsed.port}"
-
- for host in no_proxy:
- if parsed.hostname.endswith(host) or host_with_port.endswith(host):
- # The URL does match something in no_proxy, so we don't want
- # to apply the proxies on this URL.
- return True
-
- with set_environ("no_proxy", no_proxy_arg):
- # parsed.hostname can be `None` in cases such as a file URI.
- try:
- bypass = proxy_bypass(parsed.hostname)
- except (TypeError, socket.gaierror):
- bypass = False
-
- if bypass:
- return True
-
- return False
-
-
-def get_environ_proxies(url, no_proxy=None):
- """
- Return a dict of environment proxies.
-
- :rtype: dict
- """
- if should_bypass_proxies(url, no_proxy=no_proxy):
- return {}
- else:
- return getproxies()
-
-
-def select_proxy(url, proxies):
- """Select a proxy for the url, if applicable.
-
- :param url: The url being for the request
- :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
- """
- proxies = proxies or {}
- urlparts = urlparse(url)
- if urlparts.hostname is None:
- return proxies.get(urlparts.scheme, proxies.get("all"))
-
- proxy_keys = [
- urlparts.scheme + "://" + urlparts.hostname,
- urlparts.scheme,
- "all://" + urlparts.hostname,
- "all",
- ]
- proxy = None
- for proxy_key in proxy_keys:
- if proxy_key in proxies:
- proxy = proxies[proxy_key]
- break
-
- return proxy
-
-
-def resolve_proxies(request, proxies, trust_env=True):
- """This method takes proxy information from a request and configuration
- input to resolve a mapping of target proxies. This will consider settings
- such a NO_PROXY to strip proxy configurations.
-
- :param request: Request or PreparedRequest
- :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
- :param trust_env: Boolean declaring whether to trust environment configs
-
- :rtype: dict
- """
- proxies = proxies if proxies is not None else {}
- url = request.url
- scheme = urlparse(url).scheme
- no_proxy = proxies.get("no_proxy")
- new_proxies = proxies.copy()
-
- if trust_env and not should_bypass_proxies(url, no_proxy=no_proxy):
- environ_proxies = get_environ_proxies(url, no_proxy=no_proxy)
-
- proxy = environ_proxies.get(scheme, environ_proxies.get("all"))
-
- if proxy:
- new_proxies.setdefault(scheme, proxy)
- return new_proxies
-
-
-def default_user_agent(name="python-requests"):
- """
- Return a string representing the default user agent.
-
- :rtype: str
- """
- return f"{name}/{__version__}"
-
-
-def default_headers():
- """
- :rtype: requests.structures.CaseInsensitiveDict
- """
- return CaseInsensitiveDict(
- {
- "User-Agent": default_user_agent(),
- "Accept-Encoding": DEFAULT_ACCEPT_ENCODING,
- "Accept": "*/*",
- "Connection": "keep-alive",
- }
- )
-
-
-def parse_header_links(value):
- """Return a list of parsed link headers proxies.
-
-    i.e. Link: <http:/.../front.jpeg>; rel=front; type="image/jpeg",<http://.../back.jpeg>; rel=back;type="image/jpeg"
-
- :rtype: list
- """
-
- links = []
-
- replace_chars = " '\""
-
- value = value.strip(replace_chars)
- if not value:
- return links
-
- for val in re.split(", *<", value):
- try:
- url, params = val.split(";", 1)
- except ValueError:
- url, params = val, ""
-
- link = {"url": url.strip("<> '\"")}
-
- for param in params.split(";"):
- try:
- key, value = param.split("=")
- except ValueError:
- break
-
- link[key.strip(replace_chars)] = value.strip(replace_chars)
-
- links.append(link)
-
- return links
-
-
-# Null bytes; no need to recreate these on each call to guess_json_utf
-_null = "\x00".encode("ascii") # encoding to ASCII for Python 3
-_null2 = _null * 2
-_null3 = _null * 3
-
-
-def guess_json_utf(data):
- """
- :rtype: str
- """
- # JSON always starts with two ASCII characters, so detection is as
- # easy as counting the nulls and from their location and count
- # determine the encoding. Also detect a BOM, if present.
- sample = data[:4]
- if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
- return "utf-32" # BOM included
- if sample[:3] == codecs.BOM_UTF8:
- return "utf-8-sig" # BOM included, MS style (discouraged)
- if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
- return "utf-16" # BOM included
- nullcount = sample.count(_null)
- if nullcount == 0:
- return "utf-8"
- if nullcount == 2:
- if sample[::2] == _null2: # 1st and 3rd are null
- return "utf-16-be"
- if sample[1::2] == _null2: # 2nd and 4th are null
- return "utf-16-le"
- # Did not detect 2 valid UTF-16 ascii-range characters
- if nullcount == 3:
- if sample[:3] == _null3:
- return "utf-32-be"
- if sample[1:] == _null3:
- return "utf-32-le"
- # Did not detect a valid UTF-32 ascii-range character
- return None
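
A sanity check of the null-counting heuristic, assuming guess_json_utf above is importable; a BOM-less UTF-16-LE payload interleaves its ASCII bytes with 0x00 at the odd positions:

```python
payload = '{"ok": true}'.encode("utf-16-le")
print(payload[:4])              # b'{\x00"\x00' -> nulls at positions 1 and 3
print(guess_json_utf(payload))  # utf-16-le
print(guess_json_utf(b'{"ok": true}'))  # utf-8 (no nulls in the sample)
```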
-
-
-def prepend_scheme_if_needed(url, new_scheme):
- """Given a URL that may or may not have a scheme, prepend the given scheme.
- Does not replace a present scheme with the one provided as an argument.
-
- :rtype: str
- """
- parsed = parse_url(url)
- scheme, auth, host, port, path, query, fragment = parsed
-
- # A defect in urlparse determines that there isn't a netloc present in some
- # urls. We previously assumed parsing was overly cautious, and swapped the
- # netloc and path. Due to a lack of tests on the original defect, this is
- # maintained with parse_url for backwards compatibility.
- netloc = parsed.netloc
- if not netloc:
- netloc, path = path, netloc
-
- if auth:
- # parse_url doesn't provide the netloc with auth
- # so we'll add it ourselves.
- netloc = "@".join([auth, netloc])
- if scheme is None:
- scheme = new_scheme
- if path is None:
- path = ""
-
- return urlunparse((scheme, netloc, path, "", query, fragment))
-
-
-def get_auth_from_url(url):
- """Given a url with authentication components, extract them into a tuple of
- username,password.
-
- :rtype: (str,str)
- """
- parsed = urlparse(url)
-
- try:
- auth = (unquote(parsed.username), unquote(parsed.password))
- except (AttributeError, TypeError):
- auth = ("", "")
-
- return auth
-
-
-def check_header_validity(header):
- """Verifies that header parts don't contain leading whitespace
- reserved characters, or return characters.
-
- :param header: tuple, in the format (name, value).
- """
- name, value = header
-
- for part in header:
- if type(part) not in HEADER_VALIDATORS:
- raise InvalidHeader(
- f"Header part ({part!r}) from {{{name!r}: {value!r}}} must be "
- f"of type str or bytes, not {type(part)}"
- )
-
- _validate_header_part(name, "name", HEADER_VALIDATORS[type(name)][0])
- _validate_header_part(value, "value", HEADER_VALIDATORS[type(value)][1])
-
-
-def _validate_header_part(header_part, header_kind, validator):
- if not validator.match(header_part):
- raise InvalidHeader(
- f"Invalid leading whitespace, reserved character(s), or return"
- f"character(s) in header {header_kind}: {header_part!r}"
- )
-
-
-def urldefragauth(url):
- """
- Given a url remove the fragment and the authentication part.
-
- :rtype: str
- """
- scheme, netloc, path, params, query, fragment = urlparse(url)
-
- # see func:`prepend_scheme_if_needed`
- if not netloc:
- netloc, path = path, netloc
-
- netloc = netloc.rsplit("@", 1)[-1]
-
- return urlunparse((scheme, netloc, path, params, query, ""))
-
-
-def rewind_body(prepared_request):
- """Move file pointer back to its recorded starting position
- so it can be read again on redirect.
- """
- body_seek = getattr(prepared_request.body, "seek", None)
- if body_seek is not None and isinstance(
- prepared_request._body_position, integer_types
- ):
- try:
- body_seek(prepared_request._body_position)
- except OSError:
- raise UnrewindableBodyError(
- "An error occurred when rewinding request body for redirect."
- )
- else:
- raise UnrewindableBodyError("Unable to rewind request body for redirect.")
diff --git a/spaces/tobiascz/demotime/pytorch_grad_cam/utils/image.py b/spaces/tobiascz/demotime/pytorch_grad_cam/utils/image.py
deleted file mode 100644
index 95127788a7a820c9009d8bdfb23818b21b3afeb5..0000000000000000000000000000000000000000
--- a/spaces/tobiascz/demotime/pytorch_grad_cam/utils/image.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import cv2
-import numpy as np
-import torch
-from torchvision.transforms import Compose, Normalize, ToTensor
-
-
-def preprocess_image(img: np.ndarray, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -> torch.Tensor:
- preprocessing = Compose([
- ToTensor(),
- Normalize(mean=mean, std=std)
- ])
- return preprocessing(img.copy()).unsqueeze(0)
-
-
-def deprocess_image(img):
- """ see https://github.com/jacobgil/keras-grad-cam/blob/master/grad-cam.py#L65 """
- img = img - np.mean(img)
- img = img / (np.std(img) + 1e-5)
- img = img * 0.1
- img = img + 0.5
- img = np.clip(img, 0, 1)
- return np.uint8(img * 255)
-
-
-def show_cam_on_image(img: np.ndarray,
- mask: np.ndarray,
- use_rgb: bool = False,
- colormap: int = cv2.COLORMAP_JET) -> np.ndarray:
- """ This function overlays the cam mask on the image as an heatmap.
- By default the heatmap is in BGR format.
-
- :param img: The base image in RGB or BGR format.
- :param mask: The cam mask.
- :param use_rgb: Whether to use an RGB or BGR heatmap, this should be set to True if 'img' is in RGB format.
- :param colormap: The OpenCV colormap to be used.
- :returns: The default image with the cam overlay.
- """
- heatmap = cv2.applyColorMap(np.uint8(255 * mask), colormap)
- if use_rgb:
- heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
- heatmap = np.float32(heatmap) / 255
-
- if np.max(img) > 1:
-        raise Exception(
-            "The input image should be np.float32 in the range [0, 1]")
-
- cam = heatmap + img
- cam = cam / np.max(cam)
- return np.uint8(255 * cam)
-
-def scale_cam_image(cam, target_size=None):
- result = []
- for img in cam:
- img = img - np.min(img)
- img = img / (1e-7 + np.max(img))
- if target_size is not None:
- img = cv2.resize(img, target_size)
- result.append(img)
- result = np.float32(result)
-
- return result
-
-def scale_accross_batch_and_channels(tensor, target_size):
- batch_size, channel_size = tensor.shape[:2]
- reshaped_tensor = tensor.reshape(
- batch_size * channel_size, *tensor.shape[2:])
- result = scale_cam_image(reshaped_tensor, target_size)
- result = result.reshape(
- batch_size,
- channel_size,
- target_size[1],
- target_size[0])
- return result
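
A hypothetical end-to-end use of the helpers above, assuming they are importable from this module; the image and CAM are random stand-ins and must be float32 in [0, 1] before the overlay:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3)).astype(np.float32)      # stand-in for an RGB photo
raw_cam = rng.random((1, 224, 224)).astype(np.float32)  # stand-in for a raw CAM

mask = scale_cam_image(raw_cam)[0]                    # min-max normalize to [0, 1]
overlay = show_cam_on_image(img, mask, use_rgb=True)  # uint8 heatmap overlay
cv2.imwrite("cam_overlay.png", cv2.cvtColor(overlay, cv2.COLOR_RGB2BGR))
```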
diff --git a/spaces/tomandandy/MusicGen3/audiocraft/models/encodec.py b/spaces/tomandandy/MusicGen3/audiocraft/models/encodec.py
deleted file mode 100644
index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000
--- a/spaces/tomandandy/MusicGen3/audiocraft/models/encodec.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from abc import ABC, abstractmethod
-import typing as tp
-
-from einops import rearrange
-import torch
-from torch import nn
-
-from .. import quantization as qt
-
-
-class CompressionModel(ABC, nn.Module):
-
- @abstractmethod
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- ...
-
- @abstractmethod
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """See `EncodecModel.encode`"""
- ...
-
- @abstractmethod
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """See `EncodecModel.decode`"""
- ...
-
- @property
- @abstractmethod
- def channels(self) -> int:
- ...
-
- @property
- @abstractmethod
- def frame_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def sample_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def cardinality(self) -> int:
- ...
-
- @property
- @abstractmethod
- def num_codebooks(self) -> int:
- ...
-
- @property
- @abstractmethod
- def total_codebooks(self) -> int:
- ...
-
- @abstractmethod
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- ...
-
-
-class EncodecModel(CompressionModel):
- """Encodec model operating on the raw waveform.
-
- Args:
- encoder (nn.Module): Encoder network.
- decoder (nn.Module): Decoder network.
- quantizer (qt.BaseQuantizer): Quantizer network.
- frame_rate (int): Frame rate for the latent representation.
- sample_rate (int): Audio sample rate.
- channels (int): Number of audio channels.
- causal (bool): Whether to use a causal version of the model.
- renormalize (bool): Whether to renormalize the audio before running the model.
- """
-    # we need assignment to override the property in the abstract class,
- # I couldn't find a better way...
- frame_rate: int = 0
- sample_rate: int = 0
- channels: int = 0
-
- def __init__(self,
- encoder: nn.Module,
- decoder: nn.Module,
- quantizer: qt.BaseQuantizer,
- frame_rate: int,
- sample_rate: int,
- channels: int,
- causal: bool = False,
- renormalize: bool = False):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.quantizer = quantizer
- self.frame_rate = frame_rate
- self.sample_rate = sample_rate
- self.channels = channels
- self.renormalize = renormalize
- self.causal = causal
- if self.causal:
- # we force disabling here to avoid handling linear overlap of segments
- # as supported in original EnCodec codebase.
- assert not self.renormalize, 'Causal model does not support renormalize'
-
- @property
- def total_codebooks(self):
- """Total number of quantizer codebooks available.
- """
- return self.quantizer.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
- """
- return self.quantizer.num_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- self.quantizer.set_num_codebooks(n)
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- return self.quantizer.bins
-
- def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- scale: tp.Optional[torch.Tensor]
- if self.renormalize:
- mono = x.mean(dim=1, keepdim=True)
- volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt()
- scale = 1e-8 + volume
- x = x / scale
- scale = scale.view(-1, 1)
- else:
- scale = None
- return x, scale
-
- def postprocess(self,
- x: torch.Tensor,
- scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor:
- if scale is not None:
- assert self.renormalize
- x = x * scale.view(-1, 1, 1)
- return x
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- assert x.dim() == 3
- length = x.shape[-1]
- x, scale = self.preprocess(x)
-
- emb = self.encoder(x)
- q_res = self.quantizer(emb, self.frame_rate)
- out = self.decoder(q_res.x)
-
- # remove extra padding added by the encoder and decoder
- assert out.shape[-1] >= length, (out.shape[-1], length)
- out = out[..., :length]
-
- q_res.x = self.postprocess(out, scale)
-
- return q_res
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """Encode the given input tensor to quantized representation along with scale parameter.
-
- Args:
- x (torch.Tensor): Float tensor of shape [B, C, T]
-
- Returns:
- codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of:
-                codes: an int tensor of shape [B, K, T] with K the number of codebooks used and T the number of timesteps.
-                scale: a float tensor containing the scale for audio renormalization.
- """
- assert x.dim() == 3
- x, scale = self.preprocess(x)
- emb = self.encoder(x)
- codes = self.quantizer.encode(emb)
- return codes, scale
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """Decode the given codes to a reconstructed representation, using the scale to perform
- audio denormalization if needed.
-
- Args:
- codes (torch.Tensor): Int tensor of shape [B, K, T]
- scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value.
-
- Returns:
- out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio.
- """
- emb = self.quantizer.decode(codes)
- out = self.decoder(emb)
- out = self.postprocess(out, scale)
- # out contains extra padding added by the encoder and decoder
- return out
-
-
-class FlattenedCompressionModel(CompressionModel):
- """Wraps a CompressionModel and flatten its codebooks, e.g.
- instead of returning [B, K, T], return [B, S, T * (K // S)] with
- S the number of codebooks per step, and `K // S` the number of 'virtual steps'
- for each real time step.
-
- Args:
- model (CompressionModel): compression model to wrap.
- codebooks_per_step (int): number of codebooks to keep per step,
- this must divide the number of codebooks provided by the wrapped model.
- extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1,
- if each codebook has a cardinality N, then the first codebook will
- use the range [0, N - 1], and the second [N, 2 N - 1] etc.
- On decoding, this can lead to potentially invalid sequences.
- Any invalid entry will be silently remapped to the proper range
- with a modulo.
- """
- def __init__(self, model: CompressionModel, codebooks_per_step: int = 1,
- extend_cardinality: bool = True):
- super().__init__()
- self.model = model
- self.codebooks_per_step = codebooks_per_step
- self.extend_cardinality = extend_cardinality
-
- @property
- def total_codebooks(self):
- return self.model.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer.
-
- ..Warning:: this reports the number of codebooks after the flattening
- of the codebooks!
- """
- assert self.model.num_codebooks % self.codebooks_per_step == 0
- return self.codebooks_per_step
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
-
- ..Warning:: this sets the number of codebooks **before** the flattening
- of the codebooks.
- """
- assert n % self.codebooks_per_step == 0
- self.model.set_num_codebooks(n)
-
- @property
- def num_virtual_steps(self) -> int:
- """Return the number of virtual steps, e.g. one real step
- will be split into that many steps.
- """
- return self.model.num_codebooks // self.codebooks_per_step
-
- @property
- def frame_rate(self) -> int:
- return self.model.frame_rate * self.num_virtual_steps
-
- @property
- def sample_rate(self) -> int:
- return self.model.sample_rate
-
- @property
- def channels(self) -> int:
- return self.model.channels
-
- @property
- def cardinality(self):
- """Cardinality of each codebook.
- """
- if self.extend_cardinality:
- return self.model.cardinality * self.num_virtual_steps
- else:
- return self.model.cardinality
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- raise NotImplementedError("Not supported, use encode and decode.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- indices, scales = self.model.encode(x)
- B, K, T = indices.shape
- indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step)
- if self.extend_cardinality:
- for virtual_step in range(1, self.num_virtual_steps):
- indices[..., virtual_step] += self.model.cardinality * virtual_step
- indices = rearrange(indices, 'b k t v -> b k (t v)')
- return (indices, scales)
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- B, K, T = codes.shape
- assert T % self.num_virtual_steps == 0
- codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps)
- # We silently ignore potential errors from the LM when
- # using extend_cardinality.
- codes = codes % self.model.cardinality
- return self.model.decode(codes, scale)
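
A minimal sketch of the codebook flattening that encode and decode perform, ignoring the extend_cardinality offsets; shapes follow the class docstring, with K = 4 codebooks and S = 2 codebooks kept per step:

```python
import torch
from einops import rearrange

B, K, T, S = 1, 4, 3, 2                       # num_virtual_steps = K // S = 2
codes = torch.arange(B * K * T).reshape(B, K, T)
flat = rearrange(codes, 'b (k v) t -> b k (t v)', k=S)      # as in encode
print(tuple(codes.shape), '->', tuple(flat.shape))          # (1, 4, 3) -> (1, 2, 6)
back = rearrange(flat, 'b k (t v) -> b (k v) t', v=K // S)  # as in decode
assert torch.equal(back, codes)               # the flattening is lossless
```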
diff --git a/spaces/tomofi/ABINet-OCR/modules/model_language.py b/spaces/tomofi/ABINet-OCR/modules/model_language.py
deleted file mode 100644
index a643cd5946240548746b22fc9294db63c2dfe7a1..0000000000000000000000000000000000000000
--- a/spaces/tomofi/ABINet-OCR/modules/model_language.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import logging
-import torch.nn as nn
-from fastai.vision import *
-
-from modules.model import _default_tfmer_cfg
-from modules.model import Model
-from modules.transformer import (PositionalEncoding,
- TransformerDecoder,
- TransformerDecoderLayer)
-
-
-class BCNLanguage(Model):
- def __init__(self, config):
- super().__init__(config)
- d_model = ifnone(config.model_language_d_model, _default_tfmer_cfg['d_model'])
- nhead = ifnone(config.model_language_nhead, _default_tfmer_cfg['nhead'])
- d_inner = ifnone(config.model_language_d_inner, _default_tfmer_cfg['d_inner'])
- dropout = ifnone(config.model_language_dropout, _default_tfmer_cfg['dropout'])
- activation = ifnone(config.model_language_activation, _default_tfmer_cfg['activation'])
- num_layers = ifnone(config.model_language_num_layers, 4)
- self.d_model = d_model
- self.detach = ifnone(config.model_language_detach, True)
- self.use_self_attn = ifnone(config.model_language_use_self_attn, False)
- self.loss_weight = ifnone(config.model_language_loss_weight, 1.0)
- self.max_length = config.dataset_max_length + 1 # additional stop token
- self.debug = ifnone(config.global_debug, False)
-
- self.proj = nn.Linear(self.charset.num_classes, d_model, False)
- self.token_encoder = PositionalEncoding(d_model, max_len=self.max_length)
- self.pos_encoder = PositionalEncoding(d_model, dropout=0, max_len=self.max_length)
- decoder_layer = TransformerDecoderLayer(d_model, nhead, d_inner, dropout,
- activation, self_attn=self.use_self_attn, debug=self.debug)
- self.model = TransformerDecoder(decoder_layer, num_layers)
-
- self.cls = nn.Linear(d_model, self.charset.num_classes)
-
- if config.model_language_checkpoint is not None:
- logging.info(f'Read language model from {config.model_language_checkpoint}.')
- self.load(config.model_language_checkpoint)
-
- def forward(self, tokens, lengths):
- """
- Args:
-            tokens: (N, T, C) where N is the batch size, T is the length and C is the number of classes
- lengths: (N,)
- """
- if self.detach: tokens = tokens.detach()
- embed = self.proj(tokens) # (N, T, E)
- embed = embed.permute(1, 0, 2) # (T, N, E)
- embed = self.token_encoder(embed) # (T, N, E)
- padding_mask = self._get_padding_mask(lengths, self.max_length)
-
- zeros = embed.new_zeros(*embed.shape)
-        query = self.pos_encoder(zeros)
-        location_mask = self._get_location_mask(self.max_length, tokens.device)
-        output = self.model(query, embed,
- tgt_key_padding_mask=padding_mask,
- memory_mask=location_mask,
- memory_key_padding_mask=padding_mask) # (T, N, E)
- output = output.permute(1, 0, 2) # (N, T, E)
-
- logits = self.cls(output) # (N, T, C)
- pt_lengths = self._get_length(logits)
-
-        res = {'feature': output, 'logits': logits, 'pt_lengths': pt_lengths,
-               'loss_weight': self.loss_weight, 'name': 'language'}
- return res
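
_get_location_mask is inherited from the Model base class and not shown here; a hedged sketch of its assumed semantics (each position may attend to every position except itself, so the cloze-style language model cannot simply copy its input token) would be:

```python
import torch

def location_mask(sz, device=None):
    # Assumed behavior: 0 everywhere except -inf on the diagonal, which
    # blocks each query position from attending to itself.
    mask = torch.zeros(sz, sz, device=device)
    mask.fill_diagonal_(float("-inf"))
    return mask

print(location_mask(4))
```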
diff --git a/spaces/tomofi/MMOCR/mmocr/models/common/detectors/__init__.py b/spaces/tomofi/MMOCR/mmocr/models/common/detectors/__init__.py
deleted file mode 100644
index 609824a1b0e67b0110b5b101151243bcd0e338ec..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/mmocr/models/common/detectors/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .single_stage import SingleStageDetector
-
-__all__ = ['SingleStageDetector']
diff --git a/spaces/tomofi/NDLOCR/docker/Dockerfile b/spaces/tomofi/NDLOCR/docker/Dockerfile
deleted file mode 100644
index aae3c942282effe9534a43de7bcd53da4c474963..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/docker/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM nvcr.io/nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04
-
-ENV PROJECT_DIR=/root/ocr_cli
-ENV FORCE_CUDA="1"
-ENV TORCH_CUDA_ARCH_LIST="7.5+PTX"
-ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
-
-RUN set -x \
- && apt update \
- && apt upgrade -y
-
-RUN set -x \
- && apt update \
- && apt -y install locales \
- && locale-gen ja_JP.UTF-8
-ENV LANG ja_JP.UTF-8
-ENV LANGUAGE ja_JP:ja
-ENV LC_ALL=ja_JP.UTF-8
-RUN localedef -f UTF-8 -i ja_JP ja_JP.utf8
-
-RUN set -x && apt -y install libgl1-mesa-dev libglib2.0-0 git
-RUN set -x \
- && apt -y install python3.7 python3.7-dev \
- && ln -s /usr/bin/python3.7 /usr/bin/python \
- && apt -y install wget python3-distutils && wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py
-
-COPY . ${PROJECT_DIR}
-
-RUN set -x \
- && pip install -r ${PROJECT_DIR}/requirements.txt
-RUN set -x && pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
-RUN set -x && cd ${PROJECT_DIR}/src/ndl_layout/mmdetection && python setup.py bdist_wheel && pip install dist/*.whl
-ENV PYTHONPATH $PYTHONPATH:${PROJECT_DIR}/src/text_recognition/deep-text-recognition-benchmark
-RUN set -x && pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8.0/index.html
-
-WORKDIR ${PROJECT_DIR}
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py
deleted file mode 100644
index 9bbc86ead7003ab75264f8cf0cd18edb735fe9fd..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py
+++ /dev/null
@@ -1,17 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py'
-# model settings
-conv_cfg = dict(type='ConvWS')
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://jhu/resnext50_32x4d_gn_ws',
- backbone=dict(
- type='ResNeXt',
- depth=50,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- style='pytorch',
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg))
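
A hedged sketch of how this config resolves, assuming mmcv is installed and the file sits at its usual path in the configs/ tree: the _base_ file is loaded first, then the dict above overrides matching keys, so the base ResNet backbone becomes a ResNeXt:

```python
from mmcv import Config

cfg = Config.fromfile(
    'configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py')
print(cfg.model.backbone.type)      # ResNeXt (overrides the base ResNet)
print(cfg.model.backbone.norm_cfg)  # {'type': 'GN', 'num_groups': 32, ...}
```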
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/__init__.py
deleted file mode 100644
index 44ac99855ae52101c91be167fa78d8219fc47259..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from .backbones import * # noqa: F401,F403
-from .builder import (BACKBONES, DETECTORS, HEADS, LOSSES, NECKS,
- ROI_EXTRACTORS, SHARED_HEADS, build_backbone,
- build_detector, build_head, build_loss, build_neck,
- build_roi_extractor, build_shared_head)
-from .dense_heads import * # noqa: F401,F403
-from .detectors import * # noqa: F401,F403
-from .losses import * # noqa: F401,F403
-from .necks import * # noqa: F401,F403
-from .roi_heads import * # noqa: F401,F403
-
-__all__ = [
- 'BACKBONES', 'NECKS', 'ROI_EXTRACTORS', 'SHARED_HEADS', 'HEADS', 'LOSSES',
- 'DETECTORS', 'build_backbone', 'build_neck', 'build_roi_extractor',
- 'build_shared_head', 'build_head', 'build_loss', 'build_detector'
-]
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/paa.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/paa.py
deleted file mode 100644
index afc80590796af314b7493e7f102780bbcf65448b..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/paa.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class PAA(SingleStageDetector):
- """Implementation of `PAA `_."""
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None,
- init_cfg=None):
- super(PAA, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained, init_cfg)
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/necks/fpn.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/necks/fpn.py
deleted file mode 100644
index 7cb312b783537aa704fbcb8f076b76ec62f35100..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/necks/fpn.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import warnings
-
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmcv.runner import BaseModule, auto_fp16
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(BaseModule):
- r"""Feature Pyramid Network.
-
-    This is an implementation of paper `Feature Pyramid Networks for Object
-    Detection <https://arxiv.org/abs/1612.03144>`_.
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- caffe2_xavier_init (bool): Whether to apply caffe2_xavier_init on all
- conv in FPN. Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
- init_cfg (dict or list[dict], optional): Initialization config dict.
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=True,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- caffe2_xavier_init=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest'),
- init_cfg=dict(
- type='Xavier', layer='Conv2d', distribution='uniform')):
- super(FPN, self).__init__(init_cfg)
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
- self.caffe2_xavier_init = caffe2_xavier_init
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # TODO: deprecate `extra_convs_on_inputs`
- warnings.simplefilter('once')
- warnings.warn(
- '"extra_convs_on_inputs" will be deprecated in v2.9.0,'
- 'Please use "add_extra_convs"', DeprecationWarning)
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- @auto_fp16()
- def forward(self, inputs):
- """Forward function."""
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
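
The docstring example above exercises the plain top-down path; as a complementary hedged sketch (assuming mmdet and its dependencies are importable), the following shows num_outs exceeding the number of backbone levels, with the extra levels built from the raw last input because add_extra_convs='on_input':

```python
import torch
from mmdet.models.necks import FPN

fpn = FPN(in_channels=[8, 16], out_channels=4, num_outs=4,
          add_extra_convs='on_input').eval()
feats = [torch.rand(1, 8, 32, 32), torch.rand(1, 16, 16, 16)]
outs = fpn(feats)
print([tuple(o.shape) for o in outs])
# [(1, 4, 32, 32), (1, 4, 16, 16), (1, 4, 8, 8), (1, 4, 4, 4)]
```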
diff --git a/spaces/tsailada/Emily/README.md b/spaces/tsailada/Emily/README.md
deleted file mode 100644
index bbb37252bbba684941e71b3a4cd61573c5f52b8b..0000000000000000000000000000000000000000
--- a/spaces/tsailada/Emily/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Emily
-emoji: 📊
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/tsi-org/LLaVA/scripts/finetune_sqa.sh b/spaces/tsi-org/LLaVA/scripts/finetune_sqa.sh
deleted file mode 100644
index 2c1590fdc7511370e8ccc285dcc9c053379b9134..0000000000000000000000000000000000000000
--- a/spaces/tsi-org/LLaVA/scripts/finetune_sqa.sh
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/bash
-
-deepspeed llava/train/train_mem.py \
- --deepspeed ./scripts/zero2.json \
- --model_name_or_path lmsys/vicuna-13b-v1.3 \
- --version $PROMPT_VERSION \
- --data_path /Data/ScienceQA/data/scienceqa/llava_train_QCM-LEA.json \
- --image_folder /Data/ScienceQA/data/scienceqa/images/train \
- --vision_tower openai/clip-vit-large-patch14 \
- --pretrain_mm_mlp_adapter ./checkpoints/huggingface/liuhaotian/llava-pretrain-vicuna-13b-v1.3/mm_projector.bin \
- --mm_vision_select_layer -2 \
- --mm_use_im_start_end False \
- --mm_use_im_patch_token False \
- --bf16 True \
- --output_dir ./checkpoints/llava-vicuna-13b-v1.3-pretrain_lcs558k_plain-ScienceQA_QCM_LEA-12e \
- --num_train_epochs 12 \
- --per_device_train_batch_size 16 \
- --per_device_eval_batch_size 4 \
- --gradient_accumulation_steps 1 \
- --evaluation_strategy "no" \
- --save_strategy "steps" \
- --save_steps 50000 \
- --save_total_limit 1 \
- --learning_rate 2e-5 \
- --weight_decay 0. \
- --warmup_ratio 0.03 \
- --lr_scheduler_type "cosine" \
- --logging_steps 1 \
- --tf32 True \
- --model_max_length 2048 \
- --gradient_checkpointing True \
- --dataloader_num_workers 4 \
- --lazy_preprocess True \
- --report_to wandb
diff --git a/spaces/ultgamerkient/GPT4ALL/README.md b/spaces/ultgamerkient/GPT4ALL/README.md
deleted file mode 100644
index 52bb9f544821e24824e7ce47d4f49db31f3f0796..0000000000000000000000000000000000000000
--- a/spaces/ultgamerkient/GPT4ALL/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Gpt4all
-emoji: 🦀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-duplicated_from: Monster/GPT4ALL
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/unilux/ASR_for_Luxembourgish/app.py b/spaces/unilux/ASR_for_Luxembourgish/app.py
deleted file mode 100644
index e488c84c4b60c27e99abee893cf3719d5b2b1067..0000000000000000000000000000000000000000
--- a/spaces/unilux/ASR_for_Luxembourgish/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# from: https://gradio.app/real_time_speech_recognition/
-
-from transformers import pipeline, WhisperProcessor, WhisperForConditionalGeneration
-import torch
-import gradio as gr
-import librosa
-import os
-import time
-
-
-#Loading the model and the tokenizer
-token_key = os.environ.get("HUGGING_FACE_HUB_TOKEN")
-print("key length:", len(token_key.strip()))
-
-model_name = "pgilles/whisper-large-v2-lb_cased_04"
-#model_name = "pgilles/whisper-large-10_Chamber" # model too bad
-
-processor = WhisperProcessor.from_pretrained(model_name, language="lb", task="transcribe")
-tokenizer = processor.tokenizer
-model = WhisperForConditionalGeneration.from_pretrained(model_name, use_auth_token=token_key)
-#p = pipeline("automatic-speech-recognition", model=model, tokenizer=tokenizer, feature_extractor=processor.feature_extractor, decoder=processor.decoder, use_auth_token=token_key)
-
-pipe = pipeline("automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, device=0)
-#pipe = pipeline("automatic-speech-recognition", model=model_name, device=0, use_auth_token=token_key)
-#pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language= "Luxembourgish", task="transcribe")
-#pipe.model.config.forced_decoder_ids = None
-
-
-def load_data(input_file):
-
- """ Function for resampling to ensure that the speech input is sampled at 16KHz.
- """
- sampling_rate = 16_000
- #read the file
- speech, sample_rate = librosa.load(input_file, sr=sampling_rate, mono=True)
- #speech = librosa.effects.trim(speech, top_db= 10)
- return speech
-
-def asr_pipe(input_file, input_file_microphone, chunks):
- input_file = input_file_microphone if input_file_microphone else input_file
- transcription = pipe(input_file, chunk_length_s=chunks)["text"]
- return transcription
-
-inputs = [gr.inputs.Audio(source="upload", type='filepath', label="Eng Audio-Datei eroplueden...", optional = True),
- gr.inputs.Audio(source="microphone", type="filepath", label="... oder direkt mam Mikro ophuelen", optional = True),
- gr.Slider(minimum=3, maximum=32, value=29, step=0.5, label="Chunk Length")
- ]
-
-outputs = [gr.outputs.Textbox(label="Erkannten Text")]
-
-samples = [["Chamber2022_1.wav", "Chamber2022_1.wav", 15.5], ["Chamber2022_2.wav", "Chamber2022_2.wav", 20], ["Chamber2022_3.wav", "Chamber2022_3.wav", 30], ["Erlieft-a-Verzielt.wav", "Erlieft-a-Verzielt.wav", 28.5]]
-
-gr.Interface(fn = asr_pipe,
- inputs = inputs,
- outputs = outputs,
- title="Sproocherkennung fir d'Lëtzebuergescht @uni.lu, based on Whisper-large-v2",
- description = "Dës App convertéiert Är geschwate Sprooch an de (méi oder manner richtegen ;-)) Text!",
- examples = samples,
- examples_per_page = 10,
- article = "Beschreiwung: Dir kënnt Iech selwer iwwer de Mikro ophuelen, eng Datei eroplueden oder e Beispill auswielen. Dëse Modell ass trainéiert mam neisten Sproocherkennungsalgorithmus vun OpenAI: Whisper. Anescht wéi bei deene meeschten Applikatiounen, déi op dem Whisper baséieren, ass dëse lëtzebuergeschen zousätzlech mat enger grousser, kontrolléierter Datebasis trainéiert ginn ('fine-tuning' mat 70 Stonne Lëtzebuergesch aus verschiddene sproochleche Genren). Domat ass eng niddereg Feelerquote méiglech, déi virdrun net denkbar war. D'Grouss- a Klengschreiwung an och d'Punktuatioun gi gréisstendeels richteg ëmgesat. Am Géigesaz zum Wav2vec 2.0-Algorithmus, deen och héich Erkennungsraten huet an och op ville Sproochen trainéiert ass, ass beim Whisper fir vill Sproochen net nëmmen d'Akustik mee och den Text mattrainéiert ginn ('weak-supervised pre-training'). Domat ass net nëmmen déi allgemeng Erkennungsrat méi héich wéi beim Wav2vec 2.0, mee och méisproocheg Schwätze gëtt däitlech besser erkannt. Et kann een also z.B. tëscht Lëtzebuergescht a Franséisch (oder Däitsch, Englesch, Spuenesch, Chineesesch) hin- an hierwiesselen an de System produzéiert de richtegen Text. 't dauert ongeféier e Fënneftel bis e Véierel vun der Dauer vun der Opnam, bis d'Transkriptioun verschafft ass.",
- theme="default").launch(share=False, show_error=True)
-
\ No newline at end of file
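
A minimal sketch mirroring what asr_pipe above does, assuming the checkpoint is accessible (the app treats it as gated and reads a token from HUGGING_FACE_HUB_TOKEN) and one of the bundled sample files is on disk:

```python
from transformers import pipeline

# Model name and chunk length taken from the app above; the audio file is
# one of its bundled samples.
asr = pipeline("automatic-speech-recognition",
               model="pgilles/whisper-large-v2-lb_cased_04")
print(asr("Chamber2022_1.wav", chunk_length_s=29)["text"])
```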
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Adeko X Full LINK.md b/spaces/usbethFlerru/sovits-modelsV2/example/Adeko X Full LINK.md
deleted file mode 100644
index aef0dda5d205eda103f44f86c124d06298a33452..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Adeko X Full LINK.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-Adeko X Full: The Ultimate Software for 3D Kitchen Design and Interior Decoration
-
-If you are looking for a software that can help you design and decorate your kitchen and other interior spaces in 3D, you should consider Adeko X Full. This software is a powerful and easy-to-use tool that allows you to create stunning 3D models of your kitchen and other interior spaces with realistic materials, colors, lighting, and accessories. However, if you want to use all the features and benefits of this software, you need to activate the full version with the adeko x full crack. In this article, we will tell you what is adeko x full crack, why you need it, and how you can get it for free.
-
-What is adeko x full crack?
-
-Adeko x full crack is a crack that can generate a valid license key for Adeko X Full software. This software is a product of Adeko Technologies, a company that specializes in providing solutions for 3D kitchen design and interior decoration.
-
-Adeko X Full software is a desktop application that can be installed on Windows XP, Vista, 7, 8, 8.1, or 10 operating systems. The software allows you to create 3D models of your kitchen and other interior spaces by using a simple drag-and-drop interface. You can also customize your models by adding realistic materials, colors, lighting, and accessories from a rich library of options.
-
-Once you have created your models, you can view them from different angles and perspectives, as well as export them to various formats such as JPG, PNG, PDF, DXF, DWG, or VRML. You can also print them or share them online with your clients or friends.
-
-Adeko X Full software has many advantages over other similar software in the market. Some of these advantages are:
-
-
-It has a user-friendly interface that makes it easy to create and edit 3D models.
-It has a high-quality rendering engine that produces realistic and detailed 3D models.
-It has a large library of materials, colors, lighting, and accessories that can suit any style and taste.
-It has a fast and accurate calculation system that can estimate the cost and quantity of materials needed for your project.
-It has a flexible and modular architecture that can adapt to any size and shape of space.
-It has a secure and reliable system that protects your data and privacy.
-
-
-However, Adeko X Full software is not free. You need to purchase a license key to activate the full version of the software. The license key costs $299 for one year and $499 for lifetime. If you don't have a license key, you can only use the trial version of the software, which has some limitations such as:
-
-
-
-You can only create up to 10 projects.
-You can only use up to 50 items from the library.
-You cannot export or print your models.
-You cannot access the advanced features and settings of the software.
-
-
-That's why you need adeko x full crack to get the full version of the software without paying anything.
-
-Why do you need adeko x full crack?
-
-Adeko x full crack is a crack that can generate a valid license key for Adeko X Full software. By using this crack, you can activate the full version of the software without spending any money. This way, you can enjoy all the features and benefits of the software without any limitations or restrictions.
-
-Adeko x full crack is useful for anyone who wants to use Adeko X Full software for personal or professional purposes. Whether you want to design and decorate your own kitchen or other interior spaces, or offer your services as a 3D kitchen designer or interior decorator, adeko x full crack can help you achieve your goals.
-
-Adeko x full crack can help you save money, time, and effort by allowing you to create and edit stunning 3D models with ease and efficiency. You can also impress your clients or friends by showing them realistic and detailed 3D models of your projects.
-
-How can you get adeko x full crack for free?
-
-If you want to get adeko x full crack for free, you just need to follow these simple steps:
-
-
-Download Adeko X Full software from its official website: https://www.adeko.com/en/product-download/
-Install the software on your PC by following the instructions on the screen.
-Download adeko x full crack from this link: https://www.fullprogramlarindir.net/adeko-full-indir-7093574.html
-Extract the zip file and run the crack.exe file as administrator.
-Click on the "Generate" button and copy the license key that appears on the screen.
-Open Adeko X Full software and click on the "Register" button on the top right corner of the screen.
-Paste the license key that you copied from the crack into the registration form and click on "Activate".
-Congratulations! You have successfully activated the full version of Adeko X Full software with adeko x full crack.
-
-
-Now you can use Adeko X Full software without any limitations or restrictions. You can create unlimited projects, use unlimited items from the library, export or print your models, access advanced features and settings, and more.
-
-Conclusion
-
-In this article, we have explained what is adeko x full crack, why you need it, and how you can get it for free. Adeko x full crack is a crack that can generate a valid license key for Adeko X Full software. This software is a powerful and easy-to-use tool that allows you to create stunning 3D models of your kitchen and other interior spaces with realistic materials, colors, lighting, and accessories. By using adeko x full crack, you can activate the full version of the software without paying anything. This way, you can enjoy all the features and benefits of the software without any limitations or restrictions.
-
-We hope that this article has been helpful and informative for you. If you have any questions or comments about adeko x full crack or Adeko X Full software, feel free to leave us a message below. We will be happy to answer you and assist you.
-
-Thank you for reading this article and have a great day.
-What are some tips and tricks to use Adeko X Full software effectively?
-
-Adeko X Full software is a versatile and powerful tool that can help you create and edit 3D models of your kitchen and other interior spaces. However, to use this software effectively, you need to follow some tips and tricks that can enhance your results and performance. Here are some of them:
-
-
-Plan your projects in advance. Before you start creating and editing 3D models, you need to have a clear idea of what you want to achieve, what style and theme you want to follow, what materials and colors you want to use, and what budget and time frame you have.
-Choose your items carefully. Your items are the elements that make up your 3D models, such as cabinets, countertops, appliances, sinks, faucets, lighting, accessories, etc. You need to make sure they are suitable for your space, your style, and your purpose. You also need to consider their quality, durability, functionality, and cost.
-Test your models before exporting or printing them. Before you export or print your models, you need to test them on your screen to check their quality, accuracy, and realism. You can also ask for feedback from your clients or friends to improve your models.
-Analyze your models after exporting or printing them. After you export or print your models, you need to analyze their results and performance using the software's calculation system. You can see how much materials and money you need for your project, as well as how much time and effort you saved by using the software.
-Optimize your models based on your analysis. Based on your analysis, you can optimize your models by making adjustments and improvements to your items, your materials, your colors, your lighting, and your accessories. You can also experiment with different options and combinations to find the best solution for your project.
-
-
-What are some examples of using Adeko X Full software for different purposes?
-
-Adeko X Full software can be used for different purposes depending on your needs and goals. Here are some examples of using this software for different purposes:
-
-
-For personal use: You can use Adeko X Full software to design and decorate your own kitchen or other interior spaces according to your taste and preference. You can also use it to create 3D models of your dream kitchen or other interior spaces that you want to have in the future.
-For professional use: You can use Adeko X Full software to offer your services as a 3D kitchen designer or interior decorator to your clients. You can also use it to showcase your portfolio and skills to potential clients or employers.
-For educational use: You can use Adeko X Full software to learn and practice 3D kitchen design and interior decoration skills. You can also use it to teach and train others who want to learn 3D kitchen design and interior decoration skills.
-
-
-Conclusion
-
-In this article, we have explained what is adeko x full crack, why you need it, how you can get it for free, what are some tips and tricks to use Adeko X Full software effectively, and what are some examples of using this software for different purposes. Adeko x full crack is a crack that can generate a valid license key for Adeko X Full software. This software is a powerful and easy-to-use tool that allows you to create stunning 3D models of your kitchen and other interior spaces with realistic materials, colors, lighting, and accessories. By using adeko x full crack, you can activate the full version of the software without paying anything. This way, you can enjoy all the features and benefits of the software without any limitations or restrictions.
-
-We hope that this article has been helpful and informative for you. If you have any questions or comments about adeko x full crack or Adeko X Full software, feel free to leave us a message below. We will be happy to answer you and assist you.
-
-Thank you for reading this article and have a great day.
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Autodesk EAGLE Premium 11.2.2 Portable Cracked The Most Powerful and Easy-to-Use PCB Design Software.md b/spaces/usbethFlerru/sovits-modelsV2/example/Autodesk EAGLE Premium 11.2.2 Portable Cracked The Most Powerful and Easy-to-Use PCB Design Software.md
deleted file mode 100644
index 1766f4684653b7292d6f4441843801ebbd7c9cf0..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Autodesk EAGLE Premium 11.2.2 Portable Cracked The Most Powerful and Easy-to-Use PCB Design Software.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Autodesk EAGLE Premium 11.2.2 Portable Cracked Download Pc DOWNLOAD 🆓 https://urlcod.com/2uyVUi
-
-
-
-
diff --git a/spaces/user238921933/stable-diffusion-webui/modules/ui_extensions.py b/spaces/user238921933/stable-diffusion-webui/modules/ui_extensions.py
deleted file mode 100644
index 12f395cef3a6e1e0ad28d1577c0208794b897335..0000000000000000000000000000000000000000
--- a/spaces/user238921933/stable-diffusion-webui/modules/ui_extensions.py
+++ /dev/null
@@ -1,354 +0,0 @@
-import json
-import os.path
-import shutil
-import sys
-import time
-import traceback
-
-import git
-
-import gradio as gr
-import html
-import errno
-
-from modules import extensions, shared, paths
-from modules.call_queue import wrap_gradio_gpu_call
-
-available_extensions = {"extensions": []}
-
-
-def check_access():
- assert not shared.cmd_opts.disable_extension_access, "extension access disabled because of command line flags"
-
-
-def apply_and_restart(disable_list, update_list):
- check_access()
-
- disabled = json.loads(disable_list)
- assert type(disabled) == list, f"wrong disable_list data for apply_and_restart: {disable_list}"
-
- update = json.loads(update_list)
- assert type(update) == list, f"wrong update_list data for apply_and_restart: {update_list}"
-
- update = set(update)
-
- for ext in extensions.extensions:
- if ext.name not in update:
- continue
-
- try:
- ext.fetch_and_reset_hard()
- except Exception:
- print(f"Error getting updates for {ext.name}:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
-
- shared.opts.disabled_extensions = disabled
- shared.opts.save(shared.config_filename)
-
- shared.state.interrupt()
- shared.state.need_restart = True
-
-
-def check_updates(id_task, disable_list):
- check_access()
-
- disabled = json.loads(disable_list)
- assert type(disabled) == list, f"wrong disable_list data for apply_and_restart: {disable_list}"
-
- exts = [ext for ext in extensions.extensions if ext.remote is not None and ext.name not in disabled]
- shared.state.job_count = len(exts)
-
- for ext in exts:
- shared.state.textinfo = ext.name
-
- try:
- ext.check_updates()
- except Exception:
- print(f"Error checking updates for {ext.name}:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
-
- shared.state.nextjob()
-
- return extension_table(), ""
-
-
-def extension_table():
- code = f"""
-
- """
-
- return code
-
-
-def normalize_git_url(url):
- if url is None:
- return ""
-
- url = url.replace(".git", "")
- return url
-
-
-def install_extension_from_url(dirname, url):
- check_access()
-
- assert url, 'No URL specified'
-
- if dirname is None or dirname == "":
- *parts, last_part = url.split('/')
- last_part = normalize_git_url(last_part)
-
- dirname = last_part
-
- target_dir = os.path.join(extensions.extensions_dir, dirname)
- assert not os.path.exists(target_dir), f'Extension directory already exists: {target_dir}'
-
- normalized_url = normalize_git_url(url)
- assert len([x for x in extensions.extensions if normalize_git_url(x.remote) == normalized_url]) == 0, 'Extension with this URL is already installed'
-
- tmpdir = os.path.join(paths.data_path, "tmp", dirname)
-
- try:
- shutil.rmtree(tmpdir, True)
-
- repo = git.Repo.clone_from(url, tmpdir)
- repo.remote().fetch()
-
- try:
- os.rename(tmpdir, target_dir)
- except OSError as err:
- # TODO what does this do on windows? I think it'll be a different error code but I don't have a system to check it
- # Shouldn't cause any new issues at least but we probably want to handle it there too.
- if err.errno == errno.EXDEV:
- # Cross device link, typical in docker or when tmp/ and extensions/ are on different file systems
- # Since we can't use a rename, do the slower but more versitile shutil.move()
- shutil.move(tmpdir, target_dir)
- else:
- # Something else, not enough free space, permissions, etc. rethrow it so that it gets handled.
-                raise err
-
- import launch
- launch.run_extension_installer(target_dir)
-
- extensions.list_extensions()
- return [extension_table(), html.escape(f"Installed into {target_dir}. Use Installed tab to restart.")]
- finally:
- shutil.rmtree(tmpdir, True)
-
-
-def install_extension_from_index(url, hide_tags, sort_column):
- ext_table, message = install_extension_from_url(None, url)
-
- code, _ = refresh_available_extensions_from_data(hide_tags, sort_column)
-
- return code, ext_table, message
-
-
-def refresh_available_extensions(url, hide_tags, sort_column):
- global available_extensions
-
- import urllib.request
- with urllib.request.urlopen(url) as response:
- text = response.read()
-
- available_extensions = json.loads(text)
-
- code, tags = refresh_available_extensions_from_data(hide_tags, sort_column)
-
- return url, code, gr.CheckboxGroup.update(choices=tags), ''
-
-
-def refresh_available_extensions_for_tags(hide_tags, sort_column):
- code, _ = refresh_available_extensions_from_data(hide_tags, sort_column)
-
- return code, ''
-
-
-sort_ordering = [
- # (reverse, order_by_function)
- (True, lambda x: x.get('added', 'z')),
- (False, lambda x: x.get('added', 'z')),
- (False, lambda x: x.get('name', 'z')),
- (True, lambda x: x.get('name', 'z')),
- (False, lambda x: 'z'),
-]
-
-
-def refresh_available_extensions_from_data(hide_tags, sort_column):
- extlist = available_extensions["extensions"]
- installed_extension_urls = {normalize_git_url(extension.remote): extension.name for extension in extensions.extensions}
-
- tags = available_extensions.get("tags", {})
- tags_to_hide = set(hide_tags)
- hidden = 0
-
- code = f"""
-
-
-
- Extension
- Description
- Action
-
-
-
- """
-
- sort_reverse, sort_function = sort_ordering[sort_column if 0 <= sort_column < len(sort_ordering) else 0]
-
- for ext in sorted(extlist, key=sort_function, reverse=sort_reverse):
- name = ext.get("name", "noname")
- added = ext.get('added', 'unknown')
- url = ext.get("url", None)
- description = ext.get("description", "")
- extension_tags = ext.get("tags", [])
-
- if url is None:
- continue
-
- existing = installed_extension_urls.get(normalize_git_url(url), None)
- extension_tags = extension_tags + ["installed"] if existing else extension_tags
-
- if len([x for x in extension_tags if x in tags_to_hide]) > 0:
- hidden += 1
- continue
-
- install_code = f""" """
-
- tags_text = ", ".join([f"{x} " for x in extension_tags])
-
- code += f"""
-
- {html.escape(name)} {tags_text}
- {html.escape(description)}Added: {html.escape(added)}
- {install_code}
-
-
- """
-
- for tag in [x for x in extension_tags if x not in tags]:
- tags[tag] = tag
-
- code += """
-
-
- """
-
- if hidden > 0:
- code += f"Extension hidden: {hidden}
"
-
- return code, list(tags)
-
-
-def create_ui():
- import modules.ui
-
- with gr.Blocks(analytics_enabled=False) as ui:
- with gr.Tabs(elem_id="tabs_extensions") as tabs:
- with gr.TabItem("Installed"):
-
- with gr.Row(elem_id="extensions_installed_top"):
- apply = gr.Button(value="Apply and restart UI", variant="primary")
- check = gr.Button(value="Check for updates")
- extensions_disabled_list = gr.Text(elem_id="extensions_disabled_list", visible=False).style(container=False)
- extensions_update_list = gr.Text(elem_id="extensions_update_list", visible=False).style(container=False)
-
- info = gr.HTML()
- extensions_table = gr.HTML(lambda: extension_table())
-
- apply.click(
- fn=apply_and_restart,
- _js="extensions_apply",
- inputs=[extensions_disabled_list, extensions_update_list],
- outputs=[],
- )
-
- check.click(
- fn=wrap_gradio_gpu_call(check_updates, extra_outputs=[gr.update()]),
- _js="extensions_check",
- inputs=[info, extensions_disabled_list],
- outputs=[extensions_table, info],
- )
-
- with gr.TabItem("Available"):
- with gr.Row():
- refresh_available_extensions_button = gr.Button(value="Load from:", variant="primary")
- available_extensions_index = gr.Text(value="https://raw.githubusercontent.com/wiki/AUTOMATIC1111/stable-diffusion-webui/Extensions-index.md", label="Extension index URL").style(container=False)
- extension_to_install = gr.Text(elem_id="extension_to_install", visible=False)
- install_extension_button = gr.Button(elem_id="install_extension_button", visible=False)
-
- with gr.Row():
- hide_tags = gr.CheckboxGroup(value=["ads", "localization", "installed"], label="Hide extensions with tags", choices=["script", "ads", "localization", "installed"])
- sort_column = gr.Radio(value="newest first", label="Order", choices=["newest first", "oldest first", "a-z", "z-a", "internal order", ], type="index")
-
- install_result = gr.HTML()
- available_extensions_table = gr.HTML()
-
- refresh_available_extensions_button.click(
- fn=modules.ui.wrap_gradio_call(refresh_available_extensions, extra_outputs=[gr.update(), gr.update(), gr.update()]),
- inputs=[available_extensions_index, hide_tags, sort_column],
- outputs=[available_extensions_index, available_extensions_table, hide_tags, install_result],
- )
-
- install_extension_button.click(
- fn=modules.ui.wrap_gradio_call(install_extension_from_index, extra_outputs=[gr.update(), gr.update()]),
- inputs=[extension_to_install, hide_tags, sort_column],
- outputs=[available_extensions_table, extensions_table, install_result],
- )
-
- hide_tags.change(
- fn=modules.ui.wrap_gradio_call(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
- inputs=[hide_tags, sort_column],
- outputs=[available_extensions_table, install_result]
- )
-
- sort_column.change(
- fn=modules.ui.wrap_gradio_call(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
- inputs=[hide_tags, sort_column],
- outputs=[available_extensions_table, install_result]
- )
-
- with gr.TabItem("Install from URL"):
- install_url = gr.Text(label="URL for extension's git repository")
- install_dirname = gr.Text(label="Local directory name", placeholder="Leave empty for auto")
- install_button = gr.Button(value="Install", variant="primary")
- install_result = gr.HTML(elem_id="extension_install_result")
-
- install_button.click(
- fn=modules.ui.wrap_gradio_call(install_extension_from_url, extra_outputs=[gr.update()]),
- inputs=[install_dirname, install_url],
- outputs=[extensions_table, install_result],
- )
-
- return ui
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/predict.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/predict.md
deleted file mode 100644
index 8708933e618499c628c59dda2fc9eee78a9c23d2..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/predict.md
+++ /dev/null
@@ -1,525 +0,0 @@
----
-comments: true
-description: Get started with YOLOv8 Predict mode and input sources. Accepts various input sources such as images, videos, and directories.
-keywords: YOLOv8, predict mode, generator, streaming mode, input sources, video formats, arguments customization
----
-
-
-
-YOLOv8 **predict mode** can generate predictions for various tasks, returning either a list of `Results` objects or a
-memory-efficient generator of `Results` objects when using the streaming mode. Enable streaming mode by
-passing `stream=True` in the predictor's call method.
-
-!!! example "Predict"
-
- === "Return a list with `stream=False`"
- ```python
- from ultralytics import YOLO
-
- # Load a model
- model = YOLO('yolov8n.pt') # pretrained YOLOv8n model
-
- # Run batched inference on a list of images
- results = model(['im1.jpg', 'im2.jpg']) # return a list of Results objects
-
- # Process results list
- for result in results:
- boxes = result.boxes # Boxes object for bbox outputs
- masks = result.masks # Masks object for segmentation masks outputs
- keypoints = result.keypoints # Keypoints object for pose outputs
- probs = result.probs # Class probabilities for classification outputs
- ```
-
- === "Return a generator with `stream=True`"
- ```python
- from ultralytics import YOLO
-
- # Load a model
- model = YOLO('yolov8n.pt') # pretrained YOLOv8n model
-
- # Run batched inference on a list of images
- results = model(['im1.jpg', 'im2.jpg'], stream=True) # return a generator of Results objects
-
- # Process results generator
- for result in results:
- boxes = result.boxes # Boxes object for bbox outputs
- masks = result.masks # Masks object for segmentation masks outputs
- keypoints = result.keypoints # Keypoints object for pose outputs
- probs = result.probs # Class probabilities for classification outputs
- ```
-
-## Inference Sources
-
-YOLOv8 can process different types of input sources for inference, as shown in the table below. The sources include static images, video streams, and various data formats. The table also indicates whether each source can be used in streaming mode with the argument `stream=True` ✅. Streaming mode is beneficial for processing videos or live streams as it creates a generator of results instead of loading all frames into memory.
-
-!!! tip "Tip"
-
- Use `stream=True` for processing long videos or large datasets to efficiently manage memory. When `stream=False`, the results for all frames or data points are stored in memory, which can quickly add up and cause out-of-memory errors for large inputs. In contrast, `stream=True` utilizes a generator, which only keeps the results of the current frame or data point in memory, significantly reducing memory consumption and preventing out-of-memory issues.
-
-| Source | Argument | Type | Notes |
-|-------------|--------------------------------------------|---------------------------------------|----------------------------------------------------------------------------|
-| image | `'image.jpg'` | `str` or `Path` | Single image file. |
-| URL | `'https://ultralytics.com/images/bus.jpg'` | `str` | URL to an image. |
-| screenshot | `'screen'` | `str` | Capture a screenshot. |
-| PIL | `Image.open('im.jpg')` | `PIL.Image` | HWC format with RGB channels. |
-| OpenCV | `cv2.imread('im.jpg')` | `np.ndarray` of `uint8 (0-255)` | HWC format with BGR channels. |
-| numpy | `np.zeros((640,1280,3))` | `np.ndarray` of `uint8 (0-255)` | HWC format with BGR channels. |
-| torch | `torch.zeros(16,3,320,640)` | `torch.Tensor` of `float32 (0.0-1.0)` | BCHW format with RGB channels. |
-| CSV | `'sources.csv'` | `str` or `Path` | CSV file containing paths to images, videos, or directories. |
-| video ✅ | `'video.mp4'` | `str` or `Path` | Video file in formats like MP4, AVI, etc. |
-| directory ✅ | `'path/'` | `str` or `Path` | Path to a directory containing images or videos. |
-| glob ✅ | `'path/*.jpg'` | `str` | Glob pattern to match multiple files. Use the `*` character as a wildcard. |
-| YouTube ✅ | `'https://youtu.be/Zgi9g1ksQHc'` | `str` | URL to a YouTube video. |
-| stream ✅ | `'rtsp://example.com/media.mp4'` | `str` | URL for streaming protocols such as RTSP, RTMP, or an IP address. |
-
-Below are code examples for using each source type:
-
-!!! example "Prediction sources"
-
- === "image"
- Run inference on an image file.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define path to the image file
- source = 'path/to/image.jpg'
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "screenshot"
- Run inference on the current screen content as a screenshot.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define current screenshot as source
- source = 'screen'
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "URL"
- Run inference on an image or video hosted remotely via URL.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define remote image or video URL
- source = 'https://ultralytics.com/images/bus.jpg'
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "PIL"
- Run inference on an image opened with Python Imaging Library (PIL).
- ```python
- from PIL import Image
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Open an image using PIL
- source = Image.open('path/to/image.jpg')
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "OpenCV"
- Run inference on an image read with OpenCV.
- ```python
- import cv2
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Read an image using OpenCV
- source = cv2.imread('path/to/image.jpg')
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "numpy"
- Run inference on an image represented as a numpy array.
- ```python
- import numpy as np
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Create a random numpy array of HWC shape (640, 640, 3) with values in range [0, 255] and type uint8
- source = np.random.randint(low=0, high=256, size=(640, 640, 3), dtype='uint8')
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "torch"
- Run inference on an image represented as a PyTorch tensor.
- ```python
- import torch
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Create a random torch tensor of BCHW shape (1, 3, 640, 640) with values in range [0, 1] and type float32
- source = torch.rand(1, 3, 640, 640, dtype=torch.float32)
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "CSV"
- Run inference on a collection of images, URLs, videos and directories listed in a CSV file.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define a path to a CSV file with images, URLs, videos and directories
- source = 'path/to/file.csv'
-
- # Run inference on the source
- results = model(source) # list of Results objects
- ```
-
- === "video"
- Run inference on a video file. By using `stream=True`, you can create a generator of Results objects to reduce memory usage.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define path to video file
- source = 'path/to/video.mp4'
-
- # Run inference on the source
- results = model(source, stream=True) # generator of Results objects
- ```
-
- === "directory"
- Run inference on all images and videos in a directory. To also capture images and videos in subdirectories use a glob pattern, i.e. `path/to/dir/**/*`.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define path to directory containing images and videos for inference
- source = 'path/to/dir'
-
- # Run inference on the source
- results = model(source, stream=True) # generator of Results objects
- ```
-
- === "glob"
- Run inference on all images and videos that match a glob expression with `*` characters.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define a glob search for all JPG files in a directory
- source = 'path/to/dir/*.jpg'
-
- # OR define a recursive glob search for all JPG files including subdirectories
- source = 'path/to/dir/**/*.jpg'
-
- # Run inference on the source
- results = model(source, stream=True) # generator of Results objects
- ```
-
- === "YouTube"
- Run inference on a YouTube video. By using `stream=True`, you can create a generator of Results objects to reduce memory usage for long videos.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define source as YouTube video URL
- source = 'https://youtu.be/Zgi9g1ksQHc'
-
- # Run inference on the source
- results = model(source, stream=True) # generator of Results objects
- ```
-
- === "Stream"
- Run inference on remote streaming sources using RTSP, RTMP, and IP address protocols.
- ```python
- from ultralytics import YOLO
-
- # Load a pretrained YOLOv8n model
- model = YOLO('yolov8n.pt')
-
- # Define source as RTSP, RTMP or IP streaming address
- source = 'rtsp://example.com/media.mp4'
-
- # Run inference on the source
- results = model(source, stream=True) # generator of Results objects
- ```
-
-## Inference Arguments
-
-`model.predict()` accepts multiple arguments that control the prediction operation. These can be passed directly at call time:
-
-!!! example
-
- ```python
- model.predict(source, save=True, imgsz=320, conf=0.5)
- ```
-
-All supported arguments:
-
-| Key | Value | Description |
-|----------------|------------------------|--------------------------------------------------------------------------------|
-| `source` | `'ultralytics/assets'` | source directory for images or videos |
-| `conf` | `0.25` | object confidence threshold for detection |
-| `iou` | `0.7` | intersection over union (IoU) threshold for NMS |
-| `half` | `False` | use half precision (FP16) |
-| `device` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
-| `show` | `False` | show results if possible |
-| `save` | `False` | save images with results |
-| `save_txt` | `False` | save results as .txt file |
-| `save_conf` | `False` | save results with confidence scores |
-| `save_crop` | `False` | save cropped images with results |
-| `hide_labels` | `False` | hide labels |
-| `hide_conf` | `False` | hide confidence scores |
-| `max_det` | `300` | maximum number of detections per image |
-| `vid_stride` | `False` | video frame-rate stride |
-| `line_width` | `None` | The line width of the bounding boxes. If None, it is scaled to the image size. |
-| `visualize` | `False` | visualize model features |
-| `augment` | `False` | apply image augmentation to prediction sources |
-| `agnostic_nms` | `False` | class-agnostic NMS |
-| `retina_masks` | `False` | use high-resolution segmentation masks |
-| `classes` | `None` | filter results by class, i.e. class=0, or class=[0,2,3] |
-| `boxes` | `True` | Show boxes in segmentation predictions |
-
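As a further illustration, several of these arguments can be combined in one call; this is a sketch and the image path is a placeholder:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO('yolov8n.pt')

# Keep only persons (class 0) and cars (class 2), raise the confidence threshold,
# and save both annotated images and cropped detections
results = model.predict('path/to/image.jpg', conf=0.5, classes=[0, 2], save=True, save_crop=True)
```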
-## Image and Video Formats
-
-YOLOv8 supports various image and video formats, as specified in [yolo/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/data/utils.py). See the tables below for the valid suffixes and example predict commands.
-
-### Image Suffixes
-
-The table below contains valid Ultralytics image formats.
-
-| Image Suffixes | Example Predict Command | Reference |
-|----------------|----------------------------------|-------------------------------------------------------------------------------|
-| .bmp | `yolo predict source=image.bmp` | [Microsoft BMP File Format](https://en.wikipedia.org/wiki/BMP_file_format) |
-| .dng | `yolo predict source=image.dng` | [Adobe DNG](https://www.adobe.com/products/photoshop/extend.displayTab2.html) |
-| .jpeg | `yolo predict source=image.jpeg` | [JPEG](https://en.wikipedia.org/wiki/JPEG) |
-| .jpg | `yolo predict source=image.jpg` | [JPEG](https://en.wikipedia.org/wiki/JPEG) |
-| .mpo | `yolo predict source=image.mpo` | [Multi Picture Object](https://fileinfo.com/extension/mpo) |
-| .png | `yolo predict source=image.png` | [Portable Network Graphics](https://en.wikipedia.org/wiki/PNG) |
-| .tif | `yolo predict source=image.tif` | [Tag Image File Format](https://en.wikipedia.org/wiki/TIFF) |
-| .tiff | `yolo predict source=image.tiff` | [Tag Image File Format](https://en.wikipedia.org/wiki/TIFF) |
-| .webp | `yolo predict source=image.webp` | [WebP](https://en.wikipedia.org/wiki/WebP) |
-| .pfm | `yolo predict source=image.pfm` | [Portable FloatMap](https://en.wikipedia.org/wiki/Netpbm#File_formats) |
-
-### Video Suffixes
-
-The table below contains valid Ultralytics video formats.
-
-| Video Suffixes | Example Predict Command | Reference |
-|----------------|----------------------------------|----------------------------------------------------------------------------------|
-| .asf | `yolo predict source=video.asf` | [Advanced Systems Format](https://en.wikipedia.org/wiki/Advanced_Systems_Format) |
-| .avi | `yolo predict source=video.avi` | [Audio Video Interleave](https://en.wikipedia.org/wiki/Audio_Video_Interleave) |
-| .gif | `yolo predict source=video.gif` | [Graphics Interchange Format](https://en.wikipedia.org/wiki/GIF) |
-| .m4v | `yolo predict source=video.m4v` | [MPEG-4 Part 14](https://en.wikipedia.org/wiki/M4V) |
-| .mkv | `yolo predict source=video.mkv` | [Matroska](https://en.wikipedia.org/wiki/Matroska) |
-| .mov | `yolo predict source=video.mov` | [QuickTime File Format](https://en.wikipedia.org/wiki/QuickTime_File_Format) |
-| .mp4 | `yolo predict source=video.mp4` | [MPEG-4 Part 14 - Wikipedia](https://en.wikipedia.org/wiki/MPEG-4_Part_14) |
-| .mpeg | `yolo predict source=video.mpeg` | [MPEG-1 Part 2](https://en.wikipedia.org/wiki/MPEG-1) |
-| .mpg | `yolo predict source=video.mpg` | [MPEG-1 Part 2](https://en.wikipedia.org/wiki/MPEG-1) |
-| .ts | `yolo predict source=video.ts` | [MPEG Transport Stream](https://en.wikipedia.org/wiki/MPEG_transport_stream) |
-| .wmv | `yolo predict source=video.wmv` | [Windows Media Video](https://en.wikipedia.org/wiki/Windows_Media_Video) |
-| .webm | `yolo predict source=video.webm` | [WebM Project](https://en.wikipedia.org/wiki/WebM) |
-
-## Working with Results
-
-The `Results` object contains the following components:
-
-- `Results.boxes`: `Boxes` object with properties and methods for manipulating bounding boxes
-- `Results.masks`: `Masks` object for indexing masks or getting segment coordinates
- `Results.keypoints`: `Keypoints` object with properties and methods for manipulating predicted keypoints
- `Results.probs`: `Probs` object containing class probabilities
-- `Results.orig_img`: Original image loaded in memory
-- `Results.path`: `Path` containing the path to the input image
-
-Each result component is backed by a `torch.Tensor` by default, which allows for easy manipulation:
-
-!!! example "Results"
-
- ```python
- results = results.cuda()
- results = results.cpu()
- results = results.to('cpu')
- results = results.numpy()
- ```
-
-### Boxes
-
-A `Boxes` object can be used to index, manipulate, and convert bounding boxes to different formats. Box format conversion
-operations are cached, meaning they're only calculated once per object, and those values are reused for future calls.
-
-- Indexing a `Boxes` object returns a `Boxes` object:
-
-!!! example "Boxes"
-
- ```python
- results = model(img)
- boxes = results[0].boxes
- box = boxes[0] # returns one box
- box.xyxy
- ```
-
-- Properties and conversions
-
-!!! example "Boxes Properties"
-
- ```python
- boxes.xyxy # box with xyxy format, (N, 4)
- boxes.xywh # box with xywh format, (N, 4)
- boxes.xyxyn # box with xyxy format but normalized, (N, 4)
- boxes.xywhn # box with xywh format but normalized, (N, 4)
- boxes.conf # confidence score, (N, )
- boxes.cls # cls, (N, )
- boxes.data # raw bboxes tensor, (N, 6) or boxes.boxes
- ```
-
-### Masks
-
-A `Masks` object can be used to index, manipulate, and convert masks to segments. The segment conversion operation is cached.
-
-!!! example "Masks"
-
- ```python
- results = model(inputs)
- masks = results[0].masks # Masks object
- masks.xy # x, y segments (pixels), List[segment] * N
- masks.xyn # x, y segments (normalized), List[segment] * N
- masks.data # raw masks tensor, (N, H, W) or masks.masks
- ```
-
-### Keypoints
-
-A `Keypoints` object can be used to index, manipulate, and normalize coordinates. The keypoint conversion operation is cached.
-
-!!! example "Keypoints"
-
- ```python
- results = model(inputs)
- keypoints = results[0].keypoints # Keypoints object
- keypoints.xy # x, y keypoints (pixels), (num_dets, num_kpts, 2/3); the last dimension is 2 or 3, depending on the model
- keypoints.xyn # x, y keypoints (normalized), (num_dets, num_kpts, 2/3)
- keypoints.conf # confidence score (num_dets, num_kpts) of each keypoint, present if the last dimension is 3
- keypoints.data # raw keypoints tensor, (num_dets, num_kpts, 2/3)
- ```
-
-### Probs
-
-A `Probs` object can be used to index classification probabilities and get the top-1 and top-5 indices and scores.
-
-!!! example "Probs"
-
- ```python
- results = model(inputs)
- probs = results[0].probs # cls prob, (num_class, )
- probs.top5 # the top-5 class indices, a List[int] of length 5
- probs.top1 # the top-1 class index, an int
- probs.top5conf # the top-5 class scores, a tensor of shape (5, )
- probs.top1conf # the top-1 class score, a scalar tensor
- probs.data # raw probs tensor, (num_class, )
- ```
-
-Class reference documentation for the `Results` module and its components can be found [here](../reference/yolo/engine/results.md)
-
-## Plotting results
-
-You can use the `plot()` method of a `Results` object to plot results on an image. It plots all components (boxes,
-masks, classification probabilities, etc.) found in the results object.
-
-!!! example "Plotting"
-
- ```python
- res = model(img)
- res_plotted = res[0].plot()
- cv2.imshow("result", res_plotted)
- cv2.waitKey(0) # keep the window open until a key is pressed
- ```
-
-| Argument | Description |
-|-------------------------------|----------------------------------------------------------------------------------------|
-| `conf (bool)` | Whether to plot the detection confidence score. |
-| `line_width (int, optional)` | The line width of the bounding boxes. If None, it is scaled to the image size. |
-| `font_size (float, optional)` | The font size of the text. If None, it is scaled to the image size. |
-| `font (str)` | The font to use for the text. |
-| `pil (bool)` | Whether to use PIL for image plotting. |
-| `example (str)` | An example string to display. Useful for indicating the expected format of the output. |
-| `img (numpy.ndarray)` | Plot to another image; if not set, plot to the original image. |
-| `labels (bool)` | Whether to plot the label of bounding boxes. |
-| `boxes (bool)` | Whether to plot the bounding boxes. |
-| `masks (bool)` | Whether to plot the masks. |
-| `probs (bool)` | Whether to plot classification probability. |
-
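To combine a few of these options, one might write the following; this is a sketch, assuming `res` holds the results from the example above:

```python
# Draw thicker boxes and omit confidence scores and masks in the rendered image
res_plotted = res[0].plot(conf=False, masks=False, line_width=3)
```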
-## Streaming Source `for`-loop
-
-Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (opencv-python and ultralytics).
-
-!!! example "Streaming for-loop"
-
- ```python
- import cv2
- from ultralytics import YOLO
-
- # Load the YOLOv8 model
- model = YOLO('yolov8n.pt')
-
- # Open the video file
- video_path = "path/to/your/video/file.mp4"
- cap = cv2.VideoCapture(video_path)
-
- # Loop through the video frames
- while cap.isOpened():
- # Read a frame from the video
- success, frame = cap.read()
-
- if success:
- # Run YOLOv8 inference on the frame
- results = model(frame)
-
- # Visualize the results on the frame
- annotated_frame = results[0].plot()
-
- # Display the annotated frame
- cv2.imshow("YOLOv8 Inference", annotated_frame)
-
- # Break the loop if 'q' is pressed
- if cv2.waitKey(1) & 0xFF == ord("q"):
- break
- else:
- # Break the loop if the end of the video is reached
- break
-
- # Release the video capture object and close the display window
- cap.release()
- cv2.destroyAllWindows()
- ```
\ No newline at end of file
diff --git a/spaces/vama09/HashtagAndCaption/app.py b/spaces/vama09/HashtagAndCaption/app.py
deleted file mode 100644
index 8ac85858715f5f7e8f98a12b49e4e842425d76ab..0000000000000000000000000000000000000000
--- a/spaces/vama09/HashtagAndCaption/app.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import streamlit as st
-from PIL import Image
-import numpy as np
-import nltk
-nltk.download('stopwords')
-nltk.download('punkt')
-import pandas as pd
-import random
-import easyocr
-import re
-from nltk.corpus import stopwords
-from nltk.tokenize import word_tokenize
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.metrics.pairwise import cosine_similarity
-from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel
-
-# Load the pretrained image-captioning model from the Hugging Face Hub
-model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
-
-# Load the feature extractor and tokenizer
-feature_extractor = ViTFeatureExtractor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
-tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
-
-
-def generate_captions(image):
- image = Image.open(image).convert("RGB")
- pixel_values = feature_extractor(image, return_tensors="pt").pixel_values.to("cpu")
- generated_caption = tokenizer.decode(model.generate(pixel_values)[0])
- # Strip the GPT-2 end-of-text marker from the decoded caption
- return generated_caption.replace("<|endoftext|>", "")
-
-# use easyocr to extract text from the image
-def image_text(image):
- img_np = np.array(image)
- reader = easyocr.Reader(['en'])
- text = reader.readtext(img_np)
- detected_text = " ".join([item[1] for item in text])
-
- # Extract individual words, convert to lowercase, and add "#" symbol
- detected_text= ['#' + entry[1].strip().lower().replace(" ", "") for entry in text]
- return detected_text
-
-# Load NLTK stopwords for filtering
-stop_words = set(stopwords.words('english'))
-
-# Add hashtags to keywords, which have been generated from image captioning
-def add_hashtags(keywords):
- hashtags = []
-
- for keyword in keywords:
- # Generate hashtag from the keyword (you can modify this part as per your requirements)
- hashtag = '#' + keyword.lower()
-
- hashtags.append(hashtag)
-
- return hashtags
-
-def trending_hashtags(caption):
- # Read trending hashtags from a file separated by commas
- with open("hashies.txt", "r") as file:
- hashtags_string = file.read()
-
- # Split the hashtags by commas and remove any leading/trailing spaces
- trending_hashtags = [hashtag.strip() for hashtag in hashtags_string.split(',')]
-
- # Create a DataFrame from the hashtags
- df = pd.DataFrame(trending_hashtags, columns=["Hashtags"])
-
- # Function to extract keywords from a given text
- def extract_keywords(text):
- tokens = word_tokenize(text)
- keywords = [token.lower() for token in tokens if token.lower() not in stop_words]
- return keywords
-
- # Extract keywords from caption and trending hashtags
- caption_keywords = extract_keywords(caption)
- hashtag_keywords = [extract_keywords(hashtag) for hashtag in df["Hashtags"]]
-
- # Function to calculate cosine similarity between two strings
- def calculate_similarity(text1, text2):
- tfidf_vectorizer = TfidfVectorizer()
- tfidf_matrix = tfidf_vectorizer.fit_transform([text1, text2])
- similarity_matrix = cosine_similarity(tfidf_matrix[0], tfidf_matrix[1])
- return similarity_matrix[0][0]
-
- # Calculate similarity between caption and each trending hashtag
- similarities = [calculate_similarity(' '.join(caption_keywords), ' '.join(keywords)) for keywords in hashtag_keywords]
-
- # Sort trending hashtags based on similarity in descending order
- sorted_hashtags = [hashtag for _, hashtag in sorted(zip(similarities, df["Hashtags"]), reverse=True)]
-
- # Select top k relevant hashtags (e.g., top 5) without duplicates
- selected_hashtags = list(set(sorted_hashtags[:5]))
-
- selected_hashtag = [word.strip("'") for word in selected_hashtags]
-
- return selected_hashtag
-
-# create the Streamlit app
-def app():
- st.title('Image from your Side, Trending Hashtags from our Side')
-
- st.write('Upload an image to see what we have in store.')
-
- # create file uploader
- uploaded_file = st.file_uploader("Got you covered! Upload your wish, magic is on the way!", type=["jpg", "jpeg", "png"])
-
- # check if file has been uploaded
- if uploaded_file is not None:
- # load the image
- image = Image.open(uploaded_file).convert("RGB")
-
- # Image Captions
- string = generate_captions(uploaded_file)
- tokens = word_tokenize(string)
- keywords = [token.lower() for token in tokens if token.lower() not in stop_words]
- hashtags = add_hashtags(keywords)
-
- # Text Captions from image
- extracted_text = image_text(image)
-
- #Final Hashtags Generation
- web_hashtags = trending_hashtags(string)
-
- combined_hashtags = hashtags + extracted_text + web_hashtags
-
- # Shuffle the list randomly
- random.shuffle(combined_hashtags)
-
- combined_hashtags = list(set(item for item in combined_hashtags[:15] if not re.search(r'\d$', item)))
-
-
- # display the image
- st.image(image, caption='The Uploaded File')
- st.write("First is first captions for your Photo : ", string)
- st.write("Magical hashies have arrived : ", combined_hashtags)
-
-# run the app
-if __name__ == '__main__':
- app()
diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-1349a7bd.js b/spaces/whitphx/gradio-static-test/dist/assets/index-1349a7bd.js
deleted file mode 100644
index e5cfc5d7588ff256c39ef278ce66e28675604746..0000000000000000000000000000000000000000
--- a/spaces/whitphx/gradio-static-test/dist/assets/index-1349a7bd.js
+++ /dev/null
@@ -1,5 +0,0 @@
diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-4ad7d092.js b/spaces/whitphx/gradio-static-test/dist/assets/index-4ad7d092.js
deleted file mode 100644
index 179aaa2a55722304b2f9091b3b9dcae370ac6075..0000000000000000000000000000000000000000
--- a/spaces/whitphx/gradio-static-test/dist/assets/index-4ad7d092.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{C as ge,E as q,L as Pe}from"./index-46909c92.js";import{s as Te,t as S,p as be,L as Ve,i as xe,f as _e,u as ye,b as ve,v as qe,h as z,E as G}from"./index-1040e6d9.js";import{cssLanguage as F,css as $e}from"./index-44ee6c5c.js";import{typescriptLanguage as we,jsxLanguage as Ce,tsxLanguage as Qe,javascriptLanguage as K,javascript as Ae}from"./index-c55b5a90.js";import"../lite.js";import"./Blocks-99723874.js";import"./Button-0391b19a.js";import"./BlockLabel-a3ec523d.js";import"./Empty-91947ea3.js";/* empty css */import"./Copy-d654b047.js";import"./Download-35908774.js";const Xe=54,ke=1,Ye=55,Me=2,Be=56,Ee=3,D=4,Ge=5,y=6,ee=7,te=8,ae=9,le=10,De=11,Re=12,Ze=13,w=57,Ne=14,R=58,We=20,He=22,re=23,Ie=24,k=26,ne=27,Ue=28,je=31,Je=34,se=36,Le=37,ze=0,Fe=1,Ke={area:!0,base:!0,br:!0,col:!0,command:!0,embed:!0,frame:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0,menuitem:!0},et={dd:!0,li:!0,optgroup:!0,option:!0,p:!0,rp:!0,rt:!0,tbody:!0,td:!0,tfoot:!0,th:!0,tr:!0},Z={dd:{dd:!0,dt:!0},dt:{dd:!0,dt:!0},li:{li:!0},option:{option:!0,optgroup:!0},optgroup:{optgroup:!0},p:{address:!0,article:!0,aside:!0,blockquote:!0,dir:!0,div:!0,dl:!0,fieldset:!0,footer:!0,form:!0,h1:!0,h2:!0,h3:!0,h4:!0,h5:!0,h6:!0,header:!0,hgroup:!0,hr:!0,menu:!0,nav:!0,ol:!0,p:!0,pre:!0,section:!0,table:!0,ul:!0},rp:{rp:!0,rt:!0},rt:{rp:!0,rt:!0},tbody:{tbody:!0,tfoot:!0},td:{td:!0,th:!0},tfoot:{tbody:!0},th:{td:!0,th:!0},thead:{tbody:!0,tfoot:!0},tr:{tr:!0}};function tt(e){return e==45||e==46||e==58||e>=65&&e<=90||e==95||e>=97&&e<=122||e>=161}function oe(e){return e==9||e==10||e==13||e==32}let N=null,W=null,H=0;function Y(e,t){let l=e.pos+t;if(H==l&&W==e)return N;let a=e.peek(t);for(;oe(a);)a=e.peek(++t);let r="";for(;tt(a);)r+=String.fromCharCode(a),a=e.peek(++t);return W=e,H=l,N=r?r.toLowerCase():a==at||a==lt?void 0:null}const Oe=60,v=62,M=47,at=63,lt=33,rt=45;function I(e,t){this.name=e,this.parent=t,this.hash=t?t.hash:0;for(let l=0;l-1?new I(Y(a,1)||"",e):e},reduce(e,t){return t==We&&e?e.parent:e},reuse(e,t,l,a){let r=t.type.id;return r==y||r==se?new I(Y(a,1)||"",e):e},hash(e){return e?e.hash:0},strict:!1}),ot=new q((e,t)=>{if(e.next!=Oe){e.next<0&&t.context&&e.acceptToken(w);return}e.advance();let l=e.next==M;l&&e.advance();let a=Y(e,0);if(a===void 0)return;if(!a)return e.acceptToken(l?Ne:y);let r=t.context?t.context.name:null;if(l){if(a==r)return e.acceptToken(De);if(r&&et[r])return e.acceptToken(w,-2);if(t.dialectEnabled(ze))return e.acceptToken(Re);for(let n=t.context;n;n=n.parent)if(n.name==a)return;e.acceptToken(Ze)}else{if(a=="script")return e.acceptToken(ee);if(a=="style")return e.acceptToken(te);if(a=="textarea")return e.acceptToken(ae);if(Ke.hasOwnProperty(a))return e.acceptToken(le);r&&Z[r]&&Z[r][a]?e.acceptToken(w,-1):e.acceptToken(y)}},{contextual:!0}),Ot=new q(e=>{for(let t=0,l=0;;l++){if(e.next<0){l&&e.acceptToken(R);break}if(e.next==rt)t++;else if(e.next==v&&t>=2){l>3&&e.acceptToken(R,-2);break}else t=0;e.advance()}});function it(e){for(;e;e=e.parent)if(e.name=="svg"||e.name=="math")return!0;return!1}const ut=new q((e,t)=>{if(e.next==M&&e.peek(1)==v){let l=t.dialectEnabled(Fe)||it(t.context);e.acceptToken(l?Ge:D,2)}else e.next==v&&e.acceptToken(D,1)});function B(e,t,l){let a=2+e.length;return new q(r=>{for(let n=0,o=0,O=0;;O++){if(r.next<0){O&&r.acceptToken(t);break}if(n==0&&r.next==Oe||n==1&&r.next==M||n>=2&&no?r.acceptToken(t,-o):r.acceptToken(l,-(o-2));break}else if((r.next==10||r.next==13)&&O){r.acceptToken(t,1);break}else n=o=0;r.advance()}})}const 
pt=B("script",Xe,ke),ct=B("style",Ye,Me),dt=B("textarea",Be,Ee),ft=Te({"Text RawText":S.content,"StartTag StartCloseTag SelfClosingEndTag EndTag":S.angleBracket,TagName:S.tagName,"MismatchedCloseTag/TagName":[S.tagName,S.invalid],AttributeName:S.attributeName,"AttributeValue UnquotedAttributeValue":S.attributeValue,Is:S.definitionOperator,"EntityReference CharacterReference":S.character,Comment:S.blockComment,ProcessingInst:S.processingInstruction,DoctypeDecl:S.documentMeta}),ht=Pe.deserialize({version:14,states:",xOVO!rOOO!WQ#tO'#CqO!]Q#tO'#CzO!bQ#tO'#C}O!gQ#tO'#DQO!lQ#tO'#DSO!qOaO'#CpO!|ObO'#CpO#XOdO'#CpO$eO!rO'#CpOOO`'#Cp'#CpO$lO$fO'#DTO$tQ#tO'#DVO$yQ#tO'#DWOOO`'#Dk'#DkOOO`'#DY'#DYQVO!rOOO%OQ&rO,59]O%WQ&rO,59fO%`Q&rO,59iO%hQ&rO,59lO%sQ&rO,59nOOOa'#D^'#D^O%{OaO'#CxO&WOaO,59[OOOb'#D_'#D_O&`ObO'#C{O&kObO,59[OOOd'#D`'#D`O&sOdO'#DOO'OOdO,59[OOO`'#Da'#DaO'WO!rO,59[O'_Q#tO'#DROOO`,59[,59[OOOp'#Db'#DbO'dO$fO,59oOOO`,59o,59oO'lQ#|O,59qO'qQ#|O,59rOOO`-E7W-E7WO'vQ&rO'#CsOOQW'#DZ'#DZO(UQ&rO1G.wOOOa1G.w1G.wO(^Q&rO1G/QOOOb1G/Q1G/QO(fQ&rO1G/TOOOd1G/T1G/TO(nQ&rO1G/WOOO`1G/W1G/WOOO`1G/Y1G/YO(yQ&rO1G/YOOOa-E7[-E7[O)RQ#tO'#CyOOO`1G.v1G.vOOOb-E7]-E7]O)WQ#tO'#C|OOOd-E7^-E7^O)]Q#tO'#DPOOO`-E7_-E7_O)bQ#|O,59mOOOp-E7`-E7`OOO`1G/Z1G/ZOOO`1G/]1G/]OOO`1G/^1G/^O)gQ,UO,59_OOQW-E7X-E7XOOOa7+$c7+$cOOOb7+$l7+$lOOOd7+$o7+$oOOO`7+$r7+$rOOO`7+$t7+$tO)rQ#|O,59eO)wQ#|O,59hO)|Q#|O,59kOOO`1G/X1G/XO*RO7[O'#CvO*dOMhO'#CvOOQW1G.y1G.yOOO`1G/P1G/POOO`1G/S1G/SOOO`1G/V1G/VOOOO'#D['#D[O*uO7[O,59bOOQW,59b,59bOOOO'#D]'#D]O+WOMhO,59bOOOO-E7Y-E7YOOQW1G.|1G.|OOOO-E7Z-E7Z",stateData:"+s~O!^OS~OUSOVPOWQOXROYTO[]O][O^^O`^Oa^Ob^Oc^Ox^O{_O!dZO~OfaO~OfbO~OfcO~OfdO~OfeO~O!WfOPlP!ZlP~O!XiOQoP!ZoP~O!YlORrP!ZrP~OUSOVPOWQOXROYTOZqO[]O][O^^O`^Oa^Ob^Oc^Ox^O!dZO~O!ZrO~P#dO![sO!euO~OfvO~OfwO~OS|OhyO~OS!OOhyO~OS!QOhyO~OS!SOT!TOhyO~OS!TOhyO~O!WfOPlX!ZlX~OP!WO!Z!XO~O!XiOQoX!ZoX~OQ!ZO!Z!XO~O!YlORrX!ZrX~OR!]O!Z!XO~O!Z!XO~P#dOf!_O~O![sO!e!aO~OS!bO~OS!cO~Oi!dOSgXhgXTgX~OS!fOhyO~OS!gOhyO~OS!hOhyO~OS!iOT!jOhyO~OS!jOhyO~Of!kO~Of!lO~Of!mO~OS!nO~Ok!qO!`!oO!b!pO~OS!rO~OS!sO~OS!tO~Oa!uOb!uOc!uO!`!wO!a!uO~Oa!xOb!xOc!xO!b!wO!c!xO~Oa!uOb!uOc!uO!`!{O!a!uO~Oa!xOb!xOc!xO!b!{O!c!xO~OT~bac!dx{!d~",goto:"%p!`PPPPPPPPPPPPPPPPPPPP!a!gP!mPP!yP!|#P#S#Y#]#`#f#i#l#r#x!aP!a!aP$O$U$l$r$x%O%U%[%bPPPPPPPP%hX^OX`pXUOX`pezabcde{}!P!R!UR!q!dRhUR!XhXVOX`pRkVR!XkXWOX`pRnWR!XnXXOX`pQrXR!XpXYOX`pQ`ORx`Q{aQ}bQ!PcQ!RdQ!UeZ!e{}!P!R!UQ!v!oR!z!vQ!y!pR!|!yQgUR!VgQjVR!YjQmWR![mQpXR!^pQtZR!`tS_O`ToXp",nodeNames:"⚠ StartCloseTag StartCloseTag StartCloseTag EndTag SelfClosingEndTag StartTag StartTag StartTag StartTag StartTag StartCloseTag StartCloseTag StartCloseTag IncompleteCloseTag Document Text EntityReference CharacterReference InvalidEntity Element OpenTag TagName Attribute AttributeName Is AttributeValue UnquotedAttributeValue ScriptText CloseTag OpenTag StyleText CloseTag OpenTag TextareaText CloseTag OpenTag CloseTag SelfClosingTag Comment ProcessingInst MismatchedCloseTag CloseTag DoctypeDecl",maxTerm:67,context:st,nodeProps:[["closedBy",-10,1,2,3,7,8,9,10,11,12,13,"EndTag",6,"EndTag SelfClosingEndTag",-4,21,30,33,36,"CloseTag"],["openedBy",4,"StartTag StartCloseTag",5,"StartTag",-4,29,32,35,37,"OpenTag"],["group",-9,14,17,18,19,20,39,40,41,42,"Entity",16,"Entity TextContent",-3,28,31,34,"TextContent 
Entity"]],propSources:[ft],skippedNodes:[0],repeatNodeCount:9,tokenData:"#%g!aR!YOX$qXY,QYZ,QZ[$q[]&X]^,Q^p$qpq,Qqr-_rs4ysv-_vw5iwxJ^x}-_}!OKP!O!P-_!P!Q$q!Q![-_![!]!!O!]!^-_!^!_!&W!_!`#$o!`!a&X!a!c-_!c!}!!O!}#R-_#R#S!!O#S#T3V#T#o!!O#o#s-_#s$f$q$f%W-_%W%o!!O%o%p-_%p&a!!O&a&b-_&b1p!!O1p4U-_4U4d!!O4d4e-_4e$IS!!O$IS$I`-_$I`$Ib!!O$Ib$Kh-_$Kh%#t!!O%#t&/x-_&/x&Et!!O&Et&FV-_&FV;'S!!O;'S;:j!&Q;:j;=`4s<%l?&r-_?&r?Ah!!O?Ah?BY$q?BY?Mn!!O?MnO$q!Z$|c`PkW!a`!cpOX$qXZ&XZ[$q[^&X^p$qpq&Xqr$qrs&}sv$qvw+Pwx(tx!^$q!^!_*V!_!a&X!a#S$q#S#T&X#T;'S$q;'S;=`+z<%lO$q!R&bX`P!a`!cpOr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&Xq'UV`P!cpOv&}wx'kx!^&}!^!_(V!_;'S&};'S;=`(n<%lO&}P'pT`POv'kw!^'k!_;'S'k;'S;=`(P<%lO'kP(SP;=`<%l'kp([S!cpOv(Vx;'S(V;'S;=`(h<%lO(Vp(kP;=`<%l(Vq(qP;=`<%l&}a({W`P!a`Or(trs'ksv(tw!^(t!^!_)e!_;'S(t;'S;=`*P<%lO(t`)jT!a`Or)esv)ew;'S)e;'S;=`)y<%lO)e`)|P;=`<%l)ea*SP;=`<%l(t!Q*^V!a`!cpOr*Vrs(Vsv*Vwx)ex;'S*V;'S;=`*s<%lO*V!Q*vP;=`<%l*V!R*|P;=`<%l&XW+UYkWOX+PZ[+P^p+Pqr+Psw+Px!^+P!a#S+P#T;'S+P;'S;=`+t<%lO+PW+wP;=`<%l+P!Z+}P;=`<%l$q!a,]``P!a`!cp!^^OX&XXY,QYZ,QZ]&X]^,Q^p&Xpq,Qqr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&X!_-ljhS`PkW!a`!cpOX$qXZ&XZ[$q[^&X^p$qpq&Xqr-_rs&}sv-_vw/^wx(tx!P-_!P!Q$q!Q!^-_!^!_1n!_!a&X!a#S-_#S#T3V#T#s-_#s$f$q$f;'S-_;'S;=`4s<%l?Ah-_?Ah?BY$q?BY?Mn-_?MnO$q[/echSkWOX+PZ[+P^p+Pqr/^sw/^x!P/^!P!Q+P!Q!^/^!^!_0p!a#S/^#S#T0p#T#s/^#s$f+P$f;'S/^;'S;=`1h<%l?Ah/^?Ah?BY+P?BY?Mn/^?MnO+PS0uXhSqr0psw0px!P0p!Q!_0p!a#s0p$f;'S0p;'S;=`1b<%l?Ah0p?BY?Mn0pS1eP;=`<%l0p[1kP;=`<%l/^!U1wbhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!U3SP;=`<%l1n!V3bchS`P!a`!cpOq&Xqr3Vrs&}sv3Vvw0pwx(tx!P3V!P!Q&X!Q!^3V!^!_1n!_!a&X!a#s3V#s$f&X$f;'S3V;'S;=`4m<%l?Ah3V?Ah?BY&X?BY?Mn3V?MnO&X!V4pP;=`<%l3V!_4vP;=`<%l-_!Z5SV!`h`P!cpOv&}wx'kx!^&}!^!_(V!_;'S&};'S;=`(n<%lO&}!_5rjhSkWc!ROX7dXZ8qZ[7d[^8q^p7dqr:crs8qst@Ttw:cwx8qx!P:c!P!Q7d!Q!]:c!]!^/^!^!_=p!_!a8q!a#S:c#S#T=p#T#s:c#s$f7d$f;'S:c;'S;=`?}<%l?Ah:c?Ah?BY7d?BY?Mn:c?MnO7d!Z7ibkWOX7dXZ8qZ[7d[^8q^p7dqr7drs8qst+Ptw7dwx8qx!]7d!]!^9f!^!a8q!a#S7d#S#T8q#T;'S7d;'S;=`:]<%lO7d!R8tVOp8qqs8qt!]8q!]!^9Z!^;'S8q;'S;=`9`<%lO8q!R9`Oa!R!R9cP;=`<%l8q!Z9mYkWa!ROX+PZ[+P^p+Pqr+Psw+Px!^+P!a#S+P#T;'S+P;'S;=`+t<%lO+P!Z:`P;=`<%l7d!_:jjhSkWOX7dXZ8qZ[7d[^8q^p7dqr:crs8qst/^tw:cwx8qx!P:c!P!Q7d!Q!]:c!]!^<[!^!_=p!_!a8q!a#S:c#S#T=p#T#s:c#s$f7d$f;'S:c;'S;=`?}<%l?Ah:c?Ah?BY7d?BY?Mn:c?MnO7d!_b#d#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!>kdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#V1n#V#W!?y#W#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!@SdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#h1n#h#i!Ab#i#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!AkdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#m1n#m#n!By#n#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!CSdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#d1n#d#e!Db#e#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!DkdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#X1n#X#Y!5]#Y#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!FSchS!a`!cpOq!G_qr!Eyrs!HUsv!Eyvw!Ncwx!Jvx!P!Ey!P!Q!G_!Q!_!Ey!_!a!G_!a!b##T!b#s!Ey#s$f!G_$f;'S!Ey;'S;=`#$i<%l?Ah!Ey?Ah?BY!G_?BY?Mn!Ey?MnO!G_!R!GfY!a`!cpOr!G_rs!HUsv!G_vw!Hpwx!Jvx!a!G_!a!b!Lv!b;'S!G_;'S;=`!N]<%lO!G_q!HZV!cpOv!HUvx!Hpx!a!HU!a!b!Iq!b;'S!HU;'S;=`!Jp<%lO!HUP!HsTO!a!Hp!a!b!IS!b;'S!Hp;'S;=`!Ik<%lO!HpP!IVTO!`!Hp!`!a!If!a;'S!Hp;'S;=`!Ik<%lO!HpP!IkOxPP!InP;=`<%l!Hpq!IvV!cpOv!HUvx!Hpx!`!HU!`!a!J]!a;'S!HU;'S;=`!Jp<%lO!HUq!JdS!cpxPOv(Vx;'S(V;'S;=`(h<%lO(Vq!JsP;=`<%l
!HUa!J{X!a`Or!Jvrs!Hpsv!Jvvw!Hpw!a!Jv!a!b!Kh!b;'S!Jv;'S;=`!Lp<%lO!Jva!KmX!a`Or!Jvrs!Hpsv!Jvvw!Hpw!`!Jv!`!a!LY!a;'S!Jv;'S;=`!Lp<%lO!Jva!LaT!a`xPOr)esv)ew;'S)e;'S;=`)y<%lO)ea!LsP;=`<%l!Jv!R!L}Y!a`!cpOr!G_rs!HUsv!G_vw!Hpwx!Jvx!`!G_!`!a!Mm!a;'S!G_;'S;=`!N]<%lO!G_!R!MvV!a`!cpxPOr*Vrs(Vsv*Vwx)ex;'S*V;'S;=`*s<%lO*V!R!N`P;=`<%l!G_T!NhbhSOq!Hpqr!Ncrs!Hpsw!Ncwx!Hpx!P!Nc!P!Q!Hp!Q!_!Nc!_!a!Hp!a!b# p!b#s!Nc#s$f!Hp$f;'S!Nc;'S;=`#!}<%l?Ah!Nc?Ah?BY!Hp?BY?Mn!Nc?MnO!HpT# ubhSOq!Hpqr!Ncrs!Hpsw!Ncwx!Hpx!P!Nc!P!Q!Hp!Q!_!Nc!_!`!Hp!`!a!If!a#s!Nc#s$f!Hp$f;'S!Nc;'S;=`#!}<%l?Ah!Nc?Ah?BY!Hp?BY?Mn!Nc?MnO!HpT##QP;=`<%l!Nc!V##^chS!a`!cpOq!G_qr!Eyrs!HUsv!Eyvw!Ncwx!Jvx!P!Ey!P!Q!G_!Q!_!Ey!_!`!G_!`!a!Mm!a#s!Ey#s$f!G_$f;'S!Ey;'S;=`#$i<%l?Ah!Ey?Ah?BY!G_?BY?Mn!Ey?MnO!G_!V#$lP;=`<%l!Ey!V#$zXiS`P!a`!cpOr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&X",tokenizers:[pt,ct,dt,ut,ot,Ot,0,1,2,3,4,5],topRules:{Document:[0,15]},dialects:{noMatch:0,selfClosing:485},tokenPrec:487});function ie(e,t){let l=Object.create(null);for(let a of e.getChildren(re)){let r=a.getChild(Ie),n=a.getChild(k)||a.getChild(ne);r&&(l[t.read(r.from,r.to)]=n?n.type.id==k?t.read(n.from+1,n.to-1):t.read(n.from,n.to):"")}return l}function U(e,t){let l=e.getChild(He);return l?t.read(l.from,l.to):" "}function C(e,t,l){let a;for(let r of l)if(!r.attrs||r.attrs(a||(a=ie(e.node.parent.firstChild,t))))return{parser:r.parser};return null}function ue(e=[],t=[]){let l=[],a=[],r=[],n=[];for(let O of e)(O.tag=="script"?l:O.tag=="style"?a:O.tag=="textarea"?r:n).push(O);let o=t.length?Object.create(null):null;for(let O of t)(o[O.name]||(o[O.name]=[])).push(O);return be((O,p)=>{let h=O.type.id;if(h==Ue)return C(O,p,l);if(h==je)return C(O,p,a);if(h==Je)return C(O,p,r);if(h==se&&n.length){let i=O.node,u=U(i,p),c;for(let d of n)if(d.tag==u&&(!d.attrs||d.attrs(c||(c=ie(i,p))))){let f=i.parent.lastChild;return{parser:d.parser,overlay:[{from:O.to,to:f.type.id==Le?f.from:i.parent.to}]}}}if(o&&h==re){let i=O.node,u;if(u=i.firstChild){let c=o[p.read(u.from,u.to)];if(c)for(let d of c){if(d.tagName&&d.tagName!=U(i.parent,p))continue;let f=i.lastChild;if(f.type.id==k){let P=f.from+1,T=f.lastChild,x=f.to-(T&&T.isError?0:1);if(x>P)return{parser:d.parser,overlay:[{from:P,to:x}]}}else if(f.type.id==ne)return{parser:d.parser,overlay:[{from:f.from,to:f.to}]}}}}return null})}const 
b=["_blank","_self","_top","_parent"],Q=["ascii","utf-8","utf-16","latin1","latin1"],A=["get","post","put","delete"],X=["application/x-www-form-urlencoded","multipart/form-data","text/plain"],m=["true","false"],s={},mt={a:{attrs:{href:null,ping:null,type:null,media:null,target:b,hreflang:null}},abbr:s,address:s,area:{attrs:{alt:null,coords:null,href:null,target:null,ping:null,media:null,hreflang:null,type:null,shape:["default","rect","circle","poly"]}},article:s,aside:s,audio:{attrs:{src:null,mediagroup:null,crossorigin:["anonymous","use-credentials"],preload:["none","metadata","auto"],autoplay:["autoplay"],loop:["loop"],controls:["controls"]}},b:s,base:{attrs:{href:null,target:b}},bdi:s,bdo:s,blockquote:{attrs:{cite:null}},body:s,br:s,button:{attrs:{form:null,formaction:null,name:null,value:null,autofocus:["autofocus"],disabled:["autofocus"],formenctype:X,formmethod:A,formnovalidate:["novalidate"],formtarget:b,type:["submit","reset","button"]}},canvas:{attrs:{width:null,height:null}},caption:s,center:s,cite:s,code:s,col:{attrs:{span:null}},colgroup:{attrs:{span:null}},command:{attrs:{type:["command","checkbox","radio"],label:null,icon:null,radiogroup:null,command:null,title:null,disabled:["disabled"],checked:["checked"]}},data:{attrs:{value:null}},datagrid:{attrs:{disabled:["disabled"],multiple:["multiple"]}},datalist:{attrs:{data:null}},dd:s,del:{attrs:{cite:null,datetime:null}},details:{attrs:{open:["open"]}},dfn:s,div:s,dl:s,dt:s,em:s,embed:{attrs:{src:null,type:null,width:null,height:null}},eventsource:{attrs:{src:null}},fieldset:{attrs:{disabled:["disabled"],form:null,name:null}},figcaption:s,figure:s,footer:s,form:{attrs:{action:null,name:null,"accept-charset":Q,autocomplete:["on","off"],enctype:X,method:A,novalidate:["novalidate"],target:b}},h1:s,h2:s,h3:s,h4:s,h5:s,h6:s,head:{children:["title","base","link","style","meta","script","noscript","command"]},header:s,hgroup:s,hr:s,html:{attrs:{manifest:null}},i:s,iframe:{attrs:{src:null,srcdoc:null,name:null,width:null,height:null,sandbox:["allow-top-navigation","allow-same-origin","allow-forms","allow-scripts"],seamless:["seamless"]}},img:{attrs:{alt:null,src:null,ismap:null,usemap:null,width:null,height:null,crossorigin:["anonymous","use-credentials"]}},input:{attrs:{alt:null,dirname:null,form:null,formaction:null,height:null,list:null,max:null,maxlength:null,min:null,name:null,pattern:null,placeholder:null,size:null,src:null,step:null,value:null,width:null,accept:["audio/*","video/*","image/*"],autocomplete:["on","off"],autofocus:["autofocus"],checked:["checked"],disabled:["disabled"],formenctype:X,formmethod:A,formnovalidate:["novalidate"],formtarget:b,multiple:["multiple"],readonly:["readonly"],required:["required"],type:["hidden","text","search","tel","url","email","password","datetime","date","month","week","time","datetime-local","number","range","color","checkbox","radio","file","submit","image","reset","button"]}},ins:{attrs:{cite:null,datetime:null}},kbd:s,keygen:{attrs:{challenge:null,form:null,name:null,autofocus:["autofocus"],disabled:["disabled"],keytype:["RSA"]}},label:{attrs:{for:null,form:null}},legend:s,li:{attrs:{value:null}},link:{attrs:{href:null,type:null,hreflang:null,media:null,sizes:["all","16x16","16x16 32x32","16x16 32x32 
64x64"]}},map:{attrs:{name:null}},mark:s,menu:{attrs:{label:null,type:["list","context","toolbar"]}},meta:{attrs:{content:null,charset:Q,name:["viewport","application-name","author","description","generator","keywords"],"http-equiv":["content-language","content-type","default-style","refresh"]}},meter:{attrs:{value:null,min:null,low:null,high:null,max:null,optimum:null}},nav:s,noscript:s,object:{attrs:{data:null,type:null,name:null,usemap:null,form:null,width:null,height:null,typemustmatch:["typemustmatch"]}},ol:{attrs:{reversed:["reversed"],start:null,type:["1","a","A","i","I"]},children:["li","script","template","ul","ol"]},optgroup:{attrs:{disabled:["disabled"],label:null}},option:{attrs:{disabled:["disabled"],label:null,selected:["selected"],value:null}},output:{attrs:{for:null,form:null,name:null}},p:s,param:{attrs:{name:null,value:null}},pre:s,progress:{attrs:{value:null,max:null}},q:{attrs:{cite:null}},rp:s,rt:s,ruby:s,samp:s,script:{attrs:{type:["text/javascript"],src:null,async:["async"],defer:["defer"],charset:Q}},section:s,select:{attrs:{form:null,name:null,size:null,autofocus:["autofocus"],disabled:["disabled"],multiple:["multiple"]}},slot:{attrs:{name:null}},small:s,source:{attrs:{src:null,type:null,media:null}},span:s,strong:s,style:{attrs:{type:["text/css"],media:null,scoped:null}},sub:s,summary:s,sup:s,table:s,tbody:s,td:{attrs:{colspan:null,rowspan:null,headers:null}},template:s,textarea:{attrs:{dirname:null,form:null,maxlength:null,name:null,placeholder:null,rows:null,cols:null,autofocus:["autofocus"],disabled:["disabled"],readonly:["readonly"],required:["required"],wrap:["soft","hard"]}},tfoot:s,th:{attrs:{colspan:null,rowspan:null,headers:null,scope:["row","col","rowgroup","colgroup"]}},thead:s,time:{attrs:{datetime:null}},title:s,tr:s,track:{attrs:{src:null,label:null,default:null,kind:["subtitles","captions","descriptions","chapters","metadata"],srclang:null}},ul:{children:["li","script","template","ul","ol"]},var:s,video:{attrs:{src:null,poster:null,width:null,height:null,crossorigin:["anonymous","use-credentials"],preload:["auto","metadata","none"],autoplay:["autoplay"],mediagroup:["movie"],muted:["muted"],controls:["controls"]}},wbr:s},pe={accesskey:null,class:null,contenteditable:m,contextmenu:null,dir:["ltr","rtl","auto"],draggable:["true","false","auto"],dropzone:["copy","move","link","string:","file:"],hidden:["hidden"],id:null,inert:["inert"],itemid:null,itemprop:null,itemref:null,itemscope:["itemscope"],itemtype:null,lang:["ar","bn","de","en-GB","en-US","es","fr","hi","id","ja","pa","pt","ru","tr","zh"],spellcheck:m,autocorrect:m,autocapitalize:m,style:null,tabindex:null,title:null,translate:["yes","no"],rel:["stylesheet","alternate","author","bookmark","help","license","next","nofollow","noreferrer","prefetch","prev","search","tag"],role:"alert application article banner button cell checkbox complementary contentinfo dialog document feed figure form grid gridcell heading img list listbox listitem main navigation region row rowgroup search switch tab table tabpanel textbox timer".split(" 
"),"aria-activedescendant":null,"aria-atomic":m,"aria-autocomplete":["inline","list","both","none"],"aria-busy":m,"aria-checked":["true","false","mixed","undefined"],"aria-controls":null,"aria-describedby":null,"aria-disabled":m,"aria-dropeffect":null,"aria-expanded":["true","false","undefined"],"aria-flowto":null,"aria-grabbed":["true","false","undefined"],"aria-haspopup":m,"aria-hidden":m,"aria-invalid":["true","false","grammar","spelling"],"aria-label":null,"aria-labelledby":null,"aria-level":null,"aria-live":["off","polite","assertive"],"aria-multiline":m,"aria-multiselectable":m,"aria-owns":null,"aria-posinset":null,"aria-pressed":["true","false","mixed","undefined"],"aria-readonly":m,"aria-relevant":null,"aria-required":m,"aria-selected":["true","false","undefined"],"aria-setsize":null,"aria-sort":["ascending","descending","none","other"],"aria-valuemax":null,"aria-valuemin":null,"aria-valuenow":null,"aria-valuetext":null},ce="beforeunload copy cut dragstart dragover dragleave dragenter dragend drag paste focus blur change click load mousedown mouseenter mouseleave mouseup keydown keyup resize scroll unload".split(" ").map(e=>"on"+e);for(let e of ce)pe[e]=null;class V{constructor(t,l){this.tags=Object.assign(Object.assign({},mt),t),this.globalAttrs=Object.assign(Object.assign({},pe),l),this.allTags=Object.keys(this.tags),this.globalAttrNames=Object.keys(this.globalAttrs)}}V.default=new V;function g(e,t,l=e.length){if(!t)return"";let a=t.firstChild,r=a&&a.getChild("TagName");return r?e.sliceString(r.from,Math.min(r.to,l)):""}function $(e,t=!1){for(let l=e.parent;l;l=l.parent)if(l.name=="Element")if(t)t=!1;else return l;return null}function de(e,t,l){let a=l.tags[g(e,$(t,!0))];return a?.children||l.allTags}function E(e,t){let l=[];for(let a=t;a=$(a);){let r=g(e,a);if(r&&a.lastChild.name=="CloseTag")break;r&&l.indexOf(r)<0&&(t.name=="EndTag"||t.from>=a.firstChild.to)&&l.push(r)}return l}const fe=/^[:\-\.\w\u00b7-\uffff]*$/;function j(e,t,l,a,r){let n=/\s*>/.test(e.sliceDoc(r,r+5))?"":">";return{from:a,to:r,options:de(e.doc,l,t).map(o=>({label:o,type:"type"})).concat(E(e.doc,l).map((o,O)=>({label:"/"+o,apply:"/"+o+n,type:"type",boost:99-O}))),validFor:/^\/?[:\-\.\w\u00b7-\uffff]*$/}}function J(e,t,l,a){let r=/\s*>/.test(e.sliceDoc(a,a+5))?"":">";return{from:l,to:a,options:E(e.doc,t).map((n,o)=>({label:n,apply:n+r,type:"type",boost:99-o})),validFor:fe}}function St(e,t,l,a){let r=[],n=0;for(let o of de(e.doc,l,t))r.push({label:"<"+o,type:"type"});for(let o of E(e.doc,l))r.push({label:""+o+">",type:"type",boost:99-n++});return{from:a,to:a,options:r,validFor:/^<\/?[:\-\.\w\u00b7-\uffff]*$/}}function gt(e,t,l,a,r){let n=$(l),o=n?t.tags[g(e.doc,n)]:null,O=o&&o.attrs?Object.keys(o.attrs):[],p=o&&o.globalAttrs===!1?O:O.length?O.concat(t.globalAttrNames):t.globalAttrNames;return{from:a,to:r,options:p.map(h=>({label:h,type:"property"})),validFor:fe}}function Pt(e,t,l,a,r){var n;let o=(n=l.parent)===null||n===void 0?void 0:n.getChild("AttributeName"),O=[],p;if(o){let h=e.sliceDoc(o.from,o.to),i=t.globalAttrs[h];if(!i){let u=$(l),c=u?t.tags[g(e.doc,u)]:null;i=c?.attrs&&c.attrs[h]}if(i){let u=e.sliceDoc(a,r).toLowerCase(),c='"',d='"';/^['"]/.test(u)?(p=u[0]=='"'?/^[^"]*$/:/^[^']*$/,c="",d=e.sliceDoc(r,r+1)==u[0]?"":u[0],u=u.slice(1),a++):p=/^[^\s<>='"]*$/;for(let f of i)O.push({label:f,apply:c+f+d,type:"constant"})}}return{from:a,to:r,options:O,validFor:p}}function he(e,t){let{state:l,pos:a}=t,r=z(l).resolveInner(a),n=r.resolve(a,-1);for(let o=a,O;r==n&&(O=n.childBefore(o));){let 
p=O.lastChild;if(!p||!p.type.isError||p.fromhe(a,r)}const me=[{tag:"script",attrs:e=>e.type=="text/typescript"||e.lang=="ts",parser:we.parser},{tag:"script",attrs:e=>e.type=="text/babel"||e.type=="text/jsx",parser:Ce.parser},{tag:"script",attrs:e=>e.type=="text/typescript-jsx",parser:Qe.parser},{tag:"script",attrs(e){return!e.type||/^(?:text|application)\/(?:x-)?(?:java|ecma)script$|^module$|^$/i.test(e.type)},parser:K.parser},{tag:"style",attrs(e){return(!e.lang||e.lang=="css")&&(!e.type||/^(text\/)?(x-)?(stylesheet|css)$/i.test(e.type))},parser:F.parser}],Se=[{name:"style",parser:F.parser.configure({top:"Styles"})}].concat(ce.map(e=>({name:e,parser:K.parser}))),_=Ve.define({name:"html",parser:ht.configure({props:[xe.add({Element(e){let t=/^(\s*)(<\/)?/.exec(e.textAfter);return e.node.to<=e.pos+t[0].length?e.continue():e.lineIndent(e.node.from)+(t[2]?0:e.unit)},"OpenTag CloseTag SelfClosingTag"(e){return e.column(e.node.from)+e.unit},Document(e){if(e.pos+/\s*/.exec(e.textAfter)[0].lengthe.getChild("TagName")})],wrap:ue(me,Se)}),languageData:{commentTokens:{block:{open:""}},indentOnInput:/^\s*<\/\w+\W$/,wordChars:"-._"}});function Yt(e={}){let t="",l;e.matchClosingTags===!1&&(t="noMatch"),e.selfClosingTags===!0&&(t=(t?t+" ":"")+"selfClosing"),(e.nestedLanguages&&e.nestedLanguages.length||e.nestedAttributes&&e.nestedAttributes.length)&&(l=ue((e.nestedLanguages||[]).concat(me),(e.nestedAttributes||[]).concat(Se)));let a=l||t?_.configure({dialect:t,wrap:l}):_;return new ve(a,[_.data.of({autocomplete:Tt(e)}),e.autoCloseTags!==!1?bt:[],Ae().support,$e().support])}const L=new Set("area base br col command embed frame hr img input keygen link meta param source track wbr menuitem".split(" ")),bt=qe.inputHandler.of((e,t,l,a)=>{if(e.composing||e.state.readOnly||t!=l||a!=">"&&a!="/"||!_.isActiveAt(e.state,t,-1))return!1;let{state:r}=e,n=r.changeByRange(o=>{var O,p,h;let{head:i}=o,u=z(r).resolveInner(i,-1),c;if((u.name=="TagName"||u.name=="StartTag")&&(u=u.parent),a==">"&&u.name=="OpenTag"){if(((p=(O=u.parent)===null||O===void 0?void 0:O.lastChild)===null||p===void 0?void 0:p.name)!="CloseTag"&&(c=g(r.doc,u.parent,i))&&!L.has(c)){let d=e.state.doc.sliceString(i,i+1)===">",f=`${d?"":">"}${c}>`;return{range:G.cursor(i+1),changes:{from:i+(d?1:0),insert:f}}}}else if(a=="/"&&u.name=="OpenTag"){let d=u.parent,f=d?.parent;if(d.from==i-1&&((h=f.lastChild)===null||h===void 0?void 0:h.name)!="CloseTag"&&(c=g(r.doc,f,i))&&!L.has(c)){let P=e.state.doc.sliceString(i,i+1)===">",T=`/${c}${P?"":">"}`,x=i+T.length+(P?1:0);return{range:G.cursor(x),changes:{from:i,insert:T}}}}return{range:o}});return n.changes.empty?!1:(e.dispatch(n,{userEvent:"input.type",scrollIntoView:!0}),!0)});export{bt as autoCloseTags,Yt as html,kt as htmlCompletionSource,Tt as htmlCompletionSourceWith,_ as htmlLanguage};
-//# sourceMappingURL=index-4ad7d092.js.map
diff --git a/spaces/wong26/faster-whisper-webui/app-local.py b/spaces/wong26/faster-whisper-webui/app-local.py
deleted file mode 100644
index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000
--- a/spaces/wong26/faster-whisper-webui/app-local.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Run the app with no audio file restrictions
-from app import create_ui
-from src.config import ApplicationConfig
-
-create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1))
\ No newline at end of file
diff --git a/spaces/wuhuik/bingo/src/pages/api/kblob.ts b/spaces/wuhuik/bingo/src/pages/api/kblob.ts
deleted file mode 100644
index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000
--- a/spaces/wuhuik/bingo/src/pages/api/kblob.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import FormData from 'form-data'
-import { fetch } from '@/lib/isomorphic'
-import { KBlobRequest } from '@/lib/bots/bing/types'
-
-const API_DOMAIN = 'https://bing.vcanbb.top'
-
-export const config = {
- api: {
- bodyParser: {
- sizeLimit: '10mb' // Set desired value here
- }
- }
-}
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest
-
- const formData = new FormData()
- formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
- if (imageBase64) {
- formData.append('imageBase64', imageBase64)
- }
-
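-    // forward the multipart form to the upstream kblob endpoint with browser-like headers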
- const response = await fetch(`${API_DOMAIN}/images/kblob`,
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": `${API_DOMAIN}/web/index.html`,
- "Referrer-Policy": "origin-when-cross-origin",
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- ...formData.getHeaders()
- }
- }
- ).then(res => res.text())
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
-  res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: 'Please switch your IP or proxy and try again' } }))
- } catch (e) {
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/xfambi/zapi/README.md b/spaces/xfambi/zapi/README.md
deleted file mode 100644
index 3d491011f1b5d6e145fe5feca9aa77974ef00202..0000000000000000000000000000000000000000
--- a/spaces/xfambi/zapi/README.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-title: LabelStudio
-emoji: 🟧
-colorFrom: yellow
-colorTo: purple
-sdk: docker
-tags:
-- label-studio
-fullwidth: true
-license: wtfpl
-app_port: 8080
-duplicated_from: LabelStudio/LabelStudio
----
-
-
-[Website](https://hubs.ly/Q01CNgsd0) • [Docs](https://hubs.ly/Q01CN9Yq0) • [12K+ GitHub ⭐️!](https://hubs.ly/Q01CNbPQ0) • [Slack Community](https://hubs.ly/Q01CNb9H0)
-
-## What is Label Studio?
-
-Label Studio is an open source data labeling platform. It lets you label audio,
-text, images, videos, and time series data with a simple, straightforward, and
-highly-configurable user interface. Label Studio can prepare new data or
-improve existing training data to get more accurate ML models.
-
-
-## Label Studio in Hugging Face Spaces
-
-The Label Studio community is thrilled to offer Label Studio as a Hugging Face
-Spaces application. You can try the data-annotation interface, connect popular
-machine learning models, and share the application with collaborators. You can
-start immediately by creating an account or replicate the space and work in
-your own environment.
-
-## Creating a User Account and Logging In
-
-Begin by creating a new account in the Label Studio space, then log in with your
-credentials.
-
-**By default, these spaces permit anyone to create a new login
-account, allowing them to view and modify project configuration, data sets, and
-annotations. Without any modifications, treat this space like a demo environment.**
-
-## Creating a Labeling Project
-
-After logging in, Label Studio will present you with a project view. Here you
-can create a new project with prompts to upload data and set up a custom
-configuration interface.
-
-**Note that in the default configuration, storage is local and temporary. Any
-projects, annotations, and configurations will be lost if the space is restarted.**
-
-## Next Steps and Additional Resources
-
-To help with getting started, the Label Studio community curated a list of
-resources including tutorials and documentation.
-
-- 🚀 [Zero to One with Label Studio Tutorial](https://labelstud.io/blog/introduction-to-label-studio-in-hugging-face-spaces/)
-- 📈 [Try Label Studio Enterprise](https://hubs.ly/Q01CMLll0)
-- 🤗 [Tutorial: Using Label Studio with Hugging Face Datasets Hub](https://danielvanstrien.xyz/huggingface/huggingface-datasets/annotation/full%20stack%20deep%20learning%20notes/2022/09/07/label-studio-annotations-hub.html)
-- 💡 [Label Studio Docs](https://hubs.ly/Q01CN9Yq0)
-
-
-
-
-### Making your Label Studio Hugging Face Space production-ready
-
-By default this space allows for the unrestricted creation of new accounts
-with full access to all projects and data. This is great for trying out
-Label Studio and collaborating on projects, but you may want to restrict
-access to your space to only authorized users. Add the following environment
-variable to your space's Dockerfile to disable public account creation for
-this space.
-
- ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true
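-
-For reference, a minimal sketch (assuming the official `heartexlabs/label-studio` base image; adapt it to your space's actual Dockerfile):
-
-    FROM heartexlabs/label-studio:latest
-    ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true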
-
-Set secrets in your space to create an initial user, and log in with your
-provided username and password. Do not set these in your Dockerfile, as they
-are globally visible on a public space.
-
- LABEL_STUDIO_USERNAME
- LABEL_STUDIO_PASSWORD
-
-You will need to provide new users with an invitation link to join the space,
-which can be found in the Organizations interface of Label Studio.
-
-By default this space stores all project configuration and data annotations
-in local storage with Sqlite. If the space is reset, all configuration and
-annotation data in the space will be lost. You can enable configuration
-persistence by connecting an external Postgres database to your space,
-guaranteeing that all project and annotation settings are preserved.
-
-Set the following secret variables to match your own hosted instance of
-Postgres. We strongly recommend setting these as secrets to prevent leaking
-information about your database service to the public in your space's
-definition.
-
- DJANGO_DB=default
- POSTGRE_NAME=
- POSTGRE_PORT=
- POSTGRE_USER=
- POSTGRE_PASSWORD=
- POSTGRE_HOST=
-
-Add the following environment variable to remove the warning about ephemeral
-storage.
-
- ENV STORAGE_PERSISTENCE=1
-
-Note that you will need to connect cloud storage to host data items that you
-want to annotate, as local storage will not be preserved across a space reset.
-
-By default the only data storage enabled for this space is local. In the case
-of a space reset, all data will be lost. To enable permanent storage, you
-must enable a cloud storage connector. We also strongly recommend enabling
-configuration persistence to preserve project data, annotations, and user
-settings. Choose the appropriate cloud connector and configure the secrets
-for it.
-
-#### Amazon S3
- STORAGE_TYPE=s3
- STORAGE_AWS_ACCESS_KEY_ID=""
- STORAGE_AWS_SECRET_ACCESS_KEY=""
- STORAGE_AWS_BUCKET_NAME=""
- STORAGE_AWS_REGION_NAME=""
- STORAGE_AWS_FOLDER=""
-
-#### Google Cloud Storage
-
- STORAGE_TYPE=gcs
- STORAGE_GCS_BUCKET_NAME=""
- STORAGE_GCS_PROJECT_ID=""
- STORAGE_GCS_FOLDER=""
- GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json"
-
-#### Azure Blob Storage
-
- STORAGE_TYPE=azure
- STORAGE_AZURE_ACCOUNT_NAME=""
- STORAGE_AZURE_ACCOUNT_KEY=""
- STORAGE_AZURE_CONTAINER_NAME=""
- STORAGE_AZURE_FOLDER=""
-
-
-## Questions? Concerns? Want to get involved?
-
-Email the community team at [community@labelstud.io](mailto:community@labelstud.io)
diff --git a/spaces/xuxw98/TAPA/howto/finetune_adapter.md b/spaces/xuxw98/TAPA/howto/finetune_adapter.md
deleted file mode 100644
index 8dee36716512bf2c92179b345f14c2f2082a5713..0000000000000000000000000000000000000000
--- a/spaces/xuxw98/TAPA/howto/finetune_adapter.md
+++ /dev/null
@@ -1,109 +0,0 @@
-# Finetuning with Adapter
-
-[LLaMA-Adapter](https://arxiv.org/abs/2303.16199) is a form of prefix-tuning that prepends a learnable adaption-prompt to the inputs of the attention blocks in LLaMA. In total, there are only 1.2M parameters to update during finetuning, which significantly reduces the memory footprint and speeds up training.
-
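-As a rough mental model, the mechanism can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the class and parameter names are invented, and it simplifies the real method (which injects the prompt into the attention keys and values with a gated softmax). It does show the two essential ingredients, though: a small learnable prompt and a zero-initialized gate so training starts from the unmodified pretrained model.
-
-```python
-import torch
-import torch.nn as nn
-
-class AdaptionPromptAttention(nn.Module):
-    """Illustrative sketch of LLaMA-Adapter-style prefix tuning (names invented)."""
-
-    def __init__(self, attn: nn.Module, prompt_length: int, dim: int):
-        super().__init__()
-        self.attn = attn  # frozen pretrained attention block, (B, T, C) -> (B, T, C)
-        self.prompt = nn.Parameter(torch.zeros(prompt_length, dim))  # learnable adaption prompt
-        self.gate = nn.Parameter(torch.zeros(1))  # zero-init gate: model starts unchanged
-
-    def forward(self, x: torch.Tensor) -> torch.Tensor:
-        prefix = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)  # (B, P, C)
-        base = self.attn(x)                                  # ordinary self-attention
-        prompted = self.attn(torch.cat([prefix, x], dim=1))  # attend over prompt + tokens
-        prompted = prompted[:, prefix.size(1):]              # drop the prompt positions
-        return base + torch.tanh(self.gate) * prompted       # gated adapter contribution
-```
-
-Since only `prompt` and `gate` require gradients, the trainable parameter count stays tiny next to the frozen backbone, which is where the ~1.2M figure above comes from.
-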
-We are able to demonstrate instruction-finetuning Lit-LLaMA 7B on the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset on a **single RTX 3090 (24GB) GPU**. If using 8 GPUs, finetuning can be completed in under 1 hour.
-
-If you are new to LLaMA-Adapter and are interested to learn more about how it works before proceeding with the finetuning guide below, you might find our article [Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters](https://lightning.ai/pages/community/article/understanding-llama-adapters/) helpful.
-
-## LLaMA-Adapter v2
-
-The LLaMA-Adapter authors developed a newer adapter method called LLaMA-Adapter v2, which is related to this LLaMA-Adapter method but includes more trainable parameters. LLaMA-Adapter v2 is also available via Lit-LLaMA; you can read more about it in [the related how-to doc here](./finetune_adapter_v2.md).
-
-## Preparation
-
-The steps here only need to be done once:
-
-1. Follow the instructions in the [README](README.md) to install the dependencies.
-2. Download and convert the weights and save them in the `./checkpoints` folder as described [here](download_weights.md).
-3. If you want to utilize more than one GPU, you should `pip install deepspeed`.
-4. Download the data and generate the Alpaca instruction tuning dataset:
-
- ```bash
- python scripts/prepare_alpaca.py
- ```
-
- or [prepare your own dataset](#tune-on-your-dataset).
-
-See also: [Finetuning on an unstructured dataset](unstructured_dataset.md)
-
-## Running the finetuning
-
-```bash
-python finetune/adapter.py
-```
-
-The finetuning requires at least one GPU with ~24 GB memory (RTX 3090).
-You can speed up training by setting the `devices` variable in the script to utilize more GPUs if available.
-Depending on the available GPU memory, you can also tune the `micro_batch_size` parameter to utilize the GPU efficiently.
-
-For example, the following settings will let you finetune the model in under 1 hour using DeepSpeed Zero-2:
-
-```python
-devices = 8
-micro_batch_size = 8
-```
-
-This script will save checkpoints periodically to the folder `out/`.
-
-> **Note**
-> All scripts support argument [customization](customize_paths.md)
-
-## Test the model
-
-You can test the finetuned model with your own instructions by running:
-
-```bash
-python generate/adapter.py \
- --prompt "Recommend a movie to watch on the weekend." \
- --quantize llm.int8
-```
-Output:
-```
-A good movie to watch on the weekend would be The Lion King, since it's a classic family film that everyone can enjoy...
-```
-If your GPU supports `bfloat16`, the script will automatically use it. Together with `--quantize llm.int8`, this brings the memory consumption down to ~8 GB.
-
-## Tune on your dataset
-
-With only a few modifications, you can prepare and train on your own instruction dataset.
-
-1. Create a json file in which each row holds one instruction-response pair.
-   A row has an entry for 'instruction', 'input', and 'output', where 'input' is optional and can be
- the empty string if the instruction doesn't require a context. Below is an example json file:
-
- ```
- [
- {
- "instruction": "Arrange the given numbers in ascending order.",
- "input": "2, 4, 0, 8, 3",
- "output": "0, 2, 3, 4, 8"
- },
- ...
- ]
- ```
-
-2. Make a copy of `scripts/prepare_alpaca.py` and name it what you want:
-
- ```bash
- cp scripts/prepare_alpaca.py scripts/prepare_mydata.py
- ```
-
-3. Modify `scripts/prepare_mydata.py` to read the json data file.
-4. Run the script to generate the preprocessed, tokenized train-val split:
-
- ```bash
- python scripts/prepare_mydata.py --destination_path data/mydata/
- ```
-
-5. Run `finetune/adapter.py` by passing in the location of your data (and optionally other parameters):
-
- ```bash
- python finetune/adapter.py --data_dir data/mydata/ --out_dir out/myexperiment
- ```
-
-
-## Troubleshooting
-
-If you run into a CUDA error "Expected is_sm80 to be true, but got false", uncomment the line
-`torch.backends.cuda.enable_flash_sdp(False)` in the script below (see https://github.com/Lightning-AI/lit-llama/issues/101).
diff --git a/spaces/xyha/sd/app.py b/spaces/xyha/sd/app.py
deleted file mode 100644
index 22ded759008bce4aade24a27295dbdf6971876d4..0000000000000000000000000000000000000000
--- a/spaces/xyha/sd/app.py
+++ /dev/null
@@ -1,372 +0,0 @@
-import gradio as gr
-#import torch
-#from torch import autocast
-#from diffusers import StableDiffusionPipeline
-from datasets import load_dataset
-from PIL import Image
-#from io import BytesIO
-#import base64
-import re
-import os
-import requests
-
-
-from share_btn import community_icon_html, loading_icon_html, share_js
-
-model_id = "CompVis/stable-diffusion-v1-4"
-device = "cuda"
-
-# If you are running this code locally, you need to either run `huggingface-cli login` or paste your User Access Token from https://huggingface.co/settings/tokens into the use_auth_token field below.
-#pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=True, revision="fp16", torch_dtype=torch.float16)
-#pipe = pipe.to(device)
-#torch.backends.cudnn.benchmark = True
-
-#When running locally, you won`t have access to this, so you can remove this part
-word_list_dataset = load_dataset("stabilityai/word-list", data_files="list.txt", use_auth_token=True)
-word_list = word_list_dataset["train"]['text']
-
-is_gpu_busy = False
-def infer(prompt):
- global is_gpu_busy
- samples = 4
- steps = 50
- scale = 7.5
- #When running locally you can also remove this filter
- for filter in word_list:
- if re.search(rf"\b{filter}\b", prompt):
- raise gr.Error("Unsafe content found. Please try again with different prompts.")
-
- #generator = torch.Generator(device=device).manual_seed(seed)
- #print("Is GPU busy? ", is_gpu_busy)
- images = []
- #if(not is_gpu_busy):
- # is_gpu_busy = True
- # images_list = pipe(
- # [prompt] * samples,
- # num_inference_steps=steps,
- # guidance_scale=scale,
- #generator=generator,
- # )
- # is_gpu_busy = False
- # safe_image = Image.open(r"unsafe.png")
- # for i, image in enumerate(images_list["sample"]):
- # if(images_list["nsfw_content_detected"][i]):
- # images.append(safe_image)
- # else:
- # images.append(image)
- #else:
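-    # generation is delegated to a remote JAX backend; the response carries base64-encoded images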
- url = os.getenv('JAX_BACKEND_URL')
- payload = {'prompt': prompt}
- images_request = requests.post(url, json = payload)
- for image in images_request.json()["images"]:
- image_b64 = (f"data:image/jpeg;base64,{image}")
- images.append(image_b64)
-
- return images
-
-
-css = """
- .gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
- }
- .gr-button {
- color: white;
- border-color: black;
- background: black;
- }
- input[type='range'] {
- accent-color: black;
- }
- .dark input[type='range'] {
- accent-color: #dfdfdf;
- }
- .container {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
- }
- #gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
- }
- #gallery>div>.h-full {
- min-height: 20rem;
- }
- .details:hover {
- text-decoration: underline;
- }
- .gr-button {
- white-space: nowrap;
- }
- .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
- }
- #advanced-btn {
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 12px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
- .footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
- .acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
- }
- #container-advanced-btns{
- display: flex;
- flex-wrap: wrap;
- justify-content: space-between;
- align-items: center;
- }
- .animate-spin {
- animation: spin 1s linear infinite;
- }
- @keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
- }
- #share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
- }
- #share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
- }
- #share-btn * {
- all: unset;
- }
- .gr-form{
- flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0;
- }
- #prompt-container{
- gap: 0;
- }
-"""
-
-block = gr.Blocks(css=css)
-
-examples = [
- [
- 'A high tech solarpunk utopia in the Amazon rainforest',
-# 4,
-# 45,
-# 7.5,
-# 1024,
- ],
- [
- 'A pikachu fine dining with a view to the Eiffel Tower',
-# 4,
-# 45,
-# 7,
-# 1024,
- ],
- [
- 'A mecha robot in a favela in expressionist style',
-# 4,
-# 45,
-# 7,
-# 1024,
- ],
- [
- 'an insect robot preparing a delicious meal',
-# 4,
-# 45,
-# 7,
-# 1024,
- ],
- [
- "A small cabin on top of a snowy mountain in the style of Disney, artstation",
-# 4,
-# 45,
-# 7,
-# 1024,
- ],
-]
-
-
-with block:
- gr.HTML(
- """
-              <!-- page header markup (layout divs and SVG logo) omitted -->
- Stable Diffusion Demo
-
-
-
- Stable Diffusion is a state of the art text-to-image model that generates
- images from text. For faster generation and API
- access you can try
- DreamStudio Beta
-
-
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True):
- text = gr.Textbox(
- label="Enter your prompt",
- show_label=False,
- max_lines=1,
- placeholder="Enter your prompt",
- elem_id="prompt-text-input",
- ).style(
- border=(True, False, True, True),
- rounded=(True, False, False, True),
- container=False,
- )
- btn = gr.Button("Generate image").style(
- margin=False,
- rounded=(False, True, True, False),
- full_width=False,
- )
-
- gallery = gr.Gallery(
- label="Generated images", show_label=False, elem_id="gallery"
- ).style(grid=[2], height="auto")
-
- with gr.Group(elem_id="container-advanced-btns"):
- advanced_button = gr.Button("Advanced options", elem_id="advanced-btn")
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html)
- loading_icon = gr.HTML(loading_icon_html)
- share_button = gr.Button("Share to community", elem_id="share-btn")
-
- with gr.Row(elem_id="advanced-options"):
- gr.Markdown("Advanced settings are temporarily unavailable")
- samples = gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1)
- steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=45, step=1)
- scale = gr.Slider(
- label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1
- )
- seed = gr.Slider(
- label="Seed",
- minimum=0,
- maximum=2147483647,
- step=1,
- randomize=True,
- )
-
- ex = gr.Examples(examples=examples, fn=infer, inputs=text, outputs=[gallery, community_icon, loading_icon, share_button], cache_examples=False)
- ex.dataset.headers = [""]
-
- text.submit(infer, inputs=text, outputs=[gallery], postprocess=False)
- btn.click(infer, inputs=text, outputs=[gallery], postprocess=False)
-
- advanced_button.click(
- None,
- [],
- text,
- _js="""
- () => {
- const options = document.querySelector("body > gradio-app").querySelector("#advanced-options");
- options.style.display = ["none", ""].includes(options.style.display) ? "flex" : "none";
- }""",
- )
- share_button.click(
- None,
- [],
- [],
- _js=share_js,
- )
- gr.HTML(
- """
-       <div class="acknowledgments">
-           <h4>LICENSE</h4>
-           <p>The model is licensed with a CreativeML Open RAIL-M license. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces any harm to a person, disseminates any personal information that would be meant for harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please read the license.</p>
-           <h4>Biases and content acknowledgment</h4>
-           <p>Despite how impressive being able to turn text into image is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the LAION-5B dataset, which scraped non-curated image-text pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the model card.</p>
-       </div>
-
- """
- )
-
-block.queue(concurrency_count=40, max_size=20).launch(max_threads=150)
\ No newline at end of file
diff --git a/spaces/yaelvinker/CLIPasso/U2Net_/model/u2net_refactor.py b/spaces/yaelvinker/CLIPasso/U2Net_/model/u2net_refactor.py
deleted file mode 100644
index e668de2c2bc67cbef280eaa5f789c762c4745fa4..0000000000000000000000000000000000000000
--- a/spaces/yaelvinker/CLIPasso/U2Net_/model/u2net_refactor.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import torch
-import torch.nn as nn
-
-import math
-
-__all__ = ['U2NET_full', 'U2NET_lite']
-
-
-def _upsample_like(x, size):
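-    # bilinearly resize x to `size` so feature maps from different depths can be fused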
- return nn.Upsample(size=size, mode='bilinear', align_corners=False)(x)
-
-
-def _size_map(x, height):
- # {height: size} for Upsample
- size = list(x.shape[-2:])
- sizes = {}
- for h in range(1, height):
- sizes[h] = size
- size = [math.ceil(w / 2) for w in size]
- return sizes
-
-
-class REBNCONV(nn.Module):
- def __init__(self, in_ch=3, out_ch=3, dilate=1):
- super(REBNCONV, self).__init__()
-
- self.conv_s1 = nn.Conv2d(in_ch, out_ch, 3, padding=1 * dilate, dilation=1 * dilate)
- self.bn_s1 = nn.BatchNorm2d(out_ch)
- self.relu_s1 = nn.ReLU(inplace=True)
-
- def forward(self, x):
- return self.relu_s1(self.bn_s1(self.conv_s1(x)))
-
-
-class RSU(nn.Module):
- def __init__(self, name, height, in_ch, mid_ch, out_ch, dilated=False):
- super(RSU, self).__init__()
- self.name = name
- self.height = height
- self.dilated = dilated
- self._make_layers(height, in_ch, mid_ch, out_ch, dilated)
-
- def forward(self, x):
- sizes = _size_map(x, self.height)
- x = self.rebnconvin(x)
-
- # U-Net like symmetric encoder-decoder structure
- def unet(x, height=1):
- if height < self.height:
- x1 = getattr(self, f'rebnconv{height}')(x)
- if not self.dilated and height < self.height - 1:
- x2 = unet(getattr(self, 'downsample')(x1), height + 1)
- else:
- x2 = unet(x1, height + 1)
-
- x = getattr(self, f'rebnconv{height}d')(torch.cat((x2, x1), 1))
- return _upsample_like(x, sizes[height - 1]) if not self.dilated and height > 1 else x
- else:
- return getattr(self, f'rebnconv{height}')(x)
-
- return x + unet(x)
-
- def _make_layers(self, height, in_ch, mid_ch, out_ch, dilated=False):
- self.add_module('rebnconvin', REBNCONV(in_ch, out_ch))
- self.add_module('downsample', nn.MaxPool2d(2, stride=2, ceil_mode=True))
-
- self.add_module(f'rebnconv1', REBNCONV(out_ch, mid_ch))
- self.add_module(f'rebnconv1d', REBNCONV(mid_ch * 2, out_ch))
-
- for i in range(2, height):
- dilate = 1 if not dilated else 2 ** (i - 1)
- self.add_module(f'rebnconv{i}', REBNCONV(mid_ch, mid_ch, dilate=dilate))
- self.add_module(f'rebnconv{i}d', REBNCONV(mid_ch * 2, mid_ch, dilate=dilate))
-
- dilate = 2 if not dilated else 2 ** (height - 1)
- self.add_module(f'rebnconv{height}', REBNCONV(mid_ch, mid_ch, dilate=dilate))
-
-
-class U2NET(nn.Module):
- def __init__(self, cfgs, out_ch):
- super(U2NET, self).__init__()
- self.out_ch = out_ch
- self._make_layers(cfgs)
-
- def forward(self, x):
- sizes = _size_map(x, self.height)
- maps = [] # storage for maps
-
- # side saliency map
- def unet(x, height=1):
- if height < 6:
- x1 = getattr(self, f'stage{height}')(x)
- x2 = unet(getattr(self, 'downsample')(x1), height + 1)
- x = getattr(self, f'stage{height}d')(torch.cat((x2, x1), 1))
- side(x, height)
- return _upsample_like(x, sizes[height - 1]) if height > 1 else x
- else:
- x = getattr(self, f'stage{height}')(x)
- side(x, height)
- return _upsample_like(x, sizes[height - 1])
-
- def side(x, h):
- # side output saliency map (before sigmoid)
- x = getattr(self, f'side{h}')(x)
- x = _upsample_like(x, sizes[1])
- maps.append(x)
-
- def fuse():
- # fuse saliency probability maps
- maps.reverse()
- x = torch.cat(maps, 1)
- x = getattr(self, 'outconv')(x)
- maps.insert(0, x)
- return [torch.sigmoid(x) for x in maps]
-
- unet(x)
- maps = fuse()
- return maps
-
- def _make_layers(self, cfgs):
- self.height = int((len(cfgs) + 1) / 2)
- self.add_module('downsample', nn.MaxPool2d(2, stride=2, ceil_mode=True))
- for k, v in cfgs.items():
- # build rsu block
- self.add_module(k, RSU(v[0], *v[1]))
- if v[2] > 0:
- # build side layer
- self.add_module(f'side{v[0][-1]}', nn.Conv2d(v[2], self.out_ch, 3, padding=1))
- # build fuse layer
- self.add_module('outconv', nn.Conv2d(int(self.height * self.out_ch), self.out_ch, 1))
-
-
-def U2NET_full():
- full = {
- # cfgs for building RSUs and sides
- # {stage : [name, (height(L), in_ch, mid_ch, out_ch, dilated), side]}
- 'stage1': ['En_1', (7, 3, 32, 64), -1],
- 'stage2': ['En_2', (6, 64, 32, 128), -1],
- 'stage3': ['En_3', (5, 128, 64, 256), -1],
- 'stage4': ['En_4', (4, 256, 128, 512), -1],
- 'stage5': ['En_5', (4, 512, 256, 512, True), -1],
- 'stage6': ['En_6', (4, 512, 256, 512, True), 512],
- 'stage5d': ['De_5', (4, 1024, 256, 512, True), 512],
- 'stage4d': ['De_4', (4, 1024, 128, 256), 256],
- 'stage3d': ['De_3', (5, 512, 64, 128), 128],
- 'stage2d': ['De_2', (6, 256, 32, 64), 64],
- 'stage1d': ['De_1', (7, 128, 16, 64), 64],
- }
- return U2NET(cfgs=full, out_ch=1)
-
-
-def U2NET_lite():
- lite = {
- # cfgs for building RSUs and sides
- # {stage : [name, (height(L), in_ch, mid_ch, out_ch, dilated), side]}
- 'stage1': ['En_1', (7, 3, 16, 64), -1],
- 'stage2': ['En_2', (6, 64, 16, 64), -1],
- 'stage3': ['En_3', (5, 64, 16, 64), -1],
- 'stage4': ['En_4', (4, 64, 16, 64), -1],
- 'stage5': ['En_5', (4, 64, 16, 64, True), -1],
- 'stage6': ['En_6', (4, 64, 16, 64, True), 64],
- 'stage5d': ['De_5', (4, 128, 16, 64, True), 64],
- 'stage4d': ['De_4', (4, 128, 16, 64), 64],
- 'stage3d': ['De_3', (5, 128, 16, 64), 64],
- 'stage2d': ['De_2', (6, 128, 16, 64), 64],
- 'stage1d': ['De_1', (7, 128, 16, 64), 64],
- }
- return U2NET(cfgs=lite, out_ch=1)
diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/helpers/noteAssembler.ts b/spaces/yderre-aubay/midi-player-demo/src/common/helpers/noteAssembler.ts
deleted file mode 100644
index 4cd11bdc657b44c62a35e770b21c968e0617ef21..0000000000000000000000000000000000000000
--- a/spaces/yderre-aubay/midi-player-demo/src/common/helpers/noteAssembler.ts
+++ /dev/null
@@ -1,80 +0,0 @@
-import { NoteOffEvent, NoteOnEvent } from "midifile-ts"
-import { noteOffMidiEvent, noteOnMidiEvent } from "../midi/MidiEvent"
-import { NoteEvent, TickProvider } from "../track"
-
-/**
- * Assemble matching noteOn and noteOff events into a single note event carrying a duration.
- */
-export function assemble<T>(
-  events: (T | TickNoteOffEvent | TickNoteOnEvent)[],
-): (T | NoteEvent)[] {
- const noteOnEvents: TickNoteOnEvent[] = []
-
- function findNoteOn(noteOff: TickNoteOffEvent): TickNoteOnEvent | null {
- const i = noteOnEvents.findIndex((e) => {
- return e.noteNumber === noteOff.noteNumber
- })
- if (i < 0) {
- return null
- }
- const e = noteOnEvents[i]
- noteOnEvents.splice(i, 1)
- return e
- }
-
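-  // pair each noteOff with its pending noteOn and emit a single note event carrying a duration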
- const result: (T | NoteEvent)[] = []
- events.forEach((e) => {
- if ("subtype" in e) {
- switch (e.subtype) {
- case "noteOn":
- noteOnEvents.push(e)
- break
- case "noteOff": {
- const noteOn = findNoteOn(e)
- if (noteOn != null) {
- const note: NoteEvent = {
- ...noteOn,
- subtype: "note",
- id: -1,
- tick: noteOn.tick,
- duration: e.tick - noteOn.tick,
- }
- result.push(note)
- }
- break
- }
- default:
- result.push(e)
- break
- }
- } else {
- result.push(e)
- }
- })
-
- return result
-}
-
-export type TickNoteOnEvent = Omit<NoteOnEvent, "deltaTime"> & TickProvider
-export type TickNoteOffEvent = Omit<NoteOffEvent, "deltaTime"> & TickProvider
-
-// split a note into a noteOn + noteOff pair
-export function deassemble<T>(
-  e: T | NoteEvent,
-): (T | TickNoteOnEvent | TickNoteOffEvent)[] {
- if ("subtype" in e && e.subtype === "note") {
- const channel = (e as any)["channel"] ?? -1
- const noteOn = noteOnMidiEvent(0, channel, e.noteNumber, e.velocity)
- const noteOff = noteOffMidiEvent(0, channel, e.noteNumber)
- return [
- { ...noteOn, tick: e.tick },
- { ...noteOff, tick: e.tick + e.duration },
- ]
- } else {
- return [e as T]
- }
-}
diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/predict.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/predict.py
deleted file mode 100644
index 35426cadcbb3bf8c3d8cb9c910511c154e451f4e..0000000000000000000000000000000000000000
--- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/predict.py
+++ /dev/null
@@ -1,98 +0,0 @@
-"""
-Download the weights in ./checkpoints beforehand for fast inference
-wget https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_base_caption.pth
-wget https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_vqa.pth
-wget https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_coco.pth
-"""
-
-from pathlib import Path
-
-from PIL import Image
-import torch
-from torchvision import transforms
-from torchvision.transforms.functional import InterpolationMode
-import cog
-
-from models.blip import blip_decoder
-from models.blip_vqa import blip_vqa
-from models.blip_itm import blip_itm
-
-
-class Predictor(cog.Predictor):
- def setup(self):
- self.device = "cuda:0"
-
- self.models = {
- 'image_captioning': blip_decoder(pretrained='checkpoints/model*_base_caption.pth',
- image_size=384, vit='base'),
- 'visual_question_answering': blip_vqa(pretrained='checkpoints/model*_vqa.pth',
- image_size=480, vit='base'),
- 'image_text_matching': blip_itm(pretrained='checkpoints/model_base_retrieval_coco.pth',
- image_size=384, vit='base')
- }
-
- @cog.input(
- "image",
- type=Path,
- help="input image",
- )
- @cog.input(
- "task",
- type=str,
- default='image_captioning',
- options=['image_captioning', 'visual_question_answering', 'image_text_matching'],
- help="Choose a task.",
- )
- @cog.input(
- "question",
- type=str,
- default=None,
- help="Type question for the input image for visual question answering task.",
- )
- @cog.input(
- "caption",
- type=str,
- default=None,
- help="Type caption for the input image for image text matching task.",
- )
- def predict(self, image, task, question, caption):
- if task == 'visual_question_answering':
- assert question is not None, 'Please type a question for the visual question answering task.'
- if task == 'image_text_matching':
- assert caption is not None, 'Please type a caption for the image text matching task.'
-
- im = load_image(image, image_size=480 if task == 'visual_question_answering' else 384, device=self.device)
- model = self.models[task]
- model.eval()
- model = model.to(self.device)
-
- if task == 'image_captioning':
- with torch.no_grad():
- caption = model.generate(im, sample=False, num_beams=3, max_length=20, min_length=5)
- return 'Caption: ' + caption[0]
-
- if task == 'visual_question_answering':
- with torch.no_grad():
- answer = model(im, question, train=False, inference='generate')
- return 'Answer: ' + answer[0]
-
- # image_text_matching
- itm_output = model(im, caption, match_head='itm')
- itm_score = torch.nn.functional.softmax(itm_output, dim=1)[:, 1]
- itc_score = model(im, caption, match_head='itc')
- return f'The image and text are matched with a probability of {itm_score.item():.4f}.\n' \
- f'The image feature and text feature have a cosine similarity of {itc_score.item():.4f}.'
-
-
-def load_image(image, image_size, device):
- raw_image = Image.open(str(image)).convert('RGB')
-
- w, h = raw_image.size
-
- transform = transforms.Compose([
- transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
- transforms.ToTensor(),
- transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
- ])
- image = transform(raw_image).unsqueeze(0).to(device)
- return image
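-
-# Local usage sketch (illustrative; assumes the checkpoints listed in the module
-# docstring were downloaded and a CUDA device is present -- cog normally drives
-# this class through its own runtime):
-#   predictor = Predictor()
-#   predictor.setup()
-#   print(predictor.predict(Path("demo.jpg"), "image_captioning", None, None))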
diff --git a/spaces/ygangang/VToonify/vtoonify/model/stylegan/op_gpu/upfirdn2d.py b/spaces/ygangang/VToonify/vtoonify/model/stylegan/op_gpu/upfirdn2d.py
deleted file mode 100644
index 3a12f15b3c2347194e3bf0fdfda736415693775f..0000000000000000000000000000000000000000
--- a/spaces/ygangang/VToonify/vtoonify/model/stylegan/op_gpu/upfirdn2d.py
+++ /dev/null
@@ -1,209 +0,0 @@
-from collections import abc
-import os
-
-import torch
-from torch.nn import functional as F
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-
-
-module_path = os.path.dirname(__file__)
-upfirdn2d_op = load(
- "upfirdn2d",
- sources=[
- os.path.join(module_path, "upfirdn2d.cpp"),
- os.path.join(module_path, "upfirdn2d_kernel.cu"),
- ],
-)
-
-
-class UpFirDn2dBackward(Function):
- @staticmethod
- def forward(
- ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
- ):
-
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_op.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- kernel, = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_op.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(
- ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
- )
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_op.upfirdn2d(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- )
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = None
-
- if ctx.needs_input_grad[0]:
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- if not isinstance(up, abc.Iterable):
- up = (up, up)
-
- if not isinstance(down, abc.Iterable):
- down = (down, down)
-
- if len(pad) == 2:
- pad = (pad[0], pad[1], pad[0], pad[1])
-
- if input.device.type == "cpu":
- out = upfirdn2d_native(input, kernel, *up, *down, *pad)
-
- else:
- out = UpFirDn2d.apply(input, kernel, up, down, pad)
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
-
- return out.view(-1, channel, out_h, out_w)
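-
-# Usage sketch (illustrative): 2x upsampling with a 4-tap binomial blur kernel,
-# the pattern StyleGAN2 uses; pad=(2, 1) follows from kernel_w=4 and factor=2.
-# On CPU this exercises upfirdn2d_native, on GPU the compiled extension.
-#   k1d = torch.tensor([1., 3., 3., 1.])
-#   k = k1d[None, :] * k1d[:, None]
-#   k = k / k.sum()
-#   x = torch.randn(1, 3, 64, 64)
-#   y = upfirdn2d(x, k, up=2, down=1, pad=(2, 1))  # -> (1, 3, 128, 128)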
diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/inference/tricks.py b/spaces/ygtxr1997/ReliableSwap_Demo/inference/tricks.py
deleted file mode 100644
index 22aef856fc26a985cfbe6013b303be3f622b5fe2..0000000000000000000000000000000000000000
--- a/spaces/ygtxr1997/ReliableSwap_Demo/inference/tricks.py
+++ /dev/null
@@ -1,169 +0,0 @@
-import os
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import cv2
-import numpy as np
-
-from third_party.bisenet.bisenet import BiSeNet
-from third_party.GPEN.infer_image import GPENImageInfer
-
-
-make_abs_path = lambda fn: os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), fn))
-
-
-class Trick(object):
- def __init__(self):
- self.gpen_model = None
- self.mouth_helper = None
-
- @staticmethod
- def get_any_mask(img, par=None, normalized=False):
- # [0, 'background', 1 'skin', 2 'l_brow', 3 'r_brow', 4 'l_eye', 5 'r_eye',
- # 6 'eye_g', 7 'l_ear', 8 'r_ear', 9 'ear_r', 10 'nose', 11 'mouth', 12 'u_lip',
- # 13 'l_lip', 14 'neck', 15 'neck_l', 16 'cloth', 17 'hair', 18 'hat']
- ori_h, ori_w = img.shape[2], img.shape[3]
- with torch.no_grad():
- img = F.interpolate(img, size=512, mode="nearest", )
- if not normalized:
- img = img * 0.5 + 0.5
- img = img.sub(vgg_mean.detach()).div(vgg_std.detach())
- out = global_bisenet(img)[0]
- parsing = out.softmax(1).argmax(1)
- mask = torch.zeros_like(parsing)
- for p in par:
- mask = mask + ((parsing == p).float())
- mask = mask.unsqueeze(1)
- mask = F.interpolate(mask, size=(ori_h, ori_w), mode="bilinear", align_corners=True)
- return mask
-
- @staticmethod
- def finetune_mask(facial_mask: np.ndarray, lmk_98: np.ndarray = None):
- assert facial_mask.shape[1] == 256
- facial_mask = (facial_mask * 255).astype(np.uint8)
- # h_min = lmk_98[33:41, 0].min() + 20
- h_min = 80
-
- facial_mask = cv2.dilate(facial_mask, (40, 40), iterations=1)
- facial_mask[:h_min] = 0 # black
- facial_mask[255 - 20:] = 0
-
- kernel_size = (20, 20)
- blur_size = tuple(2 * j + 1 for j in kernel_size)
- facial_mask = cv2.GaussianBlur(facial_mask, blur_size, 0)
-
- return facial_mask.astype(np.float32) / 255
-
- @staticmethod
- def smooth_mask(mask_tensor: torch.Tensor):
- mask_tensor, _ = global_smooth_mask(mask_tensor)
- return mask_tensor
-
- @staticmethod
- def tensor_to_arr(tensor):
- return ((tensor + 1.) * 127.5).permute(0, 2, 3, 1).cpu().numpy().astype(np.uint8)
-
- @staticmethod
- def arr_to_tensor(arr, norm: bool = True):
- tensor = torch.tensor(arr, dtype=torch.float).to(global_device) / 255 # in [0,1]
- tensor = (tensor - 0.5) / 0.5 if norm else tensor # in [-1,1]
- tensor = tensor.permute(0, 3, 1, 2)
- return tensor
-
- def gpen(self, img_np: np.ndarray, use_gpen=True):
- if not use_gpen:
- return img_np
- if self.gpen_model is None:
- self.gpen_model = GPENImageInfer(device=global_device)
- img_np = self.gpen_model.image_infer(img_np)
- return img_np
-
- def finetune_mouth(self, i_s, i_t, i_r):
- if self.mouth_helper is None:
- self.load_mouth_helper()
- helper_face = self.mouth_helper(i_s, i_t)[0]
- i_r_mouth_mask = self.get_any_mask(i_r, par=[11, 12, 13]) # (B,1,H,W)
-
- ''' dilate and blur by cv2 '''
- i_r_mouth_mask = self.tensor_to_arr(i_r_mouth_mask)[0] # (H,W,C)
- i_r_mouth_mask = cv2.dilate(i_r_mouth_mask, (20, 20), iterations=1)
-
- kernel_size = (5, 5)
- blur_size = tuple(2 * j + 1 for j in kernel_size)
- i_r_mouth_mask = cv2.GaussianBlur(i_r_mouth_mask, blur_size, 0) # (H,W,C)
- i_r_mouth_mask = i_r_mouth_mask.squeeze()[None, :, :, None] # (1,H,W,1)
- i_r_mouth_mask = self.arr_to_tensor(i_r_mouth_mask, norm=False) # in [0,1]
-
- return helper_face * i_r_mouth_mask + i_r * (1 - i_r_mouth_mask)
-
- def load_mouth_helper(self):
- from modules.networks.faceshifter import FSGenerator
- # mouth_helper_pl = EvaluatorFaceShifter(
- # load_path="/apdcephfs/share_1290939/gavinyuan/out/triplet10w_34/epoch=13-step=737999.ckpt",
- # pt_path=make_abs_path("../ffplus/extracted_ckpt/G_t34_helper_post.pth"),
- # benchmark=None,
- # demo_folder=None,
- # )
- pt_path = make_abs_path("../weights/extracted/G_t34_helper_post.pth")
- self.mouth_helper = FSGenerator(
- make_abs_path("../weights/arcface/ms1mv3_arcface_r100_fp16/backbone.pth"),
- mouth_net_param={"use": False},
- in_size=256,
- downup=False,
- )
- self.mouth_helper.load_state_dict(torch.load(pt_path, "cpu"), strict=True)
- self.mouth_helper.eval()
- print("[Mouth helper] loaded.")
-
-
-""" From MegaFS: https://github.com/zyainfal/One-Shot-Face-Swapping-on-Megapixels/tree/main/inference """
-class SoftErosion(nn.Module):
- def __init__(self, kernel_size=15, threshold=0.6, iterations=1):
- super(SoftErosion, self).__init__()
- r = kernel_size // 2
- self.padding = r
- self.iterations = iterations
- self.threshold = threshold
-
- # Create kernel
- y_indices, x_indices = torch.meshgrid(torch.arange(0., kernel_size), torch.arange(0., kernel_size))
- dist = torch.sqrt((x_indices - r) ** 2 + (y_indices - r) ** 2)
- kernel = dist.max() - dist
- kernel /= kernel.sum()
- kernel = kernel.view(1, 1, *kernel.shape)
- self.register_buffer('weight', kernel)
-
- def forward(self, x):
- x = x.float()
- for i in range(self.iterations - 1):
- x = torch.min(x, F.conv2d(x, weight=self.weight, groups=x.shape[1], padding=self.padding))
- x = F.conv2d(x, weight=self.weight, groups=x.shape[1], padding=self.padding)
-
- mask = x >= self.threshold
- x[mask] = 1.0
- x[~mask] /= x[~mask].max()
-
- return x, mask
-
-
-if torch.cuda.is_available():
- global_device = torch.device(0)
-else:
- global_device = torch.device('cpu')
-vgg_mean = torch.tensor([[[0.485]], [[0.456]], [[0.406]]],
- requires_grad=False, device=global_device)
-vgg_std = torch.tensor([[[0.229]], [[0.224]], [[0.225]]],
- requires_grad=False, device=global_device)
-def load_bisenet():
- bisenet_model = BiSeNet(n_classes=19)
- bisenet_model.load_state_dict(
- torch.load(make_abs_path("../weights/bisenet/79999_iter.pth",), map_location="cpu")
- )
- bisenet_model.eval()
- bisenet_model = bisenet_model.to(global_device)
-
- smooth_mask = SoftErosion(kernel_size=17, threshold=0.9, iterations=7).to(global_device)
- print('[Global] bisenet loaded.')
- return bisenet_model, smooth_mask
-
-global_bisenet, global_smooth_mask = load_bisenet()
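-
-# Usage sketch (illustrative; relies on the BiSeNet weights loaded above):
-#   trick = Trick()
-#   face = torch.randn(1, 3, 256, 256).to(global_device)  # normalized to [-1, 1]
-#   mouth = Trick.get_any_mask(face, par=[11, 12, 13])    # (1, 1, 256, 256) in [0, 1]
-#   smooth = Trick.smooth_mask(mouth)                     # soft-eroded mask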
diff --git a/spaces/zhangyd/bingo/src/components/user-menu.tsx b/spaces/zhangyd/bingo/src/components/user-menu.tsx
deleted file mode 100644
index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000
--- a/spaces/zhangyd/bingo/src/components/user-menu.tsx
+++ /dev/null
@@ -1,113 +0,0 @@
-'use client'
-
-import { useEffect, useState } from 'react'
-import Image from 'next/image'
-import { toast } from 'react-hot-toast'
-import { Button } from '@/components/ui/button'
-import pkg from '../../package.json'
-import {
- DropdownMenu,
- DropdownMenuContent,
- DropdownMenuItem,
- DropdownMenuSeparator,
- DropdownMenuTrigger
-} from '@/components/ui/dropdown-menu'
-import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons'
-import SettingIcon from '@/assets/images/settings.svg'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function UserMenu() {
- const [host, setHost] = useState('')
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- useEffect(() => {
- setHost(location.host)
- }, [])
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
- return (
-
-
-
-
-
-
-
- 设置
-
-
-
-
- location.href='#dialog="settings"'
- }
- className="cursor-pointer"
- >
- 设置用户
-
-
-
- location.href='#dialog="voice"'
- }
- className="cursor-pointer"
- >
- 语音设置
-
-
-
-
- 开源地址
-
-
-
-
-
-
-
- 托管地址
- 🤗
-
-
-
-
-
-
- 复制站点
-
-
-
-
-
- 版本信息 {pkg.version}
-
-
-
- 站点域名
- copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer">
- {host}
-
-
-
-
-
- )
-}
diff --git a/spaces/zhoupin30/zhoupin30/src/lib/hooks/chat-history.ts b/spaces/zhoupin30/zhoupin30/src/lib/hooks/chat-history.ts
deleted file mode 100644
index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000
--- a/spaces/zhoupin30/zhoupin30/src/lib/hooks/chat-history.ts
+++ /dev/null
@@ -1,62 +0,0 @@
-import { zip } from 'lodash-es'
-import { ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { Storage } from '../storage'
-
-/**
- * conversations:$botId => Conversation[]
- * conversation:$botId:$cid:messages => ChatMessageModel[]
- */
-
-interface Conversation {
- id: string
- createdAt: number
-}
-
-type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] }
-
-async function loadHistoryConversations(botId: BotId): Promise<Conversation[]> {
- const key = `conversations:${botId}`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-async function deleteHistoryConversation(botId: BotId, cid: string) {
- const conversations = await loadHistoryConversations(botId)
- const newConversations = conversations.filter((c) => c.id !== cid)
- await Storage.set({ [`conversations:${botId}`]: newConversations })
-}
-
-async function loadConversationMessages(botId: BotId, cid: string): Promise<ChatMessageModel[]> {
- const key = `conversation:${botId}:${cid}:messages`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) {
- const conversations = await loadHistoryConversations(botId)
- if (!conversations.some((c) => c.id === cid)) {
- conversations.unshift({ id: cid, createdAt: Date.now() })
- await Storage.set({ [`conversations:${botId}`]: conversations })
- }
- const key = `conversation:${botId}:${cid}:messages`
- await Storage.set({ [key]: messages })
-}
-
-export async function loadHistoryMessages(botId: BotId): Promise<ConversationWithMessages[]> {
- const conversations = await loadHistoryConversations(botId)
- const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id)))
- return zip(conversations, messagesList).map(([c, messages]) => ({
- id: c!.id,
- createdAt: c!.createdAt,
- messages: messages!,
- }))
-}
-
-export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) {
- const messages = await loadConversationMessages(botId, conversationId)
- const newMessages = messages.filter((m) => m.id !== messageId)
- await setConversationMessages(botId, conversationId, newMessages)
- if (!newMessages.length) {
- await deleteHistoryConversation(botId, conversationId)
- }
-}
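-
-// Usage sketch (illustrative; the botId value 'bing' is a placeholder):
-//   await setConversationMessages('bing', cid, messages)  // upserts the conversation
-//   const all = await loadHistoryMessages('bing')         // [{ id, createdAt, messages }]
-//   await deleteHistoryMessage('bing', cid, messageId)    // drops now-empty conversations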
diff --git a/spaces/zideliu/styledrop/libs/uvit_t2i_vq.py b/spaces/zideliu/styledrop/libs/uvit_t2i_vq.py
deleted file mode 100644
index cf380e1b93ed224c93b8467f1c0b18489fa849ec..0000000000000000000000000000000000000000
--- a/spaces/zideliu/styledrop/libs/uvit_t2i_vq.py
+++ /dev/null
@@ -1,282 +0,0 @@
-import torch
-import torch.nn as nn
-import math
-
-from loguru import logger
-
-import timm
-from timm.models.layers import trunc_normal_
-from timm.models.vision_transformer import PatchEmbed, Mlp
-
-assert timm.__version__ == "0.3.2" # version check
-import einops
-import torch.utils.checkpoint
-import torch.nn.functional as F
-
-try:
- import xformers
- import xformers.ops
-
- XFORMERS_IS_AVAILABLE = True
- print("xformers available, will use xformers attention")
-except:
- XFORMERS_IS_AVAILABLE = False
- print("xformers not available, will use pytorch attention instead")
-
-class BertEmbeddings(nn.Module):
- """Construct the embeddings from word, position and token_type embeddings."""
-
- def __init__(self, vocab_size, hidden_size, max_position_embeddings, dropout=0.1):
- super().__init__()
- self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
- self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(hidden_size, eps=1e-6)
- self.dropout = nn.Dropout(dropout)
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer("position_ids", torch.arange(max_position_embeddings).expand((1, -1)))
-
- torch.nn.init.normal_(self.word_embeddings.weight, std=.02)
- torch.nn.init.normal_(self.position_embeddings.weight, std=.02)
-
- def forward(
- self, input_ids
- ):
- input_shape = input_ids.size()
-
- seq_length = input_shape[1]
-
- position_ids = self.position_ids[:, :seq_length]
-
- inputs_embeds = self.word_embeddings(input_ids)
-
- position_embeddings = self.position_embeddings(position_ids)
- embeddings = inputs_embeds + position_embeddings
-
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class MlmLayer(nn.Module):
-
- def __init__(self, feat_emb_dim, word_emb_dim, vocab_size):
- super().__init__()
- self.fc = nn.Linear(feat_emb_dim, word_emb_dim)
- self.gelu = nn.GELU()
- self.ln = nn.LayerNorm(word_emb_dim)
- self.bias = nn.Parameter(torch.zeros(1, 1, vocab_size))
-
- def forward(self, x, word_embeddings):
- mlm_hidden = self.fc(x)
- mlm_hidden = self.gelu(mlm_hidden)
- mlm_hidden = self.ln(mlm_hidden)
- word_embeddings = word_embeddings.transpose(0, 1)
- logits = torch.matmul(mlm_hidden, word_embeddings)
- logits = logits + self.bias
- return logits
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- if XFORMERS_IS_AVAILABLE:
- qkv = self.qkv(x)
- qkv = einops.rearrange(qkv, 'B L (K H D) -> K B L H D', K=3, H=self.num_heads)
- q, k, v = qkv[0], qkv[1], qkv[2] # B L H D
- x = xformers.ops.memory_efficient_attention(q, k, v)
- x = einops.rearrange(x, 'B L H D -> B L (H D)', H=self.num_heads)
- else:
- qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
-
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-class Adapter(nn.Module):
- def __init__(self, d_emb:int, d_prj:int,n_layer: int, is_shared: bool):
- super().__init__()
- self.D = d_emb
- self.H = d_prj
- self.L = n_layer
- self.is_shared = is_shared
- if self.is_shared:
- self.DD = nn.Embedding(self.L,self.H)
- self.DU = nn.Embedding(self.L,self.D)
- self.WD = nn.Embedding(1,self.D*self.H)
- self.WU = nn.Embedding(1,self.H*self.D)
- else:
- self.WD = nn.Embedding(self.L,self.D*self.H)
- self.WU = nn.Embedding(self.L,self.H*self.D)
- self.activate = nn.GELU()
-
- self._init_weights()
- def _init_weights(self):
- for p in self.WU.parameters():
- p.detach().zero_()
- nn.init.trunc_normal_(self.WD.weight,mean=0,std=0.02)
-
- if self.is_shared:
- nn.init.trunc_normal_(self.DD.weight,mean=0,std=0.02)
- for p in self.DU.parameters():
- p.detach().zero_()
-
- def forward(self, emb, layer):
- idx = torch.arange(self.L).to(emb.device)
- layer = torch.tensor(layer).to(emb.device)
- if self.is_shared:
- idx0 = torch.zeros_like(idx).to(emb.device)
- dd = self.DD(idx).reshape(self.L, 1,self.H)
- du = self.DU(idx).reshape(self.L, 1,self.D)
- wd = self.WD(idx0).reshape(self.L, self.D,self.H) + dd
- wu = self.WU(idx0).reshape(self.L, self.H,self.D) + du
- else:
- wd = self.WD(idx).reshape(self.L, self.D,self.H)
- wu = self.WU(idx).reshape(self.L, self.H,self.D)
-
- prj = torch.einsum('...d,dh->...h',emb,wd[layer])
- prj = self.activate(prj)
- prj = torch.einsum('...h,hd->...d',prj,wu[layer])
- return emb + prj
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm, skip=False, use_checkpoint=False):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale)
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer)
- self.skip_linear = nn.Linear(2 * dim, dim) if skip else None
- self.use_checkpoint = use_checkpoint
-
- def forward(self, x, skip=None, adapter=None, layer=None):
- if self.use_checkpoint:
- return torch.utils.checkpoint.checkpoint(self._forward, x, skip, adapter, layer)
- else:
- return self._forward(x, skip, adapter, layer)
-
- def _forward(self, x, skip=None,adapter=None, layer=None):
- if self.skip_linear is not None:
- x = self.skip_linear(torch.cat([x, skip], dim=-1))
-
- attn = self.attn(self.norm1(x))
- if adapter is not None:
- attn = adapter(attn, layer)
-
- x = x + attn
- x = x + self.mlp(self.norm2(x))
- return x
-
-
-class UViT(nn.Module):
- def __init__(self, img_size=16, in_chans=8, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.,
- qkv_bias=False, qk_scale=None, norm_layer=nn.LayerNorm, use_checkpoint=False,
- clip_dim=768, num_clip_token=77, skip=True, codebook_size=1024,d_prj=4,is_shared=True):
- super().__init__()
- logger.debug(f'codebook size in nnet: {codebook_size}')
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- self.in_chans = in_chans
- self.skip = skip
-
- self.codebook_size = codebook_size
- vocab_size = codebook_size + 1
- self.time_embed = None
- self.extras = num_clip_token
- self.num_vis_tokens = int((img_size) ** 2)
- self.token_emb = BertEmbeddings(vocab_size=vocab_size,
- hidden_size=embed_dim,
- max_position_embeddings=self.num_vis_tokens,
- dropout=0.1)
- print(f'num vis tokens: {self.num_vis_tokens}')
-
- self.context_embed = nn.Linear(clip_dim, embed_dim)
-
- self.in_blocks = nn.ModuleList([
- Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- norm_layer=norm_layer, use_checkpoint=use_checkpoint)
- for _ in range(depth // 2)])
-
- self.mid_block = Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- norm_layer=norm_layer, use_checkpoint=use_checkpoint)
-
- self.out_blocks = nn.ModuleList([
- Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- norm_layer=norm_layer, skip=skip, use_checkpoint=use_checkpoint)
- for _ in range(depth // 2)])
-
- self.norm = norm_layer(embed_dim)
- self.mlm_layer = MlmLayer(feat_emb_dim=embed_dim, word_emb_dim=embed_dim, vocab_size=vocab_size)
- self.adapter = Adapter(d_emb=embed_dim, d_prj=d_prj, n_layer=depth, is_shared=is_shared)
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore # type: ignore
- def no_weight_decay(self):
- return {'pos_embed'}
-
- def forward(self, masked_ids, context,use_adapter=False):
- assert len(masked_ids.shape) == 2
- x = self.token_emb(masked_ids)
- context_token = self.context_embed(context.type_as(x))
- x = torch.cat((context_token, x), dim=1)
-
- layer=0
-
- if self.skip:
- skips = []
- for blk in self.in_blocks:
- # place the adapter right after the attention block
- x = blk(x,adapter=self.adapter if use_adapter else None,layer=layer)
- if self.skip:
- skips.append(x)  # type: ignore
- layer+=1
-
- x = self.mid_block(x)
-
- for blk in self.out_blocks:
- if self.skip:
- x = blk(x, skips.pop(),adapter = self.adapter if use_adapter else None,layer=layer)# type: ignore
- else:
- x = blk(x,adapter = self.adapter if use_adapter else None,layer=layer)
-
- x = self.norm(x)
-
- word_embeddings = self.token_emb.word_embeddings.weight.data.detach()
- x = self.mlm_layer(x, word_embeddings)
- x = x[:, self.extras:, :self.codebook_size]
- return x
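-
-# Shape sketch (illustrative; the vocab is codebook_size + 1 to reserve a mask token):
-#   net = UViT(img_size=16, embed_dim=768, depth=12, num_heads=12, codebook_size=1024)
-#   masked_ids = torch.randint(0, 1025, (2, 256))  # 16*16 VQ tokens per image
-#   context = torch.randn(2, 77, 768)              # CLIP text embeddings
-#   logits = net(masked_ids, context)              # -> (2, 256, 1024)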
diff --git a/spaces/zideliu/styledrop/taming/util.py b/spaces/zideliu/styledrop/taming/util.py
deleted file mode 100644
index 06053e5defb87977f9ab07e69bf4da12201de9b7..0000000000000000000000000000000000000000
--- a/spaces/zideliu/styledrop/taming/util.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import os, hashlib
-import requests
-from tqdm import tqdm
-
-URL_MAP = {
- "vgg_lpips": "https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1"
-}
-
-CKPT_MAP = {
- "vgg_lpips": "vgg.pth"
-}
-
-MD5_MAP = {
- "vgg_lpips": "d507d7349b931f0638a25a48a722f98a"
-}
-
-
-def download(url, local_path, chunk_size=1024):
- os.makedirs(os.path.split(local_path)[0], exist_ok=True)
- with requests.get(url, stream=True) as r:
- total_size = int(r.headers.get("content-length", 0))
- with tqdm(total=total_size, unit="B", unit_scale=True) as pbar:
- with open(local_path, "wb") as f:
- for data in r.iter_content(chunk_size=chunk_size):
- if data:
- f.write(data)
- pbar.update(chunk_size)
-
-
-def md5_hash(path):
- with open(path, "rb") as f:
- content = f.read()
- return hashlib.md5(content).hexdigest()
-
-
-def get_ckpt_path(name, root, check=False):
- assert name in URL_MAP
- path = os.path.join(root, CKPT_MAP[name])
- if not os.path.exists(path) or (check and not md5_hash(path) == MD5_MAP[name]):
- print("Downloading {} model from {} to {}".format(name, URL_MAP[name], path))
- download(URL_MAP[name], path)
- md5 = md5_hash(path)
- assert md5 == MD5_MAP[name], md5
- return path
-
-
-class KeyNotFoundError(Exception):
- def __init__(self, cause, keys=None, visited=None):
- self.cause = cause
- self.keys = keys
- self.visited = visited
- messages = list()
- if keys is not None:
- messages.append("Key not found: {}".format(keys))
- if visited is not None:
- messages.append("Visited: {}".format(visited))
- messages.append("Cause:\n{}".format(cause))
- message = "\n".join(messages)
- super().__init__(message)
-
-
-def retrieve(
- list_or_dict, key, splitval="/", default=None, expand=True, pass_success=False
-):
- """Given a nested list or dict return the desired value at key expanding
- callable nodes if necessary and :attr:`expand` is ``True``. The expansion
- is done in-place.
-
- Parameters
- ----------
- list_or_dict : list or dict
- Possibly nested list or dictionary.
- key : str
- key/to/value, path like string describing all keys necessary to
- consider to get to the desired value. List indices can also be
- passed here.
- splitval : str
- String that defines the delimiter between keys of the
- different depth levels in `key`.
- default : obj
- Value returned if :attr:`key` is not found.
- expand : bool
- Whether to expand callable nodes on the path or not.
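- pass_success : bool
- If True, return a (value, success) tuple instead of the bare value.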
-
- Returns
- -------
- The desired value or if :attr:`default` is not ``None`` and the
- :attr:`key` is not found returns ``default``.
-
- Raises
- ------
- Exception if ``key`` not in ``list_or_dict`` and :attr:`default` is
- ``None``.
- """
-
- keys = key.split(splitval)
-
- success = True
- try:
- visited = []
- parent = None
- last_key = None
- for key in keys:
- if callable(list_or_dict):
- if not expand:
- raise KeyNotFoundError(
- ValueError(
- "Trying to get past callable node with expand=False."
- ),
- keys=keys,
- visited=visited,
- )
- list_or_dict = list_or_dict()
- parent[last_key] = list_or_dict
-
- last_key = key
- parent = list_or_dict
-
- try:
- if isinstance(list_or_dict, dict):
- list_or_dict = list_or_dict[key]
- else:
- list_or_dict = list_or_dict[int(key)]
- except (KeyError, IndexError, ValueError) as e:
- raise KeyNotFoundError(e, keys=keys, visited=visited)
-
- visited += [key]
- # final expansion of retrieved value
- if expand and callable(list_or_dict):
- list_or_dict = list_or_dict()
- parent[last_key] = list_or_dict
- except KeyNotFoundError as e:
- if default is None:
- raise e
- else:
- list_or_dict = default
- success = False
-
- if not pass_success:
- return list_or_dict
- else:
- return list_or_dict, success
-
-
-if __name__ == "__main__":
- config = {"keya": "a",
- "keyb": "b",
- "keyc":
- {"cc1": 1,
- "cc2": 2,
- }
- }
- from omegaconf import OmegaConf
- config = OmegaConf.create(config)
- print(config)
- retrieve(config, "keya")
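- # nested path and default fallback (illustrative):
- print(retrieve(config, "keyc/cc1")) # -> 1
- print(retrieve(config, "missing", default=0)) # -> 0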
-
diff --git a/spaces/zomehwh/sovits-tannhauser/inference/infer_tool.py b/spaces/zomehwh/sovits-tannhauser/inference/infer_tool.py
deleted file mode 100644
index fed81f5abb6f2f525af616171ee9838ae341cb5f..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/sovits-tannhauser/inference/infer_tool.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import hashlib
-import io
-import json
-import logging
-import os
-import time
-from pathlib import Path
-from inference import slicer
-
-import librosa
-import numpy as np
-# import onnxruntime
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-import cluster
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-
-def read_temp(file_name):
- if not os.path.exists(file_name):
- with open(file_name, "w") as f:
- f.write(json.dumps({"info": "temp_dict"}))
- return {}
- else:
- try:
- with open(file_name, "r") as f:
- data = f.read()
- data_dict = json.loads(data)
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
- f_name = file_name.replace("\\", "/").split("/")[-1]
- print(f"clean {f_name}")
- for wav_hash in list(data_dict.keys()):
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
- del data_dict[wav_hash]
- except Exception as e:
- print(e)
- print(f"{file_name} error,auto rebuild file")
- data_dict = {"info": "temp_dict"}
- return data_dict
-
-
-def write_temp(file_name, data):
- with open(file_name, "w") as f:
- f.write(json.dumps(data))
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
- print('executing \'%s\' took %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-def format_wav(audio_path):
- if Path(audio_path).suffix == '.wav':
- return
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
-
-
-def get_end_file(dir_path, end):
- file_lists = []
- for root, dirs, files in os.walk(dir_path):
- files = [f for f in files if f[0] != '.']
- dirs[:] = [d for d in dirs if d[0] != '.']
- for f_file in files:
- if f_file.endswith(end):
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
- return file_lists
-
-
-def get_md5(content):
- return hashlib.new("md5", content).hexdigest()
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-def pad_array(arr, target_length):
- current_length = arr.shape[0]
- if current_length >= target_length:
- return arr
- else:
- pad_width = target_length - current_length
- pad_left = pad_width // 2
- pad_right = pad_width - pad_left
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
- return padded_arr
-
-def split_list_by_n(list_collection, n, pre=0):
- for i in range(0, len(list_collection), n):
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
-
-
-class F0FilterException(Exception):
- pass
-
-class Svc(object):
- def __init__(self, net_g_path, config_path,
- device=None,
- cluster_model_path="logs/44k/kmeans_10000.pt"):
- self.net_g_path = net_g_path
- if device is None:
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- else:
- self.dev = torch.device(device)
- self.net_g_ms = None
- self.hps_ms = utils.get_hparams_from_file(config_path)
- self.target_sample = self.hps_ms.data.sampling_rate
- self.hop_size = self.hps_ms.data.hop_length
- self.spk2id = self.hps_ms.spk
- # load the HuBERT content encoder
- self.hubert_model = utils.get_hubert_model().to(self.dev)
- self.load_model()
- if os.path.exists(cluster_model_path):
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
-
- def load_model(self):
- # build the synthesizer from the saved hparams
- self.net_g_ms = SynthesizerTrn(
- self.hps_ms.data.filter_length // 2 + 1,
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
- **self.hps_ms.model)
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
- if "half" in self.net_g_path and torch.cuda.is_available():
- _ = self.net_g_ms.half().eval().to(self.dev)
- else:
- _ = self.net_g_ms.eval().to(self.dev)
-
-
-
- def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling):
-
- wav, sr = librosa.load(in_path, sr=self.target_sample)
-
- if F0_mean_pooling == True:
- f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev)
- if f0_filter and sum(f0) == 0:
- raise F0FilterException("No voice detected")
- f0 = torch.FloatTensor(list(f0))
- uv = torch.FloatTensor(list(uv))
- if F0_mean_pooling == False:
- f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
- if f0_filter and sum(f0) == 0:
- raise F0FilterException("No voice detected")
- f0, uv = utils.interpolate_f0(f0)
- f0 = torch.FloatTensor(f0)
- uv = torch.FloatTensor(uv)
-
- f0 = f0 * 2 ** (tran / 12)
- f0 = f0.unsqueeze(0).to(self.dev)
- uv = uv.unsqueeze(0).to(self.dev)
-
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
- wav16k = torch.from_numpy(wav16k).to(self.dev)
- c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
-
- if cluster_infer_ratio !=0:
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
-
- c = c.unsqueeze(0)
- return c, f0, uv
-
- def infer(self, speaker, tran, raw_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False,
- F0_mean_pooling=False
- ):
-
- speaker_id = self.spk2id.__dict__.get(speaker)
- if not speaker_id and type(speaker) is int:
- if len(self.spk2id.__dict__) >= speaker:
- speaker_id = speaker
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
- c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling)
- if "half" in self.net_g_path and torch.cuda.is_available():
- c = c.half()
- with torch.no_grad():
- start = time.time()
- audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
- use_time = time.time() - start
- print("vits use time:{}".format(use_time))
- return audio, audio.shape[-1]
-
- def clear_empty(self):
- # release cached GPU memory
- torch.cuda.empty_cache()
-
- def slice_inference(self,
- raw_audio_path,
- spk,
- tran,
- slice_db,
- cluster_infer_ratio,
- auto_predict_f0,
- noice_scale,
- pad_seconds=0.5,
- clip_seconds=0,
- lg_num=0,
- lgr_num =0.75,
- F0_mean_pooling = False
- ):
- wav_path = raw_audio_path
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
- per_size = int(clip_seconds*audio_sr)
- lg_size = int(lg_num*audio_sr)
- lg_size_r = int(lg_size*lgr_num)
- lg_size_c_l = (lg_size-lg_size_r)//2
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
-
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
- # pad
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- audio.extend(list(pad_array(_audio, length)))
- continue
- if per_size != 0:
- datas = split_list_by_n(data, per_size,lg_size)
- else:
- datas = [data]
- for k,dat in enumerate(datas):
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
- # pad
- pad_len = int(audio_sr * pad_seconds)
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
- raw_path = io.BytesIO()
- soundfile.write(raw_path, dat, audio_sr, format="wav")
- raw_path.seek(0)
- out_audio, out_sr = self.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- F0_mean_pooling = F0_mean_pooling
- )
- _audio = out_audio.cpu().numpy()
- pad_len = int(self.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- _audio = pad_array(_audio, per_length)
- if lg_size!=0 and k!=0:
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
- lg_pre = lg1*(1-lg)+lg2*lg
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
- audio.extend(lg_pre)
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
- audio.extend(list(_audio))
- return np.array(audio)
-
-class RealTimeVC:
- def __init__(self):
- self.last_chunk = None
- self.last_o = None
- self.chunk_len = 16000 # chunk length in samples
- self.pre_len = 3840 # crossfade length, a multiple of 640
-
- """Input and output are both 1-D numpy audio waveform arrays."""
-
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=0,
- auto_predict_f0=False,
- noice_scale=0.4,
- f0_filter=False):
-
- import maad
- audio, sr = torchaudio.load(input_wav_path)
- audio = audio.cpu().numpy()[0]
- temp_wav = io.BytesIO()
- if self.last_chunk is None:
- input_wav_path.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return audio[-self.chunk_len:]
- else:
- audio = np.concatenate([self.last_chunk, audio])
- soundfile.write(temp_wav, audio, sr, format="wav")
- temp_wav.seek(0)
-
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale,
- f0_filter=f0_filter)
-
- audio = audio.cpu().numpy()
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
- self.last_chunk = audio[-self.pre_len:]
- self.last_o = audio
- return ret[self.chunk_len:2 * self.chunk_len]
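-
-# Usage sketch (illustrative; the checkpoint and config paths are placeholders
-# for a trained so-vits-svc model):
-#   svc = Svc("logs/44k/G_0.pth", "configs/config.json")
-#   audio = svc.slice_inference("input.wav", spk="speaker", tran=0, slice_db=-40,
-#                               cluster_infer_ratio=0, auto_predict_f0=False,
-#                               noice_scale=0.4)
-#   soundfile.write("output.wav", audio, svc.target_sample)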
diff --git a/spaces/zomehwh/vits-uma-genshin-honkai/mel_processing.py b/spaces/zomehwh/vits-uma-genshin-honkai/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/vits-uma-genshin-honkai/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
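-
-# Usage sketch (illustrative; 22.05 kHz VITS-style settings are an assumption):
-#   y = torch.randn(1, 22050)  # one second of audio in [-1, 1]
-#   mel = mel_spectrogram_torch(y, n_fft=1024, num_mels=80, sampling_rate=22050,
-#                               hop_size=256, win_size=1024, fmin=0.0, fmax=None)
-#   # mel.shape -> (1, 80, 86) with center=False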